
Activities for All at OS Summit in Los Angeles: Mentoring, Yoga, Puppies, and More!

Open Source Summit North America is less than two months away! Join 2,000+ open source professionals Sept. 11-14 in Los Angeles, CA, for exciting keynotes and technical talks covering all things Linux, cloud, containers, networking, emerging open source technologies, and more.

Register now!

With your registration, you also get access to many special events throughout the four-day conference. Special events include:

  • New Speed Networking Workshop: Looking to grow your technical skills, get more involved in an open source community, or make a job change? This networking and mentoring session taking place Monday, Sept. 11 is for you!

  • New Recruiting Program: Considering a career move or a job change? This year we are making it easier than ever for attendees to connect with companies looking for new candidates.

  • Evening Events: Join fellow attendees for conversation, collaboration and fun at numerous evening events including the attendee reception at Paramount Studios featuring studio tours, live music, and dinner from LA favorites In-N-Out, Coolhaus, Pink’s and more!

  • Women in Open Source Lunch: All women attendees are invited to connect at this networking lunch, sponsored by Intel, on Monday, Sept. 11.

  • Dan Lyons Book Signing: Attendees will have the opportunity to meet author Dan Lyons on Tuesday, Sept. 12. The first 100 attendees will receive a free signed copy of his book Disrupted: My Misadventure in the Start-Up Bubble.

  • Thursday Summits & Tutorials: Plan to stay on September 14 to attend the Diversity Empowerment Summit (hosted by HPE & Intel), the Networking Mini-Summit, or deep-dive tutorials – all included in your OSS NA registration!

  • New Executive Track: Full details coming soon on this special event, hosted by IBM, taking place Tuesday, Sept. 12.

  • Morning Activities for Attendees: Morning meditation, a 5K fun run, and a downtown Los Angeles sightseeing bus tour.

Check back for updates on even more activities, including our Attendee Partner Program, Kids Day (an opportunity for kids to learn Scratch programming, in partnership with LA Makerspace), and Puppy Pawlooza (enjoy playtime with shelter dogs thanks to our partnership with LA Animal Rescue).

Linux.com readers receive an additional $47 off with code LINUXRD5. Register now »

How to Integrate Containers in OpenStack

One of the key features of the OpenStack platform is the ability to run applications, and quickly scale them, using containers.  

Containers are ready-to-run applications because they come packed with the entire stack of services required to run them.

OpenStack is an ideal platform for containers because it provides all of the resources and services for containers to run in a distributed, massively scalable cloud infrastructure. You can easily run containers on top of Nova because it includes everything that is needed to run instances in a cloud. The Zun project develops this further by offering a dedicated container service.

In more complex environments, container orchestration is often required. Using container orchestration makes managing many containers in data center environments easier. Kubernetes has become the preferred solution for container orchestration. Container orchestration in OpenStack is implemented using project Magnum.
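To give a feel for the Magnum workflow, a Kubernetes cluster is created from a cluster template. The commands below are an illustrative sketch; flag names vary between Magnum releases, so check the documentation for yours:

$ magnum cluster-template-create --name k8s-template \
      --image fedora-atomic --keypair mykey \
      --external-network public --coe kubernetes
$ magnum cluster-create --name k8s-cluster \
      --cluster-template k8s-template --node-count 2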

In the current OpenStack, there are no fewer than three solutions for running containers:

  • Directly on top of Nova

  • Using container orchestration in project Magnum

  • Using project Zun

In this tutorial, I’ll show you how to run containers in OpenStack using the Nova driver with Docker.

What is Docker?

Multiple solutions are available for running containers on cloud infrastructure. Currently, Docker is the most widely used solution for containers. It offers everything needed to run containers in a corporate environment and is backed by Docker Inc. for support.

Docker has many advantages. Its containers are portable as images and can be assembled from an application’s source code. File-system-level changes can also easily be managed. And Docker can collect STDIN and STDOUT of processes running in a container, which allows for interactive management of containers.

The Nova driver embeds an HTTP client which talks with the Docker internal REST API through a UNIX socket. The HTTP API is used to control containers and fetch information about them.
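You can talk to that same REST API yourself with curl (version 7.40 or newer), assuming a Docker daemon listening on its default socket. The two endpoints below are part of Docker’s documented HTTP API:

$ curl --unix-socket /var/run/docker.sock http://localhost/containers/json
$ curl --unix-socket /var/run/docker.sock http://localhost/images/json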

The driver fetches images from OpenStack’s Glance service and loads them into the Docker file system. From Docker, container images may be placed in Glance to make them available to OpenStack.

Enabling Docker in OpenStack

Now that you have a general sense of how containers work in OpenStack, let’s talk about how you can enable containers using the Nova driver for Docker. The OpenStack Wiki has a detailed explanation of how to configure any OpenStack installation to enable Docker. You can also use your distribution’s deployment mechanism to deploy Docker.

When you do this, the Docker driver will be added to the nova.conf file. And the Docker container format will be added to glance.conf.
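With the historical nova-docker driver, those changes amounted to fragments like the following. The exact file locations and values depend on your distribution and OpenStack release, so treat these as indicative:

# /etc/nova/nova.conf: point Nova at the Docker driver
[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver

# /etc/glance/glance-api.conf: allow the docker container format
[DEFAULT]
container_formats = ami,ari,aki,bare,ovf,docker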

Once it’s enabled, Docker images can be added to the Glance repository using the CLI command docker save. The commands below first pull a Docker image, then save it to the local machine using docker save, and finally create a Glance image from it using the docker container format.

$ docker pull samalba/hipache
$ docker save samalba/hipache | glance image-create --is-public=True \
    --container-format=docker --disk-format=raw --name samalba/hipache

Booting from a Docker image

Finally, once Docker is enabled in Nova, you can boot an OpenStack instance from a Docker image. Just add the image to the Glance repository, and then you’ll be able to boot from it. This works like booting any other instance in a Nova environment.

$ nova boot --image "samalba/hipache" --flavor m1.tiny test

After booting, you’ll see the Docker instance in the OpenStack environment using either nova list or docker ps.
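For example, assuming the test instance booted in the previous step:

$ nova list      # the instance appears in the OpenStack instance list
$ docker ps      # ...and as a running container on the compute host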

Conclusion

In this short tutorial series on OpenStack, we’ve covered how to install a distribution, get an instance up and running, and enable containers in just a few hours.

Read the other articles in the series:

How to Install OpenStack in Less Than an Hour

Get an OpenStack Instance Up and Running in 40 Minutes or Less

Interested in learning more OpenStack fundamentals?  Check out the self-paced, online Essentials of OpenStack Administration course from The Linux Foundation Training. The course is excellent preparation for the Certified OpenStack Administrator exam.

 

Node.js Emerging as the Universal Development Framework for a Diversity of Applications

Last year and at the beginning of this year, we asked you, Node.js users, to help us understand where, how and why you are using Node.js. We wanted to see what technologies you are using alongside Node.js, how you are learning Node.js, what Node.js versions you are using, and how these answers differ across the globe.

Thank you to all who participated.

1,405 people from around the world (85+ countries) completed the survey, which was available in English and Mandarin. 67% of respondents were developers, 28% held managerial titles, and 5% listed their position as “other.” Geographic representation of the survey covered: 35% United States and Canada, 41% EMEA, 19% APAC, and 6% Latin and South America.

The survey collected a wealth of data, revealing that:

  • Users span a broad mix of development focus, ways of using Node.js, and deployment locations.
  • There is a large mix of tools and technologies used with Node.js.
  • Experience with Node.js is also varied — although many have been using Node.js less than 2 years.
  • Node.js is a key part of respondents’ toolkits, used for at least half of their development time.

The report also painted a detailed picture of the types of technologies being used with Node.js, language preferences alongside Node.js, and preferred production and development environments for the technology.

“Given developers’ important role in influencing the direction and pace of technology adoption, surveys of large developer communities are always interesting,” said Rachel Stephens, RedMonk Analyst. “This is particularly true when the community surveyed is strategically important like Node.js.”

In September, we will be releasing the interactive infographic of the results, which will allow you to dive deeper into your areas of interest. For the time being, check out our blog on the report below and download the executive summary here.

The Benefits of Node.js Grow with Time No Matter the Environment

Node.js is emerging as a universal development framework for digital transformation with a broad diversity of applications.

With more than 8 million Node.js instances online, three in four users are planning to increase their use of Node.js in the next 12 months. Many are learning Node.js in a foreign language, with China being the second largest population of Node.js users outside of the United States.

Those who continue to use Node.js over time were more likely to note the increased business impact of the application platform. Key data includes:


Most Node.js users found that the application platform helped improve developer satisfaction and productivity, and benefited from cost savings and increased application performance.


The growth of Node.js within companies is a testament to the platform’s versatility. It is moving beyond being simply an application platform and is beginning to be used for rapid experimentation with corporate data, application modernization, and IoT solutions. It is often the primary focus for developers, with the majority spending their Node.js time on the back end, full stack, and front end, although Node.js use is also beginning to rise in Ops/DevOps and mobile.


Node.js Used Less Than 2 Years, but Growing Rapidly In Businesses

Experience with Node.js varied, although many have been using Node.js for less than two years. Given the rapid pace of Node.js adoption, with a growth rate of about 100% year over year, this isn’t surprising.


Companies that were surveyed noted that they were planning to expand their use of Node.js for web applications, enterprise, IoT, embedded systems and big data analytics. Conversely, they are looking to decrease the use of Java, PHP and Ruby.


Large Mix of Technology and Tools Used Alongside Node.js for Digital Transformation

Modernizing systems and processes is a top priority across businesses and verticals. Node.js’ light footprint and componentized nature make it a perfect fit for microservices (both container- and serverless-based), enabling lean software development without the need to gut legacy infrastructure.

The survey revealed that 47% of respondents are using Node.js for container and serverless-based architectures across development areas:

  • 50% of respondents are using containers for back-end development.
  • 52% of respondents are using containers for full-stack development.
  • 39% of respondents are using containers for front-end development.
  • 48% of respondents are using containers for another area of development with Node.js.

The use of Node.js expands well beyond containers and cloud-native apps to touch development with databases, front-end framework/libraries, load balancing, message systems and more.


And for developers of all focus areas, Node.js has versatile usage.


Amazon Leads Serverless Technology and Node.js

68% of survey respondents who are using Node.js for serverless are using Amazon Web Services for production. 47% of survey respondents using Node.js and serverless are using Amazon Web Services for development.

Developers who use Node.js and serverless used it across several development areas with the most popular being: back-end, full stack, front-end and DevOps.


The Big Data Benefit

Revenues for big data and business analytics are set to grow to more than $203 billion in 2020. Vendors in this market require distributed systems within their products and rely on Node.js for data analysis.

The survey revealed that big data/business analytics developers and managers are more likely to see major business impacts after instrumenting Node.js into their infrastructure with key benefits being productivity, satisfaction, cost containment, and increased application performance.

Node.js Grows in the Enterprise

With the creation of the long-term support plan in 2015, there has been an increase of enterprise development work with Node.js. The Long-Term support plan provides a stable (LTS) release line for enterprises that prioritize security and stability, and a current release line for developers who are looking for the latest updates and experimental features. The survey revealed:

  • 39% of respondents with the developer title are using Node.js for enterprise.
  • 59% of respondents with a manager title are using Node.js for enterprise.
  • 69% of enterprise users plan to increase their use of Node.js over the next 12 months.
  • 47% of enterprise users have been using Node.js for 3+ years and 58% of developers using Node.js have 10+ years in total development experience.

The Long-Term Support versions of Node.js tend to be the most highly sought after for development.


*Node.js 4 and 6 are the LTS versions, best suited for enterprises and others that favor stability and security over new features.

If you want to continue to learn more about Node.js, sign up for our monthly community newsletter, which will continue to pepper you with data for months to come, and also provide you with information on cool new projects using Node.js. The Node.js Foundation will also hold its annual conference from October 4–6 in Vancouver, Canada. Come join us at Node.js Interactive.

*Mark Hinkle, the Node.js Foundation’s Executive Director, will be talking about this data and provide an update on the Foundation during his keynote at NodeSummit.

This article originally appeared on HackerNoon.

Should the ‘KEG’ Stack Replace the LAMP Stack?

For years, the LAMP stack (Linux, Apache, MySQL, PHP/Python/Perl) has been an oasis for developers looking to build modern apps without getting locked into the desert of some big vendor’s ecosystem. It’s a convenient, widely used open-source framework that makes application architecture easy for developers.

Today, if you are not breaking applications down into smaller components that can be independently deployed and scaled with flexibility and resilience to failure, you’re practically toast. Two major trends underscore this shift: First, every layer of the stack is now available “as a service,” enabling developers to outsource many responsibilities once deeply embedded in the stack, and to ultimately ship better products faster.

Read more at The New Stack

7 Steps to Start Your Linux SysAdmin Career


Linux is hot right now. Everybody is looking for Linux talent. Recruiters are knocking down the doors of anybody with Linux experience, and there are tens of thousands of jobs waiting to be filled. But what if you want to take advantage of this trend and you’re new to Linux? How do you get started?

  1. Install Linux  

    It should almost go without saying, but the first key to learning Linux is to install Linux. Both the LFS101x and the LFS201 courses include detailed sections on installing and configuring Linux for the first time.

  2. Take LFS101x

    If you are completely new to Linux, the best place to start is our free LFS101x Introduction to Linux course. This online course is hosted by edX.org, and explores the various tools and techniques commonly used by Linux system administrators and end users to achieve their day-to-day work in a Linux environment. It is designed for experienced computer users who have limited or no previous exposure to Linux, whether they are working in an individual or enterprise environment. This course will give you a good working knowledge of Linux from both a graphical and command line perspective, allowing you to easily navigate through any of the major Linux distributions.

  3. Look into LFS201

    Once you’ve completed LFS101x, you’re ready to start diving into the more complicated tasks in Linux that will be required of you as a professional sysadmin. To gain those skills, you’ll want to take LFS201 Essentials of Linux System Administration. The course gives you in-depth explanations and instructions for each topic, along with plenty of exercises and labs to help you get real, hands-on experience with the subject matter.

    If you would rather have a live instructor teach you or you have an employer who is interested in helping you become a Linux sysadmin, you might also be interested in LFS220 Linux System Administration. This course includes all the same topics as the LFS201 course, but is taught by an expert instructor who can guide you through the labs and answer any questions you have on the topics covered in the course.

  4. Practice!

    Practice makes perfect, and that’s as true for Linux as it is for any musical instrument or sport. Once you’ve installed Linux, use it regularly. Perform key tasks over and over again until you can do them easily without reference material. Learn the ins and outs of the command line as well as the GUI. This practice will ensure that you’ve got the skills and knowledge to be successful as a professional Linux sysadmin.

  5. Get Certified

    After you’ve taken LFS201 or LFS220 and you’ve gotten some practice, you are now ready to get certified as a system administrator. You’ll need this certification because this is how you will prove to employers that you have the necessary skills to be a professional Linux sysadmin.

    There are several Linux certifications on the market today, and all of them have their place. However, most of these certifications are either centered on a specific distro (like Red Hat) or are purely knowledge-based and don’t demonstrate actual skill with Linux. The Linux Foundation Certified System Administrator certification is an excellent alternative for someone looking for a flexible, meaningful entry-level certification.

  6. Get Involved

    At this point you may also want to consider joining up with a local Linux Users Group (or LUG), if there’s one in your area. These groups are usually composed of people of all ages and experience levels, so regardless of where you are at with your Linux experience, you can find people with similar skill levels to bond with, or more advanced Linux users who can help answer questions and point you towards helpful resources. To find out if there’s a LUG near you, try looking on meetup.com, check with a nearby university, or just do a simple Internet search.

    There are also many online communities available to you as you learn Linux. These sites and communities provide help and support to both individuals new to Linux and experienced administrators.

  7. Learn to Love the Documentation

    Last but not least, if you ever get stuck on something within Linux, don’t forget about Linux’s included documentation. Using the commands man (for manual), info, and help, you can find information on virtually every aspect of Linux, right from within the operating system. The usefulness of these built-in resources cannot be overstated, and you’ll find yourself using them throughout your career, so you might as well get familiar with them early on.
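For example, the following commands each pull up documentation without leaving the shell (availability of man pages varies by distribution):

```shell
# Read documentation from the command line; piping through head avoids the pager.
man ls | head -n 5         # the manual page for ls
man -k copy | head -n 5    # keyword search across man page titles (same as apropos)
bash -c 'help cd'          # bash's built-in help for shell builtins like cd
ls --help | head -n 3      # most GNU utilities also accept a --help flag
```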

Interested in learning more about a career in system administration? Check out our free ebook, “Future Proof Your SysAdmin Career.”

20 Linux Commands Every Sysadmin Should Know

In a world bursting with new tools and diverse development environments, it’s practically a necessity for any developer or engineer to learn some basic sysadmin commands. Specific commands and packages can help developers organize, troubleshoot, and optimize their applications and—when things go wrong—provide valuable triage information to operators and sysadmins.

Whether you are a new developer or want to manage your own application, the following 20 basic sysadmin commands can help you better understand your applications. They can also help you describe problems to sysadmins troubleshooting why an application might work locally but not on a remote host. These commands apply to Linux development environments, containers, virtual machines (VMs), and bare metal.

Read more at OpenSource.com

Hands-On With Sparky Linux 5, Powered by Debian

Sparky Linux 5, based on Debian testing (buster), was recently released. I have taken a look at the standard desktop versions (Xfce, LXQt, and MATE) and created my own i3 desktop version.

I mentioned in my recent post about the release of Debian 9 (stretch) that the changes in Debian should soon start filtering through into the Debian-derived distributions. Sure enough, Sparky Linux announced a new release last weekend.

Sparky Linux is one of the few distributions which offers two versions, based on the Debian stable and testing branches. The new release is Sparky Linux 5, based on Debian testing.

The release announcement gives a brief overview, but because this version of Sparky is a rolling release distribution, there are not huge changes from the previous version.

Read more at ZDNet

DevOps Fundamentals, Part 3: Continuous Delivery and Deployment

We’re back with another installment in our preview of the DevOps Fundamentals: Implementing Continuous Delivery (LFS261) course from The Linux Foundation. In the previous articles, we looked at high-performing organizations and then discussed the value stream. In this article, we move along to Continuous Delivery and Deployment.

Continuous Delivery builds on Continuous Integration; you cannot have Continuous Delivery without it. Let’s consider this definition from Effective DevOps by Jennifer Davis and Katherine Daniels:

Continuous delivery is the process of releasing new software frequently through the use of automated testing and continuous integration…

The authors continue: “It is closely related to CI, and is often thought of as taking CI one step further, so that beyond simply making sure that new changes are able to be integrated without causing regressions to automated tests, continuous delivery means that these changes are able to be deployed.”

This is what we want to accomplish with Continuous Delivery: someone checks code into version control, the build and automated tests run, and if they fail, the change is kicked back to the developer.

This, then, brings us to Continuous Deployment. The difference between delivery and deployment is that with deployment, the release to production is itself automated.

But, again, let’s check the definition from Effective DevOps, because the authors are DevOps leaders in every sense of the word:

Continuous deployment is the process of deploying changes to production through the engineering of application deployment that has defined tests and validations to minimize risk. While continuous delivery makes sure that the changes are able to be deployed, continuous deployment means that they get deployed into production.

The key points here are that code is deployed, and Continuous Deployment includes both Continuous Integration and Continuous Delivery.

In practice, teams mix and match: some changes are deployed automatically, while others are only delivered and released with a manual step.

The main point is that continuous deployment is all automated. You hit the button. You commit. It is gone.
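Sketched as a hypothetical GitLab-CI-style pipeline, the whole flow might look like this; the point is that the deploy stage runs automatically on every passing commit, with no manual step:

```yaml
# Hypothetical pipeline sketch: a commit triggers build, test, and deploy in order.
stages:
  - build
  - test
  - deploy

build_job:
  stage: build
  script:
    - make build          # compile and package the change

test_job:
  stage: test
  script:
    - make test           # automated tests gate every release

deploy_job:
  stage: deploy
  script:
    - make deploy         # continuous deployment: no manual button
  only:
    - master              # deploy only commits that land on master
```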

Want to learn more? Access all the free sample chapter videos now!

This course is written and presented by John Willis, Director of Ecosystem Development at Docker. John has worked in the IT management industry for more than 35 years.

Read more:

DevOps Fundamentals: High-Performing Organizations

DevOps Fundamentals, Part 2: The Value Stream

Containing System Services in Red Hat Enterprise Linux – Part 1

At the 2017 Red Hat Summit, several people asked me, “We normally use full VMs to separate network services like DNS and DHCP; can we use containers instead?” The answer is yes, and here’s an example of how to create a system container in Red Hat Enterprise Linux 7 today.

The Goal

Create a network service that can be updated independently of any other services of the system, yet easily managed and updated from the host.

Let’s explore setting up a BIND server running under systemd in a container. In this part, we’ll look at building our container, as well as managing the BIND configuration and data files.

In Part Two, we’ll look at how systemd on the host integrates with systemd in the container. We’ll explore managing the service in the container, and enabling it as a service on the host.

Creating the Bind Container

To get systemd working inside a container easily, we first need to add two packages on the host: oci-register-machine and oci-systemd-hook. The oci-systemd-hook package allows us to run systemd in a container without needing a privileged container or manually configuring tmpfs and cgroups. The oci-register-machine package allows us to keep track of the container with systemd tools like systemctl and machinectl.
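On a RHEL 7 host, that setup looks roughly like this; the image name rhel7-bind is a hypothetical placeholder for the container image built later in the article:

$ sudo yum install oci-register-machine oci-systemd-hook
$ sudo docker run -d --name bind rhel7-bind /usr/sbin/init
$ machinectl list    # the hook registers the container as a machine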

Read more at Red Hat blog

8 Things Every Security Pro Should Know About GDPR

In just under one year, enforcement of the European Union’s General Data Protection Regulation (GDPR) will formally begin.

The statute requires any company, or entity, that handles personal data belonging to EU residents to comply with a broad set of requirements for protecting the privacy of that data. Significantly, GDPR vests EU residents with considerable control over their personal data, how it is used, and how it is made available to others. Under the statute, data subjects are the ultimate owners of their personal data, not the organizations that collect or use the data.

Companies that fail to comply with GDPR requirements can be fined up to 4% of their annual global revenue or €20 million – which at current rates works out to just under $22.4 million USD – whichever is higher.

Read more at Dark Reading