
How To Write and Use Custom Shell Functions and Libraries

In Linux, shell scripts help us in many ways, from performing and automating system administration tasks to building simple command-line tools.

In this guide, we show new Linux users where to reliably store custom shell scripts, how to write custom shell functions and libraries, and how to use functions from those libraries in other scripts.

Read the complete article at Tecmint

RethinkDB’s Realtime Cloud Database Lands at The Linux Foundation

The Cloud Native Computing Foundation today announced it has purchased the source code to RethinkDB, relicensed the code under the Apache License, Version 2.0, and contributed it to The Linux Foundation.

RethinkDB is an open source, NoSQL, distributed document-oriented database that was previously licensed under the GNU Affero General Public License, Version 3 (AGPLv3).

The software is already in production use today by hundreds of technology startups, consulting firms, and Fortune 500 companies, including NASA, GM, Jive, Platzi, the U.S. Department of Defense, Distractify, and Matters Media. But the AGPLv3 license was limiting the willingness of some companies to use and contribute to the software.

Its new Apache license enables anyone to use the software for any purpose without complicated requirements.

To learn more about the future of the RethinkDB project, we spoke with Mike Glukhovsky, who helps run developer relations at Stripe and cofounded RethinkDB in 2009. Here, Mike tells us more about RethinkDB and discusses the community’s goals going forward. See the CNCF blog for more information.

Linux.com: Now that you’ve found a home with The Linux Foundation, what does the RethinkDB community plan to focus on?

The RethinkDB community’s first goal is to ship RethinkDB 2.4, which represents a shift from a federated development process to a distributed, community-based approach. The release will add new features to a robust database built over seven years of development and used by more than 200,000 developers today.

We plan to open source a number of internal tools, artwork, and unreleased features as we build a community process to drive future development forward. Future releases are also planned for Horizon, another project by the RethinkDB team that provides a realtime backend for JavaScript apps.

Linux.com: RethinkDB is praised for its ease-of-use, rich data model and ability to support extremely flexible querying capabilities. Please elaborate on why it is easy to use and how it supports “extremely flexible querying capabilities.”

RethinkDB dramatically reduces friction while rapidly prototyping and building applications. You can get started with a powerful built-in web UI and data explorer that allows you to start modeling and exploring your data without writing any application code.

RethinkDB’s query language, ReQL, is a powerful and expressive functional query language that embeds natively in your programming language of choice. ReQL includes powerful features not usually seen in document stores, like distributed joins, Hadoop-style map-reduce, built-in HTTP support, and realtime updates on distributed queries.
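
To make that concrete, here is a minimal sketch using RethinkDB’s official Python driver; the table and field names are hypothetical, and the queries only hint at what ReQL covers.

    # Minimal ReQL sketch with the official Python driver (pip install rethinkdb).
    # Table and field names below are hypothetical.
    import rethinkdb as r

    conn = r.connect(host="localhost", port=28015, db="test")

    # A language-native, chainable query: filter and project server-side.
    adults = r.table("users").filter(r.row["age"] >= 18).pluck("name", "age").run(conn)

    # A distributed join between two tables on a shared key.
    orders = r.table("orders").eq_join("user_id", r.table("users")).zip().run(conn)

    # Map-reduce style aggregation: count orders per user.
    counts = r.table("orders").group("user_id").count().run(conn)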

We designed RethinkDB to scale linearly out of the box: you can spin up a cluster with multiple replicas and shards across multi-datacenter environments within seconds. If database nodes go down, RethinkDB will automatically fail over and maintain operations in production environments.
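
As a rough illustration of that operational model, resharding and replication can be driven from ReQL itself as well as from the web UI; the sketch below uses a hypothetical table.

    # Sketch of resharding and replicating a hypothetical table from the Python
    # driver; the same operation is available from the web UI.
    import rethinkdb as r

    conn = r.connect(host="localhost", port=28015)

    # Spread the table across 3 shards with 2 replicas of each shard.
    r.db("test").table("users").reconfigure(shards=3, replicas=2).run(conn)

    # Check the table's status to confirm all replicas are ready.
    status = r.db("test").table("users").status().run(conn)
    print(status["status"]["all_replicas_ready"])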

Linux.com: Please be more specific about why RethinkDB is appealing now, given that, as a company, RethinkDB was unsuccessful in creating a sustainable business despite heavy investment, and the business shut down. What brought it back to life? And when (what year) did it catch its second wind?

Ultimately, RethinkDB succeeded in creating a broad community that embraced the open-source project, but that didn’t translate into a scalable business. Companies building open-source developer tools face a unique set of challenges; doubly so when building databases.

The company behind RethinkDB shut down in 2016. A number of dedicated core team and community members have been working diligently to establish the technical and community leadership we need to keep the project going forward. Our new home with the Linux Foundation offers the support and infrastructure we need to build a long-term community effort.

Linux.com: What role is the cloud playing in driving popularity of RethinkDB?

Most modern, cloud-based infrastructures rely on clusters of nodes running application servers, microservices, databases, caches, and queues. While these systems offer flexibility and power via programmable environments, they come with the extra burden of operating these clusters. Small and medium-sized teams lack the expertise to manage the added operational burden, and large teams face challenges when deploying across multiple data centers, ensuring availability at scale, and handling complex failure scenarios.

This environment has encouraged RethinkDB’s adoption because it balances the needs of developers and operations teams equally. Developers are rapidly adopting RethinkDB because of its powerful query language, clear semantics, friendly web interface and excellent documentation. Operations teams pick RethinkDB because it linearly scales across nodes with a minimum of effort, handles failover quickly and reliably, and provides complete control over cluster administration.

Looking forward, RethinkDB’s realtime streams on queries allow modern architectures to manage the complexity of data that is constantly being updated across services and to provide solutions for IoT, realtime marketplaces, collaborative web and mobile apps, and streaming analytics. The cloud has transformed how we build software services, and it has also amplified the volume of data and changed how we interact with it. RethinkDB is designed to help solve those problems.
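
Those realtime streams are exposed as changefeeds on ordinary queries. As a rough sketch (with a hypothetical table and filter), an application subscribes to changes rather than polling:

    # Sketch of a RethinkDB changefeed: subscribe to live changes on a query
    # instead of polling. Table name and filter are hypothetical.
    import rethinkdb as r

    conn = r.connect(host="localhost", port=28015, db="test")

    feed = r.table("orders").filter(r.row["status"] == "open").changes().run(conn)

    for change in feed:
        # Each change carries the previous and new versions of the document.
        print(change["old_val"], "->", change["new_val"])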

Linux.com: While not a part of CNCF today, would you like to see the project join CNCF in the future?

We’ve worked with members of the CNCF throughout RethinkDB’s history, and have long respected the work they do with projects like Kubernetes, Fluentd, and Prometheus. The CNCF is helping advise us on how to establish RethinkDB as an independent open-source project, and we plan to engage in an open conversation with our community on where the project should live long-term. This might very well be the CNCF, but our community deserves to discuss it first.

Linux.com: What is the best way to volunteer and get involved in RethinkDB’s open source future?

We’ve been working with a community of more than 900 users and contributors in our public Slack group (#open-rethinkdb) to plan and secure a long-term open-source future for RethinkDB. Volunteers can learn how to contribute to the open-source project here: https://rethinkdb.com/contribute

We also always accept a good pull request. 🙂

The RethinkDB software is available to download at https://rethinkdb.com/. Development occurs at https://github.com/rethinkdb/rethinkdb and work has been underway on the 2.4 release, which will be available shortly. Follow the RethinkDB community discussion at http://slack.rethinkdb.com/.

Using Mesos to Drive DevOps Adoption at Scale at GS Shop

https://www.youtube.com/watch?v=6DfXROnvq10&list=PLbzoR-pLrL6pLSHrXSg7IYgzSlkOh132K

Vivek Juneja of GS Shop’s Container Platform Team, at MesosCon Asia 2016, shares how he and his team moved to a new agile way of running the datacenter.


From Yawn-Driven Deployment to DevOps Tipping Point

GS Shop is one of the largest TV shopping networks in Asia, and one of the largest e-commerce sites in Korea with more than 1000 employees and 1.5 million users daily. Vivek Juneja of GS Shop’s Container Platform Team, at MesosCon Asia 2016, shares how he and his team moved this behemoth to the new agile way of running the datacenter.

We know that change is not easy, and Juneja shares many valuable insights into how to successfully manage a complete revamp of your IT department. Progress is hard even when the old way is difficult. Juneja describes their old practice of “yawn-driven deployment”: “We practice something called Yawn-Driven Deployment, deploying at 3:00 a.m. That’s what we were doing for a long time. Everybody gets together at 3:00 a.m. It’s a party. We deploy, and we have a lot of yawns, and that code goes to production.” Nobody really liked working this way, but it’s what they were used to.

“When we look at any deployments or adoption of DevOps practices,” says Juneja, “We try to follow this adoption graph, which means, at the beginning, you’re in Evaluation Stage. And then you’ll start putting something in production so that you get some feedback, and teams get some confidence that this thing works. And once you reach a confidence in a production environment, you will likely see a tipping point.”

Juneja’s team deployed their new DevOps methodologies on both new and old services, running new and old side-by-side. “Which shows them the difference between the old style and the new style,” says Juneja, “Doing a compare-and-contrast node for that technology… we move traffic between them so that a particular percentage of traffic moves to the old environment and the rest goes to the containerized environment. This has trade-offs, but it also provides us the basis for proving the technology. So everything becomes mainstream. Everything becomes stable. That’s the time where we move everything to our new environment which uses Mesos.”

This process of introducing and proving changes systematically and in small steps worked so well that staffers went overboard and overloaded the new systems. “Our teams started creating more environments. They loved it so much, they would have environments for every new deployment; they were creating too much of it. Making it easy for our teams to deploy a service means there are too many of these environments lying around.” This is a common problem: users not understanding the true cost of their resource usage. Juneja’s team’s solution was to set automatic timeouts on dev-and-test environments. The infrastructure team learned about resource utilization and developed good resource-management habits.

Watch Juneja’s full talk (below) for more excellent insights into progressing from yawn-driven deployment to a more comfortable schedule, and to learn about bringing all of your diverse and sometimes competing teams together to work toward common goals.

https://www.youtube.com/watch?v=6DfXROnvq10&list=PLbzoR-pLrL6pLSHrXSg7IYgzSlkOh132K

Interested in speaking at MesosCon Asia on June 21 – 22? Submit your proposal by March 25, 2017. Submit now>>

Not interested in speaking but want to attend? Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the attendee registration price. Register now to save over $125!

Gluster Founder Has Big Plans for Container Storage

The founder of Gluster is ready to push storage further: his new startup, Minio, announced general availability of its container-based object storage yesterday.

The catch is that it’s not really about storage — not in the long term. Minio founder Anand Babu (AB) Periasamy — who wrote the open source GlusterFS file system and also founded the startup Gluster, now owned by Red Hat — says his Palo Alto, California-based company is about data — specifically, using the data to help pay for the storage.

Read more at SDx Central

SSL or IPsec: Which Is Best for IoT Network Security?

Internet of Things (IoT) devices are soon expected to outnumber end-user devices by as much as four to one. These applications can be found everywhere—from manufacturing floors and building management to video surveillance and lighting systems.

However, security threats pose serious obstacles to IoT adoption in enterprises or even home environments for sensitive applications such as remote healthcare monitoring. IoT security can be divided into the following three distinct components:

  1. Application service
  2. End device
  3. Transport

Read more at Network World

Mesh Networking: Why It’s Coming to a Home or Office Near You

There’s nothing new about mesh-networking technology. What is new is that mesh networking is finally cheap enough to be deployed in both homes and small businesses. Mesh networking deals with that most common of Wi-Fi problems: Dead zones. You know how it goes. You move your laptop from your office to your conference room and — blip! — there goes your Wi-Fi connection.

Read more at ZDNet

Cloud Platform Overview

Gain a solid understanding of the current state of Cloud platforms, how to integrate the Cloud into your systems and how to manage the risks.

In this article, I’ll introduce you to Cloud platforms, discuss the services they provide, the cost (not just monetary cost) and the problem of lock-in. I’ll also discuss hybrid systems that can run from the Cloud or where some of their components can run from the Cloud. At the end of this article you should have a solid understanding of the current state of Cloud platforms, how to integrate the Cloud into your systems and how to manage the risks.

Why Go to the Cloud?

The number one reason to go to the Cloud is that Cloud platforms provide so much value, and that value matters even for small companies. If you had to build even the most essential parts yourself, you would spend a lot of time doing it and even more time maintaining it and addressing all the issues your half-baked system causes. Today’s systems handle more and more data and face higher expectations in terms of uptime, availability and responsiveness. Even startups in beta must provide reliable service, even if it is not a very rich one. Letting the system crash and discovering it in the morning with 50 angry user emails is not an option anymore. Now, the Cloud is not a panacea. You still have to work hard to put things together and use the Cloud offering intelligently, but all the building blocks, as well as integrated solutions, are available to you.

Read more at DevX 

Linus Torvalds Announces Linux Kernel 4.10 RC7, Final Release Coming February 12

According to Linus Torvalds, things have been very quiet since the sixth Release Candidate of Linux kernel 4.10, and this RC7 build, which also appears to be the last, is a small one that brings updated GPU, HID, and networking drivers, a bunch of improvements for the ARM64, PowerPC (PPC), SPARC, and x86 hardware architectures, as well as other fixes to supported filesystems, virtual machine support, the networking stack, and genksyms scripting.

“It’s all been very quiet, and unless anything bad happens, we’re all back to the regular schedule with this being the last RC,” said Linus Torvalds in today’s mailing list announcement.

Read more at Softpedia

An Inside Look at Why Apache Kafka Adoption Is Exploding

Apache Kafka, the open source distributed streaming platform, is making an increasingly vocal claim to stream data “world domination” (to echo Linus Torvalds’ whimsically modest initial goals for Linux). Last summer I wrote about Kafka and the company behind its enterprise rise, Confluent. Kafka adoption was accelerating as it became the central platform for managing streaming data in organizations, with production deployments of Kafka at six of the top 10 travel companies, seven of the top 10 global banks, eight of the top 10 insurance companies, and nine of the top 10 US telecom companies.

Today, it’s used in production by more than a third of the Fortune 500.

Read more at TechRepublic