
What Is GraphQL and Why Should You Care? The Future of APIs

“We’re going GraphQL, we’re replacing everything with GraphQL”  — Sid Sijbrandij, GitLab founder and CEO

GraphQL is an open source technology created by Facebook that is getting a fair bit of attention of late. It is set to make a major impact on how APIs are designed.

As is so often the case with these things, it’s not terribly well named. It sounds like a general purpose query language for graph traversal, am I right? Something like Cypher.

It isn’t. The name is a little deceptive. GraphQL is about graphs only if you see everything as a graph; reading the excellent, crisp docs makes clear that GraphQL is primarily about designing your APIs more effectively and being more specific about access to your data sources.
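That point about specific data access is easiest to see in the shape of a request. Here's a minimal sketch in shell with an invented schema and endpoint (the user/repositories fields and the example.com URL are illustrative, not any real API): the client names exactly the fields it wants, and the server returns those and nothing else.

```shell
# A GraphQL request is an ordinary HTTP POST whose body names exactly the
# fields the client wants back. The schema here (user, name, repositories)
# is hypothetical.
QUERY='{ user(login: "alice") { name repositories(first: 3) { name } } }'

# Against a real endpoint you would send it with curl, for example:
#   curl -X POST -H 'Content-Type: application/json' \
#        -d "{\"query\": \"$QUERY\"}" https://example.com/graphql
echo "$QUERY"
```

Compare that with a typical REST endpoint, where the server decides which fields come back.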

Read more at RedMonk

Productivity or Efficiency: What Really Matters?

Efficiency is a quality many companies and employees are proud to tout. From making 2,000 widgets a day to processing several dozen emails within an hour, being efficient is a badge of honor in the working world.

The benefit of efficiency is that it can be relatively easy to measure. As management expert Peter Drucker once said, “If you can’t measure it, you can’t manage it.” So finding something you can measure – whether it’s email messages  or widgets – makes it easier to improve your efficiency by making more of the output while using less money, less time, or both.

The problem is that focusing on efficiency to the exclusion of everything else can mean you’re focusing on the wrong things. Is it useful to generate more email messages if people aren’t clicking on them? Is it a good use of your time to write more and bigger reports if people don’t read them?

Read more at Laserfiche

Open Source Summit Brings Diverse Voices to Keynote Lineup

As Jim Zemlin announced at last year’s LinuxCon in Toronto, the event is now called Open Source Summit. It combines the LinuxCon, ContainerCon, and CloudOpen conferences along with two new ones: the Open Community Conference and the Diversity Empowerment Summit. And, this year, the summit will take place September 11-14 in Los Angeles, CA.

Traditionally, the event starts off with a keynote by Zemlin giving an overview of the state of Linux and open source. And one highlight of the schedule is always a keynote discussion between Zemlin and Linus Torvalds, creator of Linux and Git.

This year, attendees will also get to hear Tanmay Bakshi, a 13-year-old Algorithm-ist and Cognitive Developer, Author and TEDx Speaker, as part of the keynote lineup, which also includes:

  • Bindi Belanger, Executive Program Director, Ticketmaster

  • Christine Corbett Moran, NSF Astronomy and Astrophysics Postdoctoral Fellow, CALTECH

  • Dan Lyons, FORTUNE columnist and Bestselling Author of “Disrupted: My Misadventure in the Startup Bubble”

  • Jono Bacon, Community Manager, Author, Podcaster

  • Nir Eyal, Behavioral Designer and Bestselling Author of “Hooked: How to Build Habit Forming Products”

  • Ross Mauri, General Manager, IBM z Systems & LinuxONE, IBM

  • Zeynep Tufekci, Professor, New York Times Writer, Author and Technosociologist

As one of the biggest open source events, the summit attracts more than 2,000 developers, operators, and community leadership professionals to collaborate, share information, and learn about the latest in open technologies, including Linux, containers, cloud computing, and more.

Top 5 reasons to attend Open Source Summit

Diversity: Open Source Summit strives to bring more diverse voices from the community and enterprise world. And, the new Diversity Empowerment Summit expands that goal by facilitating an increase in diversity and inclusion and providing a venue for discussion and collaboration. 

Cross-pollination: Open Source Summit brings together many different events, representing different projects, under the same umbrella. This allows for cross-pollination of ideas among different communities that are part of a much larger open source ecosystem.

Care for family: Open Source Summit is the only tech event where you can bring your entire family, including kids. The reason is simple: the organizers offer childcare at the venue, so parents can participate in the event without having to make other arrangements.

Awesome activities: Angela Brown, Vice President of Events at The Linux Foundation, not only knows how to plan top-notch events, she also knows how to throw parties. The New Orleans LinuxCon, for example, hosted a Mardi Gras parade and a dinner with live jazz music. Chicago featured an event on the top floor of the Ritz hotel and a reception at the Museum of Science and Industry. Seattle included the Space Needle and Chihuly Garden and Glass Museum. The Toronto event took guests to Muzik where they “gambled” and celebrated 25 years of Linux.

Great opportunity for networking: Open Source Summit is a great mix of attendees. You get to meet with leading developers, founders, community members, CEOs, CTOs, technologists, and users. As exciting as the sessions are, the real value of OSS is the hallway tracks where you connect and reconnect with friends and colleagues. You come back from OSS with more contacts, more friends, new perspectives, and good memories.

Register now at the discounted rate of $800 through June 24. Academic and hobbyist rates are also available. Applications are also being accepted for diversity and needs-based scholarships.

Basic Commands for Performing Docker Container Operations

In this series, we’re sharing a preview of the new self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation. In earlier articles, we looked at installing Docker and setting up your environment, and we introduced Docker Machine. Now we’ll take a look at some basic commands for performing Docker container and image operations. Watch the videos below for more details.

To do container operations, we’ll first connect to our “dockerhost” with Docker Machine. Once connected, we can start the container in the interactive mode and explore processes inside the container.

For example, the “docker container ls” command lists the running containers. With the “docker container inspect” command, we can inspect an individual container. Or, with the “docker container exec” command, we can fork a new process inside an already running container and do some operations. We can use the “docker container stop” command to stop a container and then remove a stopped container using the “docker container rm” command.
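Strung together, and guarded so the script is a harmless no-op on a machine without Docker, that lifecycle looks roughly like this (the alpine image and the demo container name are arbitrary choices for the sketch, not part of the course):

```shell
# Container lifecycle sketch; exits quietly if no Docker engine is reachable.
command -v docker >/dev/null 2>&1 || { echo "docker not installed; skipping"; exit 0; }
docker info >/dev/null 2>&1 || { echo "docker daemon not reachable; skipping"; exit 0; }

docker container run -d --name demo alpine sleep 300   # start a container detached
docker container ls                                    # list running containers
docker container inspect demo                          # per-container JSON metadata
docker container exec demo ps aux                      # fork a new process inside it
docker container stop demo                             # stop the container...
docker container rm demo                               # ...then remove it
```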

To do Docker image operations, again, we first make sure we are connected to our “dockerhost” with Docker Machine, so that all the Docker commands are executed on the “dockerhost” running on the DigitalOcean cloud.

The basic commands you need here are similar to above. With the “docker image ls” command, we can list the images available on our “dockerhost”. Using the “docker image pull” command, we can pull an image from our Docker Registry. And, we can remove an image from the “dockerhost” using the “docker image rm” command.
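The image-side equivalents, again guarded, and assuming the client has already been pointed at the remote host (for example with `eval $(docker-machine env dockerhost)`):

```shell
# Image operations sketch; exits quietly if no Docker engine is reachable.
command -v docker >/dev/null 2>&1 || { echo "docker not installed; skipping"; exit 0; }
docker info >/dev/null 2>&1 || { echo "docker daemon not reachable; skipping"; exit 0; }

docker image ls            # list images available on the host
docker image pull alpine   # pull an image from the registry
docker image rm alpine     # remove it from the host again (fails if a container still uses it)
```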

Want to learn more? Access all the free sample chapter videos now! 

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

ODPi Webinar on DataOps at Scale: Taking Apache Hadoop Enterprise-Wide

2016 was a pivotal year for Apache Hadoop, a year in which enterprises across a variety of industries moved the technology out of PoCs and the lab and into production. Look no further than AtScale’s latest Big Data Maturity survey, in which 73 percent of respondents report running Hadoop in production.

ODPi recently ran a series of its own Twitter polls and found respondents evenly split: 41 percent said they do not use Hadoop in production, while 41 percent said they do. This split may partly be due to the fact that the concept of “production” Hadoop can be misleading. For instance, pilot deployments and enterprise-wide deployments are both considered “production,” but they are vastly different in terms of DataOps, as Table 1 below illustrates.


Table 1: DataOps Considerations from Lab to Enterprise-wide Production.

As businesses move Apache Hadoop and Big Data out of proofs of concept (PoCs) and into enterprise-wide production, hybrid deployments are the norm and several important considerations must be addressed.

Dive into this topic further on June 28th for a free webinar with John Mertic, Director of ODPi at the Linux Foundation, hosting Tamara Dull, Director of Emerging Technologies at SAS Institute.

The webinar will discuss ODPi’s recent 2017 Preview: The Year of Enterprise-wide Production Hadoop and explore DataOps at scale, along with the considerations businesses need to make as they move Apache Hadoop and Big Data out of PoCs and into enterprise-wide production and hybrid deployments.

Register for the webinar here.

As a sneak peek to the webinar, we sat down with Mertic to learn a little more about production Hadoop needs.

Why is it that the deployment and management techniques that work in limited production may not scale when you go enterprise wide?

IT policies kick in as you move from Mode 2 IT — which tends to focus on fast-moving, experimental projects such as Hadoop deployments — to Mode 1 IT — which controls stable, enterprise-wide deployments of software. Mode 1 IT must consider not only enterprise security and access requirements, but also data regulations that affect how a tool is used. On top of that, cost and efficiency come into play, as Mode 1 IT is cost conscious.

What are some of the step-change DataOps requirements that come when you take Hadoop into enterprise-wide production? 

Integrating with Mode 1 IT’s existing toolset is the biggest requirement. Mode 1 IT doesn’t want to manage tools it’s not familiar with, nor tools it doesn’t feel it can integrate into the management tools the enterprise is already using. The more uniformly Hadoop fits into the existing DevOps patterns, the more successful it will be.

Register for the webinar now.

The Evolution of the Standard COTS Server in Modern Data Centers

Standardization on x86 commercial off-the-shelf (COTS) servers within the data center has been a movement for some time because the architecture offers versatility, cost-savings, easier integrations, more attractive maintenance and management profiles, and, overall, a lower total cost of ownership than a proprietary hardware approach. But there are new requirements that are driving data center server choices these days, namely the need to support carrier virtualization, programmability, and the massive data sets that come with machine learning and advanced, real-time analytics.

Network function virtualization (NFV) and software-defined networking (SDN) in particular have started to take hold in the data center in real ways, and the underlying hardware layer has become abstracted from the intelligent software running above.

Read more at SDxCentral

How to Get Docker Shipyard Up and Running with a Single Command

If you’re looking for a user-friendly Docker GUI, you can’t go wrong with Shipyard. Learn how one command can get this web-based tool up and ready to manage your containers.

If you’re using Docker to create and deploy containers, then you probably know the management of those containers can get a bit unwieldy after a while; this is especially true for larger deployments with numerous images and containers. One way to avoid the stress of larger-scale Docker usage is by way of a web-based GUI.

The idea of installing a GUI to manage your Docker containers might concern you, but fear not—with one command, you can get Shipyard up and running and ready to manage your containers.
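For reference, the single command the article alludes to is the deploy script the Shipyard project published. The URL below is the one Shipyard documented at the time; treat it as an assumption to verify, and read any remote script before piping it into a shell:

```shell
# Shipyard's documented one-command deploy. Shown rather than executed:
# piping a remote script straight into bash deserves a read-through first.
deploy='curl -sSL https://shipyard-project.com/deploy | bash -s'
echo "On a Docker host, run: $deploy"
```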

Read more at Tech Republic

Why Git Is Worth the Learning Curve

Over the last decade, distributed version control systems, like Git, have gained popularity and are regarded as the most important development tools by developers. Although the learning curve can pose a challenge, developers told us that Git enhances their ability to work together and ship faster, suggesting that managers have a real incentive to help their teams over the initial hill imposed by the transition to Git.

With the full history of the repository stored on each developer’s machine, using Git makes commits, merges and other commands much faster, even enabling developers to work offline. Upgrading your source code management solution to a distributed version control system is the first step toward building a flexible working environment that can support modern development teams, but moving away from legacy systems and tools can be a daunting prospect.
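The offline claim is easy to verify yourself. This sketch creates a throwaway repository and commits to it with no remote and no network involved; every command operates purely on the local copy of the history:

```shell
# Demonstrate that Git needs no server: init, commit, and query the
# history entirely on local disk.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email "you@example.com"   # identity is local to this repo
git config user.name "Offline Demo"
echo "hello" > README
git add README
git commit -qm "first commit, made entirely offline"
git log --oneline    # the full history lives right here on disk
```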

Read more at DZone

Why Does Open Source Really Matter? It’s about Control, Not Code

Why is open source software so popular today? You might think it’s about money, open standards or interoperability. Ultimately, however, the most important factor behind the success of open source is its ability to offer control — or the illusion of it, at least — to people who use it.

Explaining Open Source Software’s Popularity

To understand this point, let’s take a look at conventional explanations for why open source has become so popular.

Read more at The VAR Guy

How to Install OpenVPN on CentOS 7

OpenVPN is an open source application that lets you create a secure private network over the public Internet. Here is a tutorial on how to set up an OpenVPN server and client on CentOS.

What’s required?

1. Root access

2. A server running CentOS 7

This tutorial will cover the following:

1. How to add the EPEL repository in CentOS.

2. How to install OpenVPN, iptables, and easy-rsa.

3. Configuring easy-rsa.

4. Configuring OpenVPN.

5. How to disable SELinux and firewalld.

6. Configuring iptables for OpenVPN.

7. How to start the OpenVPN server.

8. How to set up the OpenVPN client application.

Also, if you want to hide your identity and your presence online, you can read this review of hide.me.

Let’s get down to our real business here:

Enabling the EPEL Repository

sudo su

yum -y install epel-release

How to install OpenVPN, iptables, and easy-rsa

yum -y install openvpn easy-rsa iptables-services

Configuring easy-rsa

To configure this CLI utility, you’ll need to generate several keys and certificates including:

1. Certificate Authority (CA)

2. Server key and certificate

3. Diffie-Hellman key

4. Client key and certificate

Here is what you need to do:

Step 1: Copy the easy-rsa scripts to “/etc/openvpn/”.

cp -r /usr/share/easy-rsa/ /etc/openvpn/

Then change into the easy-rsa directory and edit the vars file.

cd /etc/openvpn/easy-rsa/2.*/

vim vars

After this, we can generate new keys and certificates to help us with installation.

source ./vars

Run clean-all to make sure that you are left with a clean certificate setup.

./clean-all

Now it’s time to generate the certificate authority (CA). You’ll be asked for several details, such as Country Name; enter yours. This command will create ca.key and ca.crt in the /etc/openvpn/easy-rsa/2.0/keys/ directory.

./build-ca

Step 2: Generating a Server Key and Certificate

Run the “build-key-server server” command in the same directory.

./build-key-server server

Step 3: Building a Diffie-Hellman Key Exchange

Execute this build-dh command:

./build-dh

It might take some time to generate these files; the wait depends on the KEY_SIZE you set in the vars file.

Step 4: Generating Client Key and Certificate

./build-key client

Step 5: Move or copy the `keys/` directory to `/etc/openvpn`.

cd /etc/openvpn/easy-rsa/2.0/

cp -r keys/ /etc/openvpn/

Configure OpenVPN

You can either copy a sample OpenVPN configuration from /usr/share/doc/openvpn-2.3.6/sample/sample-config-files or create one from scratch.
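If you go the copy route, a small guard saves you from hard-coding the version number in that path (the 2.3.6 above will differ between releases); this sketch globs for whatever sample the installed package shipped:

```shell
# Copy the packaged sample server.conf if one exists; the version number
# in the path varies by OpenVPN release, so glob for it.
sample=$(ls /usr/share/doc/openvpn-*/sample/sample-config-files/server.conf 2>/dev/null | head -n 1)
if [ -n "$sample" ]; then
  cp "$sample" /etc/openvpn/server.conf
else
  echo "no packaged sample found; create /etc/openvpn/server.conf by hand"
fi
```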

Here is how you can create one:

cd /etc/openvpn/

vim server.conf

Paste in this configuration:

#Change this to your port
port 1337

#You can use udp or tcp
proto udp

#"dev tun" will create a routed IP tunnel
dev tun

#Certificate configuration

#CA certificate
ca /etc/openvpn/keys/ca.crt

#Server certificate
cert /etc/openvpn/keys/server.crt

#Server key; keep this secret
key /etc/openvpn/keys/server.key

#Match the size of the dh key generated in /etc/openvpn/keys/
dh /etc/openvpn/keys/dh1024.pem

#Internal subnet that clients get addresses from once connected
server 192.168.200.0 255.255.255.0

#This line redirects all client traffic through the VPN
push "redirect-gateway def1"

#Provide DNS servers to the client; you can use Google DNS
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"

#Enable multiple clients to connect with the same key
duplicate-cn

keepalive 20 60
comp-lzo
persist-key
persist-tun
daemon

#Enable logging
log-append /var/log/myvpn/openvpn.log

#Log level
verb 3

Save it.

Now you need to create a new folder for the log file.

mkdir -p /var/log/myvpn/

touch /var/log/myvpn/openvpn.log

How to Disable SELinux and firewalld

Step 1: Disabling firewalld

systemctl mask firewalld

systemctl stop firewalld

Step 2: Disabling SELinux

vim /etc/sysconfig/selinux

Set SELINUX to disabled:

SELINUX=disabled

Now reboot your server to incorporate the changes.

Configure Routing and Iptables

Step 1: you need to enable iptables

systemctl enable iptables

systemctl start iptables

iptables -F

Step 2: Add an iptables rule to NAT traffic from the OpenVPN subnet.

iptables -t nat -A POSTROUTING -s 192.168.200.0/24 -o eth0 -j MASQUERADE

iptables-save > /etc/sysconfig/iptablesvpn

Step 3: Now enable IP forwarding

vim /etc/sysctl.conf

Then add this line to the end of the file:

net.ipv4.ip_forward = 1

Step 4: Apply the sysctl change, then start the OpenVPN server

sysctl -p

systemctl start openvpn@server

How to set up Client

In order for clients to connect to the OpenVPN server, they need the key and certificates created earlier. Download these three files from your server using SCP or SFTP:

  • ca.crt

  • client.crt

  • client.key

If you are using a Windows client, you can copy the files using WinSCP. Then create a new file called client.ovpn, paste in the configuration below, and save it.

client

dev tun

proto udp



#Server IP and Port

remote 192.168.1.104 1337



resolv-retry infinite

nobind

persist-key

persist-tun

mute-replay-warnings

ca ca.crt

cert client.crt

key client.key

ns-cert-type server

comp-lzo

Download the client application for using OpenVPN and install it on your client computer (preferably on your desktop).

Windows User

OpenVPN Install

Linux user

Try networkmanager-openvpn through the NetworkManager.

Or use the terminal:

sudo openvpn --config client.ovpn

Mac OS user

Tunnelblick.

The Bottom Line

OpenVPN offers a solution for people who want a secure network connection over the public internet. It is open source software that makes it easy to set up your own private network on a server.