
Software Defined Networking (SDN) – Architecture and Role of OpenFlow

In our previous article, we had a good overview of SDN as a technology, why it’s needed, and how the IT industry is adopting it. Now, let’s go a layer deeper and understand SDN’s architecture and the role of the OpenFlow protocol in implementing the technology.

SDN broadly consists of three layers:

  1. Application layer
  2. Control layer
  3. Infrastructure layer

Let us try to understand these layers using a bottom-up approach.

The infrastructure layer is composed of the various networking equipment that forms the underlying network and forwards network traffic. It could be a set of network switches and routers in the data centre. This layer is the physical one over which network virtualization is laid down through the control layer (where the SDN controllers sit and manage the underlying physical network).
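To make the split between the control and infrastructure layers concrete, here is a minimal sketch of the kind of flow rule an SDN controller pushes down to an OpenFlow-capable switch, shown using Open vSwitch’s ovs-ofctl tool (the bridge name br0 and the port numbers are assumptions for illustration; in a real deployment, the controller installs such rules over the OpenFlow protocol rather than an administrator typing them):

# ovs-ofctl add-flow br0 "in_port=1,actions=output:2"
# ovs-ofctl dump-flows br0

The first command tells the switch to forward any traffic arriving on port 1 out of port 2; the second dumps the flow table so you can see the rule that was installed.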

Read more at HowtoForge

This Week in Open Source: Insurance Market & Blockchain, Cloud Foundry is Ubiquitous in the Enterprise & More

This week in open source, blockchain technology, like that of Hyperledger, is being adopted by the insurance market, Cloud Foundry continues its steady climb in enterprise adoption, and more!

1) Blockchain tech like Hyperledger “is making inroads into the insurance sector.”

Insurance Industry Making the Leap to Blockchain – Business Insurance

2) Half of the Fortune 500 now use Cloud Foundry.

Cloud Foundry Makes its Mark on the Enterprise – TechCrunch

3) “Proprietary will have to either get on board or be left in the dust.”

Why Open Source will Overtake Proprietary Software by 2020 – Computer Business Review

4) Google’s new Tensor2Tensor library aims to remove hurdles around customizing an environment to enable deep-learning models.

‘One Machine Learning Model to Rule Them All’: Google Open-Sources Tools for Simpler AI – ZDNet

5) As 5G changes the carrier landscape, technologies like OPNFV will bolster the shift.

China Is Driving To 5G And IoT Through Global Collaboration – Forbes

Rockstor: A Solid Cloud Storage Solution for Small or Home Office

The Linux platform can do quite a lot of things; it can be just about anything you need it to be and can function in nearly any form. One of the many areas in which Linux excels is storage. With the help of a few constituent pieces, you can have a powerful NAS or cloud storage solution up and running.

But, what if you don’t want to take the time to piece these together yourself? Or, what if you’d rather have a user-friendly, web-based GUI to make the process a bit easier? For that, there are a few distributions available to meet your needs. One such platform is Rockstor. Rockstor is a Network Attached Storage (NAS) and cloud solution that can serve either your personal or small business needs with ease.

Rockstor got its start in 2014 and has quickly become a solid tool in the storage space. I was able to quickly get Rockstor up and running (after overcoming only one minor hurdle) and had SMB shares and users/groups created with just a few quick clicks. And, with the inclusion of add-ons (called Rockons), you can extend the feature set of your Rockstor to include new apps, servers, and services.

Let me walk you through that process (as well as how I solved one tiny hiccup), so you can decide if Rockstor is the solution for you.

A word on requirements

I managed to easily get Rockstor running as a VirtualBox VM. Whether you’re installing as a VM or on dedicated hardware, the minimal installation requirements are:

  • 64-bit Intel or AMD processor

  • 2GB RAM or more (recommended)

  • 8GB hard disk space for the OS

  • One or more additional hard drives for data (recommended)

  • Ethernet interface (with Internet access – for updates)

  • Media drive or USB port (for installation on dedicated hardware)

Installation

Based on the Anaconda installer, the installation of Rockstor is incredibly simple. In fact, once you start the installation process, the only thing you have to do is configure a root user password; there is no package selection, no setup of systems or servers. Once the installation completes, reboot, and you’re ready to go.

When the reboot completes, you will discover the biggest (and really only) caveat to Rockstor: the handling of the IP address. After logging into the Rockstor terminal window (the only GUI is the web interface), you will find it gives no indication of what IP address to use. And, since you weren’t able to configure the networking interface during installation, what do you do?

The first thing would be to issue the command ip address. This will report to you the DHCP-assigned IP address of your server (Figure 1).

Figure 1: The IP address to access the Rockstor web interface.
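The relevant lines of output will look something like this (the interface name and address shown here are only an example from a hypothetical DHCP setup; yours will differ):

$ ip address
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet 192.168.1.100/24 brd 192.168.1.255 scope global dynamic enp0s3

The address after inet (192.168.1.100 in this example) is the one you need.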

Point your browser to the IP address (using secure HTTPS, so https://SERVER_IP) listed. NOTE: You will have to okay the exception for the self-signed certificate, used by your Rockstor instance, in your browser.

On the first page, you will be required to accept the license as well as create a hostname and admin user for your Rockstor instance (Figure 2).

Figure 2: The final installation step.

Upon successful creation of the hostname/admin user, you will be greeted by the Rockstor Dashboard (and a popup asking if you want to update to the latest release). Do note that the update popup will take you to a page where you can sign up for either the Stable or the Testing release channel. Stable updates will cost you $40.00 for a three-year subscription, while Testing updates have no associated cost. If you do enable Testing updates, make sure you read through each offered changelog before okaying the update.

Addressing the IP address caveat

You don’t want to be stuck with a DHCP-assigned IP address for your storage server. Once you’ve taken care of the final installation/update bits, you can configure the network device with a manual (static) address. One method is through the Rockstor web interface: log onto Rockstor as your admin user and then click SYSTEM > Network (Figure 3).

Figure 3: Navigating to the Network configuration from the Rockstor Dashboard.

In the resulting window (Figure 4), configure the network interface as a manual connection and fill out the necessary information.

Figure 4: Configuring your networking interface for a static address.

With that taken care of, you’re ready to start setting up your Rockstor storage server.

If the above method fails you (as it did for me in one instance), I have found the best solution to be the old-fashioned method: configuring the network manually. For this, you need to log into the Rockstor server as root and then edit the networking file associated with your network adapter. As I was working with VirtualBox, the file was /etc/sysconfig/network-scripts/ifcfg-enp0s3. Open that file for editing and make sure the following options are configured properly:

ONBOOT="yes"
BOOTPROTO="static"
IPADDR="IP address"
GATEWAY="gateway"
DNS1="DNS address"
DNS2="DNS address"

where IP address, gateway, and DNS address are placeholders for values specific to your network.
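Purely as an illustration, a filled-in version for a hypothetical 192.168.1.0/24 network might look like this (substitute values from your own network):

ONBOOT="yes"
BOOTPROTO="static"
IPADDR="192.168.1.200"
GATEWAY="192.168.1.1"
DNS1="192.168.1.1"
DNS2="8.8.8.8"

If the prefix length is not already present in the file, you may also need to add a PREFIX="24" (or equivalent NETMASK) line so the interface gets the correct subnet mask.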
There will be other options preconfigured in the file (e.g., NAME and DEVICE); leave them as-is. Once you’ve made these changes, save and close the file, and then issue the command:

systemctl restart network

Now, if you go back to SYSTEM > Network (on your Rockstor Dashboard), you should see the network configuration for your interface is set to Manual, with all of your necessary options.

You are now ready to go back to your Rockstor Dashboard, click STORAGE and set up whatever storage type you need (Figure 5).

Figure 5: Storage options found in Rockstor.

Quick Samba Share

Before you create your first share, you’ll want to head over to SYSTEM > Groups and SYSTEM > Users and make sure you have the necessary users/groups created, in order to make creating shares easier.

To set up your first Samba Share, click on STORAGE > Samba. In the resulting window (Figure 6), make sure that Samba Service is set to ON.

Figure 6: With the Samba Service ON, you’re ready to go.

With the Samba Service running, go back to the Dashboard and click STORAGE > Shares. In this new window, click the Create Share button and fill out the necessary information (Figure 7).

Figure 7: Creating a new Samba share.

Click Submit and your share has been created. After the share has been saved, click on the new share from the listings and then click on the Access Control tab, where you can change the associated group for the share as well as the share permissions (Figure 8).

Figure 8: Editing the group and permissions for a share.

And that’s all there is to creating a Samba share with Rockstor.
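If you’d like to verify the new share from another Linux machine on your network, smbclient makes for a quick test (this assumes the smbclient package is installed on the client, the share is named data, and you substitute your own server IP and Rockstor username):

$ smbclient -L //SERVER_IP -U USERNAME
$ smbclient //SERVER_IP/data -U USERNAME

The first command lists the shares the server offers; the second opens an interactive session on the data share, where you can use put and get to copy files.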

A solid solution for SOHO and SMB

If you’re looking for a solid storage solution for your home office or small business, you’d be remiss to skip over the open source Rockstor. With one of the best storage GUIs I’ve used, Rockstor makes creating a powerful storage solution an experience nearly anyone can handle.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Get Ready for Open Source Summit 2017 in Los Angeles

One of the biggest open source events in the world is right around the corner, and the full schedule has now been announced. The Open Source Summit — Sept. 11-14 in Los Angeles, CA — features more than 200 sessions, with additional breakout sessions throughout the day covering technical, leadership, and professional open source tracks.

Some of the session highlights include:

  • The Open Road to Autonomous Driving – Dan Cauchy, Automotive Grade Linux

  • Code Detective: How to Investigate Linux Performance Issues – Gabriel Krisman Bertazi, Collabora

  • Aim to Be an Open Source Zero – Guy Martin, Autodesk

  • How to Fail Fast and Rise Again: A Case Study on Transforming to Cloud Native – Ken Owens, Cisco DevNet

  • First 90 Days: Building an OSS Practice – Nithya Ruff, Comcast

  • Raspberry Pi Hacks – Ruth Suehle, Red Hat

  • From Zero to Serverless in 60 Seconds, Anywhere  – Alex Ellis, AD

  • Choose Your Own Adventure: Finding the Right Path to Containerization – Erica von Buelow, CoreOS

  • Data Science: A Containerized, Cloud-Native Approach – Daniel Whitenack, Pachyderm 

On day one, the keynotes start at 9 a.m. and, as usual, Jim Zemlin, Executive Director of The Linux Foundation, will kick off the event with an overview of Linux and open source. Another highlight of the day will be a keynote discussion between Zemlin and Linus Torvalds, the creator of Linux and Git. You can read about other keynote highlights and more reasons to attend the Summit in this article.

Pro tip: If you are planning to attend the day’s first keynote, arrive at the venue early in the morning to collect your badge and bag (or do this the day before) to beat the rush. Because this year’s event is a new combination of several events, attendees can expect a bigger crowd.

Co-hosted events

The OS Summit brings together leading maintainers, developers, and project leads from around the world, who gather to share updates, best practices, and expertise to further the Linux ecosystem. Additionally, the co-located events mean you will have the opportunity to collaborate, contribute, and learn across a wide variety of topics.

ContainerCon: This event brings together leading experts in both the development and operations community to share ideas and best practices for how containers are shaping computing today and in the future with a focus on DevOps culture, automation, portability and efficiency.

CloudOpen: This conference gathers top professionals to discuss cloud platforms, automation and management tools, DevOps, virtualization, containers, software-defined networking, storage & filesystems, big data tools and platforms, open source best practices and much more.

Open Community Conference: This event provides presentations, tutorials, panel discussions, and networking opportunities that bring together some of the leading practitioners to share their expertise in how you can build powerful communities.

Diversity Empowerment Summit: This event promotes and facilitates an increase in diversity, inclusion, empowerment and social innovation in the open source community, and provides a platform for discussion and collaboration.

And, if you have children, consider bringing them to the Open Source Summit for the full-day kids’ workshop on September 10. The workshop is designed for school-aged children interested in learning more about computer programming. And, you don’t have to worry about babysitting the kids during the rest of the conference; the Open Source Summit offers free childcare for participants.

Register now at the discounted rate of $800 through June 24. Academic and hobbyist rates are also available. Applications are also being accepted for diversity and needs-based scholarships.

Why Enterprises Are Using Node.js for Digital Transformation

As consumers adopt and demand things more quickly, it is essential to have a fluid, fast software development process that allows businesses to give customers new and different digital experiences. According to a recent Forrester report, Digital Transformation Using Node.js: This Swiss Army Knife is More Than Just an Application Platform (also the key topic in our upcoming webinar on July 12 at 11am PT; register here), Node.js is becoming the application platform for building out these digital experiences, giving developers the ability to:

  • Build APIs that support both application and experience demands
  • Experiment rapidly with new and existing corporate data
  • Accelerate application modernization
  • Create digital experiences across multiple platforms (not just mobile and web)
  • Innovate on the future connected device experience

And it’s not just digital startups out of Silicon Valley that are using Node.js; enterprises around the globe are looking to Node.js to aid in their digital transformation.

The Node.js Foundation will be talking more about why companies are turning to Node.js for digital transformation, as well as Node.js best practices, during the above-mentioned free webinar on July 12 at 11am PT with Rick Adams, Senior IT Manager with Lowe’s Digital. The conversation will also highlight other key findings from the Forrester report.

Learn more by registering for the webinar today!

Effective Microservices Architecture with Event-Driven Design

There’s no doubt, in the IT world, microservices are sexy. But just because you find something cool and attractive doesn’t mean it’s good for you. And it doesn’t mean you know how to use it properly.

In fact, microservices, in particular, aren’t easy. Yes, microservices allow different team members to work on different parts of the code at different speeds. But this wonderful autonomy often leaves folks even more siloed, not knowing what the others are doing before throwing work over the wall to the testing and documentation teams, according to Red Hat’s Principal Middleware Architect Christian Posta.

In order to gain true microservices independence, Posta says that we need to shed our dependencies, such as the assumption that microservices mean each service controls its own database.

Read more at The New Stack

Best Linux Distro: Linux Experts Rate Distros

Selecting the best Linux distro is a matter of personal choice, based on your daily work flow. Two Linux experts list their personal picks for best distro and discuss each distro’s merits and challenges.

Bruce Byfield’s Picks

The best Linux distro is always subjective. My own list of the best Linux distros depends on my current interests. One or two are always on my list, but the others are usually ones that boast something different. Since I am regularly watching for new distros and developing new interests, this year’s list only partially overlaps with last year’s.

To be honest, I follow Linux desktops more closely than Linux distributions. To me, desktop environments are where the innovation occurs. In fact, I would argue that when a distribution calls attention to itself, something is probably wrong.

All the same, I have my favorite Linux distros. They are not necessarily the most popular – that would be bland – but they are distributions that, one way or the other, are influential or fill a niche extremely well.

Read more at Datamation

A Tour of the Kubernetes Source Code Part One: From kubectl to API Server

Kubernetes continues to experience explosive growth, and software developers who are able to understand and contribute to the Kubernetes code base are in high demand. Learning the Kubernetes code base is not easy. Kubernetes is written in Go, a fairly new programming language, and it has a large amount of source code. In this multi-part series of articles, I will dig in and explain key portions of the Kubernetes code base and also explain the techniques I have used to help me understand the code. My goal is to provide a set of articles that will enable software developers new to Kubernetes to more quickly learn the Kubernetes source code. In this first article, I will cover the flow through the code from running a simple kubectl command to sending a REST call to the API Server. Before using this article to dig into the Kubernetes code, I recommend you read an outstanding high-level overview of the Kubernetes architecture by Julia Evans.
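If you want to watch that kubectl-to-API-Server flow on your own machine (assuming you have kubectl configured against a running cluster), raising the client’s log verbosity will print the REST calls as they are made:

$ kubectl get pods -v=8

At that verbosity level, kubectl logs the HTTP requests it sends to the API Server along with response details, which maps directly onto the code path this article walks through.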

Read more at IBM Open Tech

Let’s Encrypt ACME Certificate Protocol Set for Standardization

The open-source Let’s Encrypt project has been an innovating force on the security landscape over the last several years, providing millions of free SSL/TLS certificates to help secure web traffic. Aside from the disruptive model of providing certificates for free, Let’s Encrypt has also helped to pioneer new technology to help manage and deliver certificates as well, including the Automated Certificate Management Environment (ACME).

ACME is no longer just a Let’s Encrypt effort at this point in 2017 and is now being standardized by the Internet Engineering Task Force (IETF). The ACME protocol can be used by a Certificate Authority (CA) to automate the process of verification and certificate issuance.  
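For context, ACME is the protocol spoken by clients such as Certbot, the Let’s Encrypt client. A typical certificate request looks something like the following (this assumes certbot is installed, port 80 on the host is free for the challenge, and example.com resolves to the machine you run it on):

$ sudo certbot certonly --standalone -d example.com

Behind that one command, the client proves control of the domain to the CA and retrieves the issued certificate, which is exactly the verification and issuance process the protocol automates.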

Read more at eWeek

Practical Networking for Linux Admins: Real IPv6

When last we met, we reviewed essential TCP/IP basics for Linux admins in Practical Networking for Linux Admins: TCP/IP. Here, we will review network and host addressing and find out whatever happened to IPv6?

IPv4 Ran Out Already

Once upon a time, alarms were sounding everywhere: We are running out of IPv4 addresses! Run in circles, scream and shout! So, what happened? We ran out. IPv4 Address Status at ARIN says “ARIN’s free pool of IPv4 address space was depleted on 24 September 2015. As a result, we no longer can fulfill requests for IPv4 addresses unless you meet certain policy requirements…” Most of us get our IPv4 addresses from our Internet service providers (ISPs), so our ISPs are duking it out for new address blocks.

What do we do about it? Start with bitter laughter, because service providers and device manufacturers are still not well-prepared, and IPv6 support is incomplete despite having more than a decade to implement it. This is not surprising, given how many businesses think computing is like office furniture: buy it once and use it forever (except, of course, for the executive team, who get all the shiny new doodads while us worker bees get stuck with leftovers). Google, who sees all and mines all, has some interesting graphs on IPv6 adoption. Overall adoption is about 18 percent, with the United States at 34 percent and Belgium leading at 48 percent.

What can we Linux nerds do about this? Linux, of course, has had IPv6 support for ages. The first stop is your ISP; visit Test IPv6 to learn their level of IPv6 support. If they are IPv6-ready, they will assign you a block of addresses, and then you can spend many fun hours roaming the Internet in search of sites that can be reached over IPv6.
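You can also check which IPv6 addresses your machine already has (assuming IPv6 isn’t disabled, you will at least see the link-local fe80:: addresses the kernel auto-configures on each active interface):

$ ip -6 addr show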

IPv6 Addressing

IPv6 addresses are 128-bit, which means we have a pool of 2^128 addresses to use. That is 340,282,366,920,938,463,463,374,607,431,768,211,456, or 340 undecillion, 282 decillion, 366 nonillion, 920 octillion, 938 septillion, 463 sextillion, 463 quintillion, 374 quadrillion, 607 trillion, 431 billion, 768 million, 211 thousand and 456 addresses. Which should be just about enough for the Internet of Insecure Intrusive Gratuitously Connected Things.

In contrast, 32-bit IPv4 supplies 2^32 addresses, or just under 4.3 billion. Network address translation (NAT) is the only thing that has kept IPv4 alive this long. NAT is why most home and small businesses get by with one public IPv4 address serving large private LANs. NAT forwards and rewrites your LAN addresses so that lonely public address can serve multitudes of hosts in private address spaces. It’s a clever hack, but it adds complexity to firewall rules and services, and in my not-quite-humble opinion that ingenuity would have been better invested in moving forward instead of clinging to inadequate legacies. Of course, that’s a social problem rather than a technical problem, and social problems are the most challenging.

IPv6 addresses are long: eight colon-separated groups of four hexadecimal digits each. This is the loopback address, the IPv6 counterpart of 127.0.0.1:

0000:0000:0000:0000:0000:0000:0000:0001

Fortunately, there are shortcuts. Any quad of zeroes can be condensed into a single zero, like this:

0:0:0:0:0:0:0:1

You can shorten this even further, as any single unbroken sequence of consecutive zero groups can be replaced with a pair of colons (this shortcut may be used only once per address), so the loopback address becomes:

::1

Which you can see on your faithful Linux system with ifconfig:

$ ifconfig lo
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host

I know, we’re supposed to use the ip command because ifconfig is deprecated. When ip formats its output as readably as ifconfig then I will consider it.
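If you do want the ip version for comparison, it looks like this (output trimmed):

$ ip -6 addr show dev lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN
    inet6 ::1/128 scope host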

Be Quiet and Drink Your CIDR

Classless Inter-Domain Routing (CIDR) defines how many addresses are in a network block. For the loopback address, ::1/128, that is a single address because it uses all 128 bits. CIDR notation is described as a prefix, which is confusing because it looks like a suffix. But it really is a prefix, because it tells you the bit length of a common prefix of bits, which defines a single block of addresses. Then you have a subnet, and finally the host portion of the address. 2001:0db8::/64 expands to this:

2001:db8:0000:0000:0000:0000:0000:0000
_____________|____|___________________
network ID   subnet  interface address

When your ISP gives you a block of addresses, they control the network ID and you control the rest. This example gives you 65,536 subnets (2^16, from the 16-bit subnet field), each with 18,446,744,073,709,551,616 individual addresses (2^64, from the 64-bit interface portion). Mediawiki has a great page, with charts, that explains all of this and how allocations are managed, at Range blocks/IPv6.

2000::/3 is the global unicast range, or public routable addresses. Do not use these for experimentation without blocking them from leaving your LAN. Better yet, don’t use them and move on to the next paragraph.

The 2001:0DB8::/32 block is reserved for documentation and examples, so use these for testing. This example assigns the first available address to interface enp0s25, which is what Ubuntu calls my eth0 interface:

# ip -6 addr add 2001:0db8::1/64 dev enp0s25
$ ifconfig enp0s25
enp0s25   Link encap:Ethernet  HWaddr d0:50:99:82:e7:2b  
          inet6 addr: 2001:db8::1/64 Scope:Global

Increment up from :1 in hexadecimal: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, a, b, c, d, e, f, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 1a, 1b, and so on.

You can add as many addresses as you like to a single interface. You can ping them from the host they’re on, but not from other hosts on your LAN because you need a router. Next week, we’ll set up routing.
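For example, to add a second address and confirm both respond locally (sticking with the documentation range used above; run the add command as root):

# ip -6 addr add 2001:db8::2/64 dev enp0s25
$ ping6 -c 2 2001:db8::1
$ ping6 -c 2 2001:db8::2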

IPcalc

All of these fine hexadecimal addresses are converted from binary. Where does the binary come from? The breath of angels. Or maybe the tears of unicorns, I forget. At any rate, you’re welcome to work these out the hard way, or install ipcalc on your Linux machine, or use any of the nice web-based IP calculators. Don’t be too proud to use these because they’re lifesavers, especially for routing, as we’ll see next week.
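As a minimal sketch of what that looks like in practice (assuming a Debian or Ubuntu system; package names and IPv6 support vary between the different ipcalc implementations, which is why sipcalc is shown for the IPv6 case):

$ sudo apt install ipcalc sipcalc
$ ipcalc 192.168.1.0/24
$ sipcalc 2001:db8::/64

The first prints the netmask, network, and host range for an IPv4 block; the second prints the expanded form and prefix details for an IPv6 block.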

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.