
Demystifying Kubernetes Operators with the Operator SDK: Part 2

In the previous article, we laid the groundwork for building a custom operator that can be applied to real-world use cases. In this part of our tutorial series, we are going to create a generic example-operator that manages our apps of Examplekind. We have already used the operator-sdk to build it out and implement the custom code in a repo here. For the tutorial, we will rebuild what is in this repo.

The example-operator will manage our Examplekind apps with the following behavior:  

  • Create an Examplekind deployment if it doesn’t exist using an Examplekind CR spec (for this example, we will use an nginx image running on port 80).

  • Ensure that the pod count is the same as specified in the Examplekind CR spec.

  • Update the Examplekind CR status with:

    • A label called Group defined in the spec

    • An enumerated list of the Podnames

Prerequisites

You’ll want to have the following prerequisites installed or set up before running through the tutorial. These are the prerequisites for installing operator-sdk, as well as a few extras you’ll need.

Initialize your Environment

1. Make sure you’ve got your Kubernetes cluster running by spinning up minikube.

minikube start

2. Create a new folder for your example operator within your Go path.

mkdir -p $GOPATH/src/github.com/linux-blog-demo
cd $GOPATH/src/github.com/linux-blog-demo

3. Initialize a new example-operator project within the folder you created using the operator-sdk.

operator-sdk new example-operator
cd example-operator


What just got created?

By running the operator-sdk new command, we scaffolded out a number of files and directories for our defined project. See the project layout for a complete description; for now, here are some important directories to note:

  • pkg/apis – contains the APIs for our CR. Right now this is relatively empty; the commands that follow will create our specific API and CR for Examplekind.
  • pkg/controller – contains the Controller implementations for our Operator, and specifically the custom code for how we reconcile our CR (currently this is somewhat empty as well).
  • deploy/ – contains generated K8s yaml deployments for our operator and its RBAC objects. The folder will also contain deployments for our CR and CRD, once they are generated in the steps that follow.  

Create a Custom Resource and Modify it

4. Create the Custom Resource and its API using the operator-sdk.

operator-sdk add api --api-version=example.kenzan.com/v1alpha1 --kind=Examplekind


What just got created?

Under pkg/apis/example/v1alpha1, a new generic API was created for Examplekind in the file examplekind_types.go.

Under deploy/crds, two new K8s yamls were generated:

  • examplekind_crd.yaml – a new CustomResourceDefinition defining our Examplekind object so Kubernetes knows about it.
  • examplekind_cr.yaml – a general manifest for deploying apps of type Examplekind.

A DeepCopy methods library is also generated for copying the Examplekind object.

5. We need to modify the API in pkg/apis/example/v1alpha1/examplekind_types.go with some custom fields for our CR. Open this file in a text editor and add the following custom variables to the ExamplekindSpec and ExamplekindStatus structs.


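The exact code lives in the completed repo; based on the CR manifest used later in this tutorial (count, group, image and port) and the status fields we want to report (the group and the pod names), the modified structs look roughly like the sketch below. The field names and JSON tags here are our assumptions, not a copy of the repo.

// Sketch of the custom fields; names and tags are assumptions
// inferred from the CR manifest used later in the tutorial.
type ExamplekindSpec struct {
	Count int32  `json:"count"`
	Group string `json:"group"`
	Image string `json:"image"`
	Port  int32  `json:"port"`
}

type ExamplekindStatus struct {
	AppGroup string   `json:"appGroup"`
	PodNames []string `json:"podNames"`
}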

The variables in these structs are used to generate the data structures in the yaml spec for the Custom Resource, as well as variables we can later display when getting the status of the Custom Resource.

6. After modifying examplekind_types.go, regenerate the code.

operator-sdk generate k8s


What just got created?

You always want to run the operator-sdk generate command after modifying the API in the _types.go file. This will regenerate the DeepCopy methods.
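To see the kind of code this produces, here is an illustrative sketch (not the exact generated output) of a DeepCopyInto function for our status struct, which has a slice field that must be copied rather than shared:

// Illustrative sketch of the regenerated deepcopy code; the real
// output lives in pkg/apis/example/v1alpha1/zz_generated.deepcopy.go.
func (in *ExamplekindStatus) DeepCopyInto(out *ExamplekindStatus) {
	*out = *in
	if in.PodNames != nil {
		// Copy the slice so the two objects don't share backing storage.
		out.PodNames = make([]string, len(in.PodNames))
		copy(out.PodNames, in.PodNames)
	}
}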

Create a New Controller and Write Custom Code for it

7. Now add a controller to your operator.  

operator-sdk add controller --api-version=example.kenzan.com/v1alpha1 --kind=Examplekind


What just got created?

Among other code, a pkg/controller/examplekind/examplekind_controller.go file was generated. This is the primary code running our controller; it contains a Reconcile loop where custom code can be implemented to reconcile the Custom Resource against its spec.

8. Replace the examplekind_controller.go file with the one in our completed repo. The new file contains the custom code that we’ve added to the generated skeleton.

Wait, what was in the custom code we just added?  

If you want to know what is happening in the code we just added, read on. If not, you can skip to the next section to continue the tutorial.

To break down what we are doing in our examplekind_controller.go, let’s first go back to what we are trying to accomplish:

  1. Create an Examplekind deployment if it doesn’t exist

  2. Make sure our count matches what we defined in our manifest

  3. Update the status with our group and podnames.

To achieve these things, we’ve created three methods: one to get pod names, one to create labels for us, and a last one to create a deployment.

In getPodNames(), we are using the core/v1 API to get the names of pods and appending them to a slice.
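A minimal version of that helper, assuming the standard core/v1 types, looks like this:

import corev1 "k8s.io/api/core/v1"

// getPodNames extracts the pod names from a list of pods
// and appends them to a string slice.
func getPodNames(pods []corev1.Pod) []string {
	var podNames []string
	for _, pod := range pods {
		podNames = append(podNames, pod.Name)
	}
	return podNames
}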

In labelsForExampleKind(), we are creating a label to be used later in our deployment. The operator name will be passed into this as a name value.

In newDeploymentForCR(), we are creating a deployment using the apps/v1 API. The label method is used here to pass in a label. It uses whatever image we specify in our manifest as you can see below in Image: m.Spec.Image. Replicas for this deployment will also use the count field we specified in our manifest.
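Sketched out roughly (the label keys and the examplev1alpha1 import alias for our generated API package are assumptions of ours, not copied from the repo), the label helper and deployment builder look something like this:

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// labelsForExampleKind returns the labels attached to our pods,
// keyed off the CR name. The keys here are illustrative.
func labelsForExampleKind(name string) map[string]string {
	return map[string]string{"app": "examplekind", "examplekind_cr": name}
}

// newDeploymentForCR builds an apps/v1 Deployment from the CR spec.
func newDeploymentForCR(m *examplev1alpha1.Examplekind) *appsv1.Deployment {
	labels := labelsForExampleKind(m.Name)
	replicas := m.Spec.Count
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name:      m.Name,
			Namespace: m.Namespace,
		},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas, // count field from the manifest
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "examplekind",
						Image: m.Spec.Image, // image field from the manifest
						Ports: []corev1.ContainerPort{{ContainerPort: m.Spec.Port}},
					}},
				},
			},
		},
	}
}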

Then in our main Reconcile() method, we check to see if our deployment exists. If it does not, we create a new one using the newDeploymentForCR() method. If for whatever reason the deployment cannot be created, an error is printed to the logs.

In the same Reconcile() method, we are also making sure that the deployment replica field is set to our count field in the spec of our manifest.

And we are getting a list of our pods that matches the label we created.

We are then passing the pod list into the getPodNames() method, and making sure that the podNames field in our ExamplekindStatus (in examplekind_types.go) is set to the podNames list.

Finally, we are making sure the AppGroup in our ExamplekindStatus (in examplekind_types.go) is set to the Group field in our Examplekind spec (also in examplekind_types.go).
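Condensed, the heart of Reconcile() follows roughly this shape. Treat it as a sketch under the assumption of a recent controller-runtime client; the exact List() signature and scaffolded field names vary by operator-sdk version, so the repo remains the source of truth.

// Condensed sketch of the Reconcile() body, after the Examplekind
// instance has been fetched into `instance`. Assumed imports:
// context, reflect, k8s.io/apimachinery/pkg/api/errors,
// k8s.io/apimachinery/pkg/types, and the controller-runtime
// client and reconcile packages.

// 1. Create the deployment if it doesn't exist yet.
found := &appsv1.Deployment{}
err := r.client.Get(context.TODO(), types.NamespacedName{Name: instance.Name, Namespace: instance.Namespace}, found)
if err != nil && errors.IsNotFound(err) {
	dep := newDeploymentForCR(instance)
	if err := r.client.Create(context.TODO(), dep); err != nil {
		return reconcile.Result{}, err // the error surfaces in the operator logs
	}
	return reconcile.Result{Requeue: true}, nil
} else if err != nil {
	return reconcile.Result{}, err
}

// 2. Make sure the replica count matches spec.count from the manifest.
if *found.Spec.Replicas != instance.Spec.Count {
	found.Spec.Replicas = &instance.Spec.Count
	if err := r.client.Update(context.TODO(), found); err != nil {
		return reconcile.Result{}, err
	}
}

// 3. List the pods matching our label, then push the pod names and
// the group from the spec into the status.
podList := &corev1.PodList{}
if err := r.client.List(context.TODO(), podList,
	client.InNamespace(instance.Namespace),
	client.MatchingLabels(labelsForExampleKind(instance.Name))); err != nil {
	return reconcile.Result{}, err
}
podNames := getPodNames(podList.Items)
if !reflect.DeepEqual(podNames, instance.Status.PodNames) || instance.Status.AppGroup != instance.Spec.Group {
	instance.Status.PodNames = podNames
	instance.Status.AppGroup = instance.Spec.Group
	if err := r.client.Status().Update(context.TODO(), instance); err != nil {
		return reconcile.Result{}, err
	}
}
return reconcile.Result{}, nil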

Deploy your Operator and Custom Resource

We could run the example-operator as Go code locally outside the cluster, but here we are going to run it inside the cluster as its own Deployment, alongside the Examplekind apps it will watch and reconcile.

9. Kubernetes needs to know about your Examplekind Custom Resource Definition before creating instances, so go ahead and apply it to the cluster.

kubectl create -f deploy/crds/example_v1alpha1_examplekind_crd.yaml

10. Check to see that the custom resource definition is deployed.

kubectl get crd

11. We will need to build the example-operator as an image and push it to a repository. For simplicity, we’ll create a public repository on your account on dockerhub.com.

  a. Go to https://hub.docker.com/ and login

  b. Click Create Repository

  c. Leave the namespace as your username

  d. Enter the repository as “example-operator”

  e. Leave the visibility as Public.

  f. Click Create

12. Build the example-operator.

operator-sdk build [Dockerhub username]/example-operator:v0.0.1

13. Push the image to your repository on Dockerhub (this command may require logging in with your credentials).

docker push [Dockerhub username]/example-operator:v0.0.1

14. Open up the deploy/operator.yaml file that was generated during the build. This is a manifest that will run your example-operator as a Deployment in Kubernetes. We need to change the image so it is the same as the one we just pushed.

  a. Find image: REPLACE_IMAGE

  b. Replace with image: [Dockerhub username]/example-operator:v0.0.1

15. Set up Role-based Authentication for the example-operator by applying the RBAC manifests that were previously generated.

kubectl create -f deploy/service_account.yaml

kubectl create -f deploy/role.yaml

kubectl create -f deploy/role_binding.yaml

16. Deploy the example-operator.

kubectl create -f deploy/operator.yaml

17. Check to see that the example-operator is up and running.

kubectl get deploy

18. Now we’ll deploy several instances of the Examplekind app for our operator to watch. Open up the deploy/crds/example_v1alpha1_examplekind_cr.yaml deployment manifest. Update fields so they appear as below, with name, count, group, image and port. Notice we are adding fields that we defined in the spec struct of our pkg/apis/example/v1alpha1/examplekind_types.go.

apiVersion: "example.kenzan.com/v1alpha1"
kind: "Examplekind"
metadata:
  name: "kenzan-example"
spec:
  count: 3
  group: Demo-App
  image: nginx
  port: 80

19. Apply the Examplekind app deployment.

kubectl apply -f deploy/crds/example_v1alpha1_examplekind_cr.yaml

20. Check that an instance of the Examplekind object exists in Kubernetes.  

kubectl get Examplekind

21. Let’s describe the Examplekind object to see if our status now shows as expected.

kubectl describe Examplekind kenzan-example

Note that the Status describes the AppGroup the instances are a part of (“Demo-App”), as well as enumerates the Podnames.

22. Within the deploy/crds/example_v1alpha1_examplekind_cr.yaml, change the count to be 1 pod. Apply the deployment again.

kubectl apply -f deploy/crds/example_v1alpha1_examplekind_cr.yaml

23. Based on the operator reconciling against the spec, you should now see a single pod running for kenzan-example.

kubectl describe Examplekind kenzan-example

Well done. You’ve successfully created an example-operator and become familiar with all the pieces and parts needed in the process. You may even have a few ideas about which stateful applications you could automate the management of for your organization, moving away from manual intervention. Take a look at the following links to build on your Operator knowledge:

Toye Idowu is a Platform Engineer at Kenzan Media.

Introducing the Interactive Deep Learning Landscape

The artificial intelligence (AI), deep learning (DL) and machine learning (ML) space is changing rapidly, with new projects and companies launching, existing ones growing, expanding and consolidating. More companies are also releasing their internal AI, ML, DL efforts under open source licenses to leverage the power of collaborative development, benefit from the innovation multiplier effect of open source, and provide faster, more agile development and accelerated time to market.

To make sense of it all and keep up to date on an ongoing basis, the LF Deep Learning Foundation has created an interactive Deep Learning Landscape, based on the Cloud Native Landscape pioneered by CNCF. This landscape is intended as a map to explore open source AI, ML, DL projects. It also showcases the member companies of the LF Deep Learning Foundation who contribute heavily to open source AI, ML and DL and bring in their own projects to be housed at the Foundation.

Read more at LF Deep Learning

 

Open Source’s Evolution in Cloud-Native DevOps

Open source tools continue to serve as the underlying cornerstone of cloud native DevOps patterns and practices — while they also continue to change and evolve.

Cloud native’s origins, of course, trace back to when Amazon and then Microsoft began to offer so-called cloud platforms, allowing organizations to take advantage of massive resources on networks of servers in their data centers worldwide. Heavy hitters Google and Alibaba followed their lead, laying the groundwork for when, more recently, Netflix and Pivotal began to describe so-called cloud native architectures.

Netflix has been very transparent about its reliance on its large suite of open source stacks built for its massive video streaming service, thanks largely to what the Cloud Native Computing Foundation [CNCF] has made available, along with Kubernetes and microservices architectures built on cloud native platforms. Additionally, about a decade after it was first introduced as a concept, DevOps has helped to set in motion the team culture fostering the development pipelines and workflows for the industry shift to cloud native deployments. …

DevOps’ deployments on cloud native tools and libraries obviously hinge on what DevOps teams think work best for their workflows. But in today’s new stack context, this era of open source and collaboration has created an explosion of possibilities. 

Read more at The New Stack

3 Aging IT Specialties that Just Won’t Retire

If your organization has a point-of-sale system running on technology that is older than you care to admit in public, well, you’re not alone.

Baby Boomers are retiring and taking with them the skills to run legacy technologies upon which organizations still (amazingly) rely – from AS/400 wrangling to COBOL development. That leaves many CIOs in a tight spot, trying to fill roles that not only require specialized knowledge no longer being taught but that most IT professionals agree also have limited long-term prospects. “Specific skill sets associated with mainframes, DB2 and Oracle, for example, are complex and require years of training, and can be challenging to find in young talent,” says Graig Paglieri, president of Randstad Technologies.

Let’s examine three categories of legacy tech skills that CIOs may still need for the foreseeable future, according to IT leaders and recruiters:

Read more at EnterprisersProject

Best VPNs for Linux

Linux-based operating systems are still a very small part of the desktop market, but that hasn’t stopped VPN services from providing client applications. The best we’ve found are from ExpressVPN, NordVPN, and VPN Unlimited.

Eight of the VPN services we’ve reviewed have either command-line-interface (CLI) or graphical-user-interface (GUI) client software for major Linux distributions such as Ubuntu, Mint and Red Hat.

The CLIs were just as easy to use as the GUIs, but we’ve still divided them into separate categories because Linux newbies may prefer windows and buttons over typed commands. Our top recommendation blends the two types of interfaces to get the best of both worlds.

Read more at Tom’s Guide

5 Screen Recorders for the Linux Desktop

There are so many reasons why you might need to record your Linux desktop. The two most important are for training and for support. If you are training users, a video recording of the desktop can go a long way to help them understand what you are trying to impart. Conversely, if you’re having trouble with one aspect of your Linux desktop, recording a video of the shenanigans could mean the difference between solving the problem and not. But what tools are available for the task? Fortunately, for every Linux user (regardless of desktop), there are options available. I want to highlight five of my favorite screen recorders for the Linux desktop. Among these five, you are certain to find one that perfectly meets your needs. I will only be focusing on those screen recorders that save as video. What video format you prefer may or may not dictate which tool you select.

And, without further ado, let’s get on with the list.

Simple Screen Recorder

I’m starting out with my go-to screen recorder. I use Simple Screen Recorder on a daily basis, and it never lets me down. This particular take on the screen recorder is available for nearly every flavor of Linux and is, as the name implies, very simple to use. With Simple Screen Recorder you can select a single window, a portion of the screen, or the entire screen to record. One of the best features of Simple Screen Recorder is the ability to save profiles (Figure 1), which allows you to configure the input for a recording (including scaling, frame rate, width, height, left edge and top edge spacing, and more). By saving profiles, you can easily use a specific profile to meet a unique need, without having to go through the customization every time. This is handy for those who do a lot of screen recording, with different input variables for specific jobs.

Figure 1: Simple Screen Recorder input profile window.

Simple Screen Recorder also:

  • Records audio input

  • Allows you to pause and resume recording

  • Offers a preview during recording

  • Allows for the selection of video containers and codecs

  • Adds timestamp to file name (optional)

  • Includes hotkey recording and sound notifications

  • Works well on slower machines

  • And much more

Simple Screen Recorder is one of the most reliable screen recording tools I have found for the Linux desktop. Simple Screen Recorder can be installed from the standard repositories on many desktops, or via the easy-to-follow instructions on the application download page.

Gtk-recordmydesktop

The next entry, gtk-recordmydesktop, doesn’t give you nearly the options found in Simple Screen Recorder, but it does offer a command line component (for those who prefer not working with a GUI). The simplicity that comes along with this tool also means you are limited to a specific video output format (.ogv). That doesn’t mean gtk-recordmydesktop is without appeal. In fact, there are a few features that make this option in the genre fairly appealing. First and foremost, it’s very simple to use. Second, the record window automatically gets out of your way while you record (as opposed to Simple Screen Recorder, where you need to minimize the recording window when recording full screen). Another feature found in gtk-recordmydesktop is the ability to have the recording follow the mouse (Figure 2).

Figure 2: Some of the options for gtk-recordmydesktop.

Unfortunately, the follow-the-mouse feature doesn’t always work as expected, so chances are you’ll be using the tool without this interesting option. In fact, if you opt to go the gtk-recordmydesktop route, you should understand that the GUI frontend isn’t nearly as reliable as the command line version of the tool. From the command line, you could record a specific region of the screen like so:

recordmydesktop -x X_POS -y Y_POS --width WIDTH --height HEIGHT -o FILENAME.ogv

where:

  • X_POS is the offset on the X axis

  • Y_POS is the offset on the Y axis

  • WIDTH is the width of the screen to be recorded

  • HEIGHT is the height of the screen to be recorded

  • FILENAME is the name of the file to be saved
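For example, to capture a 1280×720 region anchored at the top-left corner of the screen and save it to demo.ogv:

recordmydesktop -x 0 -y 0 --width 1280 --height 720 -o demo.ogv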

To find out more about the command line options, issue the command man recordmydesktop and read through the manual page.

Kazam

If you’re looking for a bit more than just a recorded screencast, you might want to give Kazam a go. Not only can you record a standard screen video (with the usual—albeit limited amount of—bells and whistles), you can also take screenshots and even broadcast video to YouTube Live (Figure 3).

Figure 3: Setting up YouTube Live broadcasting in Kazam.

Kazam falls in line with gtk-recordmydesktop, when it comes to features. In other words, it’s slightly limited in what it can do. However, that doesn’t mean you shouldn’t give Kazam a go. In fact, Kazam might be one of the best screen recorders out there for new Linux users, as this app is pretty much point and click all the way. But if you’re looking for serious bells and whistles, look away.

The version of Kazam, with broadcast goodness, can be found in the following repository:

ppa:sylvain-pineau/kazam

For Ubuntu (and Ubuntu-based distributions), install with the following commands:

sudo apt-add-repository ppa:sylvain-pineau/kazam

sudo apt-get update

sudo apt-get install kazam -y

Vokoscreen

The Vokoscreen recording app is for new-ish users who need more options. Not only can you configure the output format and the video/audio codecs, you can also configure it to work with a webcam (Figure 4).

Figure 4: Configuring a web cam for a Vokoscreen screen recording.

As with most every screen recording tool, Vokoscreen allows you to specify what on your screen to record. You can record the full screen (even selecting which display on multi-display setups), window, or area. Vokoscreen also allows you to select a magnification level (200×200, 400×200, or 600×200). The magnification level makes for a great tool to highlight a specific section of the screen (the magnification window follows your mouse).

Like all the other tools, Vokoscreen can be installed from the standard repositories or cloned from its GitHub repository.

OBS Studio

For many, OBS Studio will be considered the mack daddy of all screen recording tools. Why? Because OBS Studio is as much a broadcasting tool as it is a desktop recording tool. With OBS Studio, you can broadcast to YouTube, Smashcast, Mixer.com, DailyMotion, Facebook Live, Restream.io, LiveEdu.tv, Twitter, and more.  In fact, OBS Studio should seriously be considered the de facto standard for live broadcasting the Linux desktop.

Upon installation (the software is only officially supported for Ubuntu Linux 14.04 and newer), you will be asked to walk through an auto-configuration wizard, where you set up your streaming service (Figure 5). This is, of course, optional; however, if you’re using OBS Studio, chances are this is exactly why, so you won’t want to skip configuring your default stream.

Figure 5: Configuring your streaming service for OBS Studio.

I will warn you: OBS Studio isn’t exactly for the faint of heart. Plan on spending a good amount of time getting the streaming service up and running and getting up to speed with the tool. But for anyone needing such a solution for the Linux desktop, OBS Studio is what you want. Oh … it can also record your desktop screencast and save it locally.

There’s More Where That Came From

This is a short list of screen recording solutions for Linux. Although there are plenty more where this came from, you should be able to fill all your desktop recording needs with one of these five apps.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

How Docker Engine Works to Enable Containers

The modern container revolution started with Docker and its eponymous Docker Engine. Docker Engine is the runtime and tooling that enables container applications, defined by a Dockerfile, to run on top of a host operating system in an isolated “container” section.

“We are here because of Docker Engine,” Manik Taneja, Sr. Product Manager at Docker, said in a session at the DockerCon Europe conference.

The Docker Engine in 2018 isn’t the same technology it was when Docker first started. Rather, it has evolved significantly in recent years and is now based on the containerd container runtime at the core.

So what actually is the modern Docker Engine architecture?

Read more at ServerWatch

The Case for Data-Driven Open Source Development

The lack of standardized metrics, datasets, methodologies and tools for extracting insights from Open Source projects is real.

Open Source Metrics That Actually Matter

Let’s take a look at the first part of the problem: the metrics. OSS project stakeholders simply don’t have the data to make informed decisions, identify trends and forecast issues before they arise. Providing standardized and language-agnostic data to all stakeholders is essential for the success of the Open Source industry as a whole. Everyone can query the API of large code repositories such as GitHub and GitLab and find interesting metrics, but this approach has limitations. The data points you can pull are not always available, complete or structured properly. There are also some public datasets such as GH Archive, but these datasets are not optimized for exhaustive querying of commits, issues, PRs, reviews and comments across a large set of distributed git repositories.

Retrieving source code from a mono-repository is an easier task, but code retrieval at scale is a pain point for researchers, Open Source maintainers or managers who want to track individual or team contributions. 

Read more at The New Stack

Create a Fully Automated Light and Music Show for the Holidays: Part 1

This tutorial series from our archives explains how to build a fully automated holiday display with Raspberry Pi. 

Christmas has always been one of my favorite festivals, and this year it’s special because I’m planning a massive project to decorate my house using open source projects. There will be WiFi-controlled lights and a music show, there will be a talking moose singing Christmas carols (powered by Raspberry Pi, Arduino, and some servo motors), there will be a magical musical Christmas tree, and much more.

I built a music-light show for Halloween, but I improved it and added more features as I worked on the Christmas project. In this series, I’ll provide comprehensive instructions to build a fully automated Christmas music/light show that turns on automatically at a given time or that you can plug and play.

Caveat: This project involves working with 110v A/C, so take on this project only if you have experience with high voltage and understand the necessary safety precautions.

I spent weeks finding just the right parts (listed below) to create my setup. You can use your own creativity when selecting the parts that you need.

What you need:

  1. Raspberry Pi 3

  2. Micro SD card 8GB

  3. 5v charger (2 units)

  4. Male to female breadboard jumper wires

  5. 1 power cable

  6. 8-channel solid state relay

  7. Four gang device box

  8. Duplex receptacle (4 pack)

  9. Receptacle wall plate

  10. Single core wire (red, white & black)

  11. Wood board

  12. Push switch

  13. Soldering iron

Get started with Pi

We need to install an operating system on our Pi, and we will be using Raspbian. First, let’s prepare the Micro SD card for Raspbian. Plug the card into your PC and open the Terminal; we are going to format the Micro SD card as FAT32.

Run the lsblk command to list the block devices so you can get the block device name of the Micro SD card:

lsblk

In my case, it was mmcblk0. Once you have the block device name, run the parted command as sudo:

sudo parted /dev/mmcblk0

Once you are inside the parted utility, you will notice a (parted) prompt on the command line. Now create the partition table:

mklabel msdos

Then, create one partition:

mkpart primary fat32 1MiB 100%

And exit the parted utility:

quit

Again run the lsblk command to find the name of the partition that you just created:

lsblk

In my case, the partition on the ‘mmcblk0’ block device was ‘mmcblk0p1’. We are going to format this partition with the FAT32 file system:

sudo mkfs.vfat /dev/mmcblk0p1

Our Micro SD card is ready. Let’s download the zip file of the official image of NOOBS from this page. Then unzip the contents of the downloaded zip file into the root of the Micro SD card. First, change directory to the Micro SD card:

cd path_of_microsd_card

unzip path_of_noobs_zip_file.zip

Open the Micro SD card in a file manager to make sure that all files are in the root folder of the card.

Prepare your Pi

Connect an HDMI monitor, keyboard and mouse to the Pi. Plug in the Micro SD card and then connect the 5V power supply. NOOBS will boot up to an OS selection screen.


Select Raspbian from the list and click on the install button. The system will reboot after successful installation. Once you boot into Raspbian, open the wireless settings from the top bar and connect to your wireless.

We will be using our Raspberry Pi in headless mode so that we can manage the Christmas music show remotely from a PC, laptop, or mobile device. Before enabling SSH, however, we need to know the IP address of our Pi so that we can log into it remotely. Open the Terminal app and run the following command:

ifconfig

Note down the IP address listed under ‘wlan0’.

Once you have the IP address, open the configuration file of Raspbian by running the following command in the Terminal:

sudo raspi-config

Go to Advanced > SSH and select ‘Yes’ to enable SSH server.

(Note: use the arrow and enter keys to navigate and select; the mouse won’t work here)


We will also change audio settings to get the audio output through the 3.5mm audio jack instead of HDMI. In Advanced Options, go to Audio and select the second option ‘Force 3.5mm (‘headphone’) jack’, then select ‘Ok’.


Select ‘Finish’ in the main window and then reboot the system.

sudo reboot

You can now unplug the HDMI monitor as we will do the rest of the installation and configuration over ssh. Open terminal app on your PC or laptop and then ssh into the Pi:

ssh pi@IP_ADDRESS_OF_PI

In my case it was:

ssh pi@10.0.0.33

Then enter the password for the Pi: ‘raspberry’.

This is the default password for the pi user; if you want to change it, you can do so via ‘raspi-config’.

Now it’s time to update your system:

sudo apt-get update
sudo apt-get dist-upgrade

It’s always a good idea to reboot your system if there are any kernel updates:

sudo reboot

In the next article, I’ll show how to set up the light show portion of our project, and in part 3, we’ll wrap it all up with some sound.

For 5 more fun projects for the Raspberry Pi 3, including a holiday light display and Minecraft Server, download the free E-book today!

Read about other Raspberry Pi projects:

5 Fun Raspberry Pi Projects: Getting Started

How to Build a Minecraft Server with Raspberry Pi 3

Build Your Own Netflix and Pandora With Raspberry Pi 3

Turn Raspberry Pi 3 Into a Powerful Media Player With RasPlex

How (and Why) to Get Ready for 802.11ax

802.11ax will dominate the Wi-Fi-infrastructure landscape. Here’s why–and what you need to do to get ready for 802.11ax.

With the steady advances in wireless LAN (WLAN) technologies during the past two decades–remember, the first 802.11 standard of 1997 specified just 1 and 2 Mbps throughput–it’s perhaps getting tiresome that we’re still having to deal with yet another new physical-layer standard. This time around it’s the soon-to-be-official 802.11ax, now called Wi-Fi 6 by the Wi-Fi Alliance. After all, we’re seeing per-station throughput on the order of 1 Gbps with Wave 2 of 802.11ac (now a.k.a. Wi-Fi 5) and appropriate clients. So it’s fair to ask if we really need more–so much so that another rip-and-replace of access-point (AP) infrastructure should in fact be in the plans. Answering that question is precisely our mission here.

Read more at ITPro Today