In this article we will show how to create a database (also known as a schema) and tables (with data types), and explain how to perform Data Manipulation Language (DML) operations with data on a MySQL / MariaDB server.
It is assumed that you have previously 1) installed the necessary packages on your Linux system, and 2) executed mysql_secure_installation to improve the database server’s security. If not, follow the guides below to install a MySQL/MariaDB server.
When it comes to infrastructure for your open source project, you are never done, said Amye Scavarda, Gluster Community Lead at Red Hat, and Nigel Babu, Gluster CI/Automation Engineer at Red Hat. One theme during their LinuxCon Europe talk, “Making More Open: Creating Open Source Infrastructure for Your Open Source Project,” is that you can get closer to being done, but there is no such thing as “done” when it comes to infrastructure. Momentum is important – things are always moving, changing, and evolving. The work never ends as you figure out what can be left behind, what should be upgraded and how you can move into the future to incorporate new technologies.
Amye and Nigel talked about how when you start an open source project, you tend to focus on shipping and releasing your code. You don’t necessarily worry too much about how you got there and what you did to get it shipped. In the early days of Gluster, almost everyone had root access to the build machine, since it was only a few people working closely together. Fast forward a few years now that Red Hat has acquired Gluster, and there are many people across a wide variety of time zones working on the project. How to manage communication across a large, growing open source project became a big challenge.
When Nigel first started working on the project, he talked to many people to find the pain points and prioritized his time to work on the issues that were causing the most pain for others. He also had quite a few thoughts about, “How is this even working? It looks like it shouldn’t work.” In some cases, there was no real access control, and people were using some shared accounts. While this may be fine for a small team, it can go horribly wrong as the project continues to grow. In the early days of Gluster, there were plenty of firefighters working to solve immediate issues based on tribal knowledge, but there was very little documentation, which was making it hard to debug problems for new project members. It also created a culture of too many quick fixes, like restarting services regularly, instead of understanding why they failed and fixing the root cause. Rather than just talking to someone about each issue, they are encouraging people to file a bug, instead. With bug reports, you’ll be able to see the patterns and track issues over time.
Amye talked about the “Church of the Shaven Yak” where there is so much to do, but sometimes little progress being made. You pick a piece of the yak to shave every day, and you continue to make good progress, but while you are shaving one piece of the yak, the hair is growing back in another area. Nigel compares this to a marathon where you continue to run, but sometimes those miles seem to pass more slowly, and you don’t feel like you are making progress.
Much of this sounds a bit critical, but Amye stressed that you should be careful not to be too hard on the people who came before you. You should remember that the people working in the project when it was smaller were working in a different environment. They made the choices that seemed right for the project at the time, and the people who come into the project later when the project has continued to evolve probably won’t like the way you did some things. You will meet resistance because you are shaking up people’s worlds, and introducing new things and new processes, but sometimes you need to shake things up as your project grows and evolves.
I have this book — a Spanish edition of Stephen Coffin’s seminal manual, Unix System V Release 4: The Complete Reference. You can open it on any of its 700+ pages and bet your bottom dollar that the commands on the page will work in a modern-day Linux. Well, except where teletypes and tape storage are involved.
Said like that, you may think the *NIX command line hasn’t changed a lot since the early 1990s. This is not entirely true.
Take for instance the moreutils collection. You can install it on most distros with your regular package manager. On Ubuntu and Ubuntu-based distros, do
sudo apt install moreutils
On Debian, the following will do the trick:
sudo apt-get install moreutils
On Fedora, you can do:
sudo dnf install moreutils
OpenSUSE requires one more step of adding a specific repository, or you could simply visit openSUSE’s online package search and use the 1 Click Install service.
Shiny new tools
Moreutils provides you with a set of new tools that are not part of the standard Linux set but probably should be. For example, vidir provides an editor interface for modifying files and directories. Despite its name, vidir uses the default command-line editor, so if you have vi(m), sure, it will show a vim interface (Figure 1); but if you have nano or emacs configured, it’ll show a nano or emacs interface (Figure 2).
Figure 1: The vidir utility will show the interface of your default shell editor, be it vi(m)…
Figure 2: …or nano, or something else entirely.
You can edit whole directories with:
vidir directoryname
Or just a subset of files with, for example:
find Pictures -iname "*.png" | vidir -
Notice the “-“. This tells vidir to take its input from a pipe.
You use your regular key combinations to modify your directories and files. If you’re using a vi-like interface, press i to modify directory and file names; press d[number]d to delete files or directories — note that vidir has a security feature built in, which won’t let you erase non-empty directories; press u to undo changes, and so on.
Soaking it all up: Sponge
According to its manpage, sponge reads standard input and writes it out to the specified file. Unlike a shell redirect, sponge soaks up all its input before writing the output file.
Now, that’s useful. To appreciate how much, try this: Create a plain-text file containing the names of the knights of the round table, or the days of the week, or whatever. Basically any file with a list of items *not* in alphabetical order:
Arthur
Lancelot
Gawain
Geraint
Percival
Bors the Younger
Lamorak
Kay
Gareth
Bedivere
Gaheris
Galahad
Tristan
Save it as knights.txt.
If you wanted to sort the names alphabetically, what would you do? You’d probably try something like this:

sort knights.txt > knights.txt

But that can’t work: the shell would start overwriting the file before sort had finished reading from it, ruining it in the process. So, you need to mess around with an intermediate file, knights_sorted.txt:

sort knights.txt > knights_sorted.txt
rm knights.txt
mv knights_sorted.txt knights.txt

You have that original, unsorted file hanging around, which you have to erase before renaming the sorted file, hence the long, unwieldy chain of instructions. With sponge, however, you can do this:
sort knights.txt | sponge knights.txt
Check it out: no intermediate file!
cat knights.txt
Arthur
Bedivere
Bors the Younger
Gaheris
Galahad
Gareth
Gawain
Geraint
Kay
Lamorak
Lancelot
Percival
Tristan
Thanks to sponge, you can grab the content from a text file and run all the chained processes on it using things like sort, uniq, sed, grep, and tr. Sponge will soak it all up, wait until all the lines have been processed, and then wring it all out to the same file, all in one blast.
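The behavior is simple enough to sketch in a few lines of Python — an illustration of the idea only, not the real implementation (the real sponge is written in C and buffers large inputs to a temporary file):

```python
import sys

def sponge(path, stream=sys.stdin):
    # Soak up *all* of the input first...
    data = stream.read()
    # ...and only then open the output file, so reading and
    # writing the same file never overlap.
    with open(path, "w") as out:
        out.write(data)
```

Because the output file is opened only after the input is exhausted, `sort knights.txt | sponge knights.txt` is safe where a plain `>` redirect would truncate the file before sort ever read it.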
Talking of soaking, let’s discuss pee. Despite its name, pee has *nothing* to do with bodily fluids. In fact, the name is a combination of pipe (as in the way you pass the output from one command onto another in *NIX systems) and tee (as in the tee *NIX command line instruction).
While tee re-routes the output from a command to files (cat knights.txt | tee k1 k2 k3 creates files k1, k2, and k3 containing the content cat‘ed from knights.txt), pee pipes the output into a list of commands:

cat knights.txt | pee sort "wc -l" "grep ^G"

In the example above, using the output from the original, unordered knights.txt, you pipe it first to sort to get an ordered list; then to wc (word count), which, with the -l option, returns the number of lines (13); and finally to grep, which uses a simple regular expression to print out only the lines that start with a capital “G”.
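Under the hood, pee simply reads everything once and hands a copy to each command in its list. A rough Python sketch of that behavior (not the real implementation, which is written in C):

```python
import subprocess
import sys

def pee(commands, data=None):
    # Soak up stdin once, then feed an identical copy of it
    # to each shell command in turn.
    if data is None:
        data = sys.stdin.buffer.read()
    for cmd in commands:
        proc = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE)
        proc.communicate(data)
```

Each command gets the full input, so the outputs appear one after another, just as with the real pee.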
Pipe dream
Getting back to editors but staying with pipes that push stuff hither and thither, you must try vipe. Again, this is a twist on your default shell editor. Plunk it into your chain of piped commands and it will open up with the output from the prior instructions. For example:

cat knights.txt | pee sort "wc -l" "grep ^G" | vipe

will show all the outputs we saw in the previous example in (in my case) a vi-like editor. You can now edit the output to your heart’s content, removing, adding, and modifying lines. When you’re done, save and quit, and your edited output will be passed on to the next command in the chain.
Pretty cool, no?
Moreutils has more…
Moreutils comes with many more goodies. The combine utility merges the lines from two files using Boolean operations; ts adds a handy, human-readable timestamp to each line taken from an input; ifdata makes extracting data from a network interface super easy — very useful for scripting; and so on. Check out the project’s web page and the man pages for each command to see how they all work.
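Some of these are tiny ideas at heart. The job ts does, for instance, boils down to prefixing each line as it arrives — here is a rough Python equivalent, purely to illustrate (the real ts also accepts strftime-style format strings as an argument):

```python
import time

def ts(lines, fmt="%b %d %H:%M:%S"):
    # Prefix every incoming line with a human-readable timestamp,
    # the way `somecommand | ts` does on the shell.
    for line in lines:
        yield time.strftime(fmt) + " " + line
```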
Although it is true you could emulate many of the behaviors of these new commands with a bit of command-line fu, tools like sponge, pee, and vipe just make working on the shell easier and more of a pleasure.
The moreutils package is always evolving, and new stuff gets added from time to time. I, for one, am excited to see what gets included next.
The schism between two Arduino companies (that we covered in March 2015) has apparently been settled. The poster child for the open hardware movement is now under one company “Arduino Holding” and a new not-for-profit Arduino Foundation has been started. “Massimo Banzi, Co-Founder of Arduino LLC, commented, ‘Today is one of the best days in Arduino history. This allows us to start a new course for Arduino made of constructive dialogue and disruptive innovation in the education, Makers and IoT fields….’”
I have been meaning to write this post for a long time, but one thing or another has gotten in the way. It’s important to me to provide an accurate history, definition, and proper usage of the Pets vs Cattle meme so that everyone can understand why it was successful and how it’s still vital as a tool for driving understanding of cloud. The meme has taken off because it helped create an understanding of the “old way” vs. the “new way” of doing things. That’s great, but the value of the meme becomes muddied when misused. We can all agree there’s enough muddy terminology and phraseology already, such as “cloud,” “hybrid,” and “DevOps”. So this post aims to set the record straight and assure a canonical history that everyone can reference and use.
Over the course of the last year, Walmart.com — a site that handles 80 million monthly visitors and offers 15 million items for sale — migrated to React and Node.js. In the process of this transition, the WalmartLabs team built Electrode, a React-based application platform to power Walmart.com. It’s now open sourcing this platform.
Electrode provides developers with boilerplate code to build universal React apps that consist of a number of standalone modules that developers can choose to add more functionality to their Node apps. These include a tool for managing the configuration of Node.js apps, for example, as well as a React component that helps you render above-the-fold content faster.
In the last part of this series, we explored how the concept of volumes brings persistence to containers. This article builds upon the understanding of volumes to introduce persistent volumes and claims, which form the robust storage infrastructure of Kubernetes.
To appreciate how Kubernetes manages storage pools that provide persistence to applications, we need to understand the architecture and the workflow related to application deployment.
Kubernetes is used in various roles — by developers, system administrators, operations, and DevOps teams. Each of these personas, if you will, interacts with the infrastructure in a distinct way. The system administration team is responsible for configuring the physical infrastructure for running the Kubernetes cluster.
Open source today is not just about the products and technologies that companies use, but rather a whole rainbow of practices that have penetrated corporate culture beyond the engineering department.
I heard some of the best examples of this during a discussion for data industry leaders at the forefront of open source software innovation this summer. The event was co-hosted by EnterpriseDB (EDB) and MIT Technology Review. We shared our experiences of data transformation with Postgres, NoSQL, and other solutions, and really learned a lot from each other.
These discussions have been part of a series that culminates at Postgres Vision on October 11-13 in San Francisco. Steve Wozniak, Jim Zemlin of The Linux Foundation, and industry and government leaders will be there to talk about the future of open source.
The Git distributed revision control system is splendid in a multitude of ways but one — keeping a good history of patches and commits. Pish tosh, you say, for Git remembers everything! Yes, it does, until you rebase. To solve this problem, Josh Triplett built a new tool called git-series, which he described in his talk at ContainerCon North America.
In his entertaining talk, Triplett, ChromeOS Architect at Intel, goes into a good level of detail on why he wrote git-series and how it works. First, why is it even necessary? Can’t you just merge everything and never rebase? You can, though, as Triplett says, “You’re actually going to go back and rewrite history as though you had done it that way to begin with. The reason you do that is so that then when you send the stack of patches, that stack of patches as merged into the public history will look like a reasonable series of development changes that make sense. As opposed to seeing a pull request that says, ‘Implement the feature, fix the thing I just implemented. Fix it some more. Maybe it’ll work this time.’ You don’t want to see that in your public history even though that is what you actually did.”
It is reasonable for a project to require a clean public history, and you can have as mucky a private history as you like. But Git is all about working with other people. Triplett describes a typical scenario: “Development proceeds on from there and maybe you have to do a v3, v4, v5 but what happened to version 1? Did you save a copy of it anywhere? Do you still have it? Somebody makes an offhand comment, ‘I liked the way you did this in v1 better.’ They want to see the history of development. The real history of development and not the lovely curated history.”
So then you’re digging through emails, git reflog or branching your project until it looks like hallucinogenic spaghetti. “You might have a branch named feature v1, which was probably named feature and then you renamed it feature v1 when you realized you needed a v2. Then you have a v3 with that typo fix, and a v8 rebased on top of 4.6 with Alice’s fix incorporated. We have a version control system. We should be past this,” Triplett said.
Git-series is an elegant solution that tracks a patch series and its evolution through your project, and it works seamlessly with all Git features. According to Triplett, “Git-series tracks the history of a patch series, how you’ve changed it through non-fast-forwarding changes. You can rewrite history and it will keep track of what the old history looked like, including a commit message telling you what you were doing. It tracks a cover letter so that you can version that over time. It tracks the base that you started your series from to make it easy to rebase this.”
Watch Triplett’s presentation (below), which includes a live demo of git-series in action.
ROS is an open source framework allowing you to create advanced robots. Using ROS takes much of the tedious work out of creating useful robots because it supplies code for navigation, arm manipulation, and other common robot tasks. ROS allows various software components to communicate between one or more computers and microcontrollers, and it allows you to control one or more robots on a network from a desktop, web browser, and/or other input device. Although ROS stands for Robot Operating System, it is really a framework that sits on top of an existing operating system such as GNU/Linux. Packages are provided for Ubuntu Linux to help get your robot up and rolling.
The more ambitious your robot design becomes, the more ROS will be able to help you. For example, with ROS you can take a robot beyond manual control with a joystick and tell the robot to make its own way into the kitchen. The difference in complexity from the former to a robot that can create and use maps and avoid obstacles along the way is quite substantial. For example, joystick control of a robot can be set up fairly quickly just using an Arduino. For autonomous movement, ROS has map creation, depth map handling, and robot localization already available so you can use higher level “go to this place” commands.
A high-level overview
ROS provides support for a publish and subscribe message model using a namespace like a filesystem. A program can register one or more ROS nodes and these nodes can publish and subscribe to topics that are interesting to them. For example, you might have a ROS node that reads a USB camera and publishes the images to the “/camera” topic for the rest of your robot to enjoy. A small Arduino might subscribe to messages on “/clawpincer” and adjust the position of your robot claw based on messages that are sent to it. This separation of processing into nodes which send and receive messages on topics allows you to connect together specialized nodes to form an entire robot. The message passing helps to keep your nodes separate. A node might just display information on an LED screen without needing to know anything about the rest of your robot (Figure 1).
Figure 1: A node can display information on an LED screen.
Messages sent to topics can use basic types like integers, floating point numbers, times, durations, strings, and multidimensional arrays as well as some robotics specific types for example setting the desired drive speeds(s) and direction(s). You can also define your own custom message types.
A complex robot is likely to run many nodes, and starting things up in the right order can be a complex task in itself. ROS uses launch XML files to describe how and what needs to be started. A launch file can also include other launch files, so you can create a single command that will start your motor controller, cameras, navigation and mapping stack, displays, custom radio control software, etc.
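A minimal launch file might look something like this (the package and node names here are made up, purely to show the shape of the format):

```xml
<launch>
  <!-- Bring up the motor controller and a camera together. -->
  <node pkg="my_robot_base" type="motor_node" name="motors" />
  <node pkg="my_robot_camera" type="camera_node" name="camera" />
  <!-- Pull in another launch file, e.g. the navigation stack. -->
  <include file="$(find my_robot_nav)/launch/nav.launch" />
</launch>
```

Running roslaunch on this file starts every listed node, and the include line chains in further launch files, which is how a single command can bring up an entire robot.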
The ROS MoveIt! software lets your robot use one or more arms to manipulate objects. MoveIt! integrates with ROS, detecting objects which might be temporarily blocking the most direct path that an arm might have otherwise taken to move to a given location.
A ROS node can be written in either C++ or Python. A partial example of publishing a message to a topic in ROS is shown below. The NodeHandle can be reused to send multiple messages; in this case, we are sending a single string to a topic that is specified using the template parameter to advertise(). Instead of passing a std::string to publish(), a ROS std_msgs type is passed.
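A minimal publisher along these lines might look like the following (a sketch modeled on the standard ROS talker tutorial; the node and topic names are illustrative, and a running ROS master is assumed):

```cpp
#include <ros/ros.h>
#include <std_msgs/String.h>

int main(int argc, char **argv)
{
    ros::init(argc, argv, "talker");
    ros::NodeHandle n;  // can be reused to advertise many topics

    // The template parameter tells advertise() the message type.
    ros::Publisher pub = n.advertise<std_msgs::String>("chatter", 1000);

    std_msgs::String msg;      // a std_msgs type, not a plain std::string
    msg.data = "hello world";
    pub.publish(msg);

    ros::spinOnce();
    return 0;
}
```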
Part of a Python program that listens on the chatter topic is shown below. As you can see, the basic type is accessed through the “.data” element much as in the C++ publisher shown above.
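A sketch of such a listener, modeled on the standard ROS (rospy) tutorial, might look like this (node and topic names are illustrative, and a running ROS master is assumed):

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import String

def callback(msg):
    # The payload string lives in the .data field of the message,
    # just as in the C++ publisher above.
    rospy.loginfo("I heard %s", msg.data)

def listener():
    rospy.init_node('listener')
    rospy.Subscriber('chatter', String, callback)
    rospy.spin()  # keep the node alive, processing callbacks

if __name__ == '__main__':
    listener()
```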
It is very useful for your robot to present a web interface offering both information and remote control. By starting the rosbridge_websocket package, you can send and receive ROS messages from JavaScript in the browser.
The following fragments set up a “ros” object for communication and, when a bootstrap form is completed, will send a message to the “/screen/textbig” topic so that the robot shows a given string to you. Although this example is simply showing text on the robot, you can also use sliders to alter the position of your robot arm or set waypoints in the web interface to have the robot move around.
var ros = new ROSLIB.Ros({
  url : 'ws://192.168.2.3:9090'
});
var topic_screen_text_big = new ROSLIB.Topic({
  ros : ros,
  name : '/screen/textbig',
  messageType : 'std_msgs/String'
});
var screen_showBigText = function() {
  var txt = $('#screen-textbig').val();
  topic_screen_text_big.publish(
    new ROSLIB.Message({ data: txt })
  );
}
// ...
<form class="form-inline" onsubmit="screen_showBigText()" action="#">
  <div class="row">
    <div class="col-md-2"><label>BIG Text</label></div>
    <div class="col-md-4"><input type="text" class="form-control" placeholder="" id="screen-textbig" /></div>
    <div class="col-md-1"><button type="submit" class="btn btn-default">Submit</button></div>
  </div>
</form>
When starting out in robotics, it might be tempting to dismiss robot simulators. Simulators are great for folks who don’t have the real robot; but if you have the robot, why would you bother simulating it? Some things might be seen as a cross-over between simulation and reality. For example, when building a map, you are taking data from a camera or lidar device telling you how far things are away from your real robot in the real world. You can then mark that in your map and move your real robot around a bit and take another reading of how far things are away in the real world. You might think of the map that you are building as a model or “simulation” of the real world, which is affected by data that is acquired from the real world (your camera or lidar). Another example might be that you want to see how an arm movement will look on screen before performing it in the real world. So, the line between robotic simulation and the real robot can become a grey area.
ROS has support for simulation using Gazebo and a robot visualization tool called rviz, which lets you see your robot, its map, where the robot thinks it is located, and other data that is sent around through ROS topics.
You will often want to know exactly where something on your robot is relative to the real world. Is the camera located at ground level or 2 feet above the ground? You’ll need to know if the arm is at the front or the back of the robot to work out how far you extend the arm to pick something up. ROS provides the TF framework so you can describe in XML the layout of your robot and then easily find out where things are located without having to perform complex calculations in your own code.
Moving a robot is done by publishing a Twist message to the “/cmd_vel” topic. The Twist message is rather generic and allows a speed and heading to be given for up to three axes. For a robot that operates by turning two wheels, you will only need to set a single speed and a single angle or heading. To provide feedback about movement, a robot base will publish Odometry information, which contains information about the current twist the robot is following and the pose of the robot. The pose allows a robot to show what direction it is facing as it is moving — handy for robots that can move sideways as well as backward and forward. It is also very useful to know if the robot is facing the door or has just entered through it.
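As a sketch of what that looks like in code (assuming rospy, a running ROS master, and a base listening on /cmd_vel), publishing a Twist from Python could be as simple as:

```python
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('driver')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)

cmd = Twist()
cmd.linear.x = 0.2    # forward speed, m/s
cmd.angular.z = 0.5   # turn rate, rad/s; a two-wheeled base
                      # only needs these two fields
pub.publish(cmd)
```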
Driving with no hands
For a robot to move to a desired destination by itself, many things are likely to be needed. A map of the walls and obstacles in the environment is needed, for example. Other requirements include knowledge of where the robot is on that map, some method to detect objects that block the path but that are not always on the map, a way to generate a plan to get to the destination from the current location, and a means to monitor exactly where the robot is as it moves towards the goal position. Being able to send messages to the robot base telling it what speed and heading to follow and then to monitor the odometry information as the robot moves allows control of the robot to be abstracted from how motion is achieved.
One fairly affordable method to build maps is using an “RGBD” camera, such as the Kinect, which offers both color and depth information in each image. Another way to work out depth information is by using two cameras that are a known distance apart, such as with a PlayStation camera or creating a similar setup using two normal web cameras in a fixed location. The Kinect is designed for indoor use in gaming and does not work well outside where there is a lot of background infrared light. Using two cameras can work both inside and outside but also requires light in order to see objects.
ROS has support for depth information from both the Kinect and PS4 eye cameras. For the latter, you will also need to resolder the PS4 eye cable to obtain a USB3 connection to it. Although I have seen successful modifications like this, you should be prepared to possibly damage or destroy some of your hardware if you undertake them.
Although cameras can provide information about how far objects are away in three dimensions, you might like to start navigating around by converting the information from the camera into a 2D representation. This is much less computationally intense, and ROS has good support for converting information from a Kinect to a “laser scan,” where the depth information is converted into a 2D representation of how far away objects are from the robot. The laser scan is then used by the gmapping package to generate a map of the environment. The Adaptive Monte Carlo Localization (AMCL) package can use the current laser scan and a rough idea of where the robot started to determine where the robot currently is located on a map. As the robot moves around a little bit, the initial location estimate is improved because more depth information from the real world helps work out the position of the robot relative to the map.
Final words
ROS is a very powerful robotics platform. That said, it does have a fairly steep learning curve. Some key tutorials would help ease new users into creating fully functional robots. For example, detailed instructions for the creation of an extremely cheap robot arm complete with a ROS package to drive it would provide a great base for customization for the robot arm you might have on your desk. It is often much simpler to customize the arm segment lengths in your robot arm model from an existing software package than to start from scratch.
On the other hand, ROS does allow a determined hobbyist to create a robot with mapping and navigation and be able to talk from JavaScript through to Arduino code running on one of many specific hardware controller boards on a robot.