In this series of four tutorials, Ben Martin provides step-by-step instructions, code, and troubleshooting tips to take you through the process of building, powering, connecting, and controlling your own Mantis robot mounted with a Raspberry Pi.
Martin explains in detail how to attach motors, mount a RoboClaw motor controller, and then connect it to your Raspberry Pi through a USB port. He provides a program for controlling the robot with the keyboard and then extends the program so you can use a PS3 joystick to control your robotic creation.
Together, the tips and instructions found in these articles can help you build a powerful robot base that can be further modified to suit your needs.
Editor’s Note: This article is paid for by IBM as a Diamond-level sponsor of Apache Big Data, held May 9-12 in Vancouver, B.C., and was written by Linux.com.
Graphs model relationships between objects, and they have been used in mathematics and mechanical computing for roughly 100 years. Graph representations appeared much earlier than that as maps of physical networks such as nomadic trade routes. That’s because a graph is the best way to visualize information for fast and reliable human consumption. But it was the advent of the Internet and the Web that spurred sophisticated graph use in representing increasingly complex data and networks. Now, in this big data age, graph theory – in computing, analytics, and transactions – remains highly popular, and there’s good reason for that.
We talked with Alaa Mahmoud, master inventor for IBM Cloud Data Services and IBM’s lead for IBM Graph, to explore how such an old concept is so uniquely fitted to modern data storage, exploration, and mining – and to discover a few tips for using all things graph, too.
Linux.com: As a quick insight into how graph theory is used today, can you give us a modern example?
Alaa Mahmoud: Graph databases are a great modern example. These are increasingly popular NoSQL databases that store data as vertices or nodes connected by edges, rather than storing the data in tabular form, meaning in rows and columns.
A property graph adds even more information than a basic graph database, because it enables you to add and store properties, i.e. any number of key/value pairs associated with the data, on both vertices and edges. This lets you easily see more details in data relationships.
Both intrinsic properties – those computed from the graph’s structure – and extrinsic properties – those not derived from the structure – can be added to the graph for increasingly complex analysis. To clarify the distinction, extrinsic properties are values attached to an edge that are not contained in the graph’s structure itself.
For example, on a graph of national sales data in a given time period, properties including regional and local sales figures, top selling products, and top buyers of the top products – plus any other related information – can be added to the edges of the graph, which both renders more insights and provides the ability to do more complex querying of the data.
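To make the sales example a bit more concrete, here is a minimal sketch of a property graph built from plain Python dictionaries. It is not tied to IBM Graph or any particular database, and the vertex, edge, and property names are hypothetical; the point is simply that both vertices and edges carry arbitrary key/value properties.

```python
# A minimal, illustrative property graph for the sales example above,
# sketched with plain Python dictionaries rather than a real graph database.
# Vertices and edges each carry arbitrary key/value properties.

vertices = {
    "p1": {"label": "product", "name": "Widget X", "category": "hardware"},
    "c1": {"label": "customer", "name": "Acme Corp", "region": "Northeast"},
}

edges = [
    {
        "from": "c1",
        "to": "p1",
        "label": "bought",
        # Extrinsic properties: facts attached to the relationship itself,
        # not derivable from the graph's structure.
        "quantity": 120,
        "quarter": "Q1",
        "regional_sales_rank": 3,
    },
]

def purchases_in(region):
    """Yield (customer, product, quantity) for purchases made from a region."""
    for e in edges:
        if e["label"] == "bought" and vertices[e["from"]]["region"] == region:
            yield (vertices[e["from"]]["name"], vertices[e["to"]]["name"], e["quantity"])

print(list(purchases_in("Northeast")))
```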
Linux.com: The graph concept has been around a long time and has been used in many ways. In the last few years, it’s seen a resurgence in popularity. Why do you think that is?
Mahmoud: Today, everyone is talking about how to make sense of all the data we have. Graphs are uniquely suited to making sense of all the relationships in data.
This is because data relationships are naturally modeled as a graph, so storing data as a graph eliminates the middle layer of work in transforming the data from the model to storage. Reducing the complexity and removing steps adds efficiency, and that’s appealing.
Using a graph query language – usually Gremlin, developed by Apache TinkerPop – makes querying the data more natural and lets us create more complex queries in less time than tabular or other formats and databases allow.
This means developers can write queries in a language, such as Java, that is the same as or similar to the code they already write, which makes the work faster and easier.
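To give a flavor of what such a traversal looks like in code, here is a small sketch using Apache TinkerPop’s gremlinpython driver. The server URL, labels, and property names are illustrative assumptions, not taken from any specific deployment.

```python
# A sketch of a Gremlin traversal from Python, using Apache TinkerPop's
# gremlinpython driver. The endpoint, labels, and property names below are
# illustrative assumptions.
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

conn = DriverRemoteConnection("ws://localhost:8182/gremlin", "g")
g = traversal().withRemote(conn)

# "Which products did customers in the Northeast buy?" expressed as a
# traversal rather than a join-heavy tabular query.
products = (
    g.V().has("customer", "region", "Northeast")
         .out("bought")
         .values("name")
         .toList()
)
print(products)

conn.close()
```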
Using graphs also reduces complexity because it’s a much easier, more natural way of thinking about data and the relationships within it. People don’t generally think of data relationships in spreadsheet form. So why do the additional work of shoehorning data into rows and columns if it only slows the work and complicates or restricts the analysis?
For all those reasons, using graph databases is very popular now and growing more so.
Linux.com: What about using a graph database in the cloud? Are the advantages similar to using the cloud overall, or are there extra advantages there for developers?
Mahmoud: Using a graph database in the cloud reduces the cost and complexity barriers for developers. And here’s how…
In an on-premises configuration, a graph database is built on top of other technologies, and that means a lot of components to work with. And in the end, it may or may not scale the way you need it to. Further, working with so many pieces is cost-prohibitive for many developers.
But with a cloud-based graph database, developers get an API and credentials and they’re ready to go. Because the API is RESTful, developers can use it on any computing platform and in any application that can make an HTTP request.
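As a rough sketch of the “any application that can make an HTTP request” point, the snippet below posts a Gremlin query to a cloud graph service over HTTP. The URL, credentials, payload shape, and response format are hypothetical placeholders, not the documented IBM Graph API.

```python
# Rough sketch: sending a Gremlin query to a cloud graph service over a
# RESTful API. Endpoint, credentials, and response shape are placeholders.
import requests

API_URL = "https://example-graph-service.com/g/gremlin"  # placeholder endpoint
CREDS = ("apiKeyId", "apiKeySecret")                      # placeholder credentials

query = {"gremlin": "g.V().has('customer','region','Northeast').out('bought').values('name')"}

resp = requests.post(API_URL, json=query, auth=CREDS, timeout=30)
resp.raise_for_status()
print(resp.json())
```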
With a cloud solution like IBM Graph, your users get high availability and a database that automatically scales as the data grows. Developers get security features that scale as the data and infrastructure expand, security for data at rest and in motion, and the latest patches applied automatically.
But perhaps the greatest advantage is that the user has a consistent experience even though the service under the hood is upgraded frequently. Having a consistent experience means developers’ work isn’t interrupted or delayed by the need to continuously modify their code to match the technology they are running on top of.
Linux.com: Certainly this big data-fueled age of large databases has added momentum to all things graph. Are there any areas or use cases where you think graph structures really outperform?
Mahmoud: The most dominant of the typical use cases is social networks – representing people and their relationships as graphs. Not only is this a more natural way of thinking about these relationships, but it provides a way to add information on the edges that helps better define those relationships and brings additional understanding.
Graphs are also popularly used in recommendation engines, such as in retail transactions, for example. It’s one thing to discover that a person buys a certain item, and quite another to also see at what price range they’re likely to buy it, and other related details surrounding the purchase, in order to more accurately predict their future purchases.
Graphs are also very popular in security analysis and fraud detection, as well as in data governance and compliance.
Linux.com: Circular dependencies can be a problem – for example, with the use of some modules in open source software engineering. Are circular dependencies a problem in graph analytics or transactional graph structures?
Mahmoud: Algorithms can take care of circular dependencies in databases. The problems that do exist are not specific to graph databases, but instead are typical of NoSQL databases generally. A lot depends on implementation.
Linux.com: What are the biggest advantages of using the graph database approach?
Mahmoud: One of its primary strengths is increased productivity. For example, as mentioned before, in getting data from the model to the database, there’s no middle step. And, you can query the data in the same way you think about the data. This increases productivity by reducing complexity.
Graph databases also reduce complexity in the data itself by supporting billions of edges between nodes and letting you traverse them. This too increases productivity, because you can see data relationships more readily and query them faster, with increasingly complex queries.
Linux.com: Graph computing is a way of thinking as much as it is a set of tools. How does thinking in terms of graphs, processes and traversals make you stronger or better at finding the right outputs for improved decision-making?
Mahmoud: People in general are trained or inclined to think in terms of entities and relationships. Anything you can think of can be represented in a graph. And, we’re all used to seeing relational information presented in graphs.
So, it’s not so much a new way of thinking as it is supportive of the way we already think.
Linux.com: Got any tips to share in changing the thinking or extruding more from your data using graph anything?
Mahmoud: The turning point for us, and for a lot of people, was and is cloud-based graph databases, and graph everything really. Graph databases are powerful and love complex interconnected data.
To get started, just jump in with a cloud-based solution like IBM Graph. Hands-on is the best way to get familiar with it.
Linux.com: Anything else you’d like to add to this conversation that we haven’t mentioned yet?
Mahmoud: Using graph anything is more fun than you think, and a very natural way of doing what you’re trying to do in the first place – finding and understanding relationships between entities. It’s a natural way of modeling, a natural way of thinking, and a natural way of discovering your data. There’s a reason graph databases are so popular and getting more popular every day. Give it a try and you’ll see exactly why that is.
Try out IBM Graph, IBM’s enterprise-grade property graph as a service, for free.
Popular open source automation server Jenkins has fixed multiple security vulnerabilities. The latest version changes how plug-ins use build parameters, though, so developers will need to adapt to the new process.
The vulnerabilities affect all previous releases, including the mainline releases up to and including 2.2, and LTS releases up to and including 1.651.1. Administrators should update their Jenkins installations to mainline release Jenkins 2.3 or LTS 1.651.2.
Virtualization technologies such as software-defined networking (SDN) and network functions virtualization (NFV) offer new opportunities for how data centers can manage their IT infrastructures. As networks become more programmable, enterprise data centers can achieve greater agility. SDN and NFV are influencing the convergence of IT, data center, and telecommunications. They give data center managers the flexibility and scalability to anticipate changing market demands and stay ahead of customer expectations.
Embracing SDN & NFV to Optimize Enterprise Data Center Operations
SDxCentral defines software-defined networking (SDN) as a way to manage networks that separates the control plane from the forwarding plane. SDN is a complementary approach to network functions virtualization (NFV) for network management. While both manage networks, they rely on different methods. A Gartner report indicates that by 2017, 10 percent of customer appliances will be virtualized, up from today’s 1 percent. Industry analysts are forecasting that more network traffic will be virtualized over the next five years.
Gary Olliffe, a research director at Gartner, published an insightful post titled “Microservices: Building Services with the Guts on the Outside” that nails how the microservices architectural pattern deals with system complexity. In his post, Gary describes how, in a microservices-style application, each service is designed to be as simple as possible to maximize developer productivity. However, the complexity has to go somewhere, and with the microservices approach, this complexity is pushed outside of individual microservices into a common layer of services.
So if the complexity is pushed outside of the application, who deals with it? Obviously there needs to be some layer that handles the common services, i.e., “the plumbing” required for microservices. There are two emerging trends in how this new layer of platform services is delivered:
One of Linux’s beauties is that you can control almost everything about it. This gives a system administrator great control over the system and better utilization of its resources. While some might…
The National Institute of Standards and Technology is getting nervous about quantum computers and what they might mean for the cryptographic systems that protect both public and private data. Once seen as far off — if not borderline science fiction — quantum computing now seems a much closer reality.
A few days ago, IBM announced that students, researchers, and “general science enthusiasts” can now use a cloud-based, 5-qubit quantum computing platform, which it calls the IBM Quantum Experience, to see how algorithms and various experiments work with a quantum processor.
IBM sees its approach to quantum computers as a first draft of how a universal quantum computer, which can be programmed to perform any computing task, will eventually be built. Getting people to experiment with this early quantum processor will, it clearly hopes, give it pointers on how to proceed in building quantum applications.
Greg Kroah-Hartman is a superstar in the open source world. He is a Linux Foundation Fellow and the maintainer of the stable branch of the Linux kernel, the staging subsystem, USB, Linux drivers, userspace I/O, TTY layer…the list of his work is quite long. He was also the creator of openSUSE Tumbleweed, a rolling release distribution.
He is one of the friendliest faces of the Linux kernel community; always ready to talk and help. I met him again at CoreOS Fest this week in Berlin. (But only after he was done releasing the next Linux kernel 4.6 release candidate while everyone else was listening to the first day’s keynote.) We talked about Linux, security, the upcoming 4.6 release planned for May 15, and more. Below is an edited version of the discussion.
Linux.com: What is the hard truth about Linux kernel security that many people don’t want to hear?
Greg Kroah-Hartman: You have to be able to run a system that can upgrade itself. Lots of people think that if they stick with one kernel and nothing changes, it’s good. That’s not true. We’re fixing about ten bugs in the kernel every day. Not all of them are security issues, but sometimes the big problem is we don’t know whether an issue is a security issue or not.
There’s an infamous bug with a fix in the TTY layer. We made the fix and merged it, and everything was fine. Three years later, somebody realized, ‘Hey! I can use this and get root!’ All of a sudden, this bug that we had fixed years ago had to be backported to really old enterprise kernels, because if you were running those, you had a root exploit that had been around a long time. We have a really hard time, when we fix bugs, getting those bug fixes to users. That’s a hard problem.
I’ll pick on Android the opposite way. My phone is running a 3.10-based kernel. That was a long-term stable kernel, but they never updated it. There are some well-known, easy ways to get root on my phone – which is great, because I like getting root on my phone – but those holes have already been fixed upstream. Fixes are pushed publicly, but the phones aren’t being updated. We have to be able to update them. That’s something that you really have to do.
Linux.com: What is the possible solution for Linux so that users can easily keep it updated?
Kroah-Hartman: You have to design your system so it can update itself. The Chrome OS team, and then the CoreOS team, adopted the same mentality. You have two system images. You update one, and once you know it works, you switch over to it. You have to be able to update it in a secure way. This technology has been proven. It’s solved. People just need to use it and build it into their systems. The kernel is not going to go around updating itself on its own. It’s up to the infrastructure you build for your product.
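To make the dual-image idea concrete, here is a highly simplified sketch of an A/B update flow in Python. The slot paths, the verification step, and the boot-flag handling are placeholders, not the actual Chrome OS or CoreOS implementation, which relies on dedicated partitions, signed images, and bootloader support.

```python
# Highly simplified sketch of an A/B ("dual system image") update flow.
# Paths and verification details are placeholders for illustration only.
import hashlib

SLOTS = {"A": "/dev/disk/slot_a", "B": "/dev/disk/slot_b"}  # placeholder devices

def write_image(slot_dev: str, image: bytes) -> None:
    # Write the new OS image to the currently inactive slot.
    with open(slot_dev, "wb") as dev:
        dev.write(image)

def verify(slot_dev: str, expected_sha256: str) -> bool:
    # Check the image's integrity before switching boot slots.
    with open(slot_dev, "rb") as dev:
        return hashlib.sha256(dev.read()).hexdigest() == expected_sha256

def apply_update(active: str, image: bytes, expected_sha256: str) -> str:
    inactive = "B" if active == "A" else "A"
    write_image(SLOTS[inactive], image)
    if not verify(SLOTS[inactive], expected_sha256):
        return active       # verification failed: keep booting the known-good slot
    return inactive         # mark the freshly updated slot active; reboot into it
```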
If you make a product with Linux – or any piece of software – and you can’t update it, it’s dead. The environment changes; as the joke goes, “The only thing that’s constant is change.”
Many companies and countries are switching to Linux. Japan is switching a lot of its infrastructure to Linux. All the power plants and all the streetlights are going to be running Linux, and they want to support that for 20 to 30 years. They’re building in the ability to securely update those systems from the very beginning. They know they’re going to have to do that. That’s good. That’s the way you need to do it. The products are out there. The base technology for doing this has long been proven. You just need to do it. Even Android can do it. They just need to spend the time to actually push their updates out and pay attention to it.
Linux.com: There is this mentality in the server space that once you install and set up your server and it’s working, don’t touch it…
Kroah-Hartman: That mentality works really well when you have a server that doesn’t talk to anybody else. But in the real world, you have to talk to somebody else. If you’re going to take a server, put it over in a corner, never change anything on it, and never have it talk to the world, great. That’s fine. That’s a static environment. Once you get into a dynamic environment, you have to be able to update. People need to embrace change. They need to get over that fear that change doesn’t work.
Linux runs the world. Our rate of change is unheard of. We’re merging almost eight changes an hour, 24 hours a day, into the kernel. It’s one thing to take the bug fixes – ten bug fixes a day – but you also need to take advantage of the new features. We’re adding new features for security reasons. We’re adding airbags to the kernel.
The new release that came out May 15 adds write protection to kernel data structures. If a bug happens where you would normally be able to overwrite a portion of memory, now, with the added protections in place, you aren’t allowed to do that, so the bug doesn’t cause any additional “harm.” All of a sudden we took out a whole class of bugs that could have turned into exploits. If you don’t update your kernel to a new one, and you’re just taking bug fixes, you’re not going to get that new feature. You need to be able to embrace that and update for these new features at the same time.
Linux.com: How do you reverse that mentality that change is bad, and encourage software vendors to embrace it?
Kroah-Hartman: Most vendors know that today. Red Hat, SUSE, and Canonical all offer these services to their users. They all offer the ability to update on the fly, as CoreOS does. It’s there. They’re pushing out the security updates. People need to use them and take advantage of them. The big problem is Linux out on the Internet. Those systems need to be built in an easy way so they can be updated instantly. They just have to do it.
Linux.com: What are The Linux Foundation and the kernel community doing to address security issues?
Kroah-Hartman: The Core Infrastructure Initiative is a great thing from The Linux Foundation. Lots of companies around the world are sponsoring it, and helping improve internet security. They are also sponsoring kernel security work. At the Kernel Summit last year, Kees Cook did a presentation about all the things we need to do better.
One of the things we need to do better is add airbags for the kernel. Konstantin Ryabitsev gave a big presentation a year ago at the Linux Security Summit and said that we need ways to protect ourselves. There’s a whole bunch of things out there that we need to do better for the kernel. Some of them make things harder for the developers of the kernel. We need to be able to accept that, make that change, and move on, because the real users of the kernel are not the developers of the kernel.
That happened at the Kernel Summit last year. Since then, we’ve been going through and doing a lot of work. We have people working on a lot of things: taking bits and pieces of GRSec, the large security patch set, and merging them into the kernel as needed, along with some other work. CII is helping fund that. A number of the developers working on it are being funded by the companies they work for, such as Google, Red Hat, and Intel. They are doing a lot of work to improve kernel security.
Editor’s Note: This article has been updated to clarify security changes in the 4.6 release.
Most people think of network disaggregation as separating hardware from software, but the story goes deeper than that. While hardware and software separation are a big part of the SDN concept, there is also disaggregation of network switch ASICs. There are five switch ASIC manufacturers in the market and each product has different strengths and weaknesses. These ASICs represent the final “black box” that must be opened before we can truly achieve disaggregation.
The concept of software-defined networking is often described as based on open, interoperable systems that can be customized for each application, and that deliver services through policy-driven architecture. With a policy-driven network, for example, service providers can implement self-service customer portals where customers can dial up the desired amount of bandwidth.
One way to implement SDN is by unlocking several “black boxes” that previously limited networking to a one-size-fits-all approach obtained from vendors like Arista, Cisco and Juniper. Opening each “black box” results in giving customers more choice about how they tune the network for their specific applications.
The first “black box” was the hardware. Once available only from big-name hardware vendors, servers and switches became available directly from the same original design manufacturers (ODMs) who supplied the big vendors. Suddenly vendors like Accton, Agema, and Quanta began offering “white box” servers and switches with lower prices.
Editor’s Note: This article is paid for by IBM as a Diamond-level sponsor of ApacheCon North America, and written by Linux.com.
Connectors make all our lives easier. In the case of the Spark-Cloudant connector, using Spark analytics on data stored in Cloudant is simplified with the easy-to-use syntax of the connector. With the large Spark ecosystem, one can now conduct federated analytics across Cloudant and other disparate data sources. And we all know that the days of analyzing just your own company data are long gone. Piping in more data is essential these days.
We talked with Mike Breslin, offering manager at IBM Cloud Data Services, who focuses on IBM Cloudant, to explore details on the Spark-Cloudant connector. And, to discover a few tips on using it too.
Linux.com: What’s the one thing you want practitioners to know about the Cloudant-Spark connector? What advantages does it bring in real-world practice?
Mike Breslin: It’s an open source connector built by the IBM Cloudant team. It consists of easy-to-use APIs so you can leverage Spark analytics on your Cloudant data.
Linux.com: Where can we find the connector and how can it be used?
Breslin: The spark-cloudant connector is pre-loaded into the Apache Spark-as-a-Service offering on IBM Bluemix, or it can be used in stand-alone Spark instances. Download it from the Spark Packages site or GitHub if you want to use it standalone, and include it in your environment variables. It’s available for anyone’s use under the Apache 2.0 license. As is common with most things Spark, it’s available for Python and Scala applications.
The connector just takes a few seconds to download; it isn’t a very big piece of software. Powerful, but not big.
Linux.com: What’s the quickest way to start analyzing your Cloudant data in Spark?
Breslin: The best way to get started is to just jump right in. But you might want to check out the tutorials and walk-throughs first.
Linux.com: The connector offers several helpful capabilities. Which do you find the most useful yourself and why?
Breslin: The integration means leveraging fast Spark analytics on your Cloudant data for ad hoc querying and advanced analytics. You can load whole databases into a Spark cluster for analysis.
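As a rough sketch of what loading a Cloudant database into Spark looks like with the connector’s Python API, consider the snippet below. The account, credentials, and database name are placeholders, and the exact options should be checked against the connector’s README; when running outside Bluemix, the connector would typically be pulled in with spark-submit’s --packages flag using the coordinates listed on the Spark Packages site.

```python
# Sketch: load a whole Cloudant database into a Spark DataFrame and run an
# ad hoc SQL query over it. Account, credentials, and database name are
# placeholders.
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="cloudant-spark-sketch")
sqlContext = SQLContext(sc)

orders = (sqlContext.read.format("com.cloudant.spark")
          .option("cloudant.host", "ACCOUNT.cloudant.com")   # placeholder account
          .option("cloudant.username", "USERNAME")           # placeholder credentials
          .option("cloudant.password", "PASSWORD")
          .load("orders"))                                    # placeholder database name

# Register the documents as a temporary table for ad hoc Spark SQL queries.
orders.registerTempTable("orders")
top = sqlContext.sql(
    "SELECT product, COUNT(*) AS n FROM orders GROUP BY product ORDER BY n DESC")
top.show()
```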
Spark supports federated analytics, so you can use disparate data sources, and one of those sources can be Cloudant. Because Spark has a variety of connection capabilities, you can use it to conduct federated analytics over Cloudant, the dashDB data warehouse, Object Storage, and other disparate data sources.
You can also transform the data and then write it back to Cloudant to store it there.
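Continuing the sketch above, writing transformed results back to Cloudant would look roughly like the following. The write syntax mirrors the read side and the target database name is a placeholder; confirm the exact write options supported by your connector version against its README.

```python
# Rough sketch of writing a transformed DataFrame back to Cloudant.
# Assumes `top` is the DataFrame from the previous example and that the
# target database already exists; options are illustrative assumptions.
(top.write.format("com.cloudant.spark")
    .option("cloudant.host", "ACCOUNT.cloudant.com")   # placeholder account
    .option("cloudant.username", "USERNAME")           # placeholder credentials
    .option("cloudant.password", "PASSWORD")
    .save("product_counts"))                            # placeholder database name
```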
Linux.com: As you know, time is of the essence in this type of work. Got any tips to share on making the work or outputs faster or better?
Breslin: If you’re not using Spark already, you’ll likely find it faster and easier to use Spark-as-a-Service. If you’re new to Spark, I recommend checking out the Spark Fundamentals classes on Big Data University and the tutorials on IBM developerWorks.
As for familiarizing yourself with the connector, I’d suggest you check out the README on GitHub and the video tutorials on our Learning Center showing how to use the connector in both a Scala and Python notebook.