
The Open-Source Driving Simulator That Trains Autonomous Vehicles

Self-driving cars are set to revolutionize transport systems the world over. If the hype is to be believed, entirely autonomous vehicles are about to hit the open road.

The truth is more complex. The most advanced self-driving technologies work only in an extremely limited set of environments and weather conditions. And while most new cars will have some form of driver assistance in the coming years, autonomous cars that drive in all conditions without human oversight are still many years away.

One of the main problems is that it is hard to train vehicles to cope in all situations. And the most challenging situations are often the rarest. There is a huge variety of tricky circumstances that drivers rarely come across: a child running into the road, a vehicle driving on the wrong side of the street, an accident immediately ahead, and so on.

Read more at Technology Review

What Can The Philosophy of Unix Teach Us About Security?

In some sense, I see security philosophy gradually going the way of the Unix philosophy. More specifically, within the areas of security operations and incident response, I believe that this transition has been underway for quite some time. What do I mean by this?  Allow me to elaborate.

Whether the security team is in-house at a large enterprise or part of a managed services offering, the trend seems to be the same. Security teams have given up on building their workflow around a small number of “silver bullets” that claim to solve most of their problems. Instead, most security teams have started to go about it the other way. They build the workflow that works for their particular organization, based on their priorities and objectives. Then they turn their attention to finding solutions that address particular needs within the workflow.

Read more at Security Week

5 New & Powerful Dell Linux Machines You Can Buy Right Now

The land of powerful PCs and workstations isn’t barren anymore when it comes to Linux-powered machines; all of the world’s top 500 supercomputers now run Linux.

Dell has joined hands with Canonical to give Linux-powered machines a push in the market. They have launched five new Canonical-certified workstations running Ubuntu Linux out of the box as part of the Dell Precision series. An advantage of buying these Canonical-certified machines is that users won’t have to worry about incompatibility with Linux.

Check out the specifications of these Dell Linux machines:

Read more at FOSSbytes

5 Coolest Linux Terminal Emulators

Sure, we can get by with boring old GNOME Terminal, Konsole, and funny, rickety, old xterm. When you’re in the mood to try something new, however, take a look at these five cool and useful Linux terminal emulators.

Xiki

Number one on my hit parade is Xiki. Xiki is the brainchild of Craig Muth, talented programmer and funny man (funny as in humorous, and possibly in other senses of the word as well). I wrote about Xiki some time ago in Meet Xiki, the Revolutionary Command Shell for Linux and Mac OS X. Xiki is much more than yet another terminal emulator; it’s an interactive environment for expanding the reach and speed of your command-line interface.

Xiki has mouse support and runs in most command shells. It has tons of on-screen help and is fast to navigate with the keyboard or mouse. One simple example of its speed is how it turbocharges the ls command. Xiki zooms through multiple levels in your filesystem without having to continually re-type ls or cd, or resort to clever regular expressions.

Xiki integrates with many text editors, provides a persistent scratchpad, has a fast search engine, and, as they say, much much more. Xiki is so featureful and so different that the fastest way to wrap your head around it is to watch Craig’s funny and informative videos.

Cool Retro Term

I dig Cool Retro Term for its looks, and also its usefulness. It takes us back to the era of cathode ray tube monitors, which wasn’t all that long ago, and which I have zero nostalgia for. Pry my LCD screens from my cold dead etc. It is based on Konsole, so it has Konsole’s excellent functionality. Change Cool Retro Term’s appearance from the Profiles menu. Profiles include Amber, Green, Pixelated, Apple ][, and Transparent Green, and all include a realistic scanline. Not all of them are usable; the Vintage profile, for example, warps and flickers realistically like a dying screen.

Cool Retro Term’s GitHub repository has detailed installation instructions, and Ubuntu users have the PPA.

Sakura

When you want a nice lightweight and configurable terminal, try Sakura (Figure 1). It has few dependencies, unlike GNOME Terminal and Konsole, which drag in big chunks of GNOME and KDE. Most options are configurable from the right-click menu, such as tab labels, colors, size, default number of tabs, fonts, bell, and cursor type. You can set more options, for example keybindings, in your personal configuration file, ~/.config/sakura/sakura.conf.

Figure 1: Sakura is a nice, lightweight, configurable terminal.

Command-line options are detailed in man sakura. Use these to launch Sakura from the command line, or use them in your graphical launcher. For example, this opens Sakura with four tabs and sets the window title to MyWindowTitle:

$ sakura -t MyWindowTitle -n 4

Terminology

Terminology comes from the lushly lovely world of the Enlightenment graphical environment and can be prettified all you want (Figure 2). It has a lot of useful features: independent split windows, open files and URLs, file icons, tabs, and gobs more. It even runs in the Linux console, without a graphical environment.

Figure 2: Terminology can run in the Linux console, without a graphical environment.

When you have multiple split windows, each one can have a different background, and backgrounds can be any media file: images, video, or music. It comes with a bundle of dark themes and transparency, because who needs readability, and even a Nyan cat theme. There are no scroll bars, so navigate up and down with Shift+PageUp and Shift+PageDown.

There are multiple controls: a right-click menu, context dialogs, and command-line options. The right-click menu has the tiniest fonts in the universe, and Miniview displays a microscopic file tree. If there are options to make these readable, I did not find them. When you have multiple tabs open, click the little tab browser to open a chooser that scrolls up and down. Everything is configurable; consult man terminology for a list of commands and options, including a nice batch of fast keyboard shortcuts. Strangely, this list does not include the following commands, which I found by accident:

  • tyalpha
  • tybg
  • tycat
  • tyls
  • typop
  • tyq

Use the tybg [filename] command to set a background, and tybg with no options to remove the background. Run typop [filename] to open files. tyls lists files in icon view. Run any of these commands with the -h option to learn what they do. Even with the readability quirks, Terminology is fast, pretty, and useful.
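For example, assuming a photo at ~/Pictures/sunset.jpg and a file named notes.txt, a quick session inside a Terminology window might look like this:

$ tybg ~/Pictures/sunset.jpg    # set this terminal's background to an image
$ tybg                          # no argument: remove the background
$ typop notes.txt               # pop open a file
$ tyls                          # list the current directory in icon view
$ tycat -h                      # each ty* command describes itself with -h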

Tilda

There are several excellent drop-down terminal emulators, including Guake and Yakuake. Tilda (Figure 3) is one of the simplest and most lightweight. After you open Tilda, it stays open, and you display or hide it with a shortcut key. The tilde key is the default, and you can map any key you like. It’s always open and ready to work, but out of your way until you need it.

Figure 3: Tilda is one of the simplest and most lightweight terminal emulators.

Tilda has a nice complement of options, including default size and placement, appearance, keybindings, search bar, mouse hover, and tab bar. These are controlled with a right-click menu.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Cloud Native Storage: A Primer

At a recent technical forum, we debated what cloud native storage is, which led me to believe that this topic deserves deeper discussion and more clarity.

First, though, I want to define what cloud native applications are, as some may think that containerizing an application is enough to make it “cloud-native.” This is misleading and falls short of enabling the true benefits of cloud native applications, which have to do with elastic services and agile development. The following three attributes are the main benefits, without which we’re all missing the point:

  • Durability — services must sustain component failures
  • Elasticity — services and resources grow or shrink to meet demand
  • Continuity — versions are upgraded while the service is running
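To make these attributes concrete, here is one rough sketch of how each surfaces on Kubernetes, a common cloud native platform; the deployment and image names are hypothetical:

$ kubectl scale deployment/myapp --replicas=3   # durability: replicas ride out component failures
$ kubectl autoscale deployment/myapp --min=2 --max=10 --cpu-percent=80   # elasticity: capacity follows demand
$ kubectl set image deployment/myapp myapp=myapp:v2   # continuity: rolling upgrade while the service runs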

Read more at The New Stack

IT Disaster Recovery: Sysadmins vs. Natural Disasters

Businesses need to keep going even when faced with torrential flooding or earthquakes. Sysadmins who lived through Katrina, Sandy, and other disasters share real-world advice for anyone responsible for IT during an emergency.

When the lights flicker and the wind howls like a locomotive, it’s time to put your business continuity and disaster recovery plans into operation.

Too many sysadmins report that neither was in place when the storms came. That’s not surprising. In 2014, the Disaster Recovery Preparedness Council found that 73 percent of surveyed businesses worldwide didn’t have adequate disaster recovery plans.

“Adequate” is a key word. As a sysadmin on Reddit wrote in 2016, “Our disaster plan is a disaster. All our data is backed up to a storage area network [SAN] about 30 miles from here. We have no hardware to get it back online or have even our core servers up and running within a few days. We’re a $4 billion a year company that won’t spend a few $100K for proper equipment. Or even some servers at a data center. Our executive team said, ‘Meh what are the odds of anything happening’ when the hardware proposal was brought up.”

Read more at HPE

Top 10 Linux Tools

One of the benefits of using Linux on the desktop is that there’s no shortage of tools available for it. To further illustrate this point, I’m going to share what I consider to be the top 10 Linux tools.

This collection of Linux tools helps us in two distinct ways. It shows newer users that there are tools to do just about anything on Linux. And it reminds those of us who have used Linux for a number of years that tools for just about any task are indeed available.

Read more at Datamation

Moving API Security Testing into Dev/QA

Discussing API security and why we should care is a little bit like talking about eating our vegetables. We all know that eating our vegetables is good for our health, but how many of us actually do it? Application security is a little bit like that. It is essential for the health of our applications and our businesses, but striving for it is not nearly as interesting as building cool new application features. But we only have to look at recent news headlines to understand how important it is.

Traditionally, validating an application or API for security has been done at the end of the development process. This is inherently problematic, though. It’s usually too late in the process for discovered errors to be fixed: it may be too close to the release date to fix the problems, or the team might have moved on to other projects, or the architecture of the application might be inherently insecure.

In addition, services and applications today are released more often than ever, sometimes multiple times a day. This fast release cadence makes the traditional approach untenable.

Enter…Continuous Integration

To solve this problem, we will turn to a solution that the industry has been using to tackle software quality problems with accelerated release cycles – continuous integration. Continuous integration produces builds whenever new code is checked in, and validates the new code by running static analysis and unit tests for each build. If teams are sophisticated, they might even be creating and running automated functional tests using CI (perhaps not for every build, since functional tests typically take a long time to run, but at least at specified intervals like once a day).
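As a minimal sketch, such a pipeline might run the following on every check-in, with the longer functional suite on a schedule; the make targets are placeholders for whatever build, analysis, and test runners a team actually uses:

# On every check-in:
$ make build             # produce a build from the newly checked-in code
$ make static-analysis   # validate the build with static analysis
$ make unit-tests        # and with unit tests
# On a schedule (e.g., nightly), since functional tests take longer:
$ make functional-tests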

We can apply this same solution to automated security testing for our APIs by bringing penetration testing into our CI workflows. This will ensure that we test for security vulnerabilities sooner, and it will give us security regression tests that can catch new problems as soon as they are introduced. But we will need to be smart about it, since penetration testing is expensive and can take a long time to run. We must do it in a way that is scalable and sustainable.

Start with Functional Tests

I am assuming that our teams are already writing and running automated functional tests for our APIs. (If not, we need to start there and are not ready to consider automating our security testing.) If we are, then as part of our normal development and QA processes, we can identify a subset of those functional tests to prepare and run as security tests.
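For illustration, a functional API test at its simplest is a scripted call plus an assertion on the response. Everything in this sketch (endpoint, payload, field name) is hypothetical and merely stands in for a tool-managed scenario such as SOAtest's:

$ curl -fsS -H 'Content-Type: application/json' \
    -d '{"name": "Alice"}' https://qa.example.com/api/users \
    | grep -q '"id"' && echo PASS || echo FAIL   # assert the response contains an id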

Let me describe how this works using Parasoft SOAtest and its integration with Burp Suite, a popular penetration testing tool. To start, let’s assume we have a SOAtest scenario with 1 setup test that cleans the database, and 3 tests that make 3 different API calls.  We want to perform penetration testing for each of the 3 APIs that are being called in the scenario:

[Image: a SOAtest scenario with one database-cleanup setup test followed by three API-calling tests]

We will first prepare the scenario for security by adding a Burp Suite Analysis tool to each of the tests in the scenario, as shown below:

[Image: the same scenario with a Burp Suite Analysis tool added to each test]

We will then execute this scenario using SOAtest.  As each test executes, SOAtest will make the API call defined in the test and capture the request and response traffic. The Burp Suite Analysis Tool on each test will pass the traffic data to a separate running instance of the Burp Suite application, which will perform penetration testing on the API based on the API parameters it observes in the traffic data, using its own heuristics. The Burp Suite Analysis Tool will then take any errors found by Burp Suite and report them as errors within SOAtest, associated with the test that accessed the API. SOAtest results can then be further reported into DTP, Parasoft’s reporting and analytics dashboard, for additional reporting capabilities. See below for a representation of how this works:

[Diagram: SOAtest executes the tests and captures API traffic; the Burp Suite Analysis Tool passes the traffic to Burp Suite for penetration testing, and errors flow back through SOAtest into DTP]

Repurposing functional tests for use as security tests gives the following benefits:

  1. Since we are already writing functional tests, we can reuse work that has already been done, saving time and effort.

  2. To execute certain APIs, we might have to do some setup, like prepping the database or calling other APIs. If we start with functional tests that already work, this setup is already done.

  3. Typically, a penetration testing tool will report that a certain API call has a vulnerability, but it doesn’t give any context about the use case or requirement to which it is connected. Since we are using SOAtest to execute the test cases, the security vulnerabilities are reported in the context of a use case. When scenarios have been associated with requirements, we can now get additional business context about the impact of the security errors on the application.

  4. We have a test scenario that we can use to easily reproduce the error or to validate that it has been fixed.

Preparing Functional Tests for Use as Security Tests

There are a few things to consider when repurposing functional tests for use as penetration tests:

  1. We should maintain our functional test scenarios separately from our security test scenarios, and run them from separate test jobs. The main reason for this is that adding penetration testing to existing functional tests will likely serve to destabilize the functional tests. We need to select which functional test scenarios should be turned into automated security tests, and then make copies of the functional tests that will be maintained as separate security tests.

  2. We need to be selective in which tests we choose, since penetration testing is expensive; we need to maximize the attack surface of the API that is covered while minimizing the number of tests. We should consider the following:

    • Penetration testing tools analyze request/response traffic to understand which parameters in the request are available to be tested. We need to select functional tests that exercise all the parameters in each API, to ensure that every input to the API gets analyzed.
    • The number of scenarios needs to be manageable, so that the security test run is short enough to run at least once a day.
    • Within each scenario, we need to decide which API calls should be penetration tested. The same API may be referenced from multiple scenarios, and we don’t want to duplicate penetration testing on an API that is being tested in a different scenario. The Burp Suite Analysis Tool should only get added to the appropriate tests for the API(s) to be penetration tested.
  3. Our functional test scenarios may have setup or teardown sections for initialization or cleanup. These typically don’t need to be penetration tested.

  4. If the functional test has any parameterization, we should remove it. Penetration testing tools don’t need multiple sets of values for the same parameters to know what to test, and sending different sets of values just makes the test runs take longer due to duplicated testing.

  5. API functional tests will usually have assertions that validate the response from the service. When these tests are reused as security tests, the assertions can fail and become noise in the results, since in this context we only care about the security vulnerabilities that were found. We should remove all assertions. In my previous example, this would mean removing the JSON Assertor from Test 3.

  6. Some API calls add data to the database. When using a penetration testing tool against such APIs, the database can get bloated with information due to the number of attacks that the penetration testing tool directs at the API. In some cases, this can cause unexpected side effects. On one of our development teams, we discovered a performance issue in the application when a particular API added lots of data due to the penetration test attacks. The application performance became so bad that it prevented the automated security test run from finishing in a reasonable amount of time. We had to exclude the security tests for that API from our automated run until we had fixed the problem.

Maintaining a Stable Test Environment

We need to consider whether to run our functional and security tests within the same test environment or a different one. Resetting the environment between the functional and security test runs, or using a separate environment, promotes better test stability but is usually not necessary. We can often reuse the same environment, but when we do, we should run the functional tests first and the security tests last, since the security tests can destabilize the environment for the functional tests. When we use different environments, we need to make sure that we configure the original functional test scenarios with variables so that it is easy to point the tests at different endpoints for different environments. SOAtest supports this using environment variables.
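For example (the variable name and runner commands below are assumptions, not SOAtest's own), parameterizing the endpoint makes switching environments a one-line change:

$ API_BASE_URL=https://qa.example.com run-functional-tests    # stable environment, runs first
$ API_BASE_URL=https://sec.example.com run-security-tests     # penetration tests, run last or in a separate environment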

Our APIs may also depend on other APIs outside our control. We can consider using service virtualization to isolate our environment so we don’t depend on those external systems. This will help to stabilize our tests while at the same time preventing unintended consequences to the external systems due to our penetration testing efforts.

In Conclusion…

We can ensure better quality in our APIs by moving security testing into development and QA as part of an automated process. We can leverage our existing API functional tests to create automated security tests, which will allow us to discover and fix security errors earlier in the process. And hopefully this will help us not become one of the next big headlines in the news…

My colleague Mark Lambert and I recently led a webinar that included a demonstration of how this works with Parasoft SOAtest and Burp Suite. If you’re interested in learning more, you can view the demo in the webinar recording.


This article was originally published at Parasoft.

From Consumers to Contributors: The Evolution of Open Source in the Enterprise

Open source technologies are now an increasingly common sight in enterprise software stacks, with organisations using them to stand up their customer-facing and line-of-business applications, and to power their infrastructure. Despite the best efforts of commercial software suppliers to position open source software as insecure, unreliable and ill-suited for enterprise use, large companies are using it to avoid lock-in, drive down costs and speed up their software development cycles.

In light of these benefits, it is hoped that enterprises will not only see fit to consume open source software, but also contribute code of their own back to the communities that created it. There are myriad reasons to do so.

First of all, the creativity and health of all open source communities rest heavily on having an engaged user base that regularly contributes code and user feedback to inform the next iteration of the product.

Without steady and reliable input from contributors, the output of the community as a whole – both from a product quality and quantity perspective – may be compromised.

Read more at ComputerWeekly

Security Jobs Are Hot: Get Trained and Get Noticed

The demand for security professionals is real. On Dice.com, 15 percent of the more than 75K jobs are security positions. “Every year in the U.S., 40,000 jobs for information security analysts go unfilled, and employers are struggling to fill 200,000 other cyber-security related roles, according to cyber security data tool CyberSeek” (Forbes). We know that there is a fast-increasing need for security specialists, but that the interest level is low.

Security is the place to be

In my experience, few students coming out of college are interested in roles in security; many people see security as a niche. Entry-level tech pros are drawn to business analyst or system analyst roles because of a belief that, if you want to learn and apply core IT concepts, you have to stick to analyst roles or those closer to product development. That’s simply not the case.

In fact, if you’re interested in getting in front of your business leaders, security is the place to be – as a security professional, you have to understand the business end-to-end; you have to look at the big picture to give your company the advantage.

Be fearless

Analyst and security roles are not all that different. Companies continue to merge engineering and security roles out of necessity. Businesses are moving faster than ever, with infrastructure and code being deployed through automation, which makes it all the more important for security to be part of every tech pro’s day-to-day work. In our Open Source Jobs Report with The Linux Foundation, 42 percent of hiring managers said professionals with security experience are in high demand for the future.

There has never been a more exciting time to be in security. If you stay up to date with tech news, you’ll see that a huge number of stories are related to security: data breaches, system failures, and fraud. Security teams work in ever-changing, fast-paced environments. A real challenge lies in the proactive side of security: finding and eliminating vulnerabilities while maintaining or even improving the end-user experience.

Growth is imminent

Of all aspects of tech, security is the one that will continue to grow with the cloud. Businesses are moving more and more to the cloud, and that’s exposing more security vulnerabilities than organizations are used to. As the cloud matures, security becomes increasingly important.

Regulations are also growing; the definition of Personally Identifiable Information (PII) is getting broader all the time. Many companies are finding that they must invest in security to stay in compliance and avoid being in the headlines. Companies are beginning to budget more and more for security tooling and staffing due to the risk of heavy fines, reputational damage, and, to be honest, executive job security.

Training and support

Even if you don’t choose a security-specific role, you’re bound to find yourself needing to code securely, and if you don’t have the skills to do that, you’ll start fighting an uphill battle. There are certainly ways to learn on the job; if your company offers that option, that’s encouraged, but I recommend a combination of training, mentorship, and constant practice. Without using your security skills, you’ll lose them fast, given how quickly malicious attacks evolve in complexity.

My recommendation for those seeking security roles is to find the people in your organization who are the strongest in engineering, development, or architecture; interface with them and other teams, do hands-on work, and be sure to keep the big picture in mind. Be an asset to your organization that stands out: someone who can code securely and also think about strategy and overall infrastructure health.

The end game

More and more companies are investing in security and trying to fill open roles in their tech teams. If you’re interested in management, security is the place to be. Executive leadership wants to know that their company is playing by the rules, that their data is secure, and that they’re safe from breaches and loss.

Security that is implemented wisely and with strategy in mind will get noticed. Security is paramount for executives and consumers alike – I’d encourage anyone interested in security to train up and contribute.

Download the full 2017 Open Source Jobs Report now.