
How to Load and Unload Kernel Modules in Linux

A kernel module is a program that can be loaded into or unloaded from the kernel on demand, without recompiling the kernel or rebooting the system, and is intended to extend the functionality of the kernel.

In general software terms, modules are more or less like plugins to an application such as WordPress. Plugins provide a means to extend software functionality; without them, developers would have to build a single massive program with every capability integrated into one package, and any new functionality would have to wait for a new release.
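The Tecmint article walks through the commands in detail, but the core operations look like this (usb_storage is just an example module name; loading and unloading require root):

$ lsmod                          # list the modules currently loaded
$ sudo modprobe usb_storage      # load a module along with its dependencies
$ sudo modprobe -r usb_storage   # unload it again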

Read more at Tecmint

Understanding the Basics of Docker Machine

In this series, we’re taking a preview look at the new self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation.

In the first article, we talked about installing Docker and setting up your environment. You’ll need Docker installed to work along with the examples, so be sure to get that out of the way first. The first video below provides a quick overview of terms and concepts you’ll learn.

In this part, we’ll describe how to get started with Docker Machine.


Docker has a client-server architecture, in which the Client sends commands to the Docker Host, which runs the Docker Daemon. Both the Client and the Docker Host can be on the same machine, or the Client can communicate with any Docker Host running anywhere, as long as it can reach and access the Docker Daemon.

The Docker Client and the Docker Daemon communicate over a REST API, even on the same system. One tool that can help you manage Docker Daemons running on different systems from your local workstation is Docker Machine.

If you are using Docker for Mac or Windows, or have installed Docker Toolbox, then Docker Machine is already available on your workstation. With Docker Machine, we will deploy an instance on DigitalOcean and install Docker on it. For that, we first create an API key on DigitalOcean, with which we can programmatically deploy an instance there.

After getting the token, we export it in an environment variable called “DO_TOKEN”, which we then use on the “docker-machine” command line with the “digitalocean” driver to create an instance called “dockerhost”.
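A minimal sketch of those two steps (the token value is a placeholder; the driver and flag names are the ones documented for Docker Machine’s DigitalOcean driver):

$ export DO_TOKEN=<your-api-token>
$ docker-machine create --driver digitalocean \
      --digitalocean-access-token $DO_TOKEN dockerhost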

Docker Machine will then create an instance on DigitalOcean, install Docker on it, and configure secure access between the Docker Daemon running on “dockerhost” and the Docker Client on your workstation. Next, you can use the “docker-machine env” command with the newly created host, “dockerhost”, to find the parameters with which your Docker Client can connect to the remote Docker Daemon.
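For example (the IP address and path shown are placeholders; the variable names are what docker-machine prints):

$ docker-machine env dockerhost
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://<instance-ip>:2376"
export DOCKER_CERT_PATH="<home>/.docker/machine/machines/dockerhost"
export DOCKER_MACHINE_NAME="dockerhost"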

With the “eval” command, you can export all the environment variables for “dockerhost” to your shell. After you export them, the Docker Client on your workstation will connect directly to the DigitalOcean instance and run the commands there. The videos below provide additional details.
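A typical session looks like this; after the eval, docker commands on your workstation are executed against the remote daemon:

$ eval $(docker-machine env dockerhost)
$ docker ps    # lists containers on the DigitalOcean instance, not your laptop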

In the next article, we will look at some Docker container operations.

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.


Want to learn more? Access all the free sample chapter videos now!

Simplifying Embedded and IoT Development Using Rebuild and Linux Containers

Introduction to Rebuild

Building modern software in a predictable and repeatable way isn’t easy. The overwhelming number of software dependencies and the need to isolate conflicting components presents numerous challenges to the task of managing build environments.

While there are many tools aimed at mitigating this challenge, they all generally utilize two approaches: they either rely on package managers to preserve and replicate package sets, or they use virtual or physical machines with preconfigured environments.

Both of these approaches are flawed. Package managers fail to provide a single environment for components with conflicting build dependencies, and separate machines are heavy and fail to provide a seamless user experience. Rebuild solves these issues by using modern container technologies that offer both isolated environments and ease of use.

Rebuild is perfect for establishing build infrastructures for source code. It allows the user to create and share fast, isolated and immutable build environments. These environments may be used both locally and as a part of continuous integration systems.

The distinctive feature of Rebuild environments is a seamless user experience. When you work with Rebuild, you feel like you’re running “make” on a local machine.

Client Installation

The Rebuild client requires Docker engine 1.9.1 or newer and Ruby 2.0.0 or newer.
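You can confirm both prerequisites from a shell:

$ docker --version
$ ruby --version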

Rebuild is available at rubygems.org:

gem install rbld

Testing installation

Run:

rbld help

Quick start

Search for existing environments

Let’s see how we can simplify the usage of embedded toolchains with Rebuild. By default, Rebuild is configured to work with Docker Hub as an environment repository, and we already have ready-made environments there.

The workflow for Rebuild is as follows: a) search for the needed environment in the environment repository, b) deploy the environment locally (this needs to be done only once per environment version), c) run Rebuild. If needed, you can modify, commit, and publish modified environments to the registry while keeping track of different environment versions.

First, we search for environments by executing this command:

$ rbld search

Searching in /RebuildRepository...


   bb-x15:16-05

   rpi-raspbian:v001

Next, we deploy the environment to our local machine. To do this, enter the following command:

$ rbld deploy rpi-raspbian:v001


Deploying from /RebuildRepository...

Working: |---=---=---=---=---=---=---=---=---=---=---=---=-|

Successfully deployed rpi-raspbian:v001

And now it is time to use Rebuild to compile your code. Enter the directory with your code and run:

$ rbld run rpi-raspbian:v001 -- make

Of course, you can also use any other command that you use to build your code.

You can also use your environment in interactive mode. Just run the following:

$ rbld run rpi-raspbian:v001

Then, simply execute the necessary commands from within the environment. Rebuild will take care of the file permissions and ownership for the build results located on your local file system.

Explore Rebuild

To learn more about Rebuild, run rbld help or read our GitHub documentation.

Rebuild commands can be grouped by functionality:

  • Creation and modification of the environments:

    • rbld create – Create a new environment using the base environment or from an archive with a file system

    • rbld modify – Modify the environment

    • rbld commit – Commit modifications to the environment

    • rbld status – Check the status of existing environments

    • rbld checkout – Revert changes made to the environment

  • Managing local environments:

    • rbld list – List local environments

    • rbld rm – Delete local environment

    • rbld save – Save local environment to file

    • rbld load – Load environment from file

  • Working with environment registries:

    • rbld search – Search registry for environments

    • rbld deploy – Deploy environment from registry

    • rbld publish – Publish local environment to registry

  • Running an environment either with a set of commands or in interactive mode:

    • rbld run -- <command to execute> – Run command(s) in the environment

    • rbld run – Use environment in interactive mode
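As a rough sketch, a modify-and-publish session might look like the following (a hypothetical session; the exact options each command accepts may differ, so consult rbld help <command> first):

$ rbld modify rpi-raspbian:v001     # open an interactive shell to change the environment
  ...install extra packages, then exit the shell...
$ rbld status                       # the environment is now marked as modified
$ rbld commit rpi-raspbian:v001     # record the changes (see rbld help commit for tagging)
$ rbld publish rpi-raspbian:v001    # share the committed environment via the registry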


Enterprises Need ‘Security 101’ to Keep IoT Devices Safe

Almost half (46 percent) of U.S. firms that use an Internet of Things (IoT) network have been hit by at least one security breach.

That’s according to a survey by consulting firm Altman Vilandrie & Company, which said the cost of the breaches represented 13.4 percent of total revenues for companies with less than $5 million in annual revenue, and tens of millions of dollars for the largest firms. Nearly half of firms with annual revenues above $2 billion estimated the potential cost of a single IoT breach at more than $20 million.

Read more at SDxCentral

Real Paths Toward Agile Documentation

In a nutshell, documentation is deemed uncool and old school. Especially on flat agile teams, there’s no clear ownership of the docs. And, even worse than testing, software documentation is the biggest hot potato in the software development lifecycle. Nobody wants to take responsibility for it.

Most of all, in the rapid world of agile development, documentation gets put off until the last minute and just ends up endlessly on your backlog. Oh, and of course, there’s not enough money to do it right.

But I’m here to advocate that, for the most part, you’re wrong. There always has to be time for documentation because you should always be developing with your users in mind.

Read more at The New Stack

How Open Source Is Advancing the Semantic Web

The Semantic Web, a term coined by World Wide Web (WWW) inventor Sir Tim Berners-Lee, refers to the concept that all the information in all the websites on the internet should be able to interoperate and communicate. That vision, of a web of knowledge that supplies information to anyone who wants it, is continuing to emerge and grow.

In the first generation of the WWW, Web 1.0, most people were consumers of content, and if you had a web presence it consisted of a series of static pages conveyed in HTML. Websites had guest books and HTML forms, powered by Perl and other server-side scripting languages, that people could fill out. While HTML provides structure and syntax to the web, it doesn’t provide meaning; therefore, Web 1.0 couldn’t inject meaning into the vast resources of the WWW.

Read more at OpenSource.com

Iptables Basics

Yesterday I tweeted “hey, I learned some stuff about iptables today”! A few people replied “oh no, I’m sorry”. iptables has kind of a reputation for being hard to understand (and I’ve also found it intimidating) so I wanted to write down a few things I learned about iptables in the last few days. I don’t like being scared of things and understanding a few of the basics of iptables seems like it shouldn’t be scary!

I have been looking at Kubernetes things, and Kubernetes creates 5 bajillion iptables rules, so it has been time to learn a little bit about iptables.

The best references I’ve found for understanding iptables so far have been:

  • the iptables man page
  • iptables.info (which is GREAT, it explains all kinds of stuff like “what does MASQUERADE even mean” that is not explained in the iptables man page)
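For a safe first poke at the basics, listing the existing rules doesn’t change anything (-n skips DNS lookups, -v adds packet counters):

$ sudo iptables -L -n -v          # rules in the default filter table
$ sudo iptables -t nat -L -n -v   # rules in the nat table, where MASQUERADE shows up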

Read more at Julia Evans

How to Create a Docker Swarm

If you’ve worked with Docker containers, you already understand how powerful they can be. But did you know you can exponentially up the power of Docker by creating a cluster of Docker hosts, called a Docker swarm? Believe it or not, this process is really simple. All you need is one machine to serve as a Docker swarm manager and a few Docker hosts to join the swarm as nodes.

I want to walk you through the process of enabling a Docker swarm manager and then joining two hosts as nodes. I will be demonstrating on the Ubuntu Server 16.04 platform for all machines, though the process will be very similar on nearly all Linux platforms. I will assume you already have Docker installed on every machine, and will demonstrate with my manager at IP address 192.168.1.139 and my nodes at docker-node-1 (192.168.1.177), docker-node-2 (192.168.1.178), and ZOMBIE-KING (192.168.1.162).
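The full walkthrough is in the article; at its core, swarm mode (built into Docker 1.12 and later) needs just two commands, shown here with the manager IP above and a placeholder for the join token that docker swarm init prints:

$ docker swarm init --advertise-addr 192.168.1.139             # run on the manager
$ docker swarm join --token <worker-token> 192.168.1.139:2377  # run on each node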

Read more at TechRepublic

Top 5 Linux Penetration Testing Distributions

Linux penetration testing distributions are useful and versatile tools that can help you get the most out of your Linux system while avoiding the malicious threats of the internet. While the reason for using a pen testing distribution may seem obvious to anyone who understands what penetration testing is or who performs security auditing professionally, it’s often less clear to people outside the security industry that a wealth of open source tools exists to help them perform their own security testing.

As usual with Linux, there is plenty of choice, and the number of penetration testing distributions out there can prove challenging for beginners or for people from outside the security industry. Overall, the standard of these distros has risen over the years: in the beginning they were essentially Linux live CDs with scripts and precompiled binaries dropped into a directory, whereas today distros like Kali set the standard, with all scripts and tools packaged and updated through the Debian package manager.

However, with great choice comes a great level of… indecisiveness 🙂

Narrowing down your decision and uncovering the best distro for the job can present some real difficulties.

Fortunately, we’re here to help. In this list, we’ve compiled what we believe to be some of the best options available today to help you get the most out of your security auditing.

Kali Linux (Author’s Choice)


Developed by Offensive Security, Kali Linux is the rewrite of BackTrack and has certainly earned its place at the top of our list for its incredible capabilities as an operating system for security work. It is a Debian-based system that ships with over 500 pen testing applications and tools already installed, which gives you an impressive start on your security toolbox and leaves little room to want more. The tools it comes with are updated on a regular basis; the Metasploit Framework, for example, is a packaged install kept up to date by Rapid7 directly. Kali supports many different platforms, including VMware and ARM. Additionally, Kali Linux is also a workable solution for computer forensics, as it includes a live boot feature that offers an ideal environment for detecting vulnerabilities and dealing with them appropriately.

In addition, Kali Linux has just released a new version, and we’re thoroughly impressed with it. Kali Linux 2017.1 brings exciting new features compared to older versions: updated packages, better and broader hardware support, and countless updated tools. If you want to be completely up to date and have the best of the best in terms of your Linux penetration testing distro, you might like Kali Linux’s new release as much as we do.

Parrot Security OS


Parrot Security OS is another of our top choices when it comes to selecting the right Linux penetration testing distribution for your needs. Like Kali Linux, it is a Debian-based OS that packs a lot in. Developed by the team at Frozenbox, Parrot Security is a cloud-friendly option designed to specialize in ethical hacking, computer forensics, pen testing, cryptography, and more. Compared to other OS options on the market for these purposes, Parrot Security OS is a lightweight operating system that offers the utmost efficiency to users.

Parrot Security OS is the ideal blend of the best of Frozenbox OS and Kali Linux. Moreover, this incredibly customizable operating system is ideal for hacking work and comes with a strong support community; if you run into trouble, it is one of the most user-friendly options for finding the right solution and getting the OS to help you accomplish your goals.

BackBox


BackBox is our favorite Linux penetration testing operating system that is not Debian-based. It is an Ubuntu-based OS ideal for assessing the security of your computer and conducting penetration tests. BackBox Linux comes with a wide array of security analysis tools that can be applied to web applications, networks, and more. Fast, easy to use, and efficient, BackBox Linux is well known in the hacker community. The OS includes a complete desktop environment with regularly updated software applications, always keeping you supplied with the most stable versions of your most important programs.

If you are big on penetration testing and security assessment, you will be happy to hear that these are exactly the things BackBox specializes in. As one of the best distros in its field, BackBox keeps its sights set on the best-known ethical hacking tools and always provides the latest stable versions of an impressive array of network analysis tools. The interface is designed with minimalism in mind and uses the Xfce desktop environment. The result is an effective, fast, customizable, comprehensive user experience with a helpful support community to back it.

BlackArch


If you are an ethical hacker or a researcher looking for a complete Linux distribution that caters to all your needs, BlackArch Linux just might be the penetration testing distribution to set your sights on. It is derived from Arch Linux, and users can also install the BlackArch components on top of an existing Arch system.

BlackArch, as an operating system, offers users over 1,400 tools, each thoroughly tested before being added to the OS’s arsenal and codebase. The developers are constantly expanding the system’s capabilities, which has earned it a seat at the cool kids’ table of security-focused operating systems. Even more good news about this distro? The list of tool groups, and the tools within those groups, keeps growing.

Fedora Security Spin


Fedora Security Spin is a variation of Fedora designed specifically for security testing and auditing. It can also be used for teaching: the distro aims to give students and teachers alike the support they need while learning or practicing security methodologies involving web application security, information security, forensics analysis, and more.

This just goes to show that not all Linux penetration testing distributions are made equal, and there’s no one-size-fits-all answer when it comes to determining the best one on the market. If you’re more into the ethical hacking side of things, you may find that Kali Linux or Parrot Security OS is more your style. However, if you are teaching others, still learning, or more interested in forensics analysis than hacking, you can’t go wrong with Fedora Security Spin.

The Verdict

We know there are plenty of options to choose from when it comes to the best Linux penetration testing distributions. This is by no means a comprehensive list, and there are plenty of other admirable projects worthy of a shout-out, such as Pentoo, Weakerth4n, and Matriux, but after thorough trial and testing these are our favorite distros available today. The list of operating systems worth their salt goes on and on, yet they are certainly not all made equal.

If you’re looking for the best of the best in terms of your penetration testing distro for your Linux system, you can’t go wrong with any of the top 5 we’ve included on our list. To narrow down your choice to the one and only match made in heaven that’s right for you, we recommend asking yourself the following questions:

  • What do I want to accomplish with a penetration testing distro?
  • What features do I need from a penetration testing distro to help me accomplish my goals?

Whether you are an aspiring information security expert, have already earned that title, or are just looking to delve into the field to see what it can do, finding a decent Linux operating system that complements your goals is a necessity. Depending on your purposes, there are countless options to choose from, which is why it’s important to keep your goals in mind and narrow your options to those that will help you achieve them.

The list we’ve compiled here, however, has something for everyone. Regardless of what you’re looking for, we’re confident that you will be able to find one that suits your needs. After a bit of research on each, you should find that one is standing out from the crowd in no time—and that will likely be your ideal Linux penetration testing distribution. 

Internet of Things – The Next Gen Tech

Ready for the next gen tech? The next generation of technology is going to surprise us with new innovations. The Internet and its peripherals have come a long way in the last 20 years – from dial-up modems to broadband, then to Web 2.0 and the 4G mobile internet of today – and it has continued to drive new trends for everyone. BUT, since change is the only thing that’s constant, we wonder what’s next?

Accept it or reject it, the world is evolving speedily toward a world where everything is connected to the Internet. Every one of us is eagerly waiting for the next big thing that will change our lives to a great extent – the ‘Internet of Things’ (IoT). Yes, it’s already quite familiar to us, but it’s going to make our lives even more comfortable.

Let’s try to understand what exactly the ‘Internet of Things’ is. In simple terms, it’s the networking of everyday devices and objects, connected via the internet and managed in clusters by software. The objects are allocated unique identifiers and can communicate over the network. Many believe that networking everyday objects and devices so they can send and receive data is not only the next big thing but the biggest enhancement in the world of the Internet since its start.

The next question that strikes our mind is: what things or devices would be based on IoT? We already use various objects wirelessly – a wireless printer, a laptop, a tablet, or household lights and lamps controlled via the Internet – and each of these is nothing but an IoT device. Work is being put into connecting, and controlling via the Internet, almost everything else: cars, household appliances, city infrastructure (traffic management systems, industry sending a city data about power usage and waste output, etc.), personal electronic devices (personal finance management, nutrition and fitness management, etc.), and industrial machinery. Sounds strange! But you will soon see all these things working on automatic networks where no human effort is required.

The Internet of Things will connect an infinite variety of devices and sensors to enable new and innovative future applications, and these applications will need support from elastic, reliable, and agile platforms. The Internet of Things simply can’t be imagined without cloud computing platforms. It is expected that in the next few years everything from clothes to locks and door mats will find its way onto the internet as manufacturers connect their products to form the IoT. That means there will be loads of data flying around that must be processed rapidly so everyone can enjoy the supply of a particular service without it running out.

Cloud computing is one of the best platforms to support the Internet of Things, since it will need to serve users who can be anywhere, at any time, day or night. Cloud hosting providers operate data centers at different locations, which makes the cloud well suited to large-scale analysis. Some cloud providers offer pay-as-you-go service, meaning you can expand for peaks and then scale back, paying only for the period you used the service. In addition, these providers can serve millions of users on their infrastructure while providing different products for shared load balancing, security, and so on.

The collaboration of cloud and IoT will also help lift startups off the ground without upfront capital spent on infrastructure. Companies will also be able to draw on accurate data that predicts consumer preferences and behavior; with these insights, manufacturers and retailers will be able to define customized solutions for a particular audience and deliver them to shoppers’ smartphones while they shop. We can expect nothing less than miracles from cloud computing when it comes to real-time data access.

Do you think you aren’t into IoT yet? Look around, and you will be surprised to find many devices – from gas pumps to washing machines – that can access and transmit data over the Internet. The ATM is a perfect example of IoT, and even our smartphones are part of it. So, let’s prepare ourselves to embrace the Internet of Things; it promises to make our lives more luxurious and happier.