
Create Your Own “Neural Paintings” using Deep Learning

Neural networks can do so many things. They can understand our voices, interpret images, and translate conversations. But did you know they can also paint?

Today, I’m going to walk you through how to do that. First off, make sure you have an up-to-date copy of Ubuntu (14.04 is what I used). You should have at least a few spare gigabytes of hard drive space and at least 6 GB of RAM (more RAM allows a larger output size). To run Ubuntu as a virtual machine, you can use Vagrant together with VirtualBox.
First, make sure you have Git installed. To install it, just open a terminal and run:

$ sudo apt-get install git

 

Step 1: Install torch7

Torch is a scientific computing framework with wide support for machine learning algorithms. Torch is the main package in Torch7, where data structures for multi-dimensional tensors and the mathematical operations over them are defined. Additionally, it provides many utilities for accessing files, serializing objects of arbitrary types, and other useful tasks.

In a terminal, run these commands (you might need to use sudo with them):

$ cd ~/
$ curl -s https://raw.githubusercontent.com/torch/ezinstall/master/install-deps | bash
$ git clone https://github.com/torch/distro.git ~/torch --recursive
$ cd ~/torch; ./install.sh

Now we need to source our shell configuration to refresh our environment variables; run:

$ source ~/.bashrc
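
If everything went smoothly, the th interpreter should now be on your PATH. As a quick, optional sanity check, you can ask the shell where it lives:

$ which th

This should print something like ~/torch/install/bin/th; if it prints nothing, re-run the source command above or open a new terminal.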

 

Step 2: Install loadcaffe

$ sudo apt-get install libprotobuf-dev protobuf-compiler
$ luarocks install loadcaffe

Alternatively, if you face a problem, try this:

$ git clone git@github.com:szagoruyko/loadcaffe.git
$ ~/torch/install/bin/luarocks install loadcaffe/loadcaffe-1.0-0.rockspec
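
Either way you install it, you can optionally confirm that the rock is registered by listing your installed LuaRocks packages and filtering for it (depending on your PATH, you may need the full ~/torch/install/bin/luarocks path):

$ luarocks list | grep loadcaffe

If nothing is printed, the installation did not complete.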

 

Step 3: Install neural-style

This is a torch implementation of the paper A Neural Algorithm of Artistic Style by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. The paper presents an algorithm for combining the content of one image with the style of another image using convolutional neural networks.

First clone neural-style from GitHub:

$ cd ~/
$ git clone https://github.com/jcjohnson/neural-style.git

Next download the neural network models:

$ cd neural-style
$ sh models/download_models.sh

 

Step 4: Running it

Now make sure that you have at least 6 GB of RAM (if you are using a virtual machine, make sure you have allocated enough RAM to it). Then check that neural-style is working with this command:

$ th neural_style.lua -gpu -1 -print_iter 1

Note that this runs in CPU mode; running it in GPU mode is outside the scope of this article.

To see instructions about how to use neural style, run:

$ th neural_style.lua ?

Now let’s run a real test to see neural-style in action. First, make sure you are in the neural-style directory; if you followed the instructions above, that is ~/neural-style. Then run:

$ th neural_style.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/shipwreck.jpg -gpu -1 -image_size 256

Note that I have dialed down the image size so the command completes in less time. When it finishes, the output is written to out.png in the same directory by default.
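
Once the test run works, you can try your own images. Here is a sketch of a run that names its output file and trims the iteration count; my_style.jpg, my_photo.jpg, and my_painting.png are placeholders for your own files, while -num_iterations and -output_image are standard neural-style options:

$ th neural_style.lua -style_image my_style.jpg -content_image my_photo.jpg -gpu -1 -image_size 256 -num_iterations 500 -output_image my_painting.png

Fewer iterations finish faster but give a rougher result; the default is 1000.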

14 Best IDEs for C++ Programming or Source Code Editors on Linux

C++, an extension of the well-known C language, is an excellent, powerful and general-purpose programming language that offers modern and generic programming features for developing large-scale applications ranging from video games, search engines,…


Read the complete article: http://www.tecmint.com/best-linux-ide-editors-source-code-editors/

Explore Advanced Vagrant Features

Learn about Vagrant’s complex features, such as synced folders, provisioning scripts and running multiple virtual machines at the same time.

ODPi Advances Hadoop Standards with Open Source Runtime Specification

The ODPi’s Hadoop runtime specification for big data apps has been adopted by data analytics vendors, which simplifies the open source big data ecosystem.

The ODPi, a Linux Foundation collaborative project, has moved one step closer to building a better integrated big data ecosystem based on open source technology. This week, it announced that several major Hadoop distributors have adopted its runtime standard.

The ODPi’s next step is to finalize an operations specification, which will simplify Hadoop installation and management, according to the organization. That will debut later this year, the group says.

Read more at The VAR Guy

5 SSH Hardening Tips

When you look at your SSH server logs, chances are they are full of attempted logins from entities of ill intent. Here are 5 general ways (along with several specific tactics) to make your OpenSSH sessions more secure.

1. Make Password Auth Stronger

Password logins are convenient, because you can log in from any machine anywhere. But they are vulnerable to brute-force attacks. Try these tactics for strengthening your password logins.

  • Use a password generator, such as pwgen. pwgen takes several options; the most useful is password length (e.g., pwgen 12 generates 12-character passwords).

  • Never reuse a password. Ignore all the bad advice about not writing down your passwords, and keep a notebook with your logins written in it. If you don’t believe me that this is a good idea, then believe security guru Bruce Schneier. If you’re reasonably careful, nobody will ever find your notebook, and it is immune from online attacks.

  • You can add extra protection to your login notebook by obscuring the logins recorded in your notebook with character substitution or padding. Use a simple, easily-memorable convention such as padding your passwords with two extra random characters, or use a single simple character substitution such as # for *.

  • Use a non-standard listening port on your SSH server. Yes, this is old advice, and it’s still good. Examine your logs; chances are that port 22 is the standard attack point, with few attacks on other ports.

  • Use Fail2ban to dynamically protect your server from brute force attacks.

  • Create non-standard usernames. Never ever enable a remote root login, and avoid “admin” (see the server-side sketch just after this list).
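
Several of these tactics, the non-standard port and the ban on root logins in particular, live in the server’s /etc/ssh/sshd_config. Here is a minimal sketch; the port number and usernames are placeholders you should adapt to your own setup:

# non-standard listening port
Port 2222
# never allow remote root logins
PermitRootLogin no
# only these accounts may log in over SSH
AllowUsers carla alice

Restart the SSH daemon after editing, and keep an existing session open while you test so you don’t lock yourself out.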

2. Fix Too Many Authentication Failures

When my ssh logins fail with “Too many authentication failures for carla” error messages, it makes me feel bad. I know I shouldn’t take it personally, but it still stings. But, as my wise granny used to say, hurt feelings don’t fix the problem. The cure for this is to force a password-based login in your ~/.ssh/config file. If this file does not exist, first create the ~/.ssh/ directory:

$ mkdir ~/.ssh
$ chmod 700 ~/.ssh

Then create the ~/.ssh/config file in a text editor and enter these lines, using your own remote host’s address:

Host remote.site.com
    PubkeyAuthentication no

3. Use Public Key Authentication

Public Key authentication is much stronger than password authentication, because it is immune to brute-force password attacks, but it’s less convenient because it relies on RSA key pairs. To begin, you create a public/private key pair. Next, the private key goes on your client computer, and you copy the public key to the remote server that you want to log into. You can log in to the remote server only from computers that have your private key. Your private key is just as sensitive as your house key; anyone who has possession of it can access your accounts. You can add a strong layer of protection by putting a passphrase on your private key.

Using RSA key pairs is a great tool for managing multiple users. When a user leaves, disable their login by deleting their public key from the server.

This example creates a new key pair of 3072 bits strength, which is stronger than the default 2048 bits, and gives it a unique name so you know what server it belongs to:

$ ssh-keygen -t rsa -b 3072 -f id_mailserver

This creates two new keys, id_mailserver and id_mailserver.pub. id_mailserver is your private key — do not share this! Now securely copy your public key to your remote server with the ssh-copy-id command. You must already have a working SSH login on the remote server:

$ ssh-copy-id -i id_mailserver.pub user@remoteserver
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
user@remoteserver's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'user@remoteserver'"
and check to make sure that only the key(s) you wanted were added.

ssh-copy-id ensures that you will not accidentally copy your private key. Test your new key login by copying the example from your command output, with single quotes:

$ ssh 'user@remoteserver'

It should log you in using your new key, and if you set a password on your private key, it will prompt you for it.

4. Disable Password Logins

Once you have tested and verified your public key login, disable password logins so that your remote server is not vulnerable to brute-force password attacks. Do this in the /etc/ssh/sshd_config file on your remote server with this line:

PasswordAuthentication no

Then restart your SSH daemon.
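
Exactly how you restart it depends on your distribution and init system; on most current systemd-based systems it is one of these (the service is called sshd on most distros and ssh on Debian/Ubuntu):

$ sudo systemctl restart sshd
$ sudo systemctl restart ssh

As with any sshd_config change, keep your current session open until you have confirmed you can still log in.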

5. Set Up Aliases — They’re Fast and Cool

You can set up aliases for remote logins that you use a lot, so instead of logging in with something like “ssh -p 2222 username@remote.site.with.long-name”, you can use “ssh remote1”. Set it up like this in your ~/.ssh/config file:

Host remote1
HostName remote.site.with.long-name
Port 2222
User username
PubkeyAuthentication no

If you are using public key authentication, it looks like this:

Host remote1
HostName remote.site.with.long-name
Port 2222
User username
IdentityFile  ~/.ssh/id_remoteserver
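
With either block saved in ~/.ssh/config, the long login collapses to:

$ ssh remote1

ssh looks up the Host alias and fills in the hostname, port, user, and (in the second form) the key for you.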

The OpenSSH documentation is long and detailed, but after you have mastered basic SSH use, you’ll find it’s very useful and contains a trove of cool things you can do with OpenSSH.

 

NFV and Cloud Driving Changes in Core Network Licensing Models

As telecom operators move toward NFV, SDN, and cloud architectures, licensing models will need to adapt to new deployment methods.

One of the greatest advantages of network functions virtualization and cloud network architectures is the ability for network operators to dynamically scale the capacity of their networks to better mirror demand, without a significant amount of capacity lying dormant waiting for busy-hour spikes in traffic. With NFV/cloud deployments, capacity changes based on need. All of the efficiency offered by elastic demand would disappear if the capacity were licensed in a traditional manner. Therefore, licensing and revenue models are needed that reflect this new reality.

For decades, networking and communications equipment consisted of physical “boxes” or nodes, often with physical interfaces or ports that denominated the capacity associated with the equipment. Capacity was purchased accordingly:…

Read more at RCR Wireless

DevOps Done Right: Five Tips for Implementing Database Infrastructures

While DevOps can provide many improvements throughout all stages of the development lifecycle, it is imperative to avoid some of the common pitfalls. A fully-firing database is core to any DevOps strategy, because slow data equates to slow results, something the methodology is trying to eradicate. If your organisation is looking to embrace DevOps, here are five top tips on how to optimise databases for DevOps to meet your organisational goals.



Read more: ITProPortal

 

Make Peace With Your Processes: Part 5

In previous articles in this series, we’ve wet our whistles with a quick look at the Process Table and pseudo filesystems, and we talked about /dev and /proc. Now let’s explore a few useful but unrelated command lines, which may save the day at some point.

Counting Processes

You might be concerned that a single member of a cluster of identical machines is beginning to give up the ghost. You could check how many processes the system has created since its last reboot by using this command:

# grep processes /proc/stat
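
The line that comes back looks something like “processes 12345”, i.e. the number of forks since boot (the figure here is purely illustrative). If you only want the raw number, say to compare across the cluster, a quick awk sketch will pull it out:

# awk '/^processes/ {print $2}' /proc/stat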

With your shovel at the ready, now take a look inside the “/proc/$PID/cmdline” file (replacing “$PID” with your process ID), and there you will find some buried goodies. This file retains the complete command line for a process (note that zombie processes are an exception, so don’t fret if your mileage varies).

Alongside that file, in the PID’s directory, is the “cwd” symbolic link (or symlink) to the current working directory of that process. You may need to be logged in as root to see this. To discover the current working directory of that process, you can run this command:

# cd /proc/$PID/cwd; /bin/pwd

You do probably need to be root sometimes, because the symlink is hidden from normal users. I won’t claim to understand all of the intricacies of these pseudo files, but if you use the commands “cat” or “less” to view some of these files, then usually a little more light is shed on their raison d’être.
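
Incidentally, if you would rather not change directory at all, readlink will resolve the symlink for you directly (again, swap in your own $PID):

# readlink /proc/$PID/cwd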

One such pseudo file (which is less mysterious, thanks to its name) is the disk input/output statistics file, named “io”. You can run the following command and see its output in Listing 1:

# cat io

rchar: 0
wchar: 0
syscr: 0
syscw: 0
read_bytes: 342
write_bytes: 11
cancelled_write_bytes: 0

Listing 1: Here we can see what this process has been up to in relation to disk activities.

Among many others, one particularly useful addition is the “maps” file. In Listing 2, we can see the memory regions and their access permissions for a process by using:

# cat maps

7eff839c7000-7eff839de000 r-xp 00000000 fd:01 3221       /lib64/libpthread-2.12.so
7eff839de000-7eff83bde000 ---p 00017000 fd:01 3221       /lib64/libpthread-2.12.so
7eff83bde000-7eff83bdf000 r--p 00017000 fd:01 3221       /lib64/libpthread-2.12.so
7eff83bdf000-7eff83be0000 rw-p 00018000 fd:01 3221       /lib64/libpthread-2.12.so
7eff843ac000-7eff843b3000 r-xp 00000000 fd:01 3201       /lib64/libcrypt-2.12.so
7eff843b3000-7eff845b3000 ---p 00007000 fd:01 3201       /lib64/libcrypt-2.12.so
7eff845b3000-7eff845b4000 r--p 00007000 fd:01 3201       /lib64/libcrypt-2.12.so
7eff845b4000-7eff845b5000 rw-p 00008000 fd:01 3201       /lib64/libcrypt-2.12.so
7eff82fb4000-7eff83025000 r-xp 00000000 fd:01 3478       /lib64/libfreebl3.so
7eff83025000-7eff83224000 ---p 00071000 fd:01 3478       /lib64/libfreebl3.so
7eff83224000-7eff83226000 r--p 00070000 fd:01 3478       /lib64/libfreebl3.so
7eff83226000-7eff83227000 rw-p 00072000 fd:01 3478       /lib64/libfreebl3.so

Listing 2: An extremely abbreviated sample of the “maps” file for a process.

Apparently, the legend for the permissions is as follows:

r = read
w = write
x = execute
s = shared
p = private (copy on write)

The “maps” file can be useful to see how a process is interacting with the system’s files. Or, maybe you’re curious as to which libraries a process needs and you have forgotten the correct options to add to a tool like the super-duper “lsof” that I discussed in the previous article.
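
If all you want is that list of libraries, a rough one-liner over the “maps” file will print the unique shared-object mappings (the $6 field number assumes the six-column layout shown in Listing 2):

# awk '$6 ~ /\.so/ {print $6}' /proc/$PID/maps | sort -u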

Any eagle-eyed console champions will spot that the file sizes, when you use “ls” to list the files in the directory, all appear as zero bytes. Clearly, these pseudo files are different animals than the ones we’re used to.

Kernel Support

Let’s now move on to other benefits of /proc, and not just on a per-process basis. For example, you might look at the filesystems which were compiled into the kernel by checking out the “/proc/filesystems” file.

In Listing 3, you can see (in abbreviated form) which filesystems our kernel supports, and even then it’s a sizeable list.

nodev   sysfs
nodev   rootfs
nodev   bdev
nodev   proc
nodev   cgroup
nodev   cpuset
nodev   tmpfs
nodev   devtmpfs
nodev   binfmt_misc
nodev   debugfs
nodev   securityfs
nodev   sockfs
nodev   usbfs
nodev   pipefs
nodev   anon_inodefs
nodev   inotifyfs
nodev   devpts
nodev   ramfs
nodev   hugetlbfs
nodev   pstore
nodev   mqueue

Listing 3: Here we can see an abbreviated list of the types of filesystems supported by the kernel without having to make any tricky changes.

You may have heard of an excellent lightweight utility called “vmstat”, which reports back dutifully with screeds of useful memory statistics.

You may not, at this juncture, fall off your seat in surprise when I say that in order to retrieve this useful information, the following files are used. Note the asterisk, which covers all PIDs:

/proc/meminfo
/proc/stat
/proc/*/stat
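
These are plain text files, so you can read them directly. For example, to pull out the headline memory figures that tools like vmstat summarise for you:

# grep -E 'MemTotal|MemFree|SwapFree' /proc/meminfo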

Another aspect that the kernel deals with on a system is the hardware ports — you know, the kind that accept a keyboard or a mouse. Have a peek at Listing 4, as seen in the “/proc/ioports” file, again abbreviated for ease.

0000-001f : dma1
0020-0021 : pic1
0040-0043 : timer0
0050-0053 : timer1
0060-0060 : keyboard
0064-0064 : keyboard
0070-0071 : rtc0
0080-008f : dma page reg
00a0-00a1 : pic2
00c0-00df : dma2
00f0-00ff : fpu

Listing 4: Displaying an abbreviated output from the “/proc/ioports” file.

As expected, we can also look at network information. I’ll leave you to try some of these yourself. You might want to begin by trying this:

# ls /proc/net

Or, for live ARP information (the Address Resolution Protocol), you can simply “cat” the relevant file:

# cat /proc/net/arp

If that output looks familiar, then think of how many utilities you’ve used in the past that refer to this file.

But, why stop with ARP? SNMP statistics are also readily available:

# cat /proc/net/snmp
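
Each protocol in that file gets a pair of lines, one naming the counters and one carrying their values, so a quick grep narrows the output down to, say, TCP:

# grep '^Tcp' /proc/net/snmp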

There’s a whole heap (no memory puns intended) of other options that we could explore, but instead let’s look at changing some of the “sysctl” settings I mentioned earlier.

Sadly, the majority of the powerful changes you can make to a running system’s kernel are for another day (there are simply too many to cover), but let’s take the popular “/proc/sys” directory for a quick spin before we finish.

What if the system-wide limit of the number of open files for every process on the system is holding you up when you’re trying to launch your latest ground-breaking application? This setting can cause all sorts of system headaches if it’s abused (and it appears that not only the “root” user can change it on occasion). Warnings aside, it’s very useful to arm yourself with this quick fix if it’s ever necessary.

Try this command yourself on a test box:

# echo 999999 > /proc/sys/fs/file-max

Once you’ve conquered that particular outage-inducing issue, taking all the glory as you go, how about checking that the new setting has stuck? To do this, you can treat it like any other file:

# cat /proc/sys/fs/file-max

I should have, of course, mentioned that you should check the original value before making any changes (so you can revert the value if need be). My computer used “6815744”.

What about another setting? On some systems this change would usually be made via a start/stop daemon, but let’s do it on the fly:

# echo "ChrisLinux" > /proc/sys/kernel/hostname

In case you’re wondering, you will probably struggle to edit these files with an editor (you may get “file has changed” errors), hence the use of “echo” as above.

On other distributions, you can effect the same change with:

# hostname "ChrisLinux"

I’m sure you get the idea; it’s hardly computer science after all.

End of File

I have briefly looked at the very clever Unix-like process table, procfs, and /dev. I have also covered how to read and sometimes manipulate tunable settings and how they integrate with components running on Unix-like systems.

I hope these articles have given you some insight into what to look for if you encounter issues in the future. Above everything else, however, it’s important to find the time to fire up a test machine, dig into otherwise buried file paths, and tweak the available settings to increase your knowledge.

Incidentally, should you ever find yourself at a loss on a rainy afternoon, try running this command:

# /sbin/sysctl -a

That should definitely provide some food for thought. It helpfully lists all of the tunable kernel parameters on a system, each of which can be altered with “echo”; simply read the “dots” as “slashes” to find them inside /proc. Just don’t make ill-advised, arbitrary kernel changes on a production machine, or you will likely rue the day!
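
To make the dots-and-slashes relationship concrete, here is a quick sketch with one well-known parameter; the sysctl name and the /proc path refer to the same value:

# /sbin/sysctl net.ipv4.ip_forward
# cat /proc/sys/net/ipv4/ip_forward

You can also write a value through sysctl itself (for example, /sbin/sysctl -w net.ipv4.ip_forward=1) rather than echoing into /proc.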

If you weren’t aware of some of the settings that I’ve covered here, then I hope you will enjoy your newly gained sys admin knowledge and use it creatively in the future.

Read the previous articles in this series:

Part 1

Part 2

Part 3

Part 4

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

 

Using Nano-Segmentation Apcera Looks to Bring Cloud Trust to Docker Container Deployment

Highly secure trusted cloud platform provider Apcera, Inc. today announced the release of its own approach to securely managing Docker containers in production at scale. The product is an enterprise-ready orchestration framework called the Apcera Trusted Cloud Platform, and it is designed to address today’s gaps in container deployment, management and scalability with an eye toward trust and security.

Docker containers continue to ease into the DevOps lifecycle of enterprise application deployment. According to research released by Datadog, a cloud monitoring firm, Docker adoption is up 30 percent in the past 12 months, from 8.2 percent of the company’s customers to 10.7 percent.

Read more at Silicon Angle

Can IBM Really Make a Business Out of Blockchain?

On Tuesday, IBM announced the formal launch of a so-called “Bluemix Garage” in New York, where developers can experiment with financial-tech software and explore new forms of blockchain innovation. 

According to Jerry Cuomo, vice president of blockchain and cloud at IBM, the plan will succeed because the company offers a full suite of tools that allow developers to get up and running quickly while also benefiting from a mentoring environment at the Bluemix Garage.

While the initiatives Cuomo described may pan out, the most important choice by IBM will likely prove to be its decision to embrace an open-source development model. Specifically, it is a big contributor of code to the Hyperledger project, a joint collaboration that also involves Intel, Cisco, and JP Morgan, and it is being shepherded by the Linux Foundation.

Read more at Fortune