
9 Ways to Harden Your Linux Workstation After Distro Installation


So far in this series, we’ve walked through security considerations for your SysAdmin workstation from choosing the right hardware and Linux distribution, to setting up a secure pre-boot environment and distro installation. Now it’s time to cover post-installation hardening.

What you do depends greatly on your distribution of choice, so it is futile to provide detailed instructions in a blog series such as this one. However, here are some essential steps you should take:

  • Globally disable firewire and thunderbolt modules

  • Check your firewalls to ensure all incoming ports are filtered

  • Make sure root mail is forwarded to an account you check

  • Set up an automatic OS update schedule, or update reminders

In addition, you may also consider some of these nice-to-have steps to further harden your system:

  • Check to ensure sshd service is disabled by default

  • Configure the screensaver to auto-lock after a period of inactivity

  • Set up logwatch

  • Install and use rkhunter

  • Install an Intrusion Detection System

As I’ve said before, security is like driving on the highway — anyone going slower than you is an idiot, while anyone driving faster than you is a crazy person. The guidelines in this series are merely a basic set of core safety rules that is neither exhaustive, nor a replacement for experience, vigilance, and common sense. You should adapt these recommendations to suit your environment.

Blacklisting modules

To blacklist the firewire and thunderbolt modules, add the following lines to /etc/modprobe.d/blacklist-dma.conf:

blacklist firewire-core 

blacklist thunderbolt

The modules will be blacklisted upon reboot. It doesn't hurt to do this even if you don't have these ports (though it won't do anything either).
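As a quick sanity check, the file contents can be generated and verified from a short script. This sketch writes to /tmp so it can run unprivileged; on a real system the target is /etc/modprobe.d/blacklist-dma.conf and the script must run as root:

```shell
#!/bin/sh
# Illustrative only: write the two blacklist entries and verify them.
# Real target: /etc/modprobe.d/blacklist-dma.conf (requires root).
conf=/tmp/blacklist-dma.conf
printf 'blacklist firewire-core\nblacklist thunderbolt\n' > "$conf"
grep -c '^blacklist ' "$conf"   # prints 2 if both entries landed
```

After a reboot, `lsmod | grep -e firewire_core -e thunderbolt` should come back empty.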

Root mail

By default, root mail is just saved on the system and tends to never be read. Make sure you set your /etc/aliases to forward root mail to a mailbox that you actually read, otherwise you may miss important system notifications and reports:

# Person who should get root’s mail 

root:                  bob@example.com

Run newaliases after this edit and test it out to make sure that it actually gets delivered, as some email providers will reject email coming in from nonexistent or non-routable domain names. If that is the case, you will need to play with your mail forwarding configuration until this actually works.
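Concretely, that test can look like this, assuming a local mail command (from mailx or bsd-mailx) is installed:

```shell
newaliases                                    # rebuild the alias database
echo "root alias test" | mail -s "alias test" root
# The message should arrive at the address configured above (e.g. bob@example.com)
```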

Firewalls, sshd, and listening daemons

The default firewall settings will depend on your distribution, but many of them will allow incoming sshd ports. Unless you have a compelling legitimate reason to allow incoming ssh, you should filter that out and disable the sshd daemon.

systemctl disable sshd.service 

systemctl stop sshd.service

You can always start it temporarily if you need to use it.

In general, your system shouldn’t have any listening ports apart from responding to ping. This will help safeguard you against network-level 0-day exploits.
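One way to audit this is to count listening TCP sockets with ss from the iproute2 suite (substitute netstat if your system lacks it); a small sketch:

```shell
#!/bin/sh
# Count listening TCP sockets, excluding ss's header line.
# On a locked-down workstation this should be at or near zero.
ss -tln 2>/dev/null | tail -n +2 | wc -l
```

Run `ss -tulnp` as root to see which process owns each listening socket, UDP included.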

Automatic updates or notifications

It is recommended to turn on automatic updates, unless you have a very good reason not to do so, such as fear that an automatic update would render your system unusable (it's happened in the past, so this fear is not unfounded). At the very least, you should enable automatic notifications of available updates. Most distributions already have this service running for you, so chances are you don't have to do anything. Consult your distribution documentation to find out more.
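As an example of what this looks like on the two big package families, the stock tooling can be enabled as follows (package and unit names are the common defaults; treat them as assumptions and check your distribution's documentation):

```shell
# Fedora / RHEL family
dnf install dnf-automatic
systemctl enable --now dnf-automatic.timer

# Debian / Ubuntu
apt-get install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades
```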

You should apply all outstanding errata as soon as possible, even if something isn’t specifically labeled as “security update” or has an associated CVE code. All bugs have the potential of being security bugs and erring on the side of newer, unknown bugs is generally a safer strategy than sticking with old, known ones.

Watching logs

You should have a keen interest in what happens on your system. For this reason, you should install logwatch and configure it to send nightly activity reports of everything that happens on your system. This won’t prevent a dedicated attacker, but is a good safety-net feature to have in place.

Note that many systemd distros no longer automatically install the syslog server that logwatch needs (because systemd relies on its own journal), so you will need to install and enable rsyslog to make sure your /var/log is not empty before logwatch will be of any use.
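On a Debian-family system, for instance, the pieces can be put in place like this (package names are the usual ones; use the dnf/yum equivalents on Red Hat derivatives):

```shell
apt-get install rsyslog logwatch
systemctl enable --now rsyslog

# One-off report to the terminal, to confirm logwatch has data to chew on:
logwatch --range yesterday --output stdout
```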

Rkhunter and IDS

Installing rkhunter and an intrusion detection system (IDS) like aide or tripwire will not be that useful unless you actually understand how they work and take the necessary steps to set them up properly (such as, keeping the databases on external media, running checks from a trusted environment, remembering to refresh the hash databases after performing system updates and configuration changes, etc). If you are not willing to take these steps and adjust how you do things on your own workstation, these tools will introduce hassle without any tangible security benefit.

We do recommend that you install rkhunter and run it nightly. It’s fairly easy to learn and use, and though it will not deter a sophisticated attacker, it may help you catch your own mistakes.
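A typical manual rkhunter cycle looks like the following; these are standard flags, but consult the man page shipped with your packaged version:

```shell
rkhunter --update      # refresh rkhunter's data files
rkhunter --propupd     # re-baseline file properties after legitimate updates
rkhunter --check --sk  # run the scan; --sk skips the interactive keypress prompts
```

Most distro packages also drop a nightly job into /etc/cron.daily for you, so running it every night usually requires no extra setup.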

The first part of this series walked through distro installation and some pre- and post-installation security guidelines. In the next article, we'll cover some of the best storage options for backing up your workstation, and then we'll dive into some more general best practices around web browser security, SSH and private keys, and more.

Workstation Security

Read more:

3 Security Features to Consider When Choosing a Linux Workstation

How to Choose the Best Linux Distro for SysAdmin Workstation Security

4 Security Steps to Take Before You Install Linux

Security Tips for Installing Linux on Your SysAdmin Workstation

LLVM-Powered Pocl Puts Parallel Processing on Multiple Hardware Platforms

Open source implementation of OpenCL automatically deploys code across numerous platforms, speeding machine learning and other jobs.

LLVM, the open source compiler framework that powers everything from Mozilla’s Rust language to Apple’s Swift, emerges in yet another significant role: an enabler of code deployment systems that target multiple classes of hardware for speeding up jobs like machine learning.

To write code that can run on CPUs, GPUs, ASICs, and FPGAs—hugely useful with machine learning apps—it’s best to use the likes of OpenCL, which allows a program to be written once, then automatically deployed across different types of hardware.

Read more at InfoWorld

Game of Nodes: Network Operators vs. Cloud Operators

There’s a whirlwind of information on the topic of network commoditization. Seriously. Just search in your favorite search engine for “SDN,” “NFV,” or “telco cloud.” You will find dozens of open source projects, communities, forums, architectural definitions, standards, standard bodies, news articles, press releases, and blog sites dedicated to the aforementioned.

With such a myriad of information, you’d think people out there would have a deep understanding of why these topics are creating so much noise. I make it a point to ask everyone I meet this simple question: “In a few sentences, what does telco cloud, NFV, or SDN mean to you?” 

Read more at The New Stack

Android Apps on Linux PCs: Now Anbox Tool Runs Smartphone Software Natively

A new open-source project, Anbox, from a Canonical engineer lets you run Android apps natively on Ubuntu and other Linux-powered desktops.

It differs from several existing projects that allow Android apps to run on PCs. Instead of using emulators, Anbox employs Linux namespaces to run Android in a container on the same kernel as the host operating system, allowing Android software to run like native apps on the host.

Fels explains in a blog post that he began the project in 2015 with “the idea of putting Android into a simple container based on LXC and bridging relevant parts over to the host operating system while not allowing any access to real hardware or user data”.

Read more at ZDNet

A 1986 Bulletin Board System Has Brought the Old Web Back to Life in 2017

Today, many can be forgiven for thinking that the digital communications revolution kicked off during the mid-1990s, when there was simply an explosion of media and consumer interest in the World Wide Web. Just a decade earlier, however, the future was now for the hundreds of thousands of users already using home computers to communicate with others over the telephone network. The online culture of the 1980s was defined by the pervasiveness of bulletin board systems (BBS), expensive telephone bills, and the dulcet tones of a 1200 baud connection (or 2400, if you were very lucky). While many Ars readers certainly recall bulletin board systems with pixelated reverence, just as many are likely left scratching their heads in confusion (“what exactly is a BBS, anyway?”). 

It’s a good thing, then, that a dedicated number of vintage computing hobbyists are resurrecting these digital communities that were once thought lost to time. With some bulletin board systems being rebooted from long-forgotten floppy disks and with some still running on original 8-bit hardware, the current efforts of these seasoned sysops (that is, system administrators) provide a very literal glimpse into the state of online affairs from more than three decades ago.

Read more at Ars Technica

Singularity Containers for HPC, Reproducibility, and Mobility

Containers are an extremely mobile, safe and reproducible computing infrastructure that is now ready for production HPC computing. In particular, the freely available Singularity container framework has been designed specifically for HPC computing. The barrier to entry is low and the software is free.

At the recent Intel HPC Developer Conference, Gregory Kurtzer (Singularity project lead and LBNL staff member) and Krishna Muriki (Computer Systems Engineer at LBNL) provided a beginning and advanced tutorial on Singularity. One of Kurtzer’s key takeaways: “setting up workflows in under a day is commonplace with Singularity”.

Singularity was designed so that applications which run in a container have the same “distance” to the host kernel and hardware as natively running applications as shown below.

Read more at The Next Platform

Micro – A Modern Terminal Based Text Editor with Syntax Highlighting

Micro is a modern, easy-to-use and intuitive cross-platform terminal-based text editor that works on Linux, Windows, and macOS. It is written in the Go programming language and designed to utilize the full capabilities of modern Linux terminals.

It is intended to replace the well-known nano editor by being easy to install and use on the go. It also aims to be pleasant to use around the clock (because you either prefer to work in the terminal, or you need to operate a remote machine over ssh).

Read more at Tecmint

Trivial Transfers with TFTP, Part 3: Usage

In previous articles, we introduced TFTP and discussed why you might want to use it, and we looked at various configuration options. Now let’s try and move some files around. You can do this with some comfort now that you know how to secure your server a little better.

Testing 1, 2, 3

To test your server, you obviously need a client to connect with. Thankfully, we can install one very easily:

# apt-get install tftp

Red Hat derivatives can manage a client install like so:

# yum install tftp

If you look up your server’s IP address using a command like the one below, then it’s possible to connect to your TFTP server from anywhere (assuming that your TCP Wrappers configuration or IPtables rules let you, of course).

# ip a

Once you know the IP address, it’s very simple to get going. You can connect like this to the server from the client:

# tftp 10.10.10.10

Now you should be connected. (If it didn't work, check your firewalling, or try “telnet” to port 69 on your TFTP server's IP address.) Next, you can run a “status” command as follows:

tftp> status

Connected to 192.168.0.9.
Mode: netascii Verbose: off Tracing: off
Rexmt-interval: 5 seconds, Max-timeout: 25 seconds

At this point, I prefer to use verbose output by simply typing this command:

tftp> verbose

You can opt to download binaries or plain text files by typing this for plain text:

tftp> ascii

Or, you can force binaries to download correctly with:

tftp> binary

To give us some content to download, I created a simple text file like this:

# echo hello > HERE_I_AM

This means that the file HERE_I_AM contains the word “hello”. I then moved that file into our default TFTP directory, which we saw in use previously in the main config file, /etc/inetd.conf. That directory — from which our faithful daemon is serving — is called /srv/tftp, as we can see in Listing 3.

Because this is just a plain text file, there’s little need to enable binary mode, and we’ve already written verbose, so now it’s just a case of transferring our file.

If you’re at all familiar with FTP on the command line, then you’ll have no difficulty picking up the parlance. It is simply “get” to receive and “put” to place. My sample file HERE_I_AM can be retrieved as follows.

tftp> get HERE_I_AM

getting from 192.168.0.9:HERE_I_AM to HERE_I_AM [netascii]

Received 7 bytes in 0.1 seconds [560 bits/sec]

The above example offers the verbose output when that mode is enabled. You can glean useful information, such as that we’re not using binary mode but just “netascii” mode. Additionally, you can see how many bytes were transferred and how quickly. In this case, the data was seven bytes in size and took a tenth of a second, at half a kilobyte (or so) a second, to complete.

Compare and contrast that to the non-verbose mode output, and I’m sure you’ll agree it’s worth using:

tftp> get HERE_I_AM

Received 7 bytes in 0.0 seconds

If you feel the need to obfuscate your TFTP server’s port number then, after editing the /etc/services file, you need to connect with your client software, like so:

# tftp 10.10.10.10 11111

Additionally, don’t be too frightened of requesting multiple files on one line. Achieve just that by using this syntax:

tftp> get one.txt two.txt three.txt four.txt five.txt

If you run into problems, there are a couple of troubleshooting options to explore. You might, for example, have a saturated network link due to a broadcast storm, or a misbehaving device causing weird network oddities. You will be pleased to learn, however, that you can adjust timeouts. First, from the TFTP client prompt, we can set timeouts on a per-packet basis like so:

tftp> rexmt 10

This shows us setting the retransmission timeouts on a per-packet basis to 10 seconds.

To set the total-transfer timeout, for the entire transaction, adjust the following setting, like this:

tftp> timeout 30

Another useful tool for debugging is the “trace” functionality. It can be enabled as follows:

tftp> trace
Packet tracing on.

Now, each transfer will look very noisy, as below, which should help with your troubleshooting:

tftp> get HERE_I_AM
sent RRQ <file=HERE_I_AM, mode=netascii>

received DATA <block=1, 512 bytes> 

sent ACK <block=1> 

received DATA <block=2, 512 bytes>

sent ACK <block=2> 

received DATA <block=3, 512 bytes>

[..snip..]

From the above information, you should be able to tell at which point a transfer fails and perhaps discern a pattern of behavior.

Incidentally, if you want to quit out of the TFTP client prompt, then hitting the “q” key should suffice.
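Incidentally, if you would rather script a transfer than type at the prompt, most clients will happily read the same commands from standard input. A sketch, using the address and file from the examples above:

```shell
tftp 10.10.10.10 <<'CMDS'
verbose
binary
get HERE_I_AM
quit
CMDS
```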

In this section, we have seen how to configure and receive files from a TFTP server. As mentioned, it's a good idea to firewall port 69 from the outside world if you experiment with this software, and especially if you deploy it in production.

I recommend that you lock down your server so it can only be accessed by a few machines, using iptables or TCP Wrappers. Although it's an older technology, TFTP is undoubtedly still a useful tool to have in your toolbox, and it shouldn't be dismissed as only being useful for passing configuration to workstations or servers as they boot up.
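As a sketch of such an iptables lockdown (the trusted address 10.10.10.20 is illustrative):

```shell
# Allow TFTP (UDP port 69) from one trusted host only; drop everything else
iptables -A INPUT -p udp --dport 69 -s 10.10.10.20 -j ACCEPT
iptables -A INPUT -p udp --dport 69 -j DROP
```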

Systemd

Incidentally, to start and stop the TFTP server (or restart it after making a config change) on modern systemd systems you can run this command, substituting “stop” with “restart” or “start” where needed.

# systemctl stop openbsd-inetd

On older systems, one of these commands will usually suffice.

# /etc/init.d/openbsd-inetd start

# service openbsd-inetd restart

Simply change openbsd-inetd to the name of the appropriate inetd or xinetd scripts for your distribution. Remember that you can find out service names by running a command like this:

# ls /etc/init.d

This even works on modern systemd versions, but you need to look closely at the resulting file listing because, let's face it, conduits to “inetd” might be considered a throwback in some circles and can have strange filenames.

EOF

If you decide to take your life into your own hands and serve TFTP across the Internet, let me offer one word of warning, or rather, one acronym: NAT. If Network Address Translation is involved in remote connections, then you may struggle with TFTP transfers because they use UDP. You need the NAT router to act as a slightly more advanced proxy to make this work. You might look at the renowned security software pfSense, which can apparently assist.

We have looked at a number of TFTP's features. Clearly, there are specific circumstances when the excellent TFTP is a useful tool that can be used quickly and effectively with little configuration.

At other times, admittedly, TFTP won't quite cut the mustard. In such cases, sFTP and standard FTP might be more appropriate. Before searching for those packages, however, have a quick look to see if the features you need are present within TFTP's toolkit. You might be pleasantly surprised at what you find. After all, many of the tools we use today hail from the time from which TFTP came.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

Advance your career in Linux System Administration! Check out the Essentials of System Administration course from The Linux Foundation.

Read previous articles:

Trivial Transfers with TFTP, Part 1: Why Use It?

Trivial Transfers with TFTP, Part 2: Configuration

Open Source Project Directors in Cloud, Blockchain, IoT, SDN to Speak at Open Source Summit in Japan

Executive directors from top open source projects in cloud computing, blockchain, Internet of Things, and software-defined networking will keynote next month at Open Source Summit Japan, The Linux Foundation has announced. The full agenda, now available on the event website, also features a panel of Linux kernel developers and The Linux Foundation Executive Director Jim Zemlin.

LinuxCon, ContainerCon and CloudOpen have combined under one umbrella name in 2017 – Open Source Summit. More than 600 open source professionals, developers and operators will gather May 31-June 2  in Tokyo to collaborate, share information, and learn about the latest in open technologies, including Linux, containers, cloud computing and more.

Confirmed keynote speakers at this year’s event include:

  • Brian Behlendorf, Executive Director, Hyperledger

  • Philip DesAutels, Sr. Director of IoT, The Linux Foundation

  • Arpit Joshipura, General Manager, Networking, The Linux Foundation

  • Abby Kearns, Executive Director, Cloud Foundry Foundation

  • Jim Zemlin, Executive Director, The Linux Foundation

  • Linux Kernel Panel with Alice Ferrazzi, Gentoo Kernel Project Leader; Greg Kroah-Hartman, Linux Foundation Fellow; Steven Rostedt, Red Hat; and Dan Williams, Intel

Session highlights include:

  • TensorFlow in the Wild: From Cucumber Farmer to Global Insurance Firm – Kazunori Sato, Google

  • Fast releasing and Testing of Gentoo Kernel Packages and Future Plans of the Gentoo Kernel Project – Alice Ferrazzi, Gentoo Kernel Project Leader

  • Testing at Scale – Andrea Frittoli, IBM

  • Device-DAX: Towards Software Defined Memory – Dan Williams, Intel

  • Kubernetes Ground Up – Vishnu Kannan, Google

View the full agenda of sessions.

Linux.com readers get 5% off the “attendee” registration with code LINUXRD5. Register now and save $150 through April 16.

Automotive Linux Summit (ALS) is co-located with Open Source Summit Japan. Attendees may add on registration for ALS at no additional charge.                               

Applications for diversity and needs-based scholarships are also being accepted.

Top 5 Programming Languages for DevOps

I’ve been focused on infrastructure for the majority of my career, and the specific technical skills required have shifted over time. In this article, I’ll lay out five of the top programming languages for DevOps, and the resources that have been most helpful for me as I’ve been adding those development skills to my infrastructure toolset.

Knowing how to rack and stack servers isn’t an in-demand skill at this stage. Most businesses aren’t building physical datacenters. Rather, we’re designing and building service capabilities that are hosted in public cloud environments. The infrastructure is configured, deployed, and managed through code. This is the heart of the DevOps movement—when an organization can define their infrastructure in lines of code, automating most (if not all) tasks in the datacenter becomes possible.

Read more at OpenSource.com