
ServerMania – Discover High Availability Cloud Computing, Powered by OpenStack

Cloud computing is growing fast in the world of computer and Internet technology; many companies, organizations, and even individuals are opting for a shared pool of computing resources and services. For starters, cloud computing is a type of Internet-based computing in which users consume hosted services running on shared server resources.

There are fundamentally three types of cloud computing available today: private, public, and hybrid cloud computing…

Configuring a Single Ubuntu Installation as a Dual-Boot Option and a VirtualBox Appliance under Windows 10

I often need to use Windows 10 and Ubuntu on the same machine within a single login session, so I run Ubuntu as a virtual machine in Oracle VirtualBox. But I also like to be able to boot my computer natively into Ubuntu, so a dual-boot configuration is optimal. To get the best of both worlds, I install Ubuntu in a dual-boot configuration alongside Windows, and configure VirtualBox to access the Ubuntu disk partitions as a raw disk image. This allows me to boot directly into Ubuntu, or boot the same Ubuntu installation from within Windows using VirtualBox.
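The raw-disk piece of that setup can be sketched with VBoxManage on the Windows host (a hedged sketch: the VMDK filename and the partition numbers 3,4 below are assumptions; substitute the partitions that actually hold your Ubuntu installation, and run from an elevated prompt):

```shell
# Sketch: expose selected partitions of the first physical disk to
# VirtualBox as a raw-disk VMDK. Partition numbers 3,4 are placeholders.
VMDK=ubuntu-raw.vmdk
if command -v VBoxManage >/dev/null 2>&1; then
    VBoxManage internalcommands createrawvmdk \
        -filename "$VMDK" \
        -rawdisk '\\.\PhysicalDrive0' \
        -partitions 3,4
else
    echo "VBoxManage not found; command shown for reference"
fi
```

The resulting VMDK can then be attached to a VirtualBox VM like any other disk, giving the guest direct access to the Ubuntu partitions.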

Read the full article

How to Create a Partition Larger than 2TB on RHEL6

In this article, you will learn how to create a Linux filesystem larger than 2TB on Red Hat Enterprise Linux 6…

Read the full article: https://mikent.wordpress.com/2012/07/01/how-to-create-partition-larger-than-2tb-on-rhel6a/
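The key point is that an MBR (msdos) partition table cannot address partitions beyond 2TiB, so the disk needs a GPT label. Here is a hedged sketch with parted, demonstrated on a throwaway image file so it is safe to run; on the real system, replace disk.img with the disk device (e.g. /dev/sdb), an assumption you must adjust:

```shell
# MBR stores 32-bit sector counts, capping partitions at 2TiB with
# 512-byte sectors; a GPT label has no such limit.
# Demo on a small image file; same commands apply to a real 3TB disk.
truncate -s 100M disk.img
if command -v parted >/dev/null 2>&1; then
    parted -s disk.img mklabel gpt                  # write a GPT label
    parted -s disk.img mkpart primary ext4 0% 100%  # one partition, whole disk
    parted -s disk.img print                        # verify the layout
else
    echo "parted not installed; commands shown for reference"
fi
# On a real disk you would then create the filesystem:
#   mkfs.ext4 /dev/sdb1
```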

Demystify GNU/Linux boot process with Systemd

The boot process in Systemd

Basically, there are two ways of booting GNU/Linux: the initramfs way, or using a disk partition specifier in your kernel configuration. I won’t explain how the latter works, but you can specify the root device as a kernel parameter, e.g. in GRUB as root=/dev/sda1, or whatever fits your system’s root partition. The good thing about initrd images is that the filesystem resides in your RAM and contains just what you specify: kernel modules, binaries, and of course an init script.

It starts with a kernel

Linux can be built with drivers as loadable modules. First of all, we start with a clean kernel source tree. Then you can configure the kernel features and drivers: navigate with Up and Down, show information with ?, or toggle options with <Space>. The kernel configuration tool also allows you to load a template file; I prefer using a Debian-style kernel configuration. Note that you should save the template as .config right away.

# make mrproper
# make menuconfig

Now we are ready to build the Linux image. Afterward, we install the kernel modules into /lib/modules/<kernel-version> and copy the kernel to the /boot directory. The kernel is assumed to be built for the x86_64 architecture.

# make -j5
# make modules_install
# cp -v arch/x86_64/boot/bzImage /boot/vmlinuz-<kernel-version>

About initrd images

Initramfs images are usually compressed; the kernel supports various compression formats. As mentioned before, you might want to include kernel modules needed early in the boot stage, or certain binaries such as `switch_root` to change the root filesystem from the initrd image to your HDD. A final call to `init` is performed so that systemd takes over control of the boot process. The mkinitramfs script takes the kernel version as a parameter and generates an initrd image for you. You should invoke it in the /boot directory as root.

# mkinitramfs <kernel-version>

The compatibility layer to Sys-V

The most comfortable thing about systemd is its compatibility layer to Sys-V (System V). That’s why it helps to know what is not systemd: /etc/inittab, /etc/rc.d, and /etc/init.d are all related to Sys-V.

Service and target files

In systemd, every command is capable of running as a service. These services can be launched over the systemd message bus and are described by a .service file. Common locations for those files are /lib/systemd/system and /usr/lib/systemd/system.

Here is a sample file from my system, /lib/systemd/system/systemd-logind.service. The most important key is ExecStart=/lib/systemd/systemd-logind in the [Service] section. It tells systemd which command to launch; in this case, it starts the systemd login manager.


#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.

[Unit]
Description=Login Service
Documentation=man:systemd-logind.service(8) man:logind.conf(5)
Documentation=http://www.freedesktop.org/wiki/Software/systemd/logind
Documentation=http://www.freedesktop.org/wiki/Software/systemd/multiseat
Wants=user.slice
After=nss-user-lookup.target user.slice

# Ask for the dbus socket. If running over kdbus, the socket will
# not be actually used.
Wants=dbus.socket
After=dbus.socket

[Service]
ExecStart=/lib/systemd/systemd-logind
Restart=always
RestartSec=0
BusName=org.freedesktop.login1
CapabilityBoundingSet=CAP_SYS_ADMIN CAP_MAC_ADMIN CAP_AUDIT_CONTROL CAP_CHOWN CAP_KILL CAP_DAC_READ_SEARCH CAP_DAC_OVERRIDE CAP_FOWNER CAP_SYS_TTY_CONFIG
WatchdogSec=1min

# Increase the default a bit in order to allow many simultaneous
# logins since we keep one fd open per session.
LimitNOFILE=16384

A .target file specifies prerequisites: once the goal of a target has been accomplished, systemd can proceed with further targets. Targets can declare After and Before ordering against other targets or services. Here’s a sample .target file, /lib/systemd/system/multi-user.target.

#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.

[Unit]
Description=Multi-User System
Documentation=man:systemd.special(7)
Requires=basic.target
Conflicts=rescue.service rescue.target
After=basic.target rescue.service rescue.target
AllowIsolate=yes

A true multi-user system

The file /lib/systemd/system/systemd-user-sessions.service could look like the following. Please consult `man systemd-user-sessions` to get an understanding of ExecStart and ExecStop.

#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.

[Unit]
Description=Permit User Sessions
Documentation=man:systemd-user-sessions.service(8)
After=remote-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/lib/systemd/systemd-user-sessions start
ExecStop=/lib/systemd/systemd-user-sessions stop

You can enable or disable services with the `systemctl` command. To restrict logins to root only, issue the following:

# systemctl disable systemd-user-sessions
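A few more `systemctl` invocations are useful for inspecting the units discussed above (a sketch; unit names vary between distributions, and the commands are guarded so the block degrades gracefully where systemd is absent):

```shell
# Read-only inspection commands; safe to run as a normal user.
TARGET=multi-user.target
if command -v systemctl >/dev/null 2>&1; then
    # Show everything the target pulls in (services, other targets).
    systemctl list-dependencies "$TARGET" --no-pager || true
    # Check whether the login manager service is active.
    systemctl status systemd-logind --no-pager || true
    # List service unit files and their enabled/disabled state.
    systemctl list-unit-files --type=service --no-pager | head -n 10 || true
else
    echo "systemctl not available; commands shown for reference"
fi
```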

 

Zephyr OS, a ‘Secure IoT RTOS,’ You Say?

Earlier this year, the Zephyr project was launched to a world justifiably skeptical of self-anointed “secure” technologies, jaded by the sloganeering of all things IoT, and seemingly saturated by the proliferation and fragmentation of no-size-fits-all microcontroller RTOS platforms. Given those circumstances it is only reasonable that I find myself asked what we could possibly be thinking in launching a new “secure IoT RTOS platform.” Or, should I say, why are we launching yet another one.

As the NXP representative on the Zephyr Project Governance Board, and as an advocate from the outset of the essential value of this initiative, I’m going to give my take on this question, and on what I think are the key considerations.

Read more at Design News

This Week in Open Source News: OSS is the Enterprise’s New Norm, Bulgaria’s Government Mandates Open Source, & More

1) Splice Machine’s decision to go open source reminds us that OSS is the new normal. 

Has Open Source Become the Default Business Model for Enterprise Software? – ZDNet

2) Bulgaria’s government now requires all software written for their use to be open source. Sam Dean weighs the pros and cons.

As it Mandates Open Source, is Bulgaria Opening Questionable Doors? – OStatic

3) Here’s what you should learn from the breach of unpatched Ubuntu forum software.

The Hacking of Ubuntu Linux Forums: Lessons Learned – eWeek

4) “Anything made by a human is vulnerable,” writes Howard Solomon about industry misgivings over The DAO project, a blockchain effort.

As a Blockchain-Based Project Teeters, Questions About the Technology’s Security – IT World Canada

5) Linux users are to be bumped to “a Web-based native version of Skype” due to a Skype rebuild.

Microsoft Kills P2P Skype, Native OS X, Linux Clients – The Register

Container Image Signing

Red Hat engineers have been working to more securely distribute container images. In this post we look at where we’ve come from, where we need to go, and how we hope to get there.

History

When the Docker image specification was introduced it did not have a cryptographic verification model. The most significant reason (for not having one) was the lack of a reliable checksum hash of image content. Two otherwise identical images could have different checksum values. Without a consistent tarsum mechanism, cryptographic verification would be very challenging. With Docker version 1.10, checksums are more consistent and could be used as a stable reference for cryptographic verification. The version 2 image format provides an image manifest digest hash value that is useful for this.

Read more at Red Hat Blog

Fix Bugs, Go Fast, and Update: 3 Approaches to Container Security

Containers are becoming the central piece of the future of IT. Linux has had containers for ages, but they are still maturing as a technology to be used in production or mission-critical enterprise scenarios. With that, security is becoming a central theme around containers. There are many proposed solutions to the problem, including identifying exactly what technology is in place, fixing known bugs, restricting change, and generally implementing sound security policies. This article looks at these issues and how organizations can adapt their approach to security to keep pace with the rapid evolution of containers.

During his talk at the Cloud Foundry Summit, Justin Smith, Director at Pivotal Software, Inc., who is also involved with Cloud Foundry security, mentioned the 2015 Worldwide Threat Assessment, which now lists cyberattacks as the number one threat, ahead of terrorism! “Devices, designed and fielded with minimal security requirements and testing, and an ever-increasing complexity of networks could lead to widespread vulnerabilities in civilian infrastructures and US Government systems,” the report said.

The tech community is less worried about an Armageddon-style cyber attack. “We’re more worried about a bunch of moderate attacks against a bunch of companies. It just devastates the economy. Death by a thousand paper cuts,” said Smith.

Security is fixing bugs

Companies with stakes in the container landscape adopt different approaches toward security. Lars Herrmann, general manager at Red Hat, told me in an interview, “Containers define a different way for the organization to collaborate inside the organization. That’s really the disruptive potential but, in order to do this, we need a certain kind of technology to enable that transformation.”

He said that security is an important aspect of it, because we cannot go into production with lots of different applications without a solid understanding of how to manage security in that environment.

As Linus Torvalds once said, security means bugs, and no software is immune to bugs. Herrmann said that even the developers’ own code might have security issues, so companies need to think about all aspects of the process.

To protect organizations, Red Hat has partnered with Black Duck so that customers can identify and detect which open source technology is in which version, and then correlate that with their back-end database based on what they know about the technology in that version. “We’ll see more solutions that are driving more insights and are making a more fine-grained risk assessment,” Herrmann said.

Scanning is only one part of the picture; the second part is dealing with it. Herrmann said developers could implement policies such that if there is a container in an environment, on a registry that doesn’t meet security criteria, they can automatically trigger workflow and rebuild that container with the latest runtime components to address that security issue.

That’s just one piece of the security jigsaw puzzle.

Security means going faster

Pivotal’s Smith is critical of the way organizations approach security. He said “…to get safer, you go faster. That’s the exact opposite of how organizations think today. What I want in my time at Pivotal and my time as part of the Cloud Foundry community is to build the system that gives no quarter for malware in the data center.”

According to Smith, organizations need to make it harder for attackers to compromise their systems. It should be like playing a video game for the malware author: you have to get to level 100, but you can never get past level five because there’s not enough time.

“What if every server inside my data center had a maximum lifetime of two hours?” Smith said. That situation will frustrate attackers, because it limits the time needed to exploit known vulnerabilities.

When I asked Herrmann about this approach, he partially agreed: going faster may sound safer, but it heavily depends on the nature of the attack. There is no single factor. “How long a system is potentially exposed is one factor. How badly it is exposed is another factor. How many layers of protection you have is certainly another relevant factor, just to name a few.”

Herrmann added that he didn’t actually believe that just because a given container only lives for a couple seconds, nobody would be able to compromise it. “I wouldn’t assume that because the attackers have the same technologies available as you have,” he said.

Another problem with the “going faster makes you safer” approach, Herrmann said, is that “just because I do everything in a continuous integration fashion, I’m always pulling the latest from everywhere… that also means constantly pulling a lot of different risks from lots of different places. Yeah, sometimes that might work well, but what if it doesn’t? What do you do then? Well, just rebuilding the whole thing with broken components is probably not going to fix your problem. Eventually, somebody has to deliver a fix.”

Build systems that can be updated

Leading kernel developer Greg Kroah-Hartman is of the opinion that companies should build systems that are able to be updated. A major security hole is that of unpatched systems running in production. Kroah-Hartman added that, although the Linux community works quickly to fix the bugs so that vendors can push them to their users, vendors don’t always do a good job.

“We have a very bad history of keeping bugs alive for a long time. Somebody did a check of it; most known bugs live for 5 years in systems. These are things that people know and know how to exploit. They’re not closed. That’s a problem in our infrastructure,” said Kroah-Hartman.

Another issue involves the mindset that once a system is set up, you shouldn’t touch it. Herrmann defended that approach and explained that traditional security processes have relied heavily on the principle of restricting change. You work under the assumption that, if you can minimize the risk in every single change, you can minimize the risk in the resulting system. That leads to the idea of “don’t touch a running system,” and whole philosophies are built around it: don’t let a bad change happen; instead, put guardrails on every change so you can separate the bad changes from the good ones.

There is no silver bullet

The bottom line is that there is no silver bullet. Container technologies are evolving fast. As they become mature and continue to evolve, the nature of threats will change over time, involving the security of computers, the security of storage, and the security of networking, for example.

Organizations need to be proactive about security instead of hoping for the best. Organizations need to have a holistic approach towards containers, and they need to borrow from different philosophies: Restrict changes, go fast, and have systems that update quickly. It’s less about software and more about culture.

Easily Encrypt your Flash Drives with Linux

If you travel with sensitive data, you know there are always risks that your information could be lost or stolen. Depending on the nature of your data, that could be a disaster. To that end, you might want to consider encrypting those flash drives. Once encrypted, a passphrase will be required to gain access to said data. No passphrase, no data.

You might think this would be a challenge, or require extensive use of the command line. Fortunately (for those that prefer the GUI way of things), there are two tools that make this process incredibly simple. I’m going to introduce you to those tools, so that you can encrypt your flash drives with ease.

This process can also be used on any drive; you might even consider encrypting all of your external backup drives in case of theft. When you encrypt external drives, modern Linux desktops will prompt you for the passphrase when the devices are mounted, whether automatically or manually. This makes for a very clean, hassle-free system.

Now, let’s get to the encryption. I’ll be using Ubuntu 16.04 to demonstrate the encryption of a pre-existing and a new partition.

Installing the necessary tools

The first thing you must do is install two tools. The first, depending upon your distribution, may already be on your system: gnome-disk-utility, the GUI we’ll use to create and encrypt partitions on the flash drive. The second is cryptsetup, a utility for setting up encrypted filesystems with the help of Device Mapper and dm-crypt.

Both of these tools are found in the standard repositories, so installation can be done with a single command. To install both gnome-disk-utility and cryptsetup, open up a terminal window and issue the following command:

sudo apt-get install -y gnome-disk-utility cryptsetup

Type your sudo password and hit the Enter key. The installation should go off without a hitch. Now that you have everything installed, let’s encrypt.

Encrypting an existing partition

The first process we’ll undertake is the encryption of a pre-existing partition. The tool we will use is gnome-disks. Before we begin this particular process, it is crucial that you back up the data on your external drive. Do not continue on until the data has been backed up (otherwise, you run the risk of losing your data).

With your data backed up, unmount the external drive, but leave it plugged in.

From your desktop menu, search for (and launch) the app labeled Disks. You can also issue the command gnome-disks from a terminal window. When the utility starts, you can select the flash drive in question from the left navigation pane (Figure 1).

Figure 1: GNOME Disks ready to encrypt your data.

Once you’ve selected the correct external (in this case flash) drive, you then click on the partition you want to encrypt. With the partition selected, click on the gear icon and then select Format Partition. In the resultant window (Figure 2), select these two options:

  • Don’t overwrite existing data

  • Encrypted, compatible with Linux systems

You’ve probably figured out the one limitation for this process already. If not, know this: The encryption you’re about to apply will only be readable from other Linux systems. If you need to read the encrypted data from the Windows platform, you will be out of luck. One other caveat is that any machine used to read the encrypted device will need to have cryptsetup installed as well (otherwise, it won’t be able to mount the encrypted partition).

Give the encrypted partition a name and enter (and verify) the encryption passphrase. Make sure the passphrase is strong and then click Format. You will be prompted again to verify the action and click Format a second time. Depending upon the size of the drive (and the data it houses), this can take some time. When the process completes, the partition will appear with a lock icon in the lower right corner (Figure 3).

Figure 3: An encrypted partition showing in Disks.

Remember when I mentioned backing up the data on your drive? Now is when you’re going to be glad you did. I’ve tested this numerous times and, even when selecting Don’t overwrite existing data, the data is always overwritten. Because of this, you’ll now need to copy that data back onto your now-encrypted drive.

Encrypting a new partition

Let’s return to GNOME Disks and create a brand new, encrypted filesystem on our flash drive. To do this, insert the drive in question and then open Disks. Select the flash drive from the left navigation and then select the free space on the drive. Click the + button and then, in the resulting window (Figure 4), set the following options:

  • Partition size: Set the desired size for your new partition

  • Erase: Don’t overwrite existing data

  • Type: Encrypted, compatible with Linux systems

  • Name: Give the new partition a name

  • Passphrase: Set the encryption passphrase

Figure 4: Creating a new, encrypted partition.

Click Create and the new, encrypted partition will be formatted.

Congratulations, your flash drive (or external drive) is now encrypted. You can view that data on any Linux machine that has cryptsetup installed. Do note, when you plug in that encrypted drive, you will be prompted for the passphrase as well as how long to remember the passphrase (Figure 5). I highly recommend selecting Forget password immediately. By selecting that option, the passphrase will not be retained in the keyring once you’ve ejected the drive. Otherwise, you could leave yourself open to someone slipping the drive in and gaining access to your data.

Figure 5: Choose wisely when selecting how long to retain your encryption passphrase.

Your encryption awaits

Although this method does have its drawbacks (the drive is only readable on a Linux system that also has cryptsetup installed), it makes encrypting partitions on external drives incredibly simple.

Remember to back up your data before undertaking these steps; otherwise, you run the risk of losing said data. Now, you can enjoy having your data secured under a layer of encryption.
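For reference, roughly the same LUKS setup that the Disks GUI performs can be sketched from the terminal with cryptsetup (an assumption-laden sketch: /dev/sdX1 is a placeholder for your partition, and because the commands are destructive and need root, the block is guarded):

```shell
# DESTRUCTIVE: wipes the partition. /dev/sdX1 is a placeholder.
DEV=/dev/sdX1
if [ -b "$DEV" ] && command -v cryptsetup >/dev/null 2>&1; then
    cryptsetup luksFormat "$DEV"        # prompts for the passphrase
    cryptsetup luksOpen "$DEV" secret   # maps it at /dev/mapper/secret
    mkfs.ext4 /dev/mapper/secret        # create a filesystem inside
    cryptsetup luksClose secret         # detach when finished
else
    echo "skipping: $DEV is not a block device here"
fi
```

After luksOpen, the mapped device can be mounted like any ordinary partition; luksClose must be run only after unmounting it.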

Learn more about security and system administration with the Essentials of System Administration course from The Linux Foundation.

AT&T, Orange Team Up to Create Open SDN, NFV Standards

AT&T and Orange are joining forces in a new agreement to collaborate on open source activities related to SDN and NFV, aiming to develop new standards that carriers can follow as they implement virtualization in their networks.

Set on achieving three goals (simplifying technological integration, increasing operational efficiency, and reducing costs), the two service providers said that they will identify forums for industry standardization discussions to drive those efforts forward.

Read more at Fierce Telecom