
Cloud Foundry’s Security Strategy: Rotate, Repair, Repave

Most enterprises are addressing security at the wrong tempo: they roll out what they assume are secure applications and infrastructure, and then are slow to make any changes, fretful that reconfigurations might open security holes.

But the threat landscape is always changing, and enterprises need to change their security practices to reflect this fluidity, said Justin Smith, a Pivotal security engineer heavily involved in Cloud Foundry security, speaking at the opening of the Cloud Foundry Summit, taking place this week in San Francisco.

“To get safer, you have to go faster, and that is the exact opposite of how organizations work today,” Smith said. “Continual change is a concept we have to embrace in enterprise security.”

Read more at The New Stack

7 Essential Skill-Building Courses for the Open Source Jobs Market

Dice and The Linux Foundation recently released an updated Open Source Jobs Report that examines trends in open source recruiting and job seeking. The report clearly shows that open source professionals are in demand and that those with open source experience have a strong advantage when seeking jobs in the tech industry. Additionally, 87 percent of hiring managers say it’s hard to find open source talent.

The Linux Foundation offers many training courses to help you take advantage of these growing job opportunities. The courses range from basic to advanced and offer essential open source knowledge that you can learn at your own pace or through instructor-led classes.

This article looks at some of the available training courses and other resources that can provide the skills needed to stay competitive in this hot open source job market.  

Networking Courses            

The Open Source Jobs Report highlighted networking as a leading emergent technology — with 21 percent of hiring managers saying that networking has the biggest impact on open source hiring. To build these required networking skills, here are some courses to consider.

Essentials of System Administration

This introductory course will teach you how to administer, configure, and upgrade Linux systems. You’ll learn all the tools and concepts necessary to efficiently build and manage a production Linux infrastructure including networking, file system management, system monitoring, and performance tuning. This comprehensive, online, self-paced course also forms the basis for the Linux Foundation Certified System Administrator skillset.

Advanced Linux System Administration and Networking

The need for sys admins with advanced administration and networking skills has never been greater. This course is designed for system administrators and IT professionals who need to gain a hands-on knowledge of Linux network configuration and services as well as related topics such as basic security and performance.

Software Defined Networking with OpenDaylight

Software Defined Networking (SDN) is a rapidly emerging technology that abstracts networking infrastructure away from the actual physical equipment. This course is designed for experienced network administrators who are either migrating to or already using SDN and/or OpenDaylight, and it provides an overview of the principles and methods upon which this technology is built.

Cloud Courses

Cloud technology experience is even more sought after than networking skills — with 51 percent of hiring managers stating that knowledge of OpenStack and CloudStack has a big impact on open source hiring decisions.

Introduction to Cloud Infrastructure Technologies

As companies increasingly rely on cloud products and services, it can be overwhelming to keep up with all the technologies that are available. This free, self-paced course will give you a fundamental understanding of today’s top open source cloud technology options.

Essentials of OpenStack Administration

OpenStack adoption is expanding rapidly, and there is high demand for individuals with experience managing this cloud platform. This instructor-led course will teach you everything you need to know to create and manage private and public clouds with OpenStack.

OpenStack Administration Fundamentals

This online, self-paced course will teach you what you need to know to administer private and public clouds with OpenStack. This course is also excellent preparation for the Certified OpenStack Administrator exam from the OpenStack Foundation.

Open Source Licensing and Compliance

A good working knowledge of open source licensing and compliance is critical when contributing to open source projects or integrating open source software into other projects. The Compliance Basics for Developers course teaches software developers why copyrights and licenses are important and explains how to add this information appropriately. This course also provides an overview of the various types of licenses to consider.    

Along with these — and many other — training courses, the Linux Foundation also offers free webinars and ebooks on various topics that can help you get started building your career in open source.

 


Ciena Says Toolkit Makes DevOps Easier

Ciena is unveiling a new software toolkit for its Blue Planet orchestration platform designed to help network operators embrace a DevOps approach to adding new services and features to their virtualized network infrastructure. The new tools can be used by telecom operators’ own personnel or in conjunction with third-party developers or vendors.

The Blue Planet DevOps Toolkit is intended to help network operators break their dependence on professional services from vendors or systems integrators and begin taking advantage of their investment in SDN and NFV to reduce costs in adding new features or services, or making changes. It is also targeted for use by vendor partners of Ciena and systems integrators.

Read more at Light Reading

Cloud Foundry ‘Dojo’ Opening in Seattle, Hosted by HP Enterprise

Seattle will be one of only seven cities in North America and the UK to host a Cloud Foundry “Dojo,” giving programmers a six-week bootcamp to attain the ability to contribute source code to the popular open-source platform for developing cloud apps. Hewlett Packard Enterprise will host the Cloud Foundry Dojo in Seattle.

“By opening a Dojo in Seattle, we will draw on a rich ecosystem of cloud providers and developers in the rapidly growing Puget Sound market to increase the growth of the Cloud Foundry project,” said Bill Hilf, senior vice president and general manager for HPE’s cloud business, in a post announcing the news.

Read more at Geek Wire

Linux System Monitoring and More with Auditd

One of the keys to protecting a Linux system is to know what’s going on inside it — what files change, who accesses what and when, and which applications get run. Incron was the go-to tool for monitoring file changes until some years ago but, despite rumors to the contrary, its development seems to have stalled about four years ago. Nonetheless, you can still download and use it and try out the examples I talked about in a tutorial published elsewhere.

The newer systemd contains some features that allow for monitoring, but it is a bit clumsy, and the feedback it gives is far from detailed enough for a forensic analysis or for triggering an event-specific application.

These days, your best bet to monitor all your stuff is probably auditd. Auditd is also a good option because, apart from running comprehensive checks, the auditing itself happens at the kernel level, below userspace, which makes it much harder to subvert. This is an advantage over shell-based auditing systems, which will not give accurate information if the system is already compromised before they run.

Audit is actively developed by Red Hat and is available for most, if not all, major distributions. If it is not already installed on your system, you can find it by searching in your system’s repositories. In Debian-based systems, the package is called auditd, while in RPM-based systems, it shows up simply as audit. In most Red Hat-related systems, such as Fedora and CentOS, auditd is usually installed by default.
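If you script your setup across mixed fleets, the differing package names are easy to trip over. Here is a small, hypothetical helper (purely illustrative, not part of audit's tooling) that maps a distro family to the package name you would pass to your package manager:

```shell
# Hypothetical helper: return the audit package name for a distro family.
# Debian-based systems install "auditd"; RPM-based systems install "audit".
audit_pkg() {
    case "$1" in
        debian|ubuntu)      echo "auditd" ;;  # e.g., sudo apt-get install auditd
        fedora|centos|rhel) echo "audit"  ;;  # e.g., sudo dnf install audit
        *)                  echo "unknown" ;;
    esac
}

audit_pkg ubuntu   # prints: auditd
audit_pkg fedora   # prints: audit
```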

Auditd is made up of several components, but for our purposes today, what you need are auditd itself, the actual daemon that monitors the system, and aureport, a tool that generates reports culled from auditd’s logs.

Installation

First things first, though. Install the audit or auditd package using your distribution’s software manager and check that it is running. Most modern Linux distributions run auditd as a systemd service, so you can use

> systemctl status auditd.service

to see if it’s active once installed. If it is there, but not running, you can jumpstart it with

> systemctl start auditd.service

or configure it to run at boot with

> systemctl enable auditd.service

Before checking reports and so on, let it run for a while, so it can fill up its logs with events.

Reporting System

Right out of the box, auditd already logs some events it deems critical, with no extra configuration needed. You can check what it is watching by running aureport without any arguments. Note that you must be root or have root privileges (i.e., use sudo) to access audit’s toolbox:

> aureport
Summary Report 
====================== 
Range of time in logs: 18/05/16 09:47:34.453 - 22/05/16 11:28:03.168 
Selected time for report: 18/05/16 09:47:34 - 22/05/16 11:28:03.168 
Number of changes in configuration: 195 
Number of changes to accounts, groups, or roles: 30 
Number of logins: 5 
Number of failed logins: 0 
Number of authentications: 136 
Number of failed authentications: 9 
Number of users: 4 
Number of terminals: 12 
Number of host names: 2 
Number of executables: 13 
.
.
.

So, this is already interesting! Take a look at the line that says Number of failed authentications: 9. If you see a large number here, somebody may be trying to access your machine by brute-forcing a user’s password.
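You could even turn that observation into a crude watchdog. The function below is a hypothetical sketch, not part of audit's tooling: it parses the failed-authentication count out of text in the summary's format and warns above a threshold. We feed it a canned sample line here; on a live system you would pipe in aureport itself:

```shell
# Hypothetical watchdog: read an aureport-style summary on stdin and warn
# when the failed-authentication count exceeds a threshold (default 5).
check_failed_auth() {
    threshold=${1:-5}
    # Extract the number after "Number of failed authentications:".
    count=$(grep 'Number of failed authentications' | cut -d ':' -f 2 | tr -d ' ')
    if [ "${count:-0}" -gt "$threshold" ]; then
        echo "WARNING: $count failed authentications (threshold $threshold)"
    else
        echo "OK: ${count:-0} failed authentications"
    fi
}

# On a live system: aureport | check_failed_auth 5
printf 'Number of failed authentications: 9\n' | check_failed_auth 5
# prints: WARNING: 9 failed authentications (threshold 5)
```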

Let’s dig in a little deeper:

> aureport -au
Authentication Report 
============================================ 
# date time acct host term exe success event 
============================================ 
1. 18/05/16 09:47:56 sddm ? ? /usr/lib/sddm/sddm-helper yes 187 
2. 18/05/16 09:48:09 paul ? ? /usr/lib/sddm/sddm-helper yes 199 
3. 18/05/16 09:41:29 root ? pts/1 /usr/bin/su yes 227 
4. 18/05/16 09:57:16 root ? pts/2 /usr/bin/su yes 231 
5. 18/05/16 10:01:57 root ? ? /usr/sbin/groupadd yes 235 
.
.
.

The -au option allows you to see details pertaining to authentication attempts: aureport gives you dates and times, the account being accessed, and whether the authentication was successful.

If you narrow the output down by filtering it through grep:

> aureport -au | grep no
37. 18/05/16 12:18:24 root ? pts/0 /usr/bin/su no 217 
38. 18/05/16 12:18:38 root ? pts/0 /usr/bin/su no 218 
47. 18/05/16 12:41:15 root ? pts/1 /usr/bin/su no 262 
66. 18/05/16 14:09:55 root ? pts/4 /usr/bin/su no 220 
67. 18/05/16 14:10:05 root ? pts/4 /usr/bin/su no 221 
102. 20/05/16 12:37:07 root ? pts/5 /usr/bin/su no 191 
117. 21/05/16 12:10:39 root ? pts/0 /usr/bin/su no 229 
129. 21/05/16 17:59:08 root ? pts/1 /usr/bin/su no 208 
134. 21/05/16 18:32:05 root ? pts/0 /usr/bin/su no 248

you get all the failed authentication attempts. For our little experiment, remember the last line of the report, line 134, and the time and date, 21/05/16 18:32:05, of the last failed attempt to access the root account.

Let’s now look at the users report:

> aureport -u -i
User ID Report 
==================================== 
# date time auid term host exe event 
==================================== 
1. 18/05/16 09:47:35 unset ? ? /usr/lib/systemd/systemd 136 
2. 18/05/16 09:47:35 unset ? ? /usr/lib/systemd/systemd-update-utmp 137 
3. 18/05/16 09:47:35 unset ? ? /usr/lib/systemd/systemd 138 
4. 18/05/16 09:47:45 unset ? ? /usr/lib/systemd/systemd 139 
5. 18/05/16 09:47:45 unset ? ? /usr/lib/systemd/systemd 140 
.
.
.

The -u option tells aureport to show user activity, and -i tells it to show the user names instead of their ID numbers.

Again, aureport gives us a lot to sift through. What we want to know is who last tried to access root but failed. If we copy the date and time of that last failed attempt, 21/05/16 18:32:05, and use it with grep to filter out some of the data, we get:

> aureport -u -i|grep "21/05/16 18:32:05"
2324. 21/05/16 18:32:05 paul pts/3 ? /usr/bin/su 201

Whoops! It was me. Let me clarify that I was not up to anything nefarious. It’s just that I am a klutz and often mistype the root password in my own computer.

With a bit of command-line fu, you could do all of the above in one fell swoop:

> accessdate=`aureport -au | grep no | tail -1 | cut -d ' ' -f 2,3`; 
  aureport -u -i | grep "$accessdate"; unset accessdate
2324. 21/05/16 18:32:05 paul pts/3 ? /usr/bin/su 201
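If the cutting in that one-liner seems opaque, you can replay its text-processing stages over a couple of lines captured from the earlier report, no live audit log required. Splitting on spaces, field 1 is the event count, and fields 2 and 3 are the date and time:

```shell
# Two lines captured from the aureport -au output shown earlier.
sample='66. 18/05/16 14:09:55 root ? pts/4 /usr/bin/su no 220
134. 21/05/16 18:32:05 root ? pts/0 /usr/bin/su no 248'

# Same pipeline as the one-liner, minus the aureport call:
# keep failed attempts, take the last one, extract date and time.
accessdate=$(printf '%s\n' "$sample" | grep no | tail -1 | cut -d ' ' -f 2,3)
echo "$accessdate"   # prints: 21/05/16 18:32:05
```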

Or you could use a temporary intermediate file to show all failed authentication attempts:

> aureport -au | grep no | cut -d ' ' -f 2,3 > accessdates.log; 
  aureport -u -i | grep -f accessdates.log; rm accessdates.log
986. 18/05/16 14:09:55 paul pts/4 ? /usr/bin/su 220 
987. 18/05/16 14:10:05 paul pts/4 ? /usr/bin/su 221 
1577. 20/05/16 12:37:07 paul pts/5 ? /usr/bin/su 191 
1785. 21/05/16 12:10:39 paul pts/0 ? /usr/bin/su 229 
2050. 21/05/16 17:59:08 paul pts/1 ? /usr/bin/su 208 
2090. 21/05/16 18:32:05 paul pts/0 ? /usr/bin/su 248 

As I said, I am a bit of a klutz.

More to Come

There is, of course, a lot more to auditd. I haven’t even touched upon what it is most often used for, that is, customized monitoring of files and directories. I’ll be looking at how to do that and much more in future installments.
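As a small taste of what's coming, persistent audit rules live in files under /etc/audit/rules.d/ (or in /etc/audit/audit.rules on older setups), and a watch rule uses -w for the path, -p for the permissions to watch, and -k for an arbitrary key of your choosing. A sketch of a rule watching /etc/passwd (the key name passwd_changes is just an example label) might look like:

```
# Watch /etc/passwd for writes (w) and attribute changes (a),
# tagging matching events with the key "passwd_changes":
-w /etc/passwd -p wa -k passwd_changes
```

Once the rules are loaded (for example, by restarting auditd), ausearch -k passwd_changes would pull up the matching events. More on all of this next time.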

Why Dynamic Hyperconvergence Is the Gateway to the Software-Defined Data Center

Computing and storage platforms leveraged by most organizations today are not equipped to keep up with the breakneck pace of global business, nor are they able to handle the challenges associated with the massive growth of data. As more CIOs and IT decision makers look to reduce infrastructure costs while increasing efficiency and agility to accommodate the needs of the business, the move toward software-defined technologies marks the beginning of the journey to evolve IT for many organizations.

These technologies are all leading to one place: the software-defined data center (SDDC), which is defined by the use of software to provision and optimize all elements of IT, such as networking, storage, compute and virtualization resources. Hyperconverged infrastructure (HCI) is a powerful, emergent technology that is the first step on the path to the SDDC.

Read more at The Stack

Public Cloud Computing Vendors: A Look at Strengths, Weaknesses, Big Picture

A Cowen & Co. survey of cloud computing customers found that providers are differentiating themselves on cost, support, APIs and other factors. Bottom line: The cloud game won’t be winner take all.

Public cloud vendors are establishing unique characteristics, indicating that the market won’t be a zero-sum game and will instead support multiple players.

Cowen & Co. conducted a survey of 314 public cloud customers and found Amazon Web Services is the top dog with Microsoft Azure a strong No. 2. Meanwhile, IBM and Google Cloud Platform (GCP), which is grabbing more workloads, are above average in quality of IT support.

Read more at ZDNet

Architecting Containers Part 5: Building a Secure and Manageable Container Software Supply Chain

In Architecting Containers Part 4: Workload Characteristics and Candidates for Containerization we investigated the level of effort necessary to containerize different types of workloads. In this article I am going to address several challenges facing organizations that are deploying containers – how to patch containers and how to determine which teams are responsible for the container images. Should they be controlled by development or operations?

In addition, we are going to take a look at what is really inside a container: it’s typically a repository made up of several layers, not just a single image. Since they are typically layered images, we can think of them as a software supply chain. Thinking of containers this way helps map your current business processes and teams onto the container build process, meeting the needs of a production container deployment in a flexible and manageable way.

Read more at Red Hat Blog

How to Create a Local Red Hat Repository

There are many reasons you may want a local Red Hat Enterprise Linux repository. Bandwidth is a major factor, as downloading updates from the Internet can consume significant time and bandwidth. Whatever your reason, this tutorial will walk you through the process of getting your local repository set up.

Disruption in the Networking Hardware Marketplace

When we talk about disruption, it’s hard to point to any specific moment and be able to say it’s the one that changed things. You can point to the center of the ripples in the pond, and you’d be right to say those ripples were caused by the stone that fell in that spot, but that’s ignoring the arc of the stone, and whatever caused it to travel through the air. Learning what a stone is, then, would be the first step in understanding how it sent ripples through the pond.

The idea behind software-defined networking (SDN) is to abstract physical elements from networking hardware and control them with software. Part of this is decoupling network control from forwarding functions so the network can be programmed directly, but the main idea is that this separation allows for a dynamic approach to networking – something that the increasing disaggregation in IT makes a necessity.

Read more at SDxCentral.