
Through the Looking Glass: Security and the SRE

Even as modern software becomes increasingly distributed, rapidly iterative, and predominantly stateless, today’s approach to security remains largely preventative, focused on and dependent on a fixed point in time. It lacks the rapid, iterative feedback loops that have made modern product delivery successful. The same feedback loops should exist between changes in product environments and the mechanisms employed to keep them secure. Security measures should be iterative and agile enough to change their behavior as often as the software ecosystem in which they operate.

Security controls are typically designed with a particular state in mind (i.e., production release on Day 0). Meanwhile, the system ecosystem that surrounds these controls changes rapidly every day: microservices, machines, and other components are spinning up and down; component changes occur multiple times a day through continuous delivery; external APIs are constantly changing on their own delivery schedules; and so on. Security tools and methods must be flexible enough to match the constant change and iteration in the environment. Without a security feedback loop, the system will eventually drift into security failure, just as a system without a development feedback loop would drift into operational unreliability.

Read more at OpenSource.com

Juniper’s OpenContrail SDN Rebranded as Tungsten Fabric

Sometimes, rebranding is a good thing. Juniper Networks’ OpenContrail was an excellent open-source software-defined networking (SDN) program. But it was perceived as being too much under Juniper’s thumb to draw many outside developers. Realizing this, Juniper spun OpenContrail out into a community-controlled project under The Linux Foundation. That left the name, so Juniper and the community decided to rebrand it: Tungsten Fabric.

Like its direct ancestor, Tungsten Fabric is a scalable, multi-cloud networking platform. It provides a single point of control, observability, and analytics for networking and security. The program is also integrated with many cloud technology stacks, including Kubernetes, Mesos, VMware, and OpenStack.

Read more at ZDNet

How to Monitor Network Protocol Traffic on your Data Center Linux Servers

With Linux in your data centers, you value the ability to monitor different network protocols on your servers. With this gathered information, you can troubleshoot issues or tune your servers to get the most out of them. Most Linux administrators might recall ntop, which was the de facto standard text-based tool for monitoring network protocols. That tool has been deprecated in favor of ntopng (ntop next generation). This tool takes ntop to the next level by giving it a web-based interface that is far more powerful and easier to use. I’m going to walk you through the process of installing ntopng on the Ubuntu Server 16.04 platform. The process does require you to install via the command line, so be prepared to type a bit, or copy and paste.

Installation

We’ll be installing the stable build, as opposed to installing the outdated version from the standard repository. The steps for this are as follows:
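The exact steps are in the linked article, but as a rough sketch, the stable build typically comes from ntop’s own package repository rather than Ubuntu’s. The Python sketch below simply automates those shell commands; the repository package URL and the ntopng package name are assumptions based on ntop’s usual apt-stable layout, so check them against the article before running anything.

```python
#!/usr/bin/env python3
"""Rough sketch of a typical ntopng stable-build install on Ubuntu 16.04.

This is not the article's exact procedure; the repository package URL and
package name below are assumptions based on ntop's usual apt-stable layout.
"""
import subprocess

# Assumed location of the package that configures ntop's stable apt repository.
REPO_DEB_URL = "https://packages.ntop.org/apt-stable/16.04/all/apt-ntop-stable.deb"

def run(cmd):
    """Echo a command, run it, and stop on the first failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["wget", "-q", REPO_DEB_URL, "-O", "/tmp/apt-ntop-stable.deb"])
run(["sudo", "dpkg", "-i", "/tmp/apt-ntop-stable.deb"])    # adds ntop's stable repo
run(["sudo", "apt-get", "update"])
run(["sudo", "apt-get", "install", "-y", "ntopng"])        # installs the web-based tool
# ntopng's web interface usually listens on port 3000 once the service is running.
```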

Read more at Tech Republic

12 Kubernetes Distributions Leading the Container Revolution

Kubernetes has become the project to turn to if you need container orchestration at scale. The open source container orchestration system out of Google is well-regarded, well-supported, and evolving fast.

Kubernetes is also sprawling, complex, and difficult to set up and configure. Not only that, but much of the heavy lifting is left to the end user. The best approach, therefore, isn’t to grab the bits and try to go it alone, but to seek out a complete container solution that includes Kubernetes as a supported, maintained component.

Here I’ve listed the 12 most prominent Kubernetes offerings—what amount to distributions that incorporate Kubernetes plus container tools, in the same sense that various vendors offer distributions of the Linux kernel and its userland.

Read more at InfoWorld

4 New Training Courses Help You Keep Pace with Open Source Networking

Open source networking is transforming how enterprises today develop, deploy, and scale their networks and services. To help you keep pace with the evolution taking place in enterprise networking, The Linux Foundation has expanded its training offerings with four new open networking courses, described below.

These courses, which provide both introductory and advanced knowledge of ONAP and OPNFV technologies, are open for immediate enrollment.  

In the Introduction to ONAP course (LFS163x), taught by Amar Kapadia, you will learn how the ONAP platform uses SDN and NFV to orchestrate and automate physical and virtual network services. In this free course, Kapadia, the author of Understanding OPNFV, offers a high-level look at the ONAP project and provides a guide for participating in and benefiting from the ONAP community.

The free NFV Acceleration: An Introduction to OPNFV (LFS164x) course offers an introduction to the OPNFV project. It describes the various challenges that OPNFV solves and provides an overview of related projects and industry use cases.

ONAP Fundamentals (LFS263) provides basic hands-on knowledge of the ONAP project. The course includes lab exercises to run on the Google Cloud Platform to help you achieve a deeper understanding of ONAP’s functional areas.

OPNFV Fundamentals (LFS264) introduces students to the basics of OPNFV. Starting with an overview of NFV and OPNFV technology, the course looks at challenges that OPNFV solves and then discusses integrating and testing OPNFV projects. The course also includes deployment and testing exercises to run on the Google Cloud Platform.

Pre-enrollment discount

The LFS163x and LFS164x courses are now available for free on edX.org. LFS263 and LFS264 are open for pre-enrollment through The Linux Foundation, with the courses becoming fully available in May. Courses purchased during the pre-enrollment period are discounted to $99 each ($199 standard).

Protecting Code Integrity with PGP — Part 7: Protecting Online Accounts

So far in this tutorial series, we’ve provided practical guidelines for using PGP, including basic concepts and steps for generating and protecting your keys.  If you missed the previous articles, you can catch up below. In this final article, we offer additional guidance for protecting your online accounts, which is of paramount importance today.

Part 1: Basic Concepts and Tools

Part 2: Generating Your Master Key

Part 3: Generating PGP Subkeys

Part 4: Moving Your Master Key to Offline Storage

Part 5: Moving Subkeys to a Hardware Device

Part 6: Using PGP with Git

Checklist

  • Get a U2F-capable device (ESSENTIAL)

  • Enable 2-factor authentication for your online accounts (ESSENTIAL)

    • GitHub/GitLab

    • Google

    • Social media

  • Use U2F as primary mechanism, with TOTP as fallback (ESSENTIAL)

Considerations

You may have noticed how a lot of your online developer identity is tied to your email address. If someone can gain access to your mailbox, they would be able to do a lot of damage to you personally, and to your reputation as a free software developer. Protecting your email accounts is just as important as protecting your PGP keys.

Two-factor authentication with Fido U2F

Two-factor authentication is a mechanism to improve account security by requiring a physical token in addition to a username and password. The goal is to make sure that even if someone steals your password (via keylogging, shoulder surfing, or other means), they still wouldn’t be able to gain access to your account without having in their possession a specific physical device (“something you have” factor).

The most widely known mechanisms for 2-factor authentication are:

  • SMS-based verification

  • Time-based One-Time Passwords (TOTP) via a smartphone app, such as the “Google Authenticator” or similar solutions

  • Hardware tokens supporting Fido U2F

SMS-based verification is easiest to configure, but has the following important downsides: it is useless in areas without signal (e.g. most building basements), and can be defeated if the attacker is able to intercept or divert SMS messages, for example by cloning your SIM card.

TOTP-based multi-factor authentication offers more protection than SMS, but has important scaling downsides (there are only so many tokens you can add to your smartphone app before finding the correct one becomes unwieldy). Plus, there’s no avoiding the fact that your secret key ends up stored on the smartphone itself — which is a complex, globally connected device that may or may not have been receiving timely security patches from the manufacturer.
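To make the key-storage point concrete, here is a minimal sketch of what TOTP enrollment and verification look like, using the third-party pyotp library (not something the article itself relies on); the account and issuer names are made up.

```python
# Minimal TOTP sketch using the third-party pyotp library (pip install pyotp).
# The account and issuer names are made up for illustration.
import pyotp

# Enrollment: the site generates a shared secret and hands it to your phone,
# usually as a QR code. Whoever holds this secret can mint valid codes, which
# is why both the phone and the site must keep it safe.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI for the authenticator app:")
print(totp.provisioning_uri(name="dev@example.org", issuer_name="ExampleForge"))

# Login: the app derives a 6-digit code from the secret and the current time;
# the site derives the same code and compares.
code = totp.now()
print("Current code:", code, "valid:", totp.verify(code))
```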

Most importantly, neither TOTP nor SMS methods protect you from phishing attacks — if the phisher is able to steal both your account password and the 2-factor token, they can replay them on the legitimate site and gain access to your account.

Fido U2F is a standard developed specifically to provide a mechanism for 2-factor authentication and to combat credential phishing. The U2F protocol will store each site’s unique key on the USB token and will prevent you from accidentally giving the attacker both your password and your one-time token if you try to use it on anything other than the legitimate website.
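As a purely conceptual illustration of that origin binding (a toy model, not real U2F or WebAuthn library code), consider what happens when the signing material only exists for the origin that registered it:

```python
# Toy model of U2F's origin binding -- not real U2F/WebAuthn code. A real token
# uses per-origin asymmetric key pairs; an HMAC key stands in for one here just
# to show why a phishing domain cannot reuse a registration.
import hmac, hashlib, secrets

class ToyToken:
    def __init__(self):
        self._keys = {}                      # one secret per registered origin

    def register(self, origin):
        self._keys[origin] = secrets.token_bytes(32)
        return self._keys[origin]            # a real token returns a public key

    def sign(self, origin, challenge):
        key = self._keys.get(origin)         # the browser supplies the true origin
        if key is None:
            return None                      # unknown origin: nothing to sign with
        return hmac.new(key, origin.encode() + challenge, hashlib.sha256).digest()

token = ToyToken()
site_key = token.register("https://github.com")

challenge = secrets.token_bytes(16)          # issued by the legitimate site
good = token.sign("https://github.com", challenge)
bad = token.sign("https://github.example.phish", challenge)  # phishing page's real origin

expected = hmac.new(site_key, b"https://github.com" + challenge, hashlib.sha256).digest()
print("legitimate site verifies:", good == expected)         # True
print("phishing origin gets a signature:", bad is not None)  # False -- nothing to replay
```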

Both Chrome and Firefox support U2F 2-factor authentication, and hopefully other browsers will soon follow.

Get a token capable of Fido U2F

There are many options available for hardware tokens with Fido U2F support, but if you’re already ordering a smartcard-capable physical device, then your best option is a Yubikey 4, which supports both.

Enable 2-factor authentication on your online accounts

You definitely want to enable this option on the email provider you are using (especially if it is Google, which has excellent support for U2F). Other sites where this functionality should be enabled are:

  • GitHub: it probably occurred to you when you uploaded your PGP public key that if anyone else is able to gain access to your account, they can replace your key with their own. If you publish code on GitHub, you should take care of your account security by protecting it with U2F-backed authentication.

  • GitLab: for the same reasons as above.

  • Google: if you have a Google account, you will be surprised how many sites allow logging in with Google authentication instead of site-specific credentials.

  • Facebook: same as above, a lot of online sites offer the option to authenticate using a Facebook account. You should 2-factor protect your Facebook account even if you do not use it.

  • Other sites, as you deem necessary. See dongleauth.info for inspiration.

Configure TOTP failover, if possible

Many sites will allow you to configure multiple 2-factor mechanisms, and the recommended setup is:

  • U2F token as the primary mechanism

  • TOTP phone app as the secondary mechanism

This way, even if you lose your U2F token, you should be able to re-gain access to your account. Alternatively, you can enroll multiple U2F tokens (e.g. you can get another cheap token that only does U2F and use it for backup reasons).

Further reading

By this point you have accomplished the following important tasks:

  1. Created your developer identity and protected it using PGP cryptography.

  2. Configured your environment so your identity is not easily stolen by moving your master key offline and your subkeys to an external hardware device.

  3. Configured your git environment to ensure that anyone using your project is able to verify the integrity of the repository and its entire history.

  4. Secured your online accounts using 2-factor authentication.

You are already in a good place, but you should also read up on the following topics:

  • How to secure your team communication (see the document in this repository). Decisions regarding your project development and governance require just as much careful protection as any committed code, if not more so. Make sure that your team communication is trusted and the integrity of all decisions is verified.

  • How to secure your workstation (see the document in this repository). Your goal is to minimize risky behaviour that would cause your project code to be contaminated, or your developer identity to be stolen.

  • How to write secure code (see various documentation related to the programming languages and libraries used by your project). Bad, insecure code is still bad, insecure code even if there is a PGP signature on the commit that introduced it.

TLS 1.3 Is Approved: Here’s How It Could Make the Entire Internet Safer

The IETF has finally given the okay to the TLS 1.3 protocol, which will speed up secure connections and make snooping harder for attackers.

  • TLS 1.3 has been approved for use, which will make all secure internet connections faster and safer.
  • The security and speed improvements brought by TLS 1.3 are due to the elimination of unnecessary handshake steps and the forced use of newer encryption methods.

Transport Layer Security (TLS) version 1.3 has been approved by the Internet Engineering Task Force (IETF), making it the new industry standard for secure connections.
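If you want to see whether a given server will already negotiate TLS 1.3, a quick client-side check is easy to script. The sketch below assumes Python 3.7+ built against OpenSSL 1.1.1 or newer, and the hostname is only a placeholder.

```python
# Quick check of whether a server will negotiate TLS 1.3.
# Assumes Python 3.7+ built against OpenSSL 1.1.1 or newer; the hostname is
# only an example.
import socket
import ssl

host = "www.example.com"

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3   # refuse anything older than 1.3

with socket.create_connection((host, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print("negotiated:", tls.version())    # "TLSv1.3" on success
        print("cipher:", tls.cipher()[0])
```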

Read more at Tech Republic

Opening ONS Keynote Demonstrates Kubernetes Enabling ONAP On Any Public, Private, or Hybrid Cloud

ONAP and Kubernetes, two of the fastest-growing and most in-demand open source projects, are coming together at Open Networking Summit this week. To ensure ONAP runs on Kubernetes in any environment, ONAP is now part of the new Cross-Cloud CI project, which integrates, tests, and deploys the most popular cloud native projects.

The opening ONS keynote from Arpit Joshipura, GM of Networking & Orchestration at The Linux Foundation, will demonstrate and test ONAP 1.1.1 and Kubernetes 1.9.5 deployed across public clouds, private clouds, and bare metal. For end users, the integration of open networking and cloud native technologies provides seamless portability of applications.

Read more at The Linux Foundation

Identity Management from the Cloud

Offers for identity management as a service (IDaaS) are entering the market and promising simplicity. However, many lack functionality, adaptability, and in-depth integration with existing systems. We look at how IT managers should consider IDaaS in their strategy.

Identity and access management (IAM) is a core IT discipline located between IT infrastructure, information security, and governance (Figure 1). For example, IAM tools help manage users and their access rights across systems and (cloud) services, provide easy access to applications (preferably with a single sign-on experience), handle strong authentication, and protect shared user accounts….

In the market for IDaaS, or cloud IAM, a rapidly growing number of offerings focus on different sets of features, and these products are not easy to compare. The most important types of cloud IAM services are described here.

Cloud single sign-on (SSO) solutions are probably the best-known services. Their most important feature for users is an SSO to various cloud services. One of the most important value propositions is their predefined integration with hundreds, or even thousands, of different cloud services. Access is typically through a kind of portal that contains the icons of the various connected cloud services.

Read more at ADMIN Magazine

The Evolution of Systems Requires an Evolution of Systems Engineers

The systems we worked on when many of us first started out were the first generations of client-server applications. They were fundamentally different from the prior generation: terminals connecting to centralized apps running on mainframe or midrange systems. Engineers learned to care about the logic of their application client as well as the server powering it. Connectivity, the transmission of data, security, latency and performance, and the synchronization of state between the client and the server became issues that now had to be considered to manage those systems.

This increase in sophistication spawned commensurate changes to the complexity of the methodologies and skills required to manage those systems. New types of systems meant new skills, understanding new tools, frameworks, and programming languages. 

Since the first generation of client-server systems, we’ve seen significant evolution. … Each iteration of this evolution has required changes to the technology itself and to the systems and skills we need to build and manage it. In almost every case, those changes have introduced more complexity. The skills and knowledge we once needed to manage our client-server systems versus these modern distributed systems, with their requirements for resilience, low latency, and high availability, are vastly different. So, what do we need to know now that we didn’t before?

Read more at O’Reilly