
Protecting Code Integrity with PGP — Part 7: Protecting Online Accounts

So far in this tutorial series, we’ve provided practical guidelines for using PGP, including basic concepts and steps for generating and protecting your keys.  If you missed the previous articles, you can catch up below. In this final article, we offer additional guidance for protecting your online accounts, which is of paramount importance today.

Part 1: Basic Concepts and Tools

Part 2: Generating Your Master Key

Part 3: Generating PGP Subkeys

Part 4: Moving Your Master Key to Offline Storage

Part 5: Moving Subkeys to a Hardware Device

Part 6: Using PGP with Git

Checklist

  • Get a U2F-capable device (ESSENTIAL)

  • Enable 2-factor authentication for your online accounts (ESSENTIAL)

    • GitHub/GitLab

    • Google

    • Social media

  • Use U2F as primary mechanism, with TOTP as fallback (ESSENTIAL)

Considerations

You may have noticed how a lot of your online developer identity is tied to your email address. If someone can gain access to your mailbox, they would be able to do a lot of damage to you personally, and to your reputation as a free software developer. Protecting your email accounts is just as important as protecting your PGP keys.

Two-factor authentication with Fido U2F

Two-factor authentication is a mechanism to improve account security by requiring a physical token in addition to a username and password. The goal is to make sure that even if someone steals your password (via keylogging, shoulder surfing, or other means), they still wouldn’t be able to gain access to your account without having in their possession a specific physical device (“something you have” factor).

The most widely known mechanisms for 2-factor authentication are:

  • SMS-based verification

  • Time-based One-Time Passwords (TOTP) via a smartphone app, such as the “Google Authenticator” or similar solutions

  • Hardware tokens supporting Fido U2F

SMS-based verification is easiest to configure, but has the following important downsides: it is useless in areas without signal (e.g. most building basements), and can be defeated if the attacker is able to intercept or divert SMS messages, for example by cloning your SIM card.

TOTP-based multi-factor authentication offers more protection than SMS, but has important scaling downsides (there are only so many tokens you can add to your smartphone app before finding the correct one becomes unwieldy). Plus, there’s no avoiding the fact that your secret key ends up stored on the smartphone itself — which is a complex, globally connected device that may or may not have been receiving timely security patches from the manufacturer.
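
To make that trade-off concrete, here is a minimal sketch of the TOTP calculation (RFC 6238 on top of RFC 4226's HOTP) that authenticator apps perform, written in plain Python with only the standard library; the Base32 secret at the bottom is a placeholder of the kind a site displays as a QR code during enrollment. Because generating a code requires the shared secret, that secret necessarily lives on whatever device runs this computation.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute the current TOTP code for a Base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                 # RFC 6238 time step
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()   # HOTP uses HMAC-SHA1 by default
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder secret of the kind a site shows as a QR code during enrollment.
print(totp("JBSWY3DPEHPK3PXP"))
```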

Most importantly, neither TOTP nor SMS methods protect you from phishing attacks — if the phisher is able to steal both your account password and the 2-factor token, they can replay them on the legitimate site and gain access to your account.

Fido U2F is a standard developed specifically to provide a mechanism for 2-factor authentication and to combat credential phishing. The U2F protocol will store each site’s unique key on the USB token and will prevent you from accidentally giving the attacker both your password and your one-time token if you try to use it on anything other than the legitimate website.
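
To illustrate why origin-bound keys defeat phishing, here is a deliberately simplified toy model in Python. Real U2F uses per-site asymmetric key pairs, key handles, and attestation rather than the HMAC stand-in below, and every name in the sketch is hypothetical; the point is only that the browser tells the token which origin it is actually connected to, so a look-alike domain can never obtain a response the legitimate site will accept.

```python
import hashlib
import hmac
import os

class ToyU2FToken:
    """Toy model only: real U2F uses per-site asymmetric key pairs, not HMAC."""

    def __init__(self):
        self._keys = {}  # origin -> per-site secret kept on the token

    def register(self, origin: str) -> bytes:
        # The relying party stores this value at enrollment time.
        # (A real token would hand back a public key and key handle instead.)
        self._keys[origin] = os.urandom(32)
        return self._keys[origin]

    def sign(self, origin: str, challenge: bytes) -> bytes:
        # The browser supplies the origin it is actually talking to, so a
        # phishing site can never reference the real site's key.
        key = self._keys.setdefault(origin, os.urandom(32))
        return hmac.new(key, challenge, hashlib.sha256).digest()

token = ToyU2FToken()
site_key = token.register("https://github.com")

challenge = os.urandom(16)                         # sent by the site at login
good = token.sign("https://github.com", challenge)
assert hmac.compare_digest(
    good, hmac.new(site_key, challenge, hashlib.sha256).digest())

# A phishing page at a different origin gets an unrelated key, so the
# response it relays to the real site simply fails verification.
phished = token.sign("https://github.com.login.example", challenge)
assert phished != good
```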

Both Chrome and Firefox support U2F 2-factor authentication, and hopefully other browsers will soon follow.

Get a token capable of Fido U2F

There are many options available for hardware tokens with Fido U2F support, but if you’re already ordering a smartcard-capable physical device, then your best option is a Yubikey 4, which supports both.

Enable 2-factor authentication on your online accounts

You definitely want to enable this option on the email provider you are using (especially if it is Google, which has excellent support for U2F). Other sites where this functionality should be enabled are:

  • GitHub: it probably occurred to you when you uploaded your PGP public key that if anyone else is able to gain access to your account, they can replace your key with their own. If you publish code on GitHub, you should take care of your account security by protecting it with U2F-backed authentication.

  • GitLab: for the same reasons as above.

  • Google: if you have a Google account, you will be surprised how many sites allow logging in with Google authentication instead of site-specific credentials.

  • Facebook: same as above, a lot of online sites offer the option to authenticate using a Facebook account. You should 2-factor protect your Facebook account even if you do not use it.

  • Other sites, as you deem necessary. See dongleauth.info for inspiration.

Configure TOTP failover, if possible

Many sites will allow you to configure multiple 2-factor mechanisms, and the recommended setup is:

  • U2F token as the primary mechanism

  • TOTP phone app as the secondary mechanism

This way, even if you lose your U2F token, you should be able to re-gain access to your account. Alternatively, you can enroll multiple U2F tokens (e.g. you can get another cheap token that only does U2F and use it for backup reasons).

Further reading

By this point you have accomplished the following important tasks:

  1. Created your developer identity and protected it using PGP cryptography.

  2. Configured your environment so your identity is not easily stolen by moving your master key offline and your subkeys to an external hardware device.

  3. Configured your git environment to ensure that anyone using your project is able to verify the integrity of the repository and its entire history.

  4. Secured your online accounts using 2-factor authentication.

You are already in a good place, but you should also read up on the following topics:

  • How to secure your team communication (see the document in this repository). Decisions regarding your project development and governance require just as much careful protection as any committed code, if not more so. Make sure that your team communication is trusted and that the integrity of all decisions is verified.

  • How to secure your workstation (see the document in this repository). Your goal is to minimize risky behaviour that would cause your project code to be contaminated, or your developer identity to be stolen.

  • How to write secure code (see various documentation related to the programming languages and libraries used by your project). Bad, insecure code is still bad, insecure code even if there is a PGP signature on the commit that introduced it.

Opening ONS Keynote Demonstrates Kubernetes Enabling ONAP On Any Public, Private, or Hybrid Cloud

ONAP and Kubernetes, two of the fastest-growing and most in-demand open source projects, are coming together at Open Networking Summit this week. To ensure that ONAP runs on Kubernetes in any environment, ONAP is now part of the new Cross-Cloud CI project, which integrates, tests, and deploys the most popular cloud native projects.

The opening ONS keynote from Arpit Joshipura, GM of Networking & Orchestration at The Linux Foundation, will demonstrate and test ONAP 1.1.1 and Kubernetes 1.9.5 deployed across public clouds, private clouds, and bare metal. For end users, the integration of open networking and cloud native technologies provides seamless portability of applications.

Read more at The Linux Foundation

TLS 1.3 Is Approved: Here’s How It Could Make the Entire Internet Safer

The IETF has finally given the okay to the TLS 1.3 protocol, which will speed up secure connections and make snooping harder for attackers.

  • TLS 1.3 has been approved for use, which will make all secure internet connections faster and safer.
  • The security and speed improvements brought by TLS 1.3 are due to the elimination of unnecessary handshake steps and the forced use of newer encryption methods.

Transport Layer Security (TLS) version 1.3 has been approved by the Internet Engineering Task Force (IETF), making it the new industry standard for secure connections.
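
For developers who want to take advantage of this right away, a client can already insist on TLS 1.3 using Python's standard ssl module (Python 3.7 or newer built against OpenSSL 1.1.1+); the hostname below is only an example of a TLS 1.3-capable server.

```python
import socket
import ssl

# Refuse to negotiate anything older than TLS 1.3.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

host = "www.example.com"   # placeholder; any TLS 1.3-capable server works
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version())   # "TLSv1.3" once the handshake completes
        print(tls.cipher())    # one of the TLS 1.3 AEAD cipher suites
```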

Read more at Tech Republic

Identity Management from the Cloud

Offers for identity management as a service (IDaaS) are entering the market and promising simplicity. However, many lack functionality, adaptability, and in-depth integration with existing systems. We look at how IT managers should consider IDaaS in their strategy.

Identity and access management (IAM) is a core IT discipline located between IT infrastructure, information security, and governance (Figure 1). For example, IAM tools help with the management of users and their access rights across systems and (cloud) services, to provide easy access to applications (preferably with a single sign-on experience), to handle strong authentication, and to protect shared user accounts….

In the market for IDaaS or cloud IAM, a rapidly growing number of offerings focus on different sets of features, and these products are not easy to compare. The most important types of cloud IAM services are described here.

Cloud single sign-on (SSO) solutions are probably the best-known services. Their most important feature for users is an SSO to various cloud services. One of the most important value propositions is their predefined integration with hundreds, or even thousands, of different cloud services. Access is typically through a kind of portal that contains the icons of the various connected cloud services.

Read more at ADMIN Magazine

The Evolution of Systems Requires an Evolution of Systems Engineers

The systems we worked on when many of us first started out were the first generations of client-server applications. They were fundamentally different from the prior generation: terminals connecting to centralized apps running on mainframe or midrange systems. Engineers learned to care about the logic of their application client as well as the server powering it. Connectivity, the transmission of data, security, latency and performance, and the synchronization of state between the client and the server became issues that now had to be considered to manage those systems.

This increase in sophistication spawned commensurate changes to the complexity of the methodologies and skills required to manage those systems. New types of systems meant new skills, understanding new tools, frameworks, and programming languages. 

Since the first generation of client-server systems, we’ve seen significant evolution. … Each iteration of this evolution has required the technology, systems, and skills we need to build and manage that technology to change. In almost every case, those changes have introduced more complexity. The skills and knowledge we once needed to manage our client-server systems versus these modern distributed systems with their requirements for resilience, low latency, and high availability are vastly different. So, what do we need to know now that we didn’t before?

Read more at O’Reilly

Node.js Is Now Available as a Snap on Ubuntu, Other GNU/Linux Distributions

Node.js, the widely used open source, cross-platform JavaScript runtime environment for executing server-side JavaScript code, is now officially available as a Snap package for the Linux platform.

Now that Linux is the preferred development platform for developers visiting Stack Overflow, the need for running the latest versions of your favorite programming languages, frameworks and development environments has become more and more important, and Canonical’s Snappy technologies are the answer.

NodeSource, the company that maintains the widely used Node.js binary distributions for Linux, announced today that it has created a Snap package to let Linux developers more easily install the popular JavaScript runtime environment on their operating systems. Snap is a containerized, universal binary package format developed by Canonical for Ubuntu Linux.

Read more at Softpedia

Linux Foundation Launches LF Deep Learning Foundation to Accelerate AI Growth

As this week’s Open Networking Summit gets underway, The Linux Foundation has debuted the LF Deep Learning Foundation, an umbrella organization focused on driving open source innovation in artificial intelligence, machine learning, and deep learning.

The goal of the LF Deep Learning Foundation is to make these new technologies available to developers and data scientists.

Founding members of LF Deep Learning include Amdocs, AT&T, B.Yond, Baidu, Huawei, Nokia, Tech Mahindra, Tencent, Univa, and ZTE. Through the LF Deep Learning Foundation, members are working to create a neutral space where makers and sustainers of tools and infrastructure can interact and harmonize their efforts and accelerate the broad adoption of deep learning technologies.

In tandem with the launch of LF Deep Learning, The Linux Foundation also debuted the Acumos AI Project, a platform that will drive the development, discovery, and sharing of AI models and AI workflows. AT&T and Tech Mahindra contributed the initial code for the Acumos AI Project.

Read more at Fierce Telecom

The Evolution of Open Networking to Automated, Intelligent Networks

The 2018 Open Networking Summit is happening this week in Los Angeles. Just prior to opening day, we talked with John Zannos, Chief Revenue Officer at Inocybe, to get his view on the state of open networking and changes in the foreseeable future. Zannos is on the governing board of the Linux Foundation Networking effort and formerly served on the OpenStack and OPEN-O boards.

Inocybe has been involved with OpenDaylight since the beginning. The company is one of the top five contributors, and its engineering team is involved in helping solve some of the toughest questions associated with SDN and OpenDaylight. For example, company engineers lead the community effort focused on solving the problems associated with clustering, security, and service function chaining.

Previously, Zannos ran Canonical’s cloud platform business and helped drive the NFV and SDN strategy within the company.  “I have seen the evolution of disaggregation, automation of open source in compute and we are seeing those same elements migrate to the network,” he said. “And, that’s what I thought we should talk about — how SDN and open networking are combining to deliver the promise of automated and intelligent networks.” Here are some insights Zannos shared with us.

Linux.com: What is the state of open networking now?

John Zannos: Open networking is here now. Over the last 10 years, there has been open source in the compute space: Linux, virtual machines, OpenStack, Kubernetes. We learned a lot over those 10 years and we are bringing the experience and hard learned lessons to open source in the network.

In the networking space, we have seen NFV as a way to bring virtualization to networking. And we are at a point now where there is leadership from large service providers like AT&T, China Mobile, and Deutsche Telekom, and from smaller ones like Cablevision in Argentina, to name a few. Members of the vendor community, from large players like Nokia to smaller ones like Inocybe, are navigating how to incorporate open source into the network in a way that helps accelerate end user adoption among service providers and enterprises, with the goal of reaching the end state of an intelligent and automated network.

At Inocybe, we are accomplishing this through our Open Networking Platform, which simplifies the consumption and management of open networking software such as OpenDaylight and OpenSwitch. It helps companies consume just the open source components they need for specific business use cases (e.g., traffic engineering). We create a purpose-built open source software stack that is production-ready for the specific use case, and it helps organizations automate the build, management, and upgrade process, ultimately putting them on a path to an automated and intelligent network.

At Open Networking Summit, we’ll be demonstrating how our Open Networking Platform can deploy a fully integrated OpenSwitch-based NOS and OpenDaylight-based SDN Controller on a variety of hardware platforms, eliminating the complexity from the controller down the stack, while preserving the ability to disaggregate the solution (Dell’s booth, number 43).

Linux.com: What are the evolutionary steps taken, and still ahead for Open Networking?

Zannos: The first step of this journey was disaggregation of network appliances, separating network hardware and software. The next step was to incorporate automation. An example of that is the use of SDN controllers, such as OpenDaylight, an open source project which automates the deployment and management of network devices.

The next two steps are a combination of data analytics and machine learning/AI. We are moving from collecting data to determine what is happening in the network and what will happen next, to machine learning/AI that will consume that information to determine what action to take. With these two steps, we move from analysis to action to autonomous networking. We see open analytics projects like PNDA, which is part of the Linux Foundation Networking effort, moving us in this direction. In the machine learning and AI space, AT&T and Tech Mahindra, together with The Linux Foundation, have announced Acumos, which will enable developers to easily build, share, and deploy AI applications.

Ultimately, we are using collaborative innovation to help service providers and enterprises be able to use automated and intelligent networks quicker. What’s interesting is that open source creates a framework for companies that compete to collaborate and share information in a way that accelerates adoption to an intelligent, automated network. We are now at a point where we are starting to see those benefits.

Think of software-defined networking (SDN) as allowing for automation and flexibility, and open networking as allowing for collaborative innovation and transparency. When you combine SDN and open source networking you begin to drive the acceleration of adoption.

Linux.com:  You said the open networking community could learn from open source adoption in the compute space. What are those lessons to be learned?

Zannos: There are two things to be learned from the compute experience. We don’t want to create too many competing priorities in open networking, and we want to be careful not to stifle innovation. It is a tricky balance to manage.

There was a moment in OpenStack when we had too many competing projects, and that ultimately diluted the impact of engineering resources in the community. We want to ensure that the developer and engineering resources that companies big and small bring to open source communities can stay focused on advancing the code base in a way that helps drive end user adoption. Competing priorities and projects can create confusion in the marketplace, and that slows down adoption. Companies weren’t sure whether all these projects were going to survive. I believe we have learned from that experience. We are trying to be more thoughtful about helping projects form with a focus on accelerating time to adoption by end users, where they can actually reap the benefits. That’s exactly what we are trying to do with OpenDaylight: let it continue to evolve, but also let it stabilize so customers can actually use it in production.

The second thing is to be sensitive to the fact that you don’t want to stifle competition. You do want to allow for the innovation that comes from different and competing ideas. But I think we have an opportunity to learn and improve from our experience to date.

I am optimistic that our experience as an industry and a community is building a strong foundation for open source adoption in the network. It is exciting to be part of what Inocybe and The Linux Foundation are doing in networking, because it’s an opportunity to collaborate and prioritize the efforts that will help drive adoption.

This article was sponsored by Inocybe and written by Linux.com.

Sign up to get the latest updates on ONS NA 2018!

How to Create an Open Source Stack Using EFK

Managing an infrastructure of servers is a non-trivial task. When one cluster is misbehaving, logging in to multiple servers, checking each log, and using multiple filters until you find the culprit is not an efficient use of resources.

The first step to improve the methods that handle your infrastructure or applications is to implement a centralized logging system. This will enable you to gather logs from any application or system into a centralized location and filter, aggregate, compare, and analyze them. If there are servers or applications, there should be a unified logging layer.

Thankfully, we have an open source stack to simplify this. With the combination of Elasticsearch, Fluentd, and Kibana (EFK), we can create a powerful stack to collect, store, and visualize data in a centralized location.
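
As a rough sketch of what the ingestion side of such a stack looks like, the snippet below sends one structured log record to Elasticsearch's REST API, much as Fluentd does after collecting and tagging logs. The local endpoint, index name, and field names are assumptions chosen for illustration.

```python
import json
from datetime import datetime, timezone

import requests

# One structured log record of the kind Fluentd forwards after tagging.
log_record = {
    "@timestamp": datetime.now(timezone.utc).isoformat(),
    "host": "web-01",
    "service": "nginx",
    "level": "error",
    "message": "upstream timed out while reading response header",
}

# Index the document; Elasticsearch creates the "app-logs" index on first write.
resp = requests.post(
    "http://localhost:9200/app-logs/_doc",
    data=json.dumps(log_record),
    headers={"Content-Type": "application/json"},
    timeout=5,
)
resp.raise_for_status()
print("stored with id:", resp.json()["_id"])   # now searchable from Kibana
```

Once documents land in an index like this, Kibana can be pointed at the same index pattern to filter, aggregate, and visualize them from one place.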

Read more at OpenSource.com

Linus Torvalds: Linux 4.16 Kernel Launches on Sunday. Possibly. Maybe.

After a series of release candidates, Linus Torvalds could well be ready to unleash version 4.16 of the Linux kernel onto the world at the weekend. That is unless he changes his mind about the RC build: “rc7 is much too big for my taste,” he says in his weekly update to the kernel mailing list.

Torvalds says that while he’s not planning for there to be an eighth release candidate, the current size is causing him to think about the best course of action. For those who have not been following the story, he also details what’s new in Linux 4.16.

Read more at Betanews