
Linux Installation Types: Server Vs. Desktop

I have previously covered obtaining and installing Ubuntu Linux, and this time I will touch on desktop and server installations. Each type of installation addresses different needs, and the two are downloaded separately; you can choose the one you need from Ubuntu.com/downloads.

Regardless of the installation type, there are some similarities. Both use the same kernel and the same package management system, which draws on repositories of programs precompiled to run on almost any Ubuntu system. Programs are grouped into packages, and packages are what you install. Packages can be added from the desktop system's graphical user interface or from the server system's command line.
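On a server install, the command-line side of this looks like the following sketch (nginx is just an illustrative package; any package name from the repositories works the same way):

```shell
# Refresh the local package index from the configured repositories
sudo apt update

# Download and install a package along with its dependencies
sudo apt install nginx

# Confirm the package is installed
apt list --installed nginx
```

On a desktop install, the same operations are available graphically through the Software Center, backed by the same apt tooling.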

Read more at Radio

2017 Jobs Report Highlights Demand for Open Source Skills

Dice® and The Linux Foundation have once again partnered to produce the annual Open Source Jobs Report, focusing on all aspects of open source software. The 2017 Open Source Jobs Survey and Report provides an overview of the trends for open source careers, motivation for professionals in the industry, and how employers attract and retain qualified talent.

Key Findings

  • Employers are struggling to hire open source professionals, with 89 percent of hiring managers saying it is difficult to find talent.
  • Nearly half (47 percent) of companies are willing to pay for employees to become open source certified — up from 33 percent in 2016.

Read more at The Linux Foundation

When Good Containers Go Bad

Tim Mackey, a technical evangelist for Black Duck Software, engages with technical communities to help them solve application security problems. At Open Source Summit in Los Angeles, Mackey will be delivering a talk titled “A Question of Trust – When Good Containers Go Bad.”

Mackey says that as container adoption increases, the pace of information flow from Ops back to Dev needs to increase, too. If malicious actors have greater access to information or greater resources to create exploits than, for example, a multi-national financial services company, those malicious actors are in the driver’s seat when it comes to security.

In his talk, Mackey will deconstruct some significant vulnerabilities, examine how information flows, and explain when various “organizations” have an information advantage. “I also look at a few issues from the past and how they’re impacting the modern world. The end goal is to increase awareness of the types of issues we face and how to better protect ourselves and build better products moving forward,” said Mackey.

We talked with Mackey to learn more about his talk.

Linux.com: What’s the inspiration behind your talk? What are the areas you will be touching on?

Tim Mackey: Over the past several years we’ve seen major vulnerability after vulnerability disclosed against a variety of open source components. These disclosures have led some to question the role of open source technologies in modern application development. Rather than have a religious debate, I’ve chosen to focus on the attributes which make open source different from closed source commercial products and how information flow is a key challenge for us — particularly when it comes to security.

As part of that effort, I decompose multiple vulnerabilities to show how information flow is biased towards malicious actors. With such a bias, defenders are often at a disadvantage, both in their awareness of issues and in the point-in-time decisions they make while performing triage. Minimally, this can lead to delays in mitigation, but at the extreme it can lead to a belief that a given vulnerability doesn't represent a viable attack.

Linux.com: Have there been cases of containers gone bad? Especially when most are hosted on trusted platforms like DockerHub?

Mackey: Containers go bad every day, and often without warning. This is probably best illustrated by example. Let's assume we're working for a very security-conscious organization with governance rules dictating that all applications must pass static code analysis and have any exposed interfaces fuzzed. We can also assume that our public-facing systems are subject to penetration testing and have sophisticated perimeter defenses. In this environment, we create a container image that passes all tests and is then deployed on this trusted platform and scaled out.

Now that our application has been deployed, let's add to the mix a CVE, which is disclosed, say, within hours of the release of our application. All containers deployed using this image are now at an increased risk of compromise. Quantifying that risk is a challenge for most organizations, but the bigger challenge comes when you need to identify which container images are impacted by the CVE and trigger remediation plans. While there is a desire to trust perimeter defenses, they often need to be reconfigured to block newly malicious traffic and may themselves be vulnerable to the new CVE.

Linux.com: Security as it’s well understood is a process and not a product, so what advice can you give to the DevOps teams to add that process in their workflow?

Mackey: Identification of risk is a crucial component of security, and risk is a function of the composition of a container image. Once you know precisely what the composition of the image is, it becomes possible to identify any potential risks. Most organizations start with traditional application security models focusing on code they create. The goal is to ensure that the risk of what's created by the organization is minimized, and continual code scans are resource intensive from a tooling and process perspective. This leaves a large gap in containerized environments stemming from the base image and any associated dependencies. Some key questions operations teams need to answer in order to minimize risk include:

  • What security risks might be present in that base image, and how often is it updated?

  • If a patch is issued for that base image, what is the risk associated with consuming the patch?

  • How many versions behind tip can a project or component be before it becomes too risky to consume?

  • Given my tooling, how quickly will I be informed of component updates for dependencies which directly impact my containers?

  • Given the structure of a component or project, do malicious actors have an easy way to gain an advantage when it comes to issues raised against the component?

Linux.com: Most platforms come with quite a lot of security features, scanning, and mitigation. Do you feel that's not enough?

Mackey: Fundamental security measures like SELinux or AppArmor, reduction in system capabilities, strict enforcement of image admission, and restrictive network profiles are vital, but not sufficient. DevOps teams are tasked with responding to changing business priorities while balancing risk and operational efficiency. Tools performing container runtime scanning both impose a performance hit on cluster nodes and potentially expose data to scans, which can limit their utility.

Mitigation measures are valuable, but without a clear understanding of the complete application environment, mitigation measures are challenged. In the end, operations teams recognize they're under constant attack, and that malicious actors are both persistent and creative. Part of that creativity is an understanding that an attack vector which wasn't viable a couple of years ago might become viable through changes in application design or deployment systems. A perfect example of this is "Dirty Cow," which, like many race conditions, only becomes more exploitable over time due to increased concurrency in modern processors.

Linux.com: Have you seen any lack of practices that make containers / microservices more vulnerable, and can you explain?

Mackey: There are a few items which I see far more often than I'd like, and many fall into the "point in time decision" camp. By way of example, consider a Dockerfile which specifies a version for the base image. Pinning the base image likely solved a problem a developer had, but it is also unlikely to be revisited as new versions of the container are created. Over time, security debt builds, and eventually that version is so old that APIs have changed and updating the image becomes a serious problem. Flipping the scenario around, the base image could be "latest," which has its own set of problems. There, the version in use could be radically different with each image and have an uncertain number of vulnerabilities. Related to this, the desire to update to the "latest" patch is also problematic when you recognize that a given patch may fix some issues and introduce others.
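The two extremes Mackey describes can be sketched in a hypothetical Dockerfile (image names, tags, and the digest placeholder are illustrative, not from the talk):

```dockerfile
# Pinning to an exact tag freezes a point-in-time decision: builds are
# reproducible, but security debt accumulates unless the tag is revisited.
FROM ubuntu:16.04

# The opposite extreme floats with upstream: every build may pull a
# different image with a different, unknown set of vulnerabilities.
# FROM ubuntu:latest

# A middle ground some teams use: pin to a content digest and bump it
# deliberately through review, so each base-image change is an explicit,
# auditable decision rather than an accident or a stale default.
# FROM ubuntu@sha256:<digest>
```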

Another interesting problem can be seen when a container image is "shipped." Development teams are charged with creating applications and packaging them up. Take a CI process, for example. It should always be configured to fail a build when an application contains known security issues. This ensures vulnerable applications can't accidentally be deployed, but it also imposes a set of trust boundaries requiring us to shift both right and left. Developers on the left need to ensure they're making correct decisions about the composition of their applications "as deployed." Operations teams on the right need to ensure they're both deploying what was vetted during development and actively monitoring for issues related to what "was deployed." Only then can the two teams actively close the loop to ensure security issues are attended to as quickly as possible.
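A CI gate of the kind Mackey describes can be sketched as follows; the specific scanner (Trivy here), severity threshold, registry, and the `CI_COMMIT_SHA` variable are illustrative assumptions, not details from the talk:

```shell
#!/bin/sh
set -e  # any failing command fails the build

# Build the image, tagged with the commit being tested
docker build -t myapp:"${CI_COMMIT_SHA}" .

# Scan the freshly built image; --exit-code 1 makes the scanner exit
# nonzero when issues at or above the given severity are found, which
# fails the CI job and blocks deployment of the vulnerable image.
trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:"${CI_COMMIT_SHA}"

# Only reached if the scan passed: push the vetted image for deployment.
docker push registry.example.com/myapp:"${CI_COMMIT_SHA}"
```

The same gate also serves the "shift right" side: because only scanned images reach the registry, operations teams know exactly which vetted artifact "was deployed" when a new CVE later lands.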

Check out the full schedule for Open Source Summit here. Linux.com readers save on registration with discount code LINUXRD5. Register now!

Observability: What’s In A Name?

“Is observability just monitoring with another name?”

“Observability: we changed the word because developers don’t like monitoring.”

There’s been a lot of hilarious snark about this lately. Which is great, who doesn’t love A+ snark? Figured I’d take the time to answer, at least once.

Yes, in practice, the tools and practices for monitoring vs observability will overlap a whole lot … for now. But philosophically there are some subtle distinctions, and these are only going to grow over time.*

Read more at Honeycomb.io

How to Get Started with the Foreman SysAdmin Tool

Foreman offers a powerful set of system management tools, from process automation to security compliance and more. Here’s how to get started.

Is your system management tool robust enough?

As your organization grows, so does your workload—and the IT resources required to manage it. There is no “one-size-fits-all” system management solution, but a centralized, open source tool such as Foreman can help you manage your company’s IT assets by provisioning, maintaining, and updating hosts throughout the complete lifecycle.

Foreman becomes even more powerful when integrated with other open source projects and plugins, and I will discuss these in more detail below. To get started, however, let’s consider key functions of an effective system management tool.

Read more at OpenSource.com

Unlike Oil and Water, Legacy and Cloud Mix Well

For all the hype about moving applications to the cloud and making legacy apps “cloud-native,” those of us in IT have a poorly-kept secret: legacy systems are alive and well – and they’re not going anywhere anytime soon. Though the cloud promises the cost savings and scalability that businesses are eager to adopt, many organizations are not yet ready to let go of existing applications that required massive investments and have become essential to their workflows.

The process of rewriting these often mission-critical apps for the cloud typically ends up being lengthy and expensive, with unexpected problems that vary from company to company. Some of the challenges an organization will face when rewriting applications include:

1. Latency Issues…

Read more at InsideHPC

Getting Started with GitHub

GitHub is an online platform built for code hosting, version control, and collaboration among individuals working on a common project. Projects can be handled from anywhere through the platform: hosting and reviewing code, managing projects, and building software with other developers around the world. GitHub supports both open source and private projects.

Features offered for team project handling include GitHub Flow and GitHub Pages. GitHub Flow makes it easy for teams with regular deployments to manage their workflow. GitHub Pages, on the other hand, provides a place for showcasing open source projects, displaying resumes, hosting blogs, and more.

Individual projects can also be handled easily with GitHub, as it provides the essential tools for project handling and makes it easier to share one's project with the world.

Read more at LinuxandUbuntu

Xen Hypervisor Patched for Privilege Escalation and Information Leak Flaws

The Xen Project has fixed five new vulnerabilities in the widely used Xen virtualization hypervisor. The flaws could allow attackers to break out of virtual machines and access sensitive information from host systems.

According to an analysis by the security team of Qubes OS, an operating system that relies on Xen for its security model, most of the vulnerabilities stem from the mechanism that’s used to share memory between domains. Under Xen, the host system and the virtual machines (guests) run in separate security domains.

The most severe vulnerability is located in the memory management code for paravirtualized (PV) VMs and allows for a guest to escalate its privilege to that of the host,…

Read more at The New Stack

Want to be a Software Industry Influencer? Get Involved in Open Source

SD Times recently recognized The Linux Foundation among the top innovators and leaders in software development in its annual SD Times 100 list.

The LF was honored to be named a top Influencer, along with ten other industry heavyweights including Apple, Facebook, GitHub, Google, IBM, Intel, Microsoft, Netflix, Red Hat, and Slack.

Does this list look familiar? It should. Each of the companies on the influencers list makes significant contributions to the open source community (bonus points for those who know that most are also members of The Linux Foundation).

Open source has long been a de facto standard for development and the companies on the influencers list pioneered this approach with their own products and services. At the same time, they have led the IT revolution in massively scalable cloud computing, AI, social networking, and many other innovations, and continue to do so. This is not a coincidence.

Read more at The Linux Foundation

See Session Highlights for Upcoming OS Summit and Embedded Linux Conference in Prague

Check out the newly released conference schedules for Open Source Summit Europe and the co-located Embedded Linux Conference Europe, taking place October 23-26 in Prague, Czech Republic. This year's lineup features more than 200 sessions presented by experts from Comcast, Docker, Red Hat, Siemens AG, Amazon, and more.

Open Source Summit Europe combines LinuxCon, ContainerCon, and CloudOpen conferences with the all new Open Community Conference and Diversity Empowerment Summit and is the premier open source technical conference in Europe, gathering 2,000 developers, admins, and community leadership professionals to collaborate, share information and learn about the latest in open technologies.

The co-located Embedded Linux Conference Europe — now in its 12th year — is the place to collaborate with peers on all aspects of embedded Linux, from the hardware to user space development.

In addition to the previously announced keynote speakers, more than 200 educational sessions are on offer at Open Source Summit and Embedded Linux Conference.

Session highlights at Open Source Summit Europe include:

  • Love What You Do, Everyday! – Zaheda Bhorat, Amazon Web Services

  • The Rise of Open Source in the Manufacturing Industry – Steffan Evers, Bosch Software Innovations GmbH

  • DIY Open-Source Data Lakes and You – Ashley Hathaway, Stitch Data

  • Detecting Performance Regressions In The Linux Kernel – Jan Kara, SUSE

  • Highway to Helm: Deploying Kubernetes Native Applications – Michelle Noorali, Microsoft

  • Deploying and Scaling Microservices with Docker and Kubernetes – Jérôme Petazzoni, Docker

  • printk() – The Most Useful Tool is Now Showing its Age – Steven Rostedt, VMware

  • Every Day Opportunities for Inclusion and Collaboration – Nithya Ruff, Comcast

  • Beyond Your Code: Building A Successful Project Community – Ruth Suehle, Red Hat

  • Multi-repo, Multi-node Gating at Massive Scale – Monty Taylor, Red Hat

Session highlights at Embedded Linux Conference Europe include:

  • KEYNOTE: Jan Kiszka, Senior Key Expert, Siemens AG

  • Continuous Integration: Jenkins, libvirt and Real Hardware – Anna-Maria Gleixner, Linutronix GmbH

  • Linux-based RTOS Platform for Constructing Self-Driving Vehicles – Jim Huang, South Star Xelerator (SSX)

  • Orchestrated Android-Style System Upgrades for Embedded Linux – Diego Rondini, Kynetics

The complete Open Source Summit schedule can be viewed here, and the schedule for Embedded Linux Conference can be viewed here.

Registration is discounted to $800 through August 27, and academic and hobbyist rates are also available. Applications are also being accepted for diversity and needs-based scholarships. Linux.com readers receive an additional $40 off with code OSSEULDC20. Register Now!