
Join Interactive Workshop on Cloud-Native Network Functions at Open Source Summit

ONAP and Kubernetes – two of the fastest-growing Linux Foundation projects – are coming together in the next generation of telecom architecture.  

ONAP provides a comprehensive platform for real-time, policy-driven orchestration and automation of physical and virtual network functions, and Kubernetes is an open source system for automating deployment, scaling, and management of containerized applications. Telcos are now examining how these virtual network functions (VNFs) could evolve into cloud-native network functions (CNFs) running on Kubernetes.

In a three-hour interactive workshop on cloud-native network functions at Open Source Summit, Dan Kohn, Executive Director, Cloud Native Computing Foundation, and Arpit Joshipura, GM Networking & Orchestration, The Linux Foundation, will explain networking and cloud-native terms and concepts side by side.

“As the next generation of telco architecture evolves, CSPs are exploring how their Virtual Network Functions (VNFs) can evolve into Cloud-native Network Functions (CNFs),” said Joshipura. “This seminar will explore what’s involved in migrating from VNFs to CNFs, with a specific focus on the roles played by ONAP and Kubernetes. We hope to see a broad swath of community members from both the container and networking spaces join us for an engaging and informative discussion in Vancouver.”

Session highlights will include:

  • Migrating and automating network functions: from physical network functions to VNFs to CNFs
  • Overview of sub-projects focusing on this migration, including cross-cloud CI, ONAP/OVP, FD.io/VPP, etc.
  • The role for a service mesh, such as Envoy, Istio, or Linkerd, in connecting CNFs with load balancing, canary deployments, policy enforcement, and more.
  • What is involved in telcos adopting modern continuous integration / continuous deployment (CI/CD) tools to be able to rapidly innovate and improve their CNFs while retaining confidence in their reliability.
  • Differing security needs of trusted (open source and vendor-provided) code vs. running untrusted code
  • The role for security isolation technologies like gVisor or Kata
  • Requirements of the underlying operating system
  • Strengths and weaknesses of different network architectures such as multi-interface pods and Network Service Mesh
  • Status of IPv6 and dual-stack support in Kubernetes

Additional registration is required for this session, but there is no extra fee. Space is limited in the workshop, so reserve your spot soon. And, if you plan to attend, please be willing to participate. Learn more and sign up now!

This article originally appeared at The Linux Foundation.

How an Open-Source Education Can Help Students Gain an Edge

Open-source software development is less structured and more free flowing, relying on the ideas and (sometimes brutally honest) input from developers from different backgrounds. Like a team of students working on a group project, open-source software is created from code written by many different contributors, some of whom may be halfway around the world, constantly iterating, innovating and having fun.

As such, an open-source curriculum can expose students to like-minded people from diverse backgrounds while preparing them to be better technologists and to work in a field that demands their services.

Preparing students for open organizations

It is also no coincidence that many of the organizations seeking open-source skills have adopted the methodologies that define open-source software development. They want young people with fresh ideas, but they also want people who know how to work with and seek input from others.

Read more at EdScoop

What’s in a Container Image: Meeting the Legal Challenges

Container technology has, for many years, been transforming how workloads in data centers are managed and speeding the cycle of application development and deployment.

In addition, container images are increasingly used as a distribution format, with container registries serving as a mechanism for software distribution. Isn’t this just like packages distributed using package management tools? Not quite. While container image distribution is similar to RPMs, DEBs, and other package management systems (for example, storing and distributing archives of files), the implications of container image distribution are more complicated. It is not the fault of container technology itself; rather, it’s because container distribution is used differently than package management systems.
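
One way to see the difference is to unpack an image and look at what it actually ships. As a rough sketch (the image name here is only an example, and this assumes the Docker CLI is installed), you can export an image to a tarball and list its contents:

$ docker pull nginx:latest
$ docker save nginx:latest -o nginx.tar
$ tar -tf nginx.tar

Each layer in the resulting tarball is itself an archive of files, often an entire base distribution plus whatever packages were added on top, and every one of those files carries its own license terms. That is what makes compliance review for an image harder than for a single RPM or DEB.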

Talking about the challenges of license compliance for container images, Dirk Hohndel, chief open source officer at VMware, pointed out that the content of a container image is more complex than most people expect, and many readily available images have been built in surprisingly cavalier ways. (See the LWN.net article by Jake Edge about a talk Dirk gave in April.)

Read more at OpenSource.com

How to Use the gpg Command to Encrypt Linux Files

There are many reasons to encrypt files — even on a system that is well maintained and comparatively secure. The files may be highly sensitive, contain personal information that you don’t want to share with anyone, or be backed up to some variety of online storage where you’d prefer they be extra secure.

Fortunately, commands for reliably encrypting files on Linux systems are easy to come by and quite versatile. One of the most popular is gpg.

gpg vs pgp and OpenPGP

Used both to encrypt files in place and prepare them to be sent securely over the Internet, gpg is related to, but not the same as, pgp and OpenPGP. While gpg is based on the OpenPGP standards established by the IETF, it is — unlike pgp — open source. Here’s the rundown:

  • OpenPGP is the IETF-approved standard that defines encryption technology that uses processes that are interoperable with PGP.
  • pgp is Symantec’s proprietary encryption solution.
  • gpg adheres to the OpenPGP standard and provides an interface that allows users to easily encrypt their files.
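
As a quick illustration (the file name below is only a placeholder), symmetric encryption with gpg takes a single command, and decryption is just as simple:

$ gpg -c myfile.txt        # prompts for a passphrase and writes myfile.txt.gpg
$ gpg myfile.txt.gpg       # prompts for the passphrase and restores myfile.txt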

Read more at Network World

A Linux Sysadmin’s Guide to Network Management, Troubleshooting and Debugging

A system administrator’s routine tasks include configuring, maintaining, troubleshooting, and managing servers and networks within data centers. There are numerous tools and utilities in Linux designed for these administrative purposes.

In this article, we will review some of the most commonly used command-line tools and utilities for network management in Linux, grouped into different categories. We will walk through some common usage examples, which will make network management much easier in Linux.

Network Configuration, Troubleshooting and Debugging Tools

1. ifconfig Command

ifconfig is a command-line tool for network interface configuration; it is also used to initialize interfaces at system boot time. Once a server is up and running, it can be used to assign an IP address to an interface and to enable or disable the interface on demand.
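
For instance, assigning an address and toggling an interface looks like this (the interface name and address are illustrative, and root privileges are required):

$ sudo ifconfig enp1s0 192.168.0.103 netmask 255.255.255.0
$ sudo ifconfig enp1s0 up
$ sudo ifconfig enp1s0 down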

It is also used to view the status, IP address, hardware/MAC address, and MTU (Maximum Transmission Unit) size of the currently active interfaces. ifconfig is thus useful for debugging or performing system tuning.

Here is an example that displays the status of all active network interfaces:

$ ifconfig
enp1s0    Link encap:Ethernet  HWaddr 28:d2:44:eb:bd:98
          inet addr:192.168.0.103  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::8f0c:7825:8057:5eec/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:169854 errors:0 dropped:0 overruns:0 frame:0
          TX packets:125995 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:174146270 (174.1 MB)  TX bytes:21062129 (21.0 MB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:15793 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15793 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:2898946 (2.8 MB)  TX bytes:2898946 (2.8 MB)

To list all interfaces that are currently available, whether up or down, use the -a flag:
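
$ ifconfig -a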

Read more at Tecmint

Open Source Networking Jobs: A Hotbed of Innovation and Opportunities

As global economies move ever closer to a digital future, companies and organizations in every industry vertical are grappling with how to further integrate and deploy technology throughout their business and operations. While Enterprise IT largely led the way, the advantages and lessons learned are now starting to be applied across the board. While the national unemployment rate stands at 4.1%, the overall unemployment rate for tech professionals hit 1.9% in April and the future for open source jobs looks particularly bright. I work in the open source networking space and the innovations and opportunities I’m witnessing are transforming the way the world communicates.

Once a slower moving industry, the networking ecosystem of today — made up of network operators, vendors, systems integrators, and developers — is now embracing open source software and shifting significantly toward virtualization and software-defined networks running on commodity hardware. In fact, nearly 70% of global mobile subscribers are represented by network operator members of LF Networking, an initiative working to harmonize the projects that make up the open networking stack and adjacent technologies.

Demand for Skills

Developers and sysadmins working in this space are embracing cloud native and DevOps approaches and methods to develop new use cases and tackle the most pressing industry challenges. Focus areas like containers and edge computing are red hot and the demand for developers and sysadmins who can integrate, collaborate, and innovate in this space is exploding.

Open source and Linux make this all possible, and per the recently published 2018 Open Source Jobs Report, fully 80% of hiring managers are looking for people with Linux skills, while 46% are looking to recruit in the networking area and a roughly equal percentage cite “Networking” as a technology most affecting their hiring decisions.

Developers are the most sought-after role, with 72% of hiring managers looking for them, followed by DevOps skills (59%), engineers (57%), and sysadmins (49%). The report also measures the incredible growth in demand for container skills, which matches what we’re seeing in the networking space with the creation of cloud-native network functions (CNFs) and the proliferation of continuous integration / continuous deployment (CI/CD) approaches such as the XCI initiative in OPNFV.

Get Started

The good news for job seekers is that there are plenty of onramps into open source, including the free Introduction to Linux course. Multiple certifications are mandatory for the top jobs, so I encourage you to explore the range of training opportunities out there. Specific to networking, check out these new training courses in the OPNFV and ONAP projects, as well as this introduction to open source networking technologies.

If you haven’t done so already, download the 2018 Open Source Jobs Report now for more insights and plot your course through the wide world of open source technology to the exciting career that waits for you on the other side!

Download the complete Open Source Jobs Report now and learn more about Linux certification here.

An Interview with Heptio, the Kubernetes Pioneers

I recently spent some time chatting with Craig McLuckie, CEO of the leading Kubernetes solutions provider Heptio. Centered around both developers and system administrators, Heptio’s products and services simplify and scale the Kubernetes ecosystem.

Petros Koutoupis: For all our readers who have yet to hear of the remarkable things Heptio is doing in this space, please start by telling us, who is Craig McLuckie?

Craig McLuckie: I am the CEO and founder of Heptio. My co-founder, Joe Beda, and I were two of the three creators of Kubernetes and previously started Google Compute Engine, Google’s traditional infrastructure-as-a-service product. I also started the Cloud Native Computing Foundation (CNCF), of which I am a board member.

PK: Why did you start Heptio? What services does Heptio provide?

CL: Since we announced Kubernetes in June 2014, it has garnered a lot of attention from enterprises looking to develop a strategy for running their business applications efficiently in a multi-cloud world.

Perhaps the most interesting trend we saw that motivated us to start Heptio was that enterprises were looking at open-source technology adoption as the best way to create a common platform that spanned on-premises, private cloud, public cloud and edge deployments without fear of vendor lock-in. Kubernetes and the cloud native technology suite represented an incredible opportunity to create a powerful “utility computing platform” spanning every cloud provider and hosting option, that also radically improves developer productivity and resource efficiency.

In order to get the most out of Kubernetes and the broader array of cloud native technologies, we believed a company needed to exist that was committed to helping organizations get closer to the vibrant Kubernetes ecosystem. Heptio offers both consultative services and a commercial subscription product that delivers the deep support and the advanced operational tooling needed to stitch upstream Kubernetes into modern enterprise IT environments.

Read more at Linux Journal

Serverless Testing in Production

The still-maturing serverless ecosystem means that there is not yet a full range of tools available for specific aspects of application deployment within this infrastructure style. Moreover, the nature of serverless as an event-driven architecture — where cloud providers or others are responsible for autoscaling and managing the resources necessary for compute — means that in many cases it is difficult to usefully test for how things will behave in a production environment.

Charity Majors, co-founder and CEO of platform-agnostic DevOps monitoring tool Honeycomb.io, says that this inability to test in development is not unique to serverless. Given the nature of building and deploying distributed applications at scale, there is no possible way to test for every eventuality. While she agrees that the “I don’t test, but when I do, I test in production” meme may be worthy of an eye-roll, she does believe in the concept of “testing in production.”

“When I say ‘test in production,’ I don’t mean not to do best practices first,” explained Majors. “What I mean is, there are unknown unknowns that should be tested by building our systems to be resilient. Some categories of bug can only be noticed when applications [run] at scale. …”

Read more at The New Stack

Three Graphical Clients for Git on Linux

Those who develop on Linux are likely familiar with Git, and with good reason. Git is one of the most widely used and recognized version control systems on the planet. And for most, Git use tends to lean heavily on the terminal. After all, much of your development probably occurs at the command line, so why not interact with Git in the same manner?

In some instances, however, having a GUI tool to work with can make your workflow slightly more efficient (at least for those who tend to depend upon a GUI). To that end, what options do you have for Git GUI tools? Fortunately, we found some that are worthy of your time and (in some cases) money. I want to highlight three such Git clients that run on the Linux operating system. Out of these three, you should be able to find one that meets all of your needs.

I am going to assume you understand how Git and repositories like GitHub function, which I covered previously, so I won’t be taking the time for any how-tos with these tools. Instead, this will be an introduction, so you (the developer) know these tools are available for your development tasks.

A word of warning: Not all of these tools are free, and some are released under proprietary licenses. However, they all work quite well on the Linux platform and make interacting with GitHub a breeze.

With that said, let’s look at some outstanding Git GUIs.

SmartGit

SmartGit is a proprietary tool that’s free for non-commercial usage. If you plan on employing SmartGit in a commercial environment, the license cost is $99 USD per year for one license or $5.99 per month. There are other upgrades (such as Distributed Reviews and SmartSynchronize), which are both $15 USD per license. You can download either the source or a .deb package for installation. I tested SmartGit on Ubuntu 18.04 and it worked without issue.

But why would you want to use SmartGit? There are plenty of reasons. First and foremost, SmartGit makes it incredibly easy to integrate with the likes of GitHub and Subversion servers. Instead of spending your valuable time attempting to configure the GUI to work with your remote accounts, SmartGit takes the pain out of that task. The SmartGit GUI (Figure 1) is also very well designed to be uncluttered and intuitive.

Figure 1: The SmartGit UI helps to simplify your workflow.

After installing SmartGit, I had it connected with my personal GitHub account in seconds. The default toolbar makes working with a repository incredibly simple. Push, pull, check out, merge, add branches, cherry-pick, revert, rebase, reset — all of Git’s most popular features are there to use. Outside of supporting most of the standard Git and GitHub functions/features, SmartGit is very stable. At least when using the tool on the Ubuntu desktop, you feel like you’re working with an application that was specifically designed and built for Linux.

SmartGit is probably one of the best tools that makes working with even advanced Git features easy enough for any level of user. To learn more about SmartGit, take a look at the extensive documentation.

GitKraken

GitKraken is another proprietary GUI tool that makes working with both Git and GitHub an experience you won’t regret. Where SmartGit has a very simplified UI, GitKraken has a beautifully designed interface that offers a bit more in the way of features at the ready. There is a free version of GitKraken available (and you can test the full-blown paid version with a 15-day trial period). After the trial period ends, you can continue using the free version, but for non-commercial use only.

For those who want to get the most out of their development workflow, GitKraken might be the tool to choose. This particular take on the Git GUI features the likes of visual interactions, resizable commit graphs, drag and drop, seamless integration (with GitHub, GitLab, and BitBucket), easy in-app tasks, in-app merge tools, fuzzy finder, gitflow support, 1-click undo & redo, keyboard shortcuts, file history & blame, submodules, light & dark themes, git hooks support, git LFS, and much more. But the one feature that many users will appreciate the most is the incredibly well-designed interface (Figure 2).

Figure 2: The GitKraken interface is tops.

Outside of the amazing interface, one of the things that sets GitKraken above the rest of the competition is how easy it makes working with multiple remote repositories and multiple profiles. The one caveat to using GitKraken (besides it being proprietary) is the cost. If you’re looking at using GitKraken for commercial use, the license costs are:

  • $49 per user per year for individual

  • $39 per user per year for 10+ users

  • $29 per user per year for 100+ users

The Pro accounts allow you to use both the Git Client and the Glo Boards (which is the GitKraken project management tool) commercially. The Glo Boards are an especially interesting feature as they allow you to sync your Glo Board to GitHub Issues. Glo Boards are sharable and include search & filters, issue tracking, markdown support, file attachments, @mentions, card checklists, and more. All of this can be accessed from within the GitKraken GUI.

GitKraken is available for Linux as either an installable .deb file or source.

Git Cola

Git Cola is our free, open source entry in the list. Unlike both GitKraken and SmartGit, Git Cola is a pretty bare-bones, no-nonsense Git client. Git Cola is written in Python with a Qt interface, so no matter what distribution and desktop combination you use, it should integrate seamlessly. And because it’s open source, you should find it in your distribution’s package manager. So installation is nothing more than a matter of opening your distribution’s app store, searching for “Git Cola” and installing. You can also install from the command line like so:

sudo apt install git-cola

Or:

sudo dnf install git-cola

The Git Cola interface is pretty simple (Figure 3). In fact, you won’t find many bells and whistles, as Git Cola is all about the basics.

Figure 3: The Git Cola interface is a much simpler affair.

Because of Git Cola’s return to basics, there will be times when you must interface with the terminal. However, for many Linux users this won’t be a deal breaker (as most are developing within the terminal anyway). Git Cola does include features like:

  • Multiple subcommands

  • Custom window settings

  • Configurable environment variables

  • Language settings

  • Supports custom GUI settings

  • Keyboard shortcuts

Although Git Cola does support connecting to remote repositories, the integration with the likes of GitHub isn’t nearly as intuitive as it is on either GitKraken or SmartGit. But if you’re doing most of your work locally, Git Cola is an outstanding tool that won’t get in between you and Git.

Git Cola also comes with an advanced DAG (Directed Acyclic Graph) visualizer, called Git DAG. This tool allows you to get a visual representation of your branches. You start Git DAG either separately from Git Cola or within Git Cola from the View > DAG menu entry. Git DAG is a very powerful tool, which helps to make Git Cola one of the top open source Git GUIs on the market.
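
If you prefer launching the visualizer from a terminal, the git-cola package typically ships a separate git-dag command you can run from inside a repository:

$ git-dag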

There’s more where that came from

There are plenty more Git GUI tools available. However, with these three tools you can do some serious work. Whether you’re looking for a tool with all the bells and whistles (regardless of license) or you’re a strict GPL user, one of these should fit the bill.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

NetSpectre Attack Could Enable Remote CPU Exploitation

Researchers from Graz University of Technology in Austria released new research on July 26 detailing how the Spectre CPU speculative execution vulnerability could be used over a remote network.

In a 14-page report, the researchers dubbed their attack method NetSpectre, which can enable an attacker to read arbitrary memory over a network. Spectre is the name that researchers have given to a class of vulnerabilities that enable attackers to exploit the speculative execution feature in modern CPUs. Spectre and the related Meltdown CPU vulnerabilities were first publicly disclosed on Jan. 3.

“Spectre attacks require some form of local code execution on the target system,” the Graz University researchers wrote. “Hence, systems where an attacker cannot run any code at all were, until now, thought to be safe.”

Read more at eWeek