
How The Linux Foundation Goes Beyond the Operating System to Create the Largest Shared Resource of Open-Source Technology

Mark Hinkle relied on open-source products, such as Linux, Sendmail, and Apache, to build out the infrastructure for the ISP and hosting provider he worked for in the mid-1990s.

“That’s what really got me fascinated by the fact that people were working on solutions and sharing changes that furthered the use of technology,” he said.

After 20 years of working with various software vendors and open-source solution providers, Mark has seen the adoption of open-source technologies expand to reach even the largest enterprises.

“Open-source products are becoming the de facto standard. People don’t realize the breadth of technology that’s out there and the quality of those technologies,” said Mark, now the Vice President of Marketing for The Linux Foundation.

Read more at HostingAdvice.com

OpenContrail: An Essential Tool in the OpenStack Ecosystem

Throughout 2016, software-defined networking (SDN) rapidly evolved, and numerous players in the open source and cloud computing arenas are now helping it gain momentum. In conjunction with that trend, OpenContrail, a popular SDN platform used with the OpenStack cloud computing platform, is emerging as an essential tool around which many administrators will have to develop skillsets.

Just as administrators and developers have ramped up their skillsets surrounding essential tools like Ceph in the OpenStack ecosystem, they will need to embrace OpenContrail, which is fully open source and available under the Apache 2.0 license.

With all of this in mind, Mirantis, one of the most active companies on the OpenStack scene, has announced commercial support for and contributions to OpenContrail. “With the addition of OpenContrail, Mirantis becomes a one-stop support shop for the entire stack of popular open source technologies used in conjunction with OpenStack, including Ceph for storage, OpenStack/KVM for compute and OpenContrail or Neutron for SDN,” the company noted.

According to a Mirantis announcement, “OpenContrail is an Apache 2.0-licensed project that is built using standards-based protocols and provides all the necessary components for network virtualization–SDN controller, virtual router, analytics engine, and published northbound APIs. It has an extensive REST API to configure and gather operational and analytics data from the system. Built for scale, OpenContrail can act as a fundamental network platform for cloud infrastructure.”

The news follows Mirantis’ acquisition of TCP Cloud, a company specializing in managed services for OpenStack, OpenContrail, and Kubernetes. Mirantis will use TCP Cloud’s technology for continuous delivery of cloud infrastructure to manage the OpenContrail control plane, which will run in Docker containers. As a part of the effort, Mirantis has also been contributing to OpenContrail.

Many contributors behind OpenContrail are working closely with Mirantis, and they have especially taken note of the support programs that Mirantis will offer.

“OpenContrail is an essential project within the OpenStack community, and Mirantis is smart to containerize and commercially support it. The work our team is doing will make it easy to scale and update OpenContrail and perform seamless rolling upgrades alongside the rest of Mirantis OpenStack,” said Jakub Pavlik, Mirantis’ director of engineering and OpenContrail Advisory Board member. “Commercial support will also enable Mirantis to make the project compatible with a variety of switches, giving customers more choice in their hardware and software,” he said.

In addition to commercial support for OpenContrail, we are very likely to see Mirantis serve up educational offerings for cloud administrators and developers who want to learn how to leverage it. Mirantis is already well-known for its OpenStack training curriculum and has wrapped Ceph into its training.

In 2016, the SDN category rapidly evolved, and it also became meaningful to many organizations with OpenStack deployments. IDC published a study of the SDN market recently and predicted a 53.9 percent CAGR from 2014 through 2020, at which point the market will be valued at $12.5 billion. In addition, the Technology Trends 2016 report ranked SDN as one of the best technology investments that organizations can make.

“Cloud computing and the 3rd Platform have driven the need for SDN, which will represent a market worth more than $12.5 billion in 2020. Not surprisingly, the value of SDN will accrue increasingly to network-virtualization software and to SDN applications, including virtualized network and security services. Large enterprises are now realizing the value of SDN in the datacenter, but ultimately, they will also recognize its applicability across the WAN to branch offices and to the campus network,” said Rohit Mehra, Vice President of Network Infrastructure at IDC.

Meanwhile, The Linux Foundation recently announced the release of its 2016 report “Guide to the Open Cloud: Current Trends and Open Source Projects.” This third annual report provides a comprehensive look at the state of open cloud computing, and includes a section on SDN.

The Linux Foundation also offers Software Defined Networking Fundamentals (LFS265), a self-paced, online course on SDN, and serves as the steward of the OpenDaylight project, another important open source SDN platform that is quickly gaining momentum.

Open Networking Summit, to be held April 3-6 in Santa Clara, CA, brings enterprises, carriers, and cloud service providers together to share insights, highlight innovation and discuss the future of the community. Register Now >>

UniK: Unikernel Runtime for Kubernetes by Idit Levine, EMC

Idit Levine, CTO of the Cloud Management Division at Dell EMC, presented the open source project UniK and announced new features to make unikernel creation more attractive and viable, both for cloud computing and Internet of Things devices.

Free Open Source Networking and Orchestration Webinar From SDxCentral and The Linux Foundation

Open source development is accelerating networking technology in areas including software-defined networking, open standards, and orchestration. Projects such as OPNFV, OpenDaylight, and the recently open sourced ECOMP, along with many others hosted by The Linux Foundation, are helping drive open source networking innovation.

To help you learn more and give you a sneak peek of Open Networking Summit in April, Arpit Joshipura, General Manager, Networking & Orchestration at The Linux Foundation, will hold a free webinar next week exploring the following topics:

  • How has networking evolved and where is it heading?

  • A sneak peek at the future architecture of enterprises and service providers

  • Why automation at the network and orchestration layers has simplified adjacent markets and industries

“We are entering phase three of open source software-defined networking which is about production-ready solutions deployed at scale,” said Joshipura. “In this webinar, you’ll learn how various open source components come together to create an end-to-end solution.”

This webinar will discuss open source innovations and technologies that enable end-to-end solutions for enterprises, carriers, and cloud. It will also describe open standards and open architectures in adjacent markets such as containers, cloud native, and IoT.

Join SDxCentral and The Linux Foundation for “Open Source Networking & Orchestration: From POC to Production” on Thursday, February 9, 2017 at 10:00am Pacific. Register now >>

UniK: Isolating Processes and Reducing Complexity

Unikernels aren’t a new concept; the stripped-down, library-specific application machine images have been around for decades. But unikernels are enjoying a renaissance thanks to cloud computing; they offer major efficiencies in resource use and present a tiny attack surface to nefarious online actors. At CloudNativeCon in Seattle in November, Idit Levine presented the open source project UniK (pronounced “unique”) and announced new features to make unikernel creation more attractive and viable, both for cloud computing and Internet of Things devices.

Unikernels haven’t been popular, historically, because they’re not easy to create. Narrowing down the essential libraries and drivers for a unikernel from the full application stack has made unikernels a less attractive option for some developers. That’s the problem UniK is trying to solve, according to Levine, who is the CTO of the Cloud Management Division at Dell EMC, and a member of the technical advisory board for the Cloud Foundry Foundation.

“What we wanted to do is make it easy for you to do it, so we will do all the hard work for you, and that’s exactly what UniK is about,” Levine said.

Just as Kubernetes creates application containers for clusters through a simple command interface, UniK is a tool to compile application sources into unikernels. By using unikernels instead of virtual machines, OS kernels can be avoided altogether, saving significant computing resources — and money.

Levine said unikernels mesh very well with microservices architecture; the unikernel runs a single process for a single user, and that’s the same philosophy as microservices, isolating processes and decoupling APIs.

“In order to make something like this we need to make some design choice, and our design choice was, we’re going to run only one single process,” Levine said. “If you’re running one single process and one user, which is what we’re doing in microservices architecture today, then we can be very, very smart about reducing a lot of the complexity.”

UniK offers builds for several different operating systems (MirageOS, IncludeOS, OSv, et al.), types of hardware (Intel chipsets or ARM for IoT devices), and cloud infrastructures (AWS, OpenStack, and Google Cloud), and now fully supports Kubernetes. Levine said the whole goal was to let developers choose what works best for them.

The project team is adding more compatibility all the time; Levine welcomed anyone to join the project and contribute.

For more information, watch the complete presentation below:

Want to learn more about Kubernetes? Get unlimited access to the new Kubernetes Fundamentals training course for one year for $199. Sign up now!

Arrive On Time With NTP — Part 3: Secure Setup

Earlier in this series, I provided a brief overview of NTP and then looked at important NTP options to lock down your servers. In this article, I’ll look at some additional security concerns.

Check out the pool

According to the excellent NTP Pool Project website, which points users at a “big virtual cluster of Time Servers providing reliable easy to use NTP service for millions of clients,” there are 182 active servers in the UK pool over IPv4 and 99 available over IPv6 at the time of writing. The site also provides useful historical statistics and graphs, presumably to keep an eye on any geographical areas that require more resilience, among other things. Figure 1 shows a graph of the available servers for the UK.
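If you want to draw on the pool from your own server, the relevant ntp.conf lines look something like the sketch below. The pool directive requires a reasonably recent ntpd (4.2.7 or later); on older versions you would use the numbered server lines instead, not both:

```
# Modern ntpd: let the pool directive discover and manage servers for you
pool uk.pool.ntp.org iburst

# Older ntpd: pick a handful of numbered pool hosts manually
server 0.uk.pool.ntp.org iburst
server 1.uk.pool.ntp.org iburst
server 2.uk.pool.ntp.org iburst
```

The iburst keyword simply speeds up the initial synchronization by sending a burst of packets when the server is unreachable.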

Figure 1: The historical number of NTP servers available in the pool to the UK. Copyright NTP Pool Project and Develooper, found at http://www.pool.ntp.org/zone/uk

Review your options

Let’s go back to the friendly restrict directive, mentioned previously. It can unilaterally “ignore” everything from hosts or subnets. This denies absolutely everything: packets of all kinds, including ntpq and ntpdc queries.
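As a sketch, that blanket denial looks like this in ntp.conf; the internal subnet shown is just an example you would replace with your own:

```
# Deny all packets of any kind, from everyone, by default...
restrict default ignore
restrict -6 default ignore

# ...then explicitly let your own trusted subnet and the local host back in
restrict 192.168.1.0 mask 255.255.255.0
restrict 127.0.0.1
restrict -6 ::1
```

A default-deny policy like this means any host you have not explicitly listed cannot even query the server, so keep the allow lines current as your network changes.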

I also mentioned Kiss of Death packets earlier. Adding the kod option to a restrict line means that the server will send a kiss-o’-death (KoD) packet when we want to help reduce unwelcome traffic and introduce rate limiting of some description.

One other point to note is that using the limited option only denies clock updates if a request comes up against the rate limits established by the discard command. The limited option doesn’t apply to ntpq and ntpdc queries, which might add more load from a user with nefarious intentions.
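Pulling those pieces together, a rate-limiting configuration might look like the following sketch. The discard values shown are illustrative rather than a recommendation; average is expressed as log2 seconds of minimum average spacing between packets:

```
# Rate limits: average spacing (log2 seconds) and minimum spacing between packets
discard average 4 minimum 2

# Clients exceeding the limits are denied service and sent a KoD packet
restrict default kod limited nomodify
```

Without kod, clients that break the limits are simply dropped silently; with it, well-behaved clients receive a hint to back off.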

A common addition to the restrict line is nomodify. This denies ntpq and ntpdc queries that might attempt to modify the time on an NTP server. As you would expect, however, any queries that return information only are allowed.

You are unlikely to avoid seeing this option: the noquery flag makes certain that you deny all ntpq and ntpdc queries. Be aware, however, that offering up the correct time is still possible despite this being enabled.

If you want to avoid building relationships with other NTP servers (unless they are successfully authenticated with you), then the nopeer option will allow you to do this. According to the manual, “This includes broadcast, symmetric-active and many-cast server packets when a configured association does not exist.”

The noserve option is simple; it dutifully denies all packets from a machine (or range of machines) except for ntpq and ntpdc queries.

As you might expect, the notrust switch controls who can connect to your NTP server; it denies any traffic that isn’t cryptographically authenticated. Again, quoting from the manual:

“Note carefully how this flag interacts with the auth option of the enable and disable commands. If auth is enabled, which is the default, authentication is required for all packets that might mobilize an association. If auth is disabled, but the notrust flag is not present, an association can be mobilized whether or not authenticated. If auth is disabled, but the notrust flag is present, authentication is required only for the specified address/mask range.”

One final option to pay attention to is version. This option disallows traffic that doesn’t match the current NTP version of your server or client. This can clearly be useful for ensuring that up-to-date versions are used, so that older security issues present less risk. A similar approach appears in OpenSSH, where using the legacy “version 1” of the protocol is far from recommended; it must be explicitly enabled, which helps users avoid falling into a potential trap and opening up security holes unnecessarily.
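Putting the flags from this section together, a reasonably locked-down set of restrict lines might read as follows. Treat it as a starting point to adapt to your environment, not a drop-in configuration:

```
# Default policy: rate-limit with KoD, deny modification and status queries,
# refuse unauthenticated peering, and drop packets from mismatched NTP versions
restrict default kod limited nomodify noquery nopeer version
restrict -6 default kod limited nomodify noquery nopeer version

# The local host remains unrestricted for monitoring and administration
restrict 127.0.0.1
restrict -6 ::1
```

Remember from the discussion above that noquery also blocks your own remote monitoring via ntpq, which is why the local host is exempted.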

Localized infrastructure

One infrastructure recommendation relates to installing NTP servers and introducing “peering” yourself to improve resilience and capacity. Implementing a peer-to-peer infrastructure within your Stratum 2 servers means that those servers within the peer group allow each other to update their clocks. This, in turn, helps with load balancing and reduces the additional load on their upstream servers. This approach might also be called distributing the time horizontally.
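In ntp.conf terms, horizontal peering between your Stratum 2 boxes is one line per peer. The hostnames below are hypothetical stand-ins for your own machines:

```
# On ntp1.example.com: peer with the other internal Stratum 2 servers,
# so the group can discipline each other's clocks if an upstream fails
peer ntp2.example.com
peer ntp3.example.com
```

Each member of the group carries the equivalent lines naming the other peers, and (as discussed later in this article) peering is a natural place to add authentication so that only your own servers can form these associations.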

As with all online architectural challenges, there are several other factors to consider. For example, keeping time diligently with three upstream servers and building a complex internal NTP fabric, upon which you come to heavily rely, is of little use if you only have one external Internet connection.

Perimeter Lockdown

Earlier, I promised to quickly examine how to lock down your firewalling with IPtables, rather than opening up UDP port 123 carte blanche. This might apply to your local NTP client or, if you added such rules to a perimeter firewall, to all of the clients on your LAN. I’ll look at limiting who you can speak to, in order to ensure that only very select, predefined time servers can connect with you.

I’ll use the “uk.pool.ntp.org” example for familiarity, but in reality you might lock down these rules to three individual servers rather than a pool of servers. This is because, although you are assured that a multitude of NTP servers will be available, you may not want to trust them all. Due to the high churn rate, some could be compromised and cause you unwelcome headaches.

Outbound — egress — traffic rules are slightly more sophisticated than those we use to allow traffic into our machine or network. This is because we want to allow “NEW” time lookups to be performed and also pick up the response when an NTP server responds to such a request with what’s called an “ESTABLISHED” connection. You can achieve that as follows:

# iptables -A OUTPUT -o eth0 -p udp -d uk.pool.ntp.org --dport ntp -m state --state NEW,ESTABLISHED -j ACCEPT

Note that if this were my configuration, I would most likely tie these rules to specific IP addresses.

Conversely, to allow an inbound time check to occur, you can use this fractionally simpler line with just “ESTABLISHED” connections being allowed through:

# iptables -A INPUT -i eth0 -p udp -s uk.pool.ntp.org --sport ntp -m state --state ESTABLISHED -j ACCEPT

Alternative to NTP

Finally, to help clear up any confusion, there’s an alternative of sorts to the venerable Network Time Protocol that is worth a quick mention. This option comes in the form of the Simple Network Time Protocol (SNTP). What is interesting is that both timekeeping solutions follow the same packet format; according to RFC 4330, “the NTP and SNTP packet formats are the same, and the arithmetic operations to calculate the client time, clock offset, and roundtrip delay are the same.” So, be confused no longer: SNTP is best thought of as a subset of NTP, not a contender.

SNTP becomes most useful when a fully fledged NTP server infrastructure can’t be justified. An earlier RFC (RFC 2030) specifically warns against expecting SNTP to be performant at passing on the time to other clients. It also notes that SNTP implementations are best suited to the furthest edge of the NTP tree (by that I mean the leaves, or the highest stratum, as opposed to the branches which then connect to the trunk). The RFC also warns that without redundant network links and diverse, resilient configurations, SNTP should not be relied upon for serving the time, just receiving it.

Clearly, however, SNTP can happily be implemented on less powerful devices to use fewer system resources. It should therefore be applicable to a number of scenarios, specifically where the IoT demands time lookups from otherwise low-powered equipment, such as refrigerators or other domestic appliances (which I suspect will remain low-powered from a computing perspective for only a relatively short period as demands on their functionality grow).

EOF

Although I’ve only run through an overview on keeping your clocks correct with NTP, I hope this introduction has explored some of the more important aspects of the subject. I have looked at the configuration, monitoring, and securing of NTP and extolled its value to the integrity of your infrastructure.

You should consider NTP absolutely critical and can now appreciate why it should be one of the first services — possibly directly after DNS — to explore in detail when building infrastructure. Armed with a smattering of IPtables and knowledge of where to find listings of public NTP servers, you are now suitably armed to begin that very task.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

Advance your career in Linux System Administration! Check out the Essentials of System Administration course from The Linux Foundation.

10 Best Linux Distros for Privacy Fiends and Security Buffs in 2017

The awesome operating system Linux is free and open source. As such, there are thousands of different ‘flavours’ available, and some types of Linux, such as Ubuntu, are generic and meant for many different uses. But security-conscious users will be pleased to know that there are also a number of Linux distributions (distros) specifically designed for privacy. They can help to keep your data safe through encryption and by operating in a ‘live’ mode where no data is written to your hard drive in use.

Other distros focus on penetration testing (pen-testing) – these come with tools actually used by hackers which you can use to test your network’s security. In this article, we’re going to highlight 10 of the best offerings when it comes to both privacy and security.

Read more at Tech Radar

Beyond Exascale: Emerging Devices and Architectures for Computing

In this video from SC16, Thomas Theis from the Fu Foundation School of Engineering, Arts and Sciences presents: Beyond Exascale: Emerging Devices and Architectures for Computing. Research on new and emerging devices, circuits, and architectures for computing, such as that pursued under the Nanoelectronics Research Initiative (NRI), can ultimately take high performance computing well beyond Exascale. Investing in exploratory research now can have a big impact beyond 2025.

Read more at insideHPC

Amazon’s Deep Learning Engine Is Now an Apache Project

Amazon Web Services has seemingly found open source religion over the past several months, including in the field of artificial intelligence. On Monday, the cloud computing arm of Amazon announced that MXNet, its framework of choice for building deep learning systems, has been accepted into the Apache Incubator program.

Read more at ArchiTECHt

5 New Guides for Working With OpenStack

OpenStack experience continues to be among the most in-demand skills in the tech world, with more and more organizations seeking to build and manage their own open source clouds. But OpenStack is a huge domain of knowledge, containing dozens of individual projects that are being actively developed at a rapid pace. Just keeping your skills up to date can be a challenge.

The good news is that there are lots of resources out there to keep you up to speed. In addition to the official project documentation, a variety of training and certification programs, printed guides, and other resources, …

Read more at OpenSource.com