
Enterprises Identify 10 Essential Tools for DevOps

In some of these 10 major categories, just one tool rules the roost, such as Docker for application containers and Jenkins for CI/CD. In others, DevOps practitioners view a handful of tools as interchangeable, depending on personal preferences and the IT environment’s specific requirements.

In rough order of their appearance in the DevOps pipeline, based on real-world user feedback, here are the most essential DevOps tools:

1. Source code repository

The first step in a DevOps process is a version-controlled source code repository where developers check in, check out and manage code. Most CI and application deployment tools respond automatically to code commits in such repositories, and a DevOps process that doesn’t start with source code control is a nonstarter in the eyes of practitioners.

Read more at TechTarget

Xen Project Contributor Spotlight: Kevin Tian

The Xen Project comprises a diverse set of member companies and contributors committed to the growth and success of the Xen Project Hypervisor. The Xen Project Hypervisor is a staple technology for server and cloud vendors, and it is gaining traction in the embedded, security, and automotive spaces. This blog series highlights the companies contributing to the Xen Project's changes and growth, and how the Xen Project technology bolsters their business.

Name: Kevin Tian
Title: Principal Engineer of Open Source Technology Center
Company: Intel

When did you join the Xen Project and why/how is your organization involved?
My journey with the Xen Project has spanned ~13 years now (since 2005), with a focus on hardware-assisted virtualization using Intel® Virtualization Technology (Intel® VT). I act as the maintainer for the VT-x/VT-d sub-systems in the Xen Project community. The Xen Project was the first open source virtualization project to embrace Intel® VT and remains a leading community for demonstrating new hardware virtualization features.

Read more at Xen Project

The Code.mil Open Source Initiative Got a Makeover

The Defense Department launched the Code.mil website on Tuesday, a new, streamlined portal for its similarly named Code.mil initiative, a collaborative approach to meeting the government’s open source policy.

The site features a suite of new tools, including checklists that link to guidance, and represents “an evolution of the Code.mil project,” according to Ari Chivukula, policy wrangler for the Defense Digital Service.

In 2016, then-President Barack Obama’s Federal Source Code Policy pushed agencies to use open source software.

Read more at Nextgov

SRT in GStreamer

SRT, the open source video transport protocol that enables the delivery of high-quality and secure, low latency video, has been integrated into GStreamer.

By Olivier Crête, Multimedia Lead at Collabora.

Transmitting low delay, high quality video over the Internet is hard. The trade-off is normally between video quality and transmission delay (or latency). Internet video has up to now been segregated into two segments: video streaming and video calls. On one side, streaming video has taken over the world of video distribution using segmented streaming technologies such as HLS and DASH, allowing services like Netflix to flourish. On the other side, you have VoIP systems, which generally target relatively low bitrates using low latency technologies such as RTP and WebRTC, and which don’t deliver a broadcast grade result. SRT bridges that gap by allowing the transfer of broadcast grade video at low latencies.

The SRT protocol achieves this goal using two techniques. First, if a packet is lost, SRT retransmits it, but only for a certain amount of time determined by the configured latency; this means the latency is bounded by the application. Second, it estimates the available bandwidth using algorithms from UDT, so it can avoid sending at a rate that exceeds the link’s capacity; it also makes this information available to the application (for example, the encoder), which can adjust its encoding bitrate to stay within the available bandwidth and ensure the best possible quality. Combining these techniques, we can achieve broadcast grade video over the Internet if the bandwidth is sufficient.

At Collabora, we’re very excited about the possibilities created by SRT, so we decided to integrate it into GStreamer, the most versatile multimedia framework out there!
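
To give a flavor of what that looks like in practice, here is a minimal sender/receiver sketch using gst-launch-1.0. The SRT element and property names are assumptions on my part and have changed between GStreamer releases (early versions shipped srtserversink/srtclientsrc, later ones srtsink/srtsrc), and the port and localhost address are placeholders, so check which plugins your installation provides:

# Sender: encode a test pattern as H.264, mux into MPEG-TS, and listen for SRT callers on port 8888
gst-launch-1.0 videotestsrc ! videoconvert ! x264enc tune=zerolatency ! mpegtsmux ! srtsink uri=srt://:8888 latency=125

# Receiver: connect to the sender, demux, decode, and display the stream
gst-launch-1.0 srtsrc uri=srt://127.0.0.1:8888 latency=125 ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink

The latency value here corresponds to the configured latency described above: it bounds how long SRT keeps retrying lost packets.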

Continue reading on Collabora’s blog.

5 Open Source Technology Trends for 2018

Technology is evolving faster than the speed of light. Well, not quite, but you get the picture. Blockchain, Artificial Intelligence, OpenStack, progressive web apps – they are all set to make an impact this year. You might be accustomed to navigating your forex trading platform or building a website in WordPress, but how familiar are you with the following? 

Artificial Intelligence

Thirty years ago, machine learning and AI were the stuff of science fiction. The notion that AI would one day be in your homes was a little frightening given that Terminator was in the cinema. Today, machine learning and Artificial Intelligence are evolving fast. Chatbots take care of front-line customer service and driverless cars are in production. Indeed, Andrew Ng from Baidu predicts that driverless cars will be available from 2020 and in full production by 2021, so this is one area that is sure to expand in 2018.

Blockchain Technology

Bitcoin has hit the headlines numerous times in the last twelve months. After reaching the heady heights of a snip under $20,000 in December 2017, bitcoin has since taken a dramatic tumble. The cryptocurrency lost $67 billion in one week at the beginning of February, after major banks announced they were banning the use of credit cards to buy bitcoin. Other cryptocurrencies have followed suit, but the underlying blockchain technology is sound and analysts believe it will grow and prosper in 2018.

OpenStack

OpenStack is the future. This is an operating system that runs in the cloud and it has a lot of advantages. For starters, it offers a flexible ecosystem at low cost and it can easily support mission-critical applications. But, there are some challenges to be faced, most notably, OpenStack’s dependency on servers, virtualisation, and its complex structure. Nevertheless, OpenStack has the backing of several major software companies and acceptance rates are expected to soar in 2018.

Progressive Web Apps

Progressive web apps are the perfect combination of a website and an app. Progressive web apps don’t need to be downloaded, but they offer a better UX than viewing a website. Progressive web apps update information in real time and are served over HTTPS. They are fast and responsive, and in today’s high-octane world where users demand convenience, this is essential.

The Internet of Things

The Internet of Things allows the interconnection of everyday devices. It isn’t a new concept, but the reach of the IoT looks set to grow in 2018. Autonomous Decentralized Peer-to-Peer Telemetry is a major part of evolving IoT technology. It uses the principles of blockchain technology to deliver a de-centralised network that allows devices and “things” to communicate without a central command centre. It could prove to be a major evolution in the world of tech. 

Other technology trends worthy of a mention include the latest generation of programming languages. Rust is an exciting alternative to Python and C, and programmers predict it will become a viable choice in 2018. R is another up-and-coming open source programming language, and it is also one to watch in 2018.

DNS and DHCP with Dnsmasq

Last week, we learned a batch of tips and tricks for Dnsmasq. Today, we’re going more in-depth into configuring DNS and DHCP, including entering DHCP hostnames automatically into DNS, and assigning static IP addresses from DHCP.

You will edit three configuration files on your Dnsmasq server: /etc/dnsmasq.conf, /etc/resolv.conf, and /etc/hosts. Just like the olden days when we had nice clean configuration files for everything, instead of messes of scripts and nested configuration files.

Use Dnsmasq’s built-in syntax checker to check for configuration file errors, and run Dnsmasq from the command line rather than as a daemon so you can quickly test configuration changes and log the results. (See last week’s tutorial to learn more about this.)
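
For example, assuming the service is named dnsmasq on a systemd-based distribution, something like this will catch configuration mistakes and then let you watch DNS queries and DHCP traffic live:

# Check /etc/dnsmasq.conf for syntax errors
dnsmasq --test

# Stop the background service, then run in the foreground with query and DHCP logging
sudo systemctl stop dnsmasq
sudo dnsmasq --no-daemon --log-queries --log-dhcp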

Taming Network Manager and resolv.conf

Disable Network Manager on your Dnsmasq server, and give its network interfaces static configurations. You also need control of /etc/resolv.conf, which in these modern times is usually controlled by other processes, such as Network Manager. In these cases /etc/resolv.conf is a symbolic link to another file such as /run/resolvconf/resolv.conf or /var/run/NetworkManager/resolv.conf. To get around this, delete the symlink and then re-create /etc/resolv.conf as a regular file. Now your changes will not be overwritten.
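
A minimal sketch of that step, assuming /etc/resolv.conf on your server is currently a symlink (the link target varies by distribution):

# Replace the symlink with a real file that only you control
sudo rm /etc/resolv.conf
sudo touch /etc/resolv.conf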

There are many ways to use Dnsmasq and /etc/resolv.conf together. My preference is to enter only 127.0.0.1 in /etc/resolv.conf, and enter all upstream nameservers in /etc/dnsmasq.conf. You don’t need to touch any client configurations because Dnsmasq will provide all network information to them via DHCP.
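
With that setup, the entire /etc/resolv.conf on the Dnsmasq server is a single line pointing back at Dnsmasq itself:

nameserver 127.0.0.1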

Local DHCP

This example configuration includes some typical global options, and then defines a single DHCP address range. Replace the example domain and addresses with your own values.

# global options
domain-needed
bogus-priv
no-resolv
filterwin2k
expand-hosts
domain=mydomain.net
local=/mydomain.net/
listen-address=127.0.0.1
listen-address=192.168.10.4

# DHCP range
dhcp-range=192.168.10.10,192.168.10.50,12h
dhcp-lease-max=25

dhcp-range=192.168.10.10,192.168.10.50,12h defines a range of 41 available addresses (.10 through .50 inclusive), with a lease time of 12 hours. This range must not include your Dnsmasq server. You may define the lease time in seconds, minutes, or hours; the default is one hour and the minimum possible is two minutes. If you want leases that never expire, use infinite as the lease time.
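
For reference, these are alternative forms of the same range, each with a different lease-time notation (use only one per subnet):

# lease time in seconds (a plain number)
dhcp-range=192.168.10.10,192.168.10.50,600
# lease time in minutes
dhcp-range=192.168.10.10,192.168.10.50,45m
# leases that never expire
dhcp-range=192.168.10.10,192.168.10.50,infinite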

dhcp-lease-max=25 defines how many leases can be active at one time. You can have a large address pool available and then limit the number of active leases to prevent denial-of-service problems from hosts going nuts and demanding a lot of DHCP leases.

DHCP Zones and Options

You can define DHCP zones for different subnets and then give each zone different options. This example defines an eth zone and a wifi zone:

dhcp-range=eth,192.168.10.10,192.168.10.50,12h
dhcp-range=wifi,192.168.20.10,192.168.20.50,24h

The default route advertised to all clients is the address of your Dnsmasq server. You can configure DHCP to assign each zone a different default route:

dhcp-option=eth,3,192.168.10.1
dhcp-option=wifi,3,192.168.20.1

How do you know that 3 is the default route option? Run dnsmasq --help dhcp to see all the IPv4 options. dnsmasq --help dhcp6 lists the IPv6 options. (See man 5 dhcp-options for more information on options.) You may also use the option names instead of the numbers, like this example for your NTP server:

dhcp-option=eth,option:ntp-server,192.168.10.5

Upstream Name Servers

Controlling which upstream name servers your network uses is one of the nicer benefits of running your own name server, instead of being stuck with whatever your ISP wants you to use. This example uses the Google public name servers. You don’t have to use Google; a quick Web search will find a lot of public DNS servers.

server=8.8.4.4
server=8.8.8.8

DNS Hosts

Adding DNS hosts to Dnsmasq is almost as easy as falling over. All you do is add them to /etc/hosts, like this, using your own addresses and hostnames:

127.0.0.1       localhost
192.168.10.2    webserver
192.168.10.3    fileserver 
192.168.10.4    dnsmasq
192.168.10.5    timeserver

Dnsmasq reads /etc/hosts, and these hosts are available to your LAN either by hostname or by their fully-qualified domain names. The expand-hosts option in /etc/dnsmasq.conf expands the hostnames with the domain= value, for example webserver.mydomain.net.
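
To verify that both forms resolve, query the Dnsmasq server directly (the names and the 192.168.10.4 server address are just the example values used above); both commands should return 192.168.10.2:

dig @192.168.10.4 +short webserver
dig @192.168.10.4 +short webserver.mydomain.net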

Set Static Addresses from DHCP

This is my favorite thing. You may assign static IP addresses to your LAN hosts by MAC address, or by hostname. The address must be in a subnet you have already configured with dhcp-range=, though it does not have to fall inside the dynamic pool:

dhcp-host=d0:50:99:82:e7:2b,192.168.10.46
dhcp-host=turnip,192.168.10.45

On most Linux distributions, dhclient sends the hostname by default. You can confirm this in dhclient.conf, with the send host-name option. Do not have any duplicate entries in /etc/hosts.
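
On Debian-family systems, the relevant line usually lives in /etc/dhcp/dhclient.conf and looks like this (the path and the shipped default can vary by distribution):

# Send the machine's own hostname with every DHCP request
send host-name = gethostname();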

Here we are again at the end already. See last week’s tutorial and the Dnsmasq man page for more features and how-tos.


GPUs on Google’s Kubernetes Engine Are Now Available in Open Beta

The Google Kubernetes Engine (previously known as the Google Container Engine and GKE) now allows all developers to attach Nvidia GPUs to their containers.

GPUs on GKE (an acronym Google used to be quite fond of, but seems to be deemphasizing now) have been available in closed alpha for more than half a year. Now, however, this service is in beta and open to all developers who want to run machine learning applications or other workloads that could benefit from a GPU. As Google notes, the service offers access to both the Tesla P100 and K80 GPUs that are currently available on the Google Cloud Platform.

Read more at TechCrunch

The CNCF Takes Steps Toward Serverless Computing

Even though the idea of ‘serverless’ has been around since 2006, it is a relatively new concept. It’s the next step in the ongoing revolution of IT infrastructure that goes back to the days when one server used to run one application.

Many vendors and users who attended KubeCon Austin expressed a growing interest in serverless computing. Platform9 conducted a survey at the event, and Functions as a Service (FaaS) came up as the third most popular use case in the community. In a recent survey conducted by the CNCF, 41% of respondents said they are using serverless technology.

Being a new concept, there is a lot of curiosity and confusion around serverless computing. People are asking questions: What is it? Who is it for? Is it a replacement for IaaS, PaaS and containers? Does that mean the days of servers are over? What are the benefits? What are the drawbacks?

Read more at CNCF

Observability: The New Wave or Buzzword?

Monitoring tells you whether the system works. Observability lets you ask why it’s not working.

— Baron Schwartz (@xaprb) October 19, 2017

In “Monitoring in the time of Cloud Native,” Cindy Sridharan views monitoring (alerting and overview dashboards) as the tip of the observability iceberg, with debugging beneath it. We’ve had alerting, dashboards, and debugging tools for decades, so why are we now deciding this needs an overarching name?

If we look at where the time is spent resolving a performance incident as the iceberg, a greater percentage of the iceberg is now in debugging.

Read more at Medium

FOSS Project Spotlight: LinuxBoot

Linux as firmware.

The more things change, the more they stay the same. That may sound cliché, but it’s still as true for the firmware that boots your operating system as it was in 2001 when Linux Journal first published Eric Biederman’s “About LinuxBIOS”. LinuxBoot is the latest incarnation of an idea that has persisted for around two decades now: use Linux as your bootstrap.

On most systems, firmware exists to put the hardware in a state where an operating system can take over. In some cases, the firmware and OS are closely intertwined and may even be the same binary; however, Linux-based systems generally have a firmware component that initializes hardware before loading the Linux kernel itself. This may include initialization of DRAM, storage and networking interfaces, as well as performing security-related functions prior to starting Linux. To provide some perspective, this pre-Linux setup could be done in 100 or so instructions in 1999; now it’s more than a billion.

Read more at Linux Journal