
7 Steps to DevOps Hiring Success

Now, given the various routes to becoming a DevOps practitioner, how do hiring managers focus their search and selection process to ensure that they’re hitting the mark?

Decide on the background

Assess the strengths of your existing team. Do you already have some amazing software engineers but you’re lacking the infrastructure knowledge? Aim to close these gaps in skills. You may have been given the budget to hire for DevOps, but you don’t have to spend weeks/months searching for the best software engineer who happens to use Docker and Kubernetes because they are the current hot trends in this space. Find the person who will provide the most value in your environment and go from there.

Read more at OpenSource.com

The Programming Languages You Should Learn Now

Learning a programming language is not hard. In fact, if you’re experienced, you can learn the basics in under 24 hours. So if you’re in the market for a new lingua franca, perhaps to bolster your hirability, what you choose next may be influenced by your current language of choice.

Here are the languages I suggest you consider learning if you don’t already know them, based on the languages you already know.

Read more at InfoWorld

ONS 2018 Q&A: Dan Rodriguez, Intel

Ahead of the much anticipated 2018 Open Networking Summit, we spoke to Dan Rodriguez, vice president and general manager of the Communications Infrastructure Division within Intel’s Data Center Group, about the future of open source networking and for a preview of his keynote. To learn more, don’t miss his presentation at ONS on Tuesday, March 27 at 1:50 p.m.

Read more at The Linux Foundation

Simple Load Balancing with DNS on Linux

When your server back end is built of multiple machines, such as clustered or mirrored web or file servers, a load balancer provides a single point of entry. Large busy shops spend big money on high-end load balancers that perform a wide range of tasks: proxy, caching, health checks, SSL processing, configurable prioritization, traffic shaping, and lots more.

But maybe you don’t need all that. Perhaps you just want a simple method of distributing workloads across all of your servers, with a bit of failover, and don’t care whether it is perfectly efficient. DNS round-robin, and subdomain delegation with round-robin, are two simple ways to achieve this.

DNS round-robin is mapping multiple servers to the same hostname, so that when users visit foo.example.com multiple servers are available to handle their requests.

Subdomain delegation with round-robin is useful when you have multiple subdomains or when your servers are geographically dispersed. You have a primary nameserver, and then your subdomains have their own nameservers. Your primary nameserver refers all subdomain requests to their own nameservers. This usually improves response times, as the DNS protocol will automatically look for the fastest links.

Round-Robin DNS

Round-robin has nothing to do with robins. According to my favorite librarian, it was originally a French phrase, ruban rond, or round ribbon. Way back in olden times, French government officials signed grievance petitions in non-hierarchical circular, wavy, or spoke patterns to conceal whoever originated the petition.

Round-robin DNS is also non-hierarchical, a simple configuration that takes a list of servers and sends requests to each server in turn. It does not perform true load-balancing as it does not measure loads, and does no health checks, so if one of the servers is down, requests are still sent to that server. Its virtue lies in simplicity. If you have a little cluster of file or web servers and want to spread the load between them in the simplest way, then round-robin DNS is for you.
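The behavior is easy to see in a short simulation (hypothetical Python, not part of any DNS software): the resolver just cycles through the address list in order, with no awareness of server health, so a dead server keeps receiving its share of requests.

```python
from itertools import cycle

# Hypothetical sketch of round-robin rotation: addresses are handed out
# from a fixed list in turn, with no health checks of any kind.
servers = ["172.16.10.10", "172.16.10.11", "172.16.10.12"]
rotation = cycle(servers)

def next_server():
    """Return the next address in the rotation."""
    return next(rotation)

# Six successive lookups walk the list twice; if 172.16.10.11 were down,
# it would still be handed out for every third request.
answers = [next_server() for _ in range(6)]
print(answers)
```

This is the whole algorithm; anything smarter (load measurement, health checks) is what the expensive load balancers in the previous section are for.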

All you do is create multiple A or AAAA records, mapping multiple servers to a single hostname. This BIND example uses both IPv4 and IPv6 private address ranges:

fileserv.example.com.  IN  A  172.16.10.10
fileserv.example.com.  IN  A  172.16.10.11
fileserv.example.com.  IN  A  172.16.10.12

fileserv.example.com.  IN  AAAA  fd02:faea:f561:8fa0:1::10
fileserv.example.com.  IN  AAAA  fd02:faea:f561:8fa0:1::11
fileserv.example.com.  IN  AAAA  fd02:faea:f561:8fa0:1::12

Dnsmasq uses /etc/hosts for A and AAAA records:

172.16.1.10  fileserv fileserv.example.com
172.16.1.11  fileserv fileserv.example.com
172.16.1.12  fileserv fileserv.example.com
fd02:faea:f561:8fa0:1::10  fileserv fileserv.example.com
fd02:faea:f561:8fa0:1::11  fileserv fileserv.example.com
fd02:faea:f561:8fa0:1::12  fileserv fileserv.example.com

Note that these examples are simplified, and there are multiple ways to resolve fully-qualified domain names, so please study up on configuring DNS.

Use the dig command to check your work. Replace ns.example.com with your name server:

$ dig @ns.example.com fileserv.example.com A fileserv.example.com AAAA

That should display both IPv4 and IPv6 round-robin records.

Subdomain Delegation and Round-Robin

Subdomain delegation combined with round-robin is more work to set up, but it has some advantages. Use this when you have multiple subdomains or geographically-dispersed servers. Response times are often quicker, and a down server will not respond, so clients will not get hung up waiting for a reply. A short TTL, such as 60 seconds, helps this.

This approach requires multiple name servers. In the simplest scenario, you have a primary name server and two subdomains, each with its own name server. Configure your round-robin entries on the subdomain servers, then configure the delegations on your primary server.

In BIND on your primary name server, you’ll need at least two additional configurations: a zone statement, and A/AAAA records in your zone data file. The delegation looks something like this on your primary name server:

ns1.sub.example.com.  IN A     172.16.1.20
ns1.sub.example.com.  IN AAAA  fd02:faea:f561:8fa0:1::20
ns2.sub.example.com.  IN A     172.16.1.21
ns2.sub.example.com.  IN AAAA  fd02:faea:f561:8fa0:1::21

sub.example.com.  IN NS    ns1.sub.example.com.
sub.example.com.  IN NS    ns2.sub.example.com.

Each of the subdomain servers then has its own zone file. The trick here is for each server to return its own IP address. The zone statement in named.conf is the same on both servers:

zone "sub.example.com" {
    type master;
    file "db.sub.example.com";
};

The data files are the same on both servers, except that the A/AAAA records use each server’s own IP address. The SOA (start of authority) refers to the primary name server:

; first subdomain name server
$ORIGIN sub.example.com.
$TTL 60
sub.example.com.  IN SOA ns1.example.com. admin.example.com. (
        2018123456      ; serial
        3H              ; refresh
        15              ; retry
        3600000         ; expire
        60              ; negative cache TTL
)

sub.example.com. IN NS ns1.sub.example.com.
sub.example.com. IN A     172.16.1.20
sub.example.com. IN AAAA  fd02:faea:f561:8fa0:1::20
; second subdomain name server
$ORIGIN sub.example.com.
$TTL 60
sub.example.com.  IN SOA ns1.example.com. admin.example.com. (
        2018234567      ; serial
        3H              ; refresh
        15              ; retry
        3600000         ; expire
        60              ; negative cache TTL
)

sub.example.com. IN NS ns2.sub.example.com.
sub.example.com. IN A     172.16.1.21
sub.example.com. IN AAAA  fd02:faea:f561:8fa0:1::21

Next, make your round-robin entries on the subdomain name servers, and you’re done. Now you have multiple name servers handling requests for your subdomains. Again, BIND is complex and has multiple ways to do the same thing, so your homework is to ensure that your configuration fits with the way you use it.

Subdomain delegations are easier in Dnsmasq. On your primary server, add lines like this in dnsmasq.conf to point to the name servers for the subdomains:

server=/sub.example.com/172.16.1.20
server=/sub.example.com/172.16.1.21
server=/sub.example.com/fd02:faea:f561:8fa0:1::20
server=/sub.example.com/fd02:faea:f561:8fa0:1::21

Then configure round-robin on the subdomain name servers in /etc/hosts.

For many more details and help, refer to the BIND Administrator Reference Manual and the Dnsmasq documentation.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Raspberry Pi 3B+ Speeds Up Three Ways

Recently, the Raspberry Pi 3 Model B+ SBC touched down with a refreshing lack of the hype and hoopla typical of Raspberry Pi product introductions. The modest launch may also be a tacit admission that this upgrade to the insanely popular Raspberry Pi 3 Model B checks off only one major wish-list item: the upgrade from 10/100 to 10/100/1000Mbps Ethernet. There’s still only 1GB of RAM, and there’s still no eMMC storage, let alone SATA, mini-PCIe, or M.2 expansion.

On the other hand, there’s a slightly faster processor, the WiFi has been upgraded to pre-certified, dual-band 802.11ac, and the new Gigabit Ethernet (GbE) port offers Power-over-Ethernet (PoE). Considering the price remains at $35, many Raspberry Pi 3 owners will make the switch, although perhaps not in the same numbers as those who jumped from the RPi 2 to the 64-bit, wireless-enabled RPi 3.

The Raspberry Pi 3B+ has the same 85 x 56mm dimensions as the 3B, and the same feature set. The layout has shifted only slightly, and the 40-pin header supports existing Raspberry Pi HAT add-ons.

The Broadcom BCM2837 SoC has been replaced with a BCM2837B0 model that boosts the clock on the four Cortex-A53 cores from 1.2GHz to 1.4GHz. It is otherwise the same except for improvements in power regulation accuracy. Along with a new heat spreader and a new MaxLinear MxL7704 power management IC (PMIC), this is said to help the Pi run longer at top speed without overheating. The new SoC is more power hungry, however, and you’ll need a high-quality 2.5A power supply.

Soon, Raspberry Pi Trading will release a PoE HAT that will let you draw power through the GbE port. Enabled by a 4-pin PoE header, the Power-over-Ethernet capability makes it easier to remotely deploy Internet of Things devices away from a power supply. PoE has been around a long time, but has been on the upswing with the rise of IoT.

Gigabit Ethernet ports are common now on Linux hacker boards, appearing on over half of the 103 community-backed SBCs rounded up by LinuxGizmos at the start of the year. Like most of the GbE ports on ARM boards, the RPi 3B+’s is hampered by the fact that it’s enabled via a USB 2.0 interface instead of PCIe. Still, it’s rated at up to 315Mbps, which RPi Trading says is three times the throughput of the 3B’s Fast Ethernet.

In any case, you have another fast communications option with a new Cypress CYW43455 WiFi chip with 802.11ac technology. The module adds support for 5GHz to go along with the earlier 2.4GHz, and bumps Bluetooth from 4.1 to 4.2.

Raspberry Pi Trading posted benchmarks showing that at 2.4GHz the B+ averages about 46Mbps, compared to 35Mbps on the B. On the new 5GHz band, that bandwidth is more than doubled. With WiFi, the RPi 3B+ is slightly ahead of the ARM hacker board average, and the timing is about right now that plenty of affordable dual-band, 802.11ac-enabled WiFi routers are available.

The wireless chip also adds metal shielding, enabling pre-certification of the wireless module for FCC regulations. This is particularly welcome to commercial device developers building on the Raspberry Pi.

Raspberry Pi ecosystem expands with Arduino Create and webOS support

Although the Raspberry Pi 3 Model B+ may not be the huge breakthrough that we expect to see with next year’s Raspberry Pi 4, it’s certainly a welcome improvement that should help the platform continue to dominate. Recently, we’ve seen a rash of Raspberry Pi phones, handhelds, and tablets — SunFounder’s Raspad tablet has raised over $460K on Kickstarter with 13 days to go. We’re also seeing more and more commercial boards based on various RPi models, as well as ingenious gizmos like MIT’s new RPi-powered robot fish.

Raspberry Pi software support also continues to expand. If you’ve got a new IoT framework, AI technology or development platform, the Raspberry Pi will almost certainly be your first target device. Last week at the Embedded Linux Conference, Arduino announced that it was expanding its Arduino Create IDE beyond Arduino and x86 boards to support the Raspberry Pi and BeagleBone.

This week, LG released its first open source version of the old Palm- and HP-supported webOS Linux distribution. The first target device for webOS Open Source Edition 1.0 is the Raspberry Pi 3. And thanks to the Raspberry Pi tradition of backward compatibility, the B+ should stand in just fine.

Windows Embedded Compact Migration: What You Need to Know

With end-of-life dates set for the Windows CE and Windows Mobile operating systems, we explore the key considerations when planning for the end of support for Microsoft embedded devices.

Part 1: Windows Embedded Compact Migration and End-of-Life

Windows Embedded Compact 6.0 has been available since 2006 and Windows Embedded Compact 7.0 dates back to 2011.

For device manufacturers and software developers, there are several important dates in the platforms’ product lifecycles, especially when ‘mainstream’ and ‘extended’ support ends.

Mainstream support is available during system development when problems occur that may require Microsoft assistance. Windows customers can raise a support ticket (for a minimal fee) to help identify and hopefully rectify any issues, whether in the OS configuration or OS itself.

This help can be very useful with some non-mainstream features; each time we’ve used this service, it has proved invaluable in identifying whether the OS has an issue or the system has been misconfigured.

Windows CE End of Life Support Phases

Sadly, after an OS enters the extended support phase, no patches are applied unless they are security related. Once extended support ends, no patches are applied at all, whether security or functional.

The following table reveals the key dates for a number of Windows CE variants.

Many Microsoft customers can choose to migrate to the next version of Windows Embedded Compact, or to another choice of OS/platform, as these end-of-support dates approach.

This decision can be influenced by the current hardware platform and the type and amount of supporting application software.

Some customers, such as those selling devices in the banking and medical sectors, require ongoing security patches to support product sales, while others are looking for the next-generation platform.

Clearly, any development will need support during the design phase and throughout the product lifecycle – so selecting a platform with longevity is important.

CE 5.0 and CE 6.0 have similar hardware requirements; CE 7.0 produced larger images and used more RAM, as does Compact 2013. This means that migrating to the next version of the operating system may not be viable on the current hardware platform.

The effort to port, test and re-validate the platform to extend product life for a few years can be a painful process.

This may be a good time to look at revamping the platform; adding features that are in demand for next generation devices, and updating the processor and other components nearing obsolescence.

One of the key upgrade considerations is the operating system itself.

So what choices are available?

Windows CE has now been superseded by Windows IoT, which has limited platform compatibility; the other two main contenders are Linux and Android. Android better serves multi-purpose devices, leaving Linux as the main choice for a single-purpose embedded device.

Our focus now is embedded Linux porting, covering the migration from Microsoft to Linux at both the OS and application layers. See http://www.bytesnap.co.uk/software-development/embedded-linux-development/ for more information.

This post originally appeared on: http://www.bytesnap.co.uk/windows-embedded-compact-migration-end-of-life-support/

Submit a Proposal to Speak at Open Source Summit NA by April 29

Share your knowledge and expertise by speaking at Open Source Summit North America, August 29-31 in Vancouver BC. Proposals are being accepted through April 29th.

As the leading technical conference for professional open source, Open Source Summit gathers developers, sysadmins, DevOps professionals, architects and community members from across the globe for education and collaboration across the ecosystem.

This year’s tracks/content will cover the following areas:

  • Cloud Native Apps/Serverless/Microservices
  • Infrastructure & Automation (Cloud / Cloud Native / DevOps)
  • Linux Systems
  • Artificial Intelligence & Data Analytics
  • Emerging Technologies & Wildcard (Networking, Edge, IoT, Hardware, Blockchain)
  • Community, Compliance, Governance, Culture, Open Source Program Management (in the Open Collaboration Conference tracks)
  • Diversity & Inclusion (in the Diversity Empowerment Summit)
  • Innovation at Apache/In Apache Projects (in the Apache Software Foundation track)
  • Cloud & Container Apprentice Linux Engineer Tutorials Track (geared towards attendees new to using Linux and open source based cloud & container technologies)

Read more at The Linux Foundation

The Kernel Self-Protection Project Aims to Make Linux More Secure

The complexity of the Linux kernel means that it is likely to carry legacy ballast and bugs for an indefinite period. At the end of 2010 [4], Jonathan Corbet checked how long the security-relevant bugs eliminated that year had existed before being discovered: 22 of the 80 vulnerabilities examined had been in the code for more than five years!

Practical experience leads to an approach that simultaneously makes attacks more difficult and reduces the consequences of exploitable code weaknesses. This two-pronged approach is the goal of the Kernel Self-Protection [5] project.

Break-In Technology for Everyone

Viewed with sufficient hindsight, most attacks on programs work in a similar way. An attacker tries to inject new program code into a running process, which the hijacked process then executes with its privileges. The injected code can be SQL or shell commands or, typically in kernel attacks, binary code. To inject this code, attackers exploit programming errors that allow them to determine memory contents and manipulate the program counter.

Read more at Linux Magazine

Why Kubernetes Operators Are a Game Changer

Kubernetes StatefulSets give you a set of resources for dealing with stateful containers: volumes, stable network IDs, ordinal indexes from 0 to N, and so on. Volumes are one of the key features that allow us to run stateful applications on top of Kubernetes; let’s look at the two main types currently supported:

Ephemeral storage volumes

The behavior of ephemeral storage differs from what you may be used to in Docker: in Kubernetes, the volume outlives any containers that run within the Pod, so data is preserved across container restarts. But if the Pod gets killed, the volume is automatically removed.
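As a minimal illustration (the Pod name and image are hypothetical, not from the article), an ephemeral volume is declared with `emptyDir`; it persists across container restarts within the Pod but is deleted along with the Pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-pod            # illustrative name
spec:
  containers:
  - name: app
    image: nginx             # illustrative image
    volumeMounts:
    - name: cache-volume
      mountPath: /cache
  volumes:
  - name: cache-volume
    emptyDir: {}             # ephemeral: removed when the Pod is deleted
```

Anything written to /cache by the container survives a container crash and restart, but not deletion or rescheduling of the Pod itself.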

Read more at CouchBase

LF Networking, OCP Collaborate on Creating Open Source SDN, NFV Software Stacks

On the eve of next week’s ONS North America event, The Linux Foundation and the Open Compute Project (OCP) launched a collaborative effort to drive the development of open source networking based on software and hardware solutions.

Under the terms of the agreement, the two organizations will create stronger integration and testing, new open networking features, more scalability, a reduction in CAPEX/OPEX, greater harmonization with switch network operating systems and increased interoperability for NFV network transitions.

As the telecom and IT industries have virtualized more network functions, which has resulted in the disaggregation of hardware and software, the interest in open source at both layers has continued to rise.

Read more at Fierce Telecom