
Becoming a Senior Developer: 9 Experiences You’ll Encounter

Many programming career guidelines stress the skills a software developer is expected to acquire. Such general advice suggests that someone who wants to focus on a technical track—as opposed to, say, taking a management path to CIO—should go after the skills needed to mentor junior developers, design future application features, build out release engineering systems, and set company standards.

That isn’t this article.

Being a developer—a good one—isn’t just about writing code. To be successful, you do a lot of planning, you deal with catastrophes, and you prevent catastrophes. Not to mention you spend plenty of time working with other humans to decide what your code should do.

Following are a number of markers you’ll likely encounter as your career progresses and you become a more accomplished developer. You’ll have highs that boost you up and remind you how awesome you are. You’ll also encounter lows that keep you humble and give you wisdom—at least in retrospect, if you respond to them appropriately.

Read more at HPE

5 Reasons Open Source Certification Matters More Than Ever

In today’s technology landscape, open source is the new normal, with open source components and platforms driving mission-critical processes and everyday tasks at organizations of all sizes. As open source has become more pervasive, it has also profoundly impacted the job market. Across industries the skills gap is widening, making it ever more difficult to hire people with much needed job skills. In response, the demand for training and certification is growing.

In a recent webinar, Clyde Seepersad, General Manager of Training and Certification at The Linux Foundation, discussed the growing need for certification and some of the benefits of obtaining open source credentials. “As open source has become the new normal in everything from startups to Fortune 2000 companies, it is important to start thinking about the career road map, the paths that you can take and how Linux and open source in general can help you reach your career goals,” Seepersad said.

With all this in mind, this is the first article in a weekly series that will cover: why it is important to obtain certification; what to expect from training options that lead to certification; and how to prepare for exams and understand what your options are if you don’t initially pass them.

Seepersad pointed to these five reasons for pursuing certification:

  • Demand for Linux and open source talent. “Year after year, we do the Linux jobs report, and year after year we see the same story, which is that the demand for Linux professionals exceeds the supply. This is true for the open source market in general,” Seepersad said. For example, certifications such as the LFCE, LFCS, and OpenStack administrator exam have made a difference for many people.

  • Getting the interview. “One of the challenges that recruiters always reference, especially in the age of open source, is that it can be hard to decide who you want to have come in to the interview,” Seepersad said. “Not everybody has the time to do reference checks. One of the beautiful things about certification is that it independently verifies your skillset.”

  • Confirming your skills. “Certification programs allow you to step back, look across what we call the domains and topics, and find those areas where you might be a little bit rusty,” Seepersad said. “Going through that process and then being able to demonstrate skills on the exam shows that you have a very broad skillset, not just a deep skillset in certain areas.”

  • Confidence. “This is the beauty of performance-based exams,” Seepersad said. “You’re working on our live system. You’re being monitored and recorded. Your timer is counting down. This really puts you on the spot to demonstrate that you can troubleshoot.” The inevitable result of successfully navigating the process is confidence.

  • Making hiring decisions. “As you become more senior in your career, you’re going to find the tables turned and you are in the role of making a hiring decision,” Seepersad said. “You’re going to want to have candidates who are certified, because you recognize what that means in terms of the skillsets.”

Although Linux has been around for more than 25 years, “it’s really only in the past few years that certification has become a more prominent feature,” Seepersad noted. As a matter of fact, 87 percent of hiring managers surveyed for the 2018 Open Source Jobs Report cite difficulty in finding the right open source skills and expertise. The Jobs Report also found that hiring open source talent is a priority for 83 percent of hiring managers, and half are looking for candidates holding certifications.

With certification playing a more important role in securing a rewarding long-term career, are you interested in learning about options for gaining credentials? If so, stay tuned for more information in this series.

Learn more about Linux training and certification.

SBC Clusters — Beyond Raspberry Pi

Cluster computers constructed of Raspberry Pi SBCs have been around for years, ranging from supercomputer-like behemoths to simple hobbyist rigs. More recently, we’ve seen cluster designs that use other open-spec hacker boards, many of which offer higher compute power and faster networking at the same or lower price. Further below, we’ll examine one recent open source design from Nick Smith at Climbers.net that combines 12 octa-core NanoPi Fire3 SBCs for a 96-core cluster.

SBC-based clusters primarily fill the needs of computer researchers who find it too expensive to book time on a server-based HPC (high performance computing) cluster. Large-scale HPC clusters are in such high demand that it’s hard to find available cluster time in the first place.

Research centers and universities around the world have developed RPi-based cluster computing for research into parallel computing, deep learning, medical research, weather simulations, cryptocurrency mining, software-defined networks, distributed storage, and more. Clusters have been deployed to provide a high degree of redundancy or to simulate massive IoT networks, such as with Resin.io’s 144-pi Beast v2.

Even the largest of these clusters comes nowhere close to the performance of server-based HPC clusters. Yet, in many research scenarios, top performance is not essential. It’s the combination of separate cores running in parallel that makes the difference. The Raspberry Pi based systems typically use the MPI (Message Passing Interface) library for exchanging messages between computers to deploy a parallel program across distributed memory.
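The scatter/compute/gather pattern that such MPI programs follow can be sketched on a single machine. The toy example below uses Python’s multiprocessing module as a stand-in for MPI ranks; a real Pi cluster would use an actual MPI implementation (such as OpenMPI, often via mpi4py) to spread the chunks across nodes:

```python
# Sketch of the MPI-style scatter/compute/gather pattern, emulated on one
# machine with multiprocessing standing in for cluster nodes.
from multiprocessing import Pool

def worker(chunk):
    # Each "node" computes a partial result on its own slice of the data.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, nodes=4):
    # Scatter: split the input into one chunk per node.
    chunks = [data[i::nodes] for i in range(nodes)]
    with Pool(nodes) as pool:
        # Compute in parallel, then gather and reduce the partial results.
        partials = pool.map(worker, chunks)
    return sum(partials)

if __name__ == "__main__":
    # Result matches the plain serial computation.
    print(parallel_sum_of_squares(list(range(1000))))
```

On a real cluster the `chunks` would travel over the network as MPI messages rather than through shared process pools, which is exactly why the Gigabit Ethernet discussed below matters so much.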

BitScope, a leader in Pi cluster hardware with products such as its BitScope Blade, has developed a system with Los Alamos National Laboratory based on its larger BitScope Cluster Module. The Los Alamos hosted system comprises five racks of 150 Raspberry Pi 3 SBCs each. Multiply those 750 boards by the four Cortex-A53 cores on each Pi and you get a 3,000-core parallelized supercomputer.

The Los Alamos system is said to be far more affordable and power efficient than building a dedicated testbed of the same size using conventional technology, which would cost a quarter billion dollars and use 25 megawatts of electricity. There are now plans to move to a 4,000-core Pi cluster.

Most clusters are much smaller 5- to 25-board rigs, typically deployed by educators, hobbyists, embedded engineers, and even artists and musicians. These range from open source DIY designs to commercial hardware rack systems designed to power and cool multiple densely packed compute boards.

96-core NanoPi Fire3 cluster shows impressive benchmarks

The 96-core cluster computer recently detailed on Climbers.net is the largest of several cluster designs developed by Nick Smith. These include a 40-core system based on the octa-core NanoPC-T3, and others that use the Pine A64+, the Orange Pi Plus 2E, and various Raspberry Pi models. (All these SBCs can be found in our recently concluded hacker board reader survey.)

The new cluster, which was spotted by Worksonarm and further described on CNXSoft, uses FriendlyElec’s open-spec NanoPi Fire3.

The open source cluster design includes Inkscape code for laser cutter construction. Smith made numerous changes to his earlier clusters intended to increase heat dissipation, improve durability, and reduce space, cost, and power consumption. These include using two 7W case fans instead of one and moving to a GbE switch. The Bill of Materials ran to just over £543 ($717), with the NanoPi Fire3 boards totaling £383, including shipping. The next biggest line item was £62 for microSD cards.

The $35 Fire3 SBC, which measures only 75x40mm, houses a powerful Samsung S5P6818. The SoC features 8x Cortex-A53 cores at up to 1.4GHz and a Mali-400 MP4 GPU, which runs a bit faster than the Raspberry Pi’s VideoCore IV.

Although the Fire3 has only twice the number of -A53 cores as the Raspberry Pi 3, and is clocked only slightly faster, Smith’s benchmarks showed a surprising 6.6x CPU performance boost over a similar RPi 3 cluster. GPU performance was 7.5x faster.

It turned out that much of the performance improvement was due to the Fire3’s native, PCIe-based Gigabit Ethernet port, which enabled the clustered SBCs to communicate more quickly with one another to run parallel computing applications. By comparison, the Raspberry Pi 3 has a 10/100Mbps port.

Performance would no doubt improve if Smith had used the new Raspberry Pi 3 Model B+, which offers a Gigabit Ethernet port. However, since the B+ port is routed over USB 2.0, its Ethernet throughput is only about three times that of the Model B’s 10/100 port, rather than the roughly tenfold gain the Fire3 gets from its native port.

Still, that’s a significant throughput boost, and combined with the faster 1.4GHz clock rate, the RPi 3 B+ should quickly replace the RPi 3 Model B in Pi-based cluster designs. BitScope recently posted an enthusiastic review of the B+. In addition to the performance improvements, the review cites the improved heat dissipation from the PCB design and the “flip chip on silicon” BGA package for the Broadcom SoC, which uses heat spreading metal. The upcoming Power-over-Ethernet capability should also open new possibilities for clusters, says the review.

Hacker board community sites are increasingly showcasing cluster designs — here’s a cluster case design for the Orange Pi One on Thingiverse — and some vendors offer cluster hardware of their own. Hardkernel’s Odroid project, for example, came out with a 4-board, 32-core Odroid-MC1 cluster computer based on an Odroid-XU4S SBC, a modified version of the Odroid-XU4, which won third place in our hacker board survey. The board uses the same octa-core Samsung Exynos5422 SoC, with its Cortex-A15 and -A7 cores. More recently, it released an Odroid-MC1 Solo version that lets you choose precisely how many boards you want to add.

The Odroid-MC1 products are primarily designed to run Docker Swarm. Many of the cluster systems are designed to run Docker or other cloud-based software. Last year Alex Ellis, for example, posted a tutorial on creating a Serverless Raspberry Pi cluster that runs Docker and the OpenFaaS framework. Indeed, as with edge computing devices running modified versions of cloud software, such as AWS Greengrass, cluster computers based on SBCs show another example of how the embedded and enterprise server worlds are interacting in interesting new ways using Linux.

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

​The Return of Spectre

The return of Spectre sounds like the next James Bond movie, but it’s really the discovery of two new Spectre-style CPU attacks.

Vladimir Kiriansky, a Ph.D. candidate at MIT, and independent researcher Carl Waldspurger found the latest two security holes. They have since published an MIT paper, Speculative Buffer Overflows: Attacks and Defenses, which goes over these bugs in great detail. Together, these problems are called “speculative execution side-channel attacks.”

These discoveries can’t really come as a surprise. Spectre and Meltdown are a new class of security holes. They’re deeply embedded in the fundamental design of recent generations of processors. To go faster, modern chips use a combination of pipelining, out-of-order execution, branch prediction, and speculative execution to run the next branch of a program before it’s called on. This way, no time is wasted if your application goes down that path. Unfortunately, Spectre and Meltdown have shown that the chip makers’ implementations used to maximize performance have fundamental security flaws.

Read more at ZDNet

Container Adoption Starts to Outpace DevOps

The survey of 601 IT decision-makers conducted by ClearPath Strategies on behalf of the Cloud Foundry Foundation (CFF) finds that 32 percent of respondents have adopted containers and are employing DevOps processes. But the number of respondents who plan to adopt or evaluate containers in the next 12 months is 25 percent, while 17 percent are planning to adopt or evaluate DevOps processes. Overall, the survey finds that within the next two years, 72 percent of respondents either already are or expect to be using containers. That compares to 66 percent who say the same for DevOps.

The share of organizations that have broadly adopted containers stands at 18 percent, while another 40 percent have deployed containers on a limited basis. Another 40 percent said they are still in the early stages.

Read more at Container Journal

Becoming a 10x Developer

When I was first learning to play water polo, a coach told me something I’ve never forgotten. He said, “Great players make everyone around them look like great players.” A great player can catch any pass, anticipating imperfect throws and getting into position. When they make a return pass, they throw the ball so that the other person can make the catch easily. 

A 10x engineer isn’t someone who is 10x better than those around them, but someone who makes those around them 10x better.

Over the years I’ve combined my personal experience with research about building and growing effective teams and turned that into a list of 10 ways to be a better teammate, regardless of position or experience level. While many things on this list are general pieces of advice for how to be a good teammate, there is an emphasis on how to be a good teammate to people from diverse backgrounds.

10 Ways to be a Better Teammate

  1. Create an environment of psychological safety
  2. Encourage everyone to participate equally
  3. Assign credit accurately and generously
  4. Amplify unheard voices in meetings
  5. Give constructive, actionable feedback and avoid personal criticism
  6. Hold yourself and others accountable
  7. Cultivate excellence in an area that is valuable to the team
  8. Educate yourself about diversity, inclusivity, and equality in the workplace
  9. Maintain a growth mindset
  10. Advocate for company policies that increase workplace equality

Read more at Kate Heddleston’s blog

Developer Recruitment Drives Open Source Funding

The latest 2018 Open Source Jobs Report points to several ways employers can help developers. For the study, the Linux Foundation and Dice surveyed over 750 hiring managers involved with recruiting open source professionals.

Due to the survey’s subject, it is not surprising that almost half of hiring managers (48 percent) say their company decided to financially support or contribute to open source projects to help with recruitment. Although this sounds incredibly compelling, it is fair to question how much hiring managers actually know about open source management. Since 57 percent of hiring managers say their company contributes to open source projects, a back-of-the-envelope calculation says that 84 percent of companies that contribute to open source are doing so at least in part to get new employees.
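That back-of-the-envelope step is simply the ratio of the two survey figures:

```python
# Back-of-envelope check: if 57% of companies contribute to open source,
# and 48% of all companies contribute partly to aid recruiting, then the
# share of contributing companies doing so for recruiting is 48/57.
contribute = 0.57
contribute_for_recruiting = 0.48
share = contribute_for_recruiting / contribute
print(f"{share:.0%}")  # ≈ 84%
```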

The New Stack and The Linux Foundation have teamed up to survey the community about ways to standardize and promote open source policies programmatically. We encourage readers to participate.

Read more at The New Stack

Certification Plays Big Role in Open Source Hiring

With high demand for Linux professionals and a shortage of workers with these skills, it’s small wonder that employers are willing not only to train their staff but also to help them get certified. Forty-two percent of employers report having trained existing workers on new open source technologies this year to meet their needs, compared to only 30 percent in 2017, according to the 2018 Open Source Jobs report.

The report, produced by Dice and The Linux Foundation, also found that 38 percent of companies are less likely to rely on outside consultants, compared with 47 percent in 2017. Consequently, they are turning to training to keep up in a fast-paced, ever-changing tech environment. Sixty-four percent of hiring managers say their employees are requesting or taking training courses on their own – the exact same percentage as last year.

Why? There is a strong belief that IT certifications are a reliable predictor of a successful employee, according to IT trade association CompTIA. In its own research, CompTIA found five reasons why 91 percent of employers believe IT certifications play a big role in the hiring process:

  • Certifications help fill open positions

  • Most companies have IT staff who have certifications

  • Certified IT pros make great employees

  • IT certifications are increasing in importance

  • Training alone is not enough

Certification as an incentive

Forty-two percent of employers are using training and certification opportunities as an incentive to retain employees, up from 33 percent last year and 26 percent in 2016, this year’s Open Source Jobs Report found. Underscoring the importance employers place on certifications: Nearly half (47 percent) of hiring managers say employing certified open source professionals is a priority for them, essentially the same number as last year.

The same percentage say they are more likely to hire a certified professional than one without a certification. An increasing number of companies are willing to pay for certifications, with 55 percent reporting they helped to cover the costs of certifications this year, up from 47 percent last year and 34 percent in 2016. Only 17 percent say they would not pay for certifications, a decline from 21 percent last year and 30 percent in 2016.

Certification is a benefit that can be used as a recruiting tool, and employers that offer certification courses for full-time employees should mention it in job postings, the report stresses. Similarly, professionals seeking this benefit should make clear during the interview process their desire to continue their education and become certified while employed.

However, there continues to be debate over the value of certifications versus on-the-job experience. Many seasoned tech professionals claim years of experience matter more, yet the average certification now commands a 7.6 percent premium on an IT pro’s base salary, according to research firm Foote Partners, which publishes an annual IT Skills and Certifications Pay Index. The firm says gains were seen specifically in networking and communications certifications and in applications development and programming language certifications.

A significant majority (80 percent) of open source professionals say certifications are useful to their careers, up slightly from 76 percent in the previous two years. The main reasons cited are that certifications enable employees to demonstrate technical knowledge to potential employers (stated by 45 percent of respondents), and certifications make professionals more employable in general (33 percent). Forty-seven percent of open source professionals plan to take at least one certification exam this year, up from 40 percent in 2017.

Vendor neutrality matters

Employers increasingly want vendor neutrality in their training providers, with 77 percent of hiring managers rating this as important, up from 68 percent last year and 63 percent in 2016. Almost all types of training have increased this year, with online/virtual courses being the most popular. Sixty-six percent of employers report offering this benefit, compared to 63 percent in 2017 and 49 percent in 2016. Forty percent of hiring managers say they are providing onsite training, up from 39 percent last year and 31 percent in 2016; and 49 percent provide individual training courses, the same as last year.

Additionally, employers say they increasingly see benefits from sending employees to conferences. Fifty-six percent of hiring managers said they pay for employees to attend technical conferences, up from 46 percent in 2017.

Download the complete Open Source Jobs Report now and learn more about Linux certification here.

A SysAdmin’s Guide to Network Management

If you’re a sysadmin, your daily tasks include managing servers and the data center’s network. The following Linux utilities and commands—from basic to advanced—will help make network management easier.

In several of these commands, you’ll see <fqdn>, which stands for “fully qualified domain name.” When you see this, substitute your website URL or your server (e.g., server-name.company.com), as the case may be.

Ping

As the name suggests, ping is used to check the end-to-end connectivity from your system to the one you are trying to connect to. It uses ICMP echo packets that travel back to your system when a ping is successful. It’s also a good first step to check system/network connectivity. You can use the ping command with IPv4 and IPv6 addresses.
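Because ICMP echo requires raw-socket privileges, scripts that can’t run ping as root often fall back to a TCP connect test against a port known to be open (SSH, HTTP, and so on). A minimal sketch of that fallback, where the hostname and port in the usage comment are illustrative:

```python
# Minimal TCP "ping": ICMP echo needs raw sockets (usually root), so
# scripts commonly test reachability by attempting a TCP connection
# to a port that should be open on the target instead.
import socket

def tcp_ping(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hostname and port are illustrative):
# tcp_ping("server-name.company.com", 22)  # is SSH reachable?
```

Note this checks a specific service port rather than the host itself, so a `False` can mean either an unreachable host or a closed port.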

Read more at OpenSource.com

Continuous Integration and Delivery Tool Basics

CI/CD tools are key to today’s agile, container-driven software production cycle. This “explain like I’m 5” overview helps you get started.

Once upon a time, back when waterfall was the primary software development methodology, you could spend months or years working on a single software project. With the help of agile methods, programming cycles were reduced to days or weeks.

How quaint all that looks now! Weeks? Ha! In today’s DevOps-driven world, we use continuous integration, continuous delivery, and continuous deployment (CI/CD) to update programs in time frames measured in days or hours. For example, with its CI/CD pipeline, Netflix takes less than half an hour to move its programs from code check-in to multi-region deployment.

Automation is a key part of the CI/CD process, so naturally there are several tools that aim to help you manage the tasks. To set the stage, let’s review the basics of CI/CD before I give you an overview of some popular CI/CD tools.
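Conceptually, a CI/CD pipeline is a sequence of gates (build, test, deploy) where each stage must succeed before the next runs and a failure stops the run. A toy sketch of that control flow, with illustrative stage names:

```python
# Toy sketch of CI/CD control flow: stages run in order, each must
# succeed before the next starts, and the first failure stops the run.
def run_pipeline(stages):
    completed = []
    for name, stage in stages:
        if not stage():             # each stage returns True on success
            return completed, name  # stop at the first failing gate
        completed.append(name)
    return completed, None          # all gates passed

stages = [
    ("build",  lambda: True),   # compile / package the application
    ("test",   lambda: True),   # run unit and integration tests
    ("deploy", lambda: True),   # push to staging or production
]
```

Real CI/CD tools express the same idea declaratively in pipeline configuration files, with the added machinery of triggers, runners, and artifact storage.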

Read more at HPE