
Apache “Optionsbleed” Vulnerability – What You Need to Know

Remember Heartbleed?

That was a weird sort of bug, based on a feature in OpenSSL called “heartbeat”, whereby a visitor to your server can send it a short message, such as HELLO, and then wait a bit for the same short message to come back, thus proving that the connection is still alive.

The Heartbleed vulnerability was that you could sneakily tell the server to reply with more data than you originally sent in, and instead of ignoring your malformed request, the server would send back your data…

…plus whatever was lying around nearby in memory, even if that was personal data such as browsing history from someone else’s web session, or private data such as encryption keys from the web server itself.

No need for authenticated sessions, remotely injected executable commands, guessed passwords, or any other sort of sneakily labyrinthine sequence of hacking steps.
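The published checks for the related Optionsbleed bug work along similar lines: fire repeated OPTIONS requests at a server and watch the Allow header for anything that isn't a plain HTTP method, since freed memory leaking into that header is the telltale sign. Here is a minimal sketch in Go; the local test server, the method list, and the corruption heuristic are illustrative stand-ins, not the actual exploit or an official detection tool.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// validMethods lists the tokens a healthy Allow header should contain.
var validMethods = map[string]bool{
	"GET": true, "HEAD": true, "POST": true, "PUT": true, "DELETE": true,
	"CONNECT": true, "OPTIONS": true, "TRACE": true, "PATCH": true,
}

// looksCorrupted reports whether an Allow header contains anything other
// than well-known HTTP methods, which would suggest leaked memory.
func looksCorrupted(allow string) bool {
	for _, tok := range strings.Split(allow, ",") {
		if !validMethods[strings.TrimSpace(tok)] {
			return true
		}
	}
	return false
}

// probe sends a single OPTIONS request and returns the Allow header.
func probe(url string) (string, error) {
	req, err := http.NewRequest(http.MethodOptions, url, nil)
	if err != nil {
		return "", err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	return resp.Header.Get("Allow"), nil
}

func main() {
	// Stand-in for a real server: a local handler that answers OPTIONS
	// with a clean Allow header.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Allow", "GET,POST,OPTIONS,HEAD")
	}))
	defer srv.Close()

	// In practice you would send many requests, because the leak is
	// nondeterministic; a few clean replies prove nothing on their own.
	for i := 0; i < 3; i++ {
		allow, err := probe(srv.URL)
		if err != nil {
			panic(err)
		}
		fmt.Printf("Allow: %q corrupted=%v\n", allow, looksCorrupted(allow))
	}
}
```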

Read more at Naked Security by Sophos

Clouds and Puppies at Open Source Summit: Day 3 in 5 Minutes

Yes, there were Puppies on Day 3 at the Open Source Summit, and they called it Puppy Pawlooza.  In this five-minute video summary, I’m joined by Jono Bacon, leading community strategist and curator of the Open Community Conference.

View on YouTube

The Cloud Native Computing Foundation (CNCF) kicked things off with a bunch of announcements Wednesday morning.  Aside from Oracle and Ticketmaster joining the foundation, both Lyft and Uber announced projects entering the CNCF.  Lyft’s project is Envoy, an edge and service proxy, and Uber’s is Jaeger, a distributed tracing system. 

The remainder of the day was filled with fascinating talks about building community, the role of certifications, and some astounding stats on the adoption of both Docker and orchestration tools like Kubernetes.  Oh, and, of course, Puppy Pawlooza, or should I say Open Source Cuddles?

For more daily summaries, you can also watch Day 1 and Day 2, and if you dig this content, check out my Open Source Craft channel on YouTube.

The 7 Stages of Becoming a Go Programmer

One day at work, we were discussing the Go programming language in our work chatroom. At one point, I commented on a co-worker’s slide, saying something along the lines of:

“I think that’s like stage three in the seven stages of becoming a Go programmer.”

Naturally, my co-workers wanted to know the rest of the stages, so I briefly outlined them. Here, expanded with more context, are the seven stages of becoming a Go programmer; see if you can find yourself somewhere along this path.

Stage 1: You believe you can make Go do object oriented programming

After your initial run through A Tour of Go, you start thinking, “Now, how can I make this language behave more like an object-oriented language…?” After all, you are used to that stuff. You want to write robust code. You want polymorphism.
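Stage 1 usually looks something like the following sketch: struct embedding stands in for inheritance, and an interface delivers the polymorphism you were after. (The Animal/Dog/Cat hierarchy is invented purely for illustration.)

```go
package main

import "fmt"

// Animal plays the role of a "base class".
type Animal struct {
	Name string
}

func (a Animal) Describe() string { return "animal named " + a.Name }

// Dog "inherits" Animal's fields and methods via embedding.
// It isn't true inheritance, but it feels familiar at first.
type Dog struct {
	Animal
}

func (d Dog) Speak() string { return d.Name + " says woof" }

type Cat struct {
	Animal
}

func (c Cat) Speak() string { return c.Name + " says meow" }

// Polymorphism, the Go way: any type with a Speak method satisfies Speaker.
type Speaker interface {
	Speak() string
}

func main() {
	pets := []Speaker{Dog{Animal{"Rex"}}, Cat{Animal{"Mia"}}}
	for _, p := range pets {
		fmt.Println(p.Speak())
	}
}
```

Running this prints "Rex says woof" and "Mia says meow": the interface dispatches to each concrete type, which is as close to classic OO polymorphism as stage 1 gets.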

Read more at OpenSource.com

Future Proof Your SysAdmin Career: Advancing with Open Source

For today’s system administrators, the future holds tremendous promise. In this ebook, we have covered many technical skills that can be big differentiators for sysadmins looking to advance their careers. But, increasingly, open source skillsets can also open new doors.

A decade ago, Red Hat CEO Jim Whitehurst predicted that open source tools and platforms would become pervasive in IT. Today, that prediction has come true, with profound implications for the employment market. Participating in open source projects — through developing code, submitting a bug report, or contributing to documentation — is an important way to demonstrate open source skills to hiring managers.


“Successful open source projects thrive on a wide variety of contributions from people with all levels of coding skills and commitment. If just one person fixes a compiler warning, closes a bug, or adds to the documentation, pretty soon you’re talking real progress,” according to this New Relic article by Andy Lester.  

Additionally, market researchers have pointed to the connection between open source skillsets and improved employment outcomes. Knowledge of open source best practices and licensing requirements, along with project management experience, are all important skills that can be gained by working on open source projects. However, the collaboration and communication skills acquired through such participation are equally valuable.

Collaboration is key

Collaboration “is an increasingly important skill in today’s job environment because software is being built outside of a firm,” said Jim Zemlin, Executive Director at The Linux Foundation, in an article in PCWorld. “Someone who can collaborate within their company and across different organizations is highly sought after.”

Sysadmins should take note of how they can improve job prospects by contributing to open source projects. As open source technology becomes more pervasive, tech and DevOps workers are building out and overseeing their own open source projects. From Google to Netflix to Facebook, companies are also releasing their open source creations to the community. Sysadmins who contribute to open source projects can showcase their fluency and experience in this space.

More information on tools to help you understand and contribute to open source projects can be found in this post. The bottom line is that open source is now part of the essential playbook for sysadmins, and seeking training and making contributions can greatly advance your prospects.

Conclusion

A key takeaway from this ebook is that complacency is the enemy. You may be a Linux wizard or a Microsoft-certified admin with years of experience, but staying competitive and advancing your career requires continuous improvement.

We’ve covered some of the skills that are highly valued in the job market now, but emerging skillsets for sysadmins will always be a moving target. As the landscape shifts for sysadmins, adding new skills and acquiring experience is essential.

Learn more about essential sysadmin skills: Download the Future Proof Your SysAdmin Career ebook now.

Read more:

Future Proof Your SysAdmin Career: An Introduction to Essential Skills 

Future Proof Your SysAdmin Career: New Networking Essentials

Future Proof Your SysAdmin Career: Locking Down Security

Future Proof Your SysAdmin Career: Looking to the Cloud

Future Proof Your SysAdmin Career: Configuration and Automation

Future Proof Your SysAdmin Career: Embracing DevOps

Future Proof Your SysAdmin Career: Getting Certified

Future Proof Your SysAdmin Career: Communication and Collaboration

Future Proof Your SysAdmin Career: Advancing with Open Source


Be Nice: Hyperledger’s Brian Behlendorf Offers Tips for Creating Sustainable Open Source Projects

So, what’s unique about projects like Linux that thrive where others fail? What’s the secret sauce that sustains one project over others? Is it the community? The license? The code? The organizations backing it?

We talked to open source veteran Brian Behlendorf, co-founder of the Apache Software Foundation (ASF) and current Executive Director of the Hyperledger project, for some answers to these questions. Here is an edited version of the interview conducted at Open Source Summit North America in Los Angeles.

What are the core components of sustainable open source projects?

Brian Behlendorf: By definition, any open source project that is still alive needs some critical mass of developers contributing to it. The Linux kernel is 25+ years old, and it still sees 5,000 new lines of code every day. It’s still such an incredibly active project.

In my book, that means you need this body of maintainers and contributors who are willing to continue to nurture the project even as it goes into adolescence and later life.

For me, the only way to more or less guarantee that happens is to see that there are companies out there who are making money off of open source software. They have embedded it at the core of their business. And even if it’s not what they do as a business, it’s still something that they need. So they’ll provide feedback, contribute, and continue to invest in shepherding it forward.

Read more at The Linux Foundation

GitLab v10 Integrates with Kubernetes

“This GitLab release provides capabilities to fully embrace the benefits of DevOps — specifically CI/CD and Kubernetes based application development,” said Sid Sijbrandij, CEO of GitLab. The industry is increasingly adopting cloud-native approaches built on Kubernetes, the open source container orchestration software, he noted, resulting in a growing need for automated processes. GitLab 10.0 steps in with both enterprise and community editions to fill that need.

Cloud native capabilities get a boost in GitLab 10.0 via expanded Kubernetes integration. With this latest release, deploying to Kubernetes is designed to be a seamless process, using GitLab CI to quickly configure, deploy, and use clusters regardless of where the server may be running. As part of its mission to be the development tool for Kubernetes and cloud-native software, GitLab has also joined the Cloud Native Computing Foundation (CNCF).

Read more at The New Stack

Accelerate Application Modernization with Node.js

Node.js is much more than an application platform. In a 2016 Forrester report, the research firm talked with several Node.js users and developers to better understand the growth of Node within global enterprises across a range of industries.

Forrester’s key takeaways:

Node.js Is Enterprise Ready: Node.js is no longer just a platform for supercool digital startups and web-scale companies.

Node.js Powers Digital Transformation: Digital transformations require large technology shifts. Node.js enables companies to reduce the risk of these transformations.

Node.js Is More Than An Application Platform: Node.js enables rapid experimentation with corporate data, powers application modernization, and even drives internet of things (IoT) solutions.


Read more at Codeburst.io

What is Edge Computing and How It’s Changing the Network

Edge computing is a “mesh network of micro data centers that process or store critical data locally and push all received data to a central data center or cloud storage repository, in a footprint of less than 100 square feet,” according to research firm IDC.

The term typically comes up in IoT use cases, where edge devices collect data — sometimes massive amounts of it — and would otherwise send it all to a data center or cloud for processing. Edge computing triages the data at the point of collection, so that some of it is processed locally, reducing the backhaul traffic to the central repository.

Typically, this is done by the IoT devices transferring the data to a local device that includes compute, storage and network connectivity in a small form factor. Data is processed at the edge, and all or a portion of it is sent to the central processing or storage repository in a corporate data center, co-location facility or IaaS cloud.
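As a rough illustration of that triage step, here is a short Go sketch. The Reading type, the threshold, and the forwarding rule are all made up for the example; the point is simply that the edge node handles routine data locally and forwards only the interesting portion upstream.

```go
package main

import "fmt"

// Reading is one sample collected from an IoT sensor.
type Reading struct {
	Sensor string
	Value  float64
}

// triage implements the local processing step: readings inside the
// normal band are handled (and dropped) at the edge, and only the
// anomalies are forwarded to the central data center or cloud,
// cutting backhaul traffic.
func triage(readings []Reading, threshold float64) (forward []Reading) {
	for _, r := range readings {
		if r.Value > threshold {
			forward = append(forward, r)
		}
	}
	return forward
}

func main() {
	batch := []Reading{
		{"temp-01", 21.5},
		{"temp-02", 97.3}, // the anomaly worth sending upstream
		{"temp-03", 22.1},
	}
	for _, r := range triage(batch, 80.0) {
		fmt.Printf("forwarding %s=%.1f to central repository\n", r.Sensor, r.Value)
	}
}
```

Of only three readings, just one crosses the threshold and leaves the edge node, which is the whole bandwidth argument for edge computing in miniature.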

Read more at Network World

Developers Need to Start Paying Attention to Licenses

Today’s applications are arguably the equivalent of a Girl Talk album in code. They are made up of code that comes from a variety of sources. For instance, they may use one or more frameworks and libraries, each of which may in turn rely on hundreds of modules (e.g., npm packages, Ruby gems). Even portions of the “original” code in a project may have been copy/pasted from documentation, a tutorial, or *gasp* StackOverflow.

If Girl Talk’s music mashups were “a lawsuit waiting to happen,” why aren’t our applications?

Read more at Dev.to

How Containers Scale: Service Mesh vs. Traditional Architecture

Containers continue to be a hot topic. Some claim they are on the verge of a meteoric rise to dominate the data center. Others find them only suitable for the cloud. And still others are waiting patiently to see whether containers are the SDN of app infrastructure — highly touted by pundits but rarely put into practice in production.

A quick perusal of research and surveys shows that containers certainly are gaining traction — somewhere.

Read more at DZone