
The Promise of Blockchain Is a World Without Middlemen

The blockchain is a revolution that builds on another technical revolution so old that only the more experienced among us remember it: the invention of the database. First created at IBM in 1970, relational databases have become so central to our everyday lives that their importance cannot be overstated. Literally every aspect of our civilization is now dependent on this abstraction for storing and retrieving data. And now the blockchain is about to revolutionize databases, which will in turn revolutionize literally every aspect of our civilization.

IBM’s database model stood unchanged until about 10 years ago, when the blockchain came into this conservative space with a radical new proposition: What if your database worked like a network — a network that’s shared with everybody in the world, where anyone and anything can connect to it?

Read more at HBR

Stageless Deployment Pipelines: How Containers Change the Way We Build and Test Software

Large web services have long realized the benefits of microservices for scaling both applications and development. Most dev teams are now building microservices on containers, but they haven't updated their deployment pipeline for the new paradigm: they still use the classic build -> stage -> test -> deploy model. It's entrenched, and it's just a bad way to release code.

First, the bad: staging servers get in the way of continuous delivery

Most development teams will recognize this everyday reality. You work in a sprint to get a number of changes completed. You open a new branch for each feature you work on, then send it on as a pull request to a master, staging, or developer branch. The staging branch (or worse, your master branch), carrying all the changes developers made in the last few days (or two weeks), is then deployed to a staging server.

But oh no, there is a problem. The application doesn't work, integration tests fail, there's a bug, it's not stable, or maybe you just sent the staging URL to the marketing team and they don't like the way a design was implemented. Now you need to get someone to go into the staging/master branch that's been poisoned with this change. They probably have to rebase, re-merge a bunch of the pull requests minus the offending code, and then push it all back into staging. Assuming all goes well, the entire team has only lost a day.

In your next retrospective the team talks about better controls and testing before things reach staging, but no one stops to ask why they're using this staging methodology at all in a world of containers and microservices. Staging servers were originally built for monolithic apps and were only meant to provide simple smoke tests: proof that code didn't just run on a developer's local machine but would at least run on some other server somewhere. Even though staging servers are now used for full application testing with microservices, they're not an efficient way to test changes.

Here’s the end result

  1. Batched changes happen in a slow cadence

  2. One person’s code only releases when everyone’s code is ready

  3. If a bug is found, the branch is now poisoned and you have to pull apart the merges to fix it (often requiring a crazy git cheatsheet to figure it out).

  4. Business owners don’t get to see code until it’s really too late to make changes

How should it work? Look at your production infrastructure

Your production infrastructure is probably ephemeral, built from on-demand instances in Amazon/Azure/Google Cloud. Every developer should be able to spin up an instance on demand for their changes, send it to QA, iterate, etc., before sending it on to release.

Instead of thinking about staging servers, we have test environments that follow the classic git branch workflow. Each test environment can bring together all the interconnected microservices for much richer testing conditions.
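To make that concrete, here's a minimal sketch of spinning up (and tearing down) an isolated environment per feature branch. It assumes a docker-compose.yml that composes the application's microservices; the branch name is hypothetical.

    import subprocess

    def spin_up(branch: str) -> None:
        """Create a disposable test environment for one feature branch.

        A minimal sketch: uses the branch name as the compose project
        name so each branch gets its own isolated stack.
        """
        project = branch.replace("/", "-")
        subprocess.run(["git", "checkout", branch], check=True)
        subprocess.run(["docker-compose", "-p", project, "up", "-d", "--build"], check=True)
        print(f"Environment for {branch} is up; share its URL for feedback.")

    def tear_down(branch: str) -> None:
        """Throw the environment away once the feature has shipped."""
        project = branch.replace("/", "-")
        subprocess.run(["docker-compose", "-p", project, "down", "-v"], check=True)

    spin_up("feature/new-checkout-flow")  # hypothetical branch name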

Following this model changes the whole feedback and iteration loop to stay within the feature branch, never moving on to merge and production until all stakeholders are happy. Further, you can actually test each image against different versions of the other microservices.

To accomplish this, DevOps teams can build lots of scripts, logic, and workflows that they then have to maintain, or they can use tools that already build this into a hosted CI as part of container lifecycle management.

The advantages of a test-on-demand iteration model

Once your test structure becomes untethered from a stagnant staging model, dev teams can actually produce code faster. Instead of waiting for DevOps or an approval gate to get changes onto a staging server where stakeholders can sign off, the code goes straight into an environment where it can be shared for feedback.

It also allows a much deeper level of testing than traditional CI by bringing all the connected microservices into a composition. You can actually write tests that rely on interconnected services. In this paradigm, integration testing allows for a greater variety of tests, and each testing service essentially becomes its own microservice.
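As a rough illustration (the service URL, endpoints, and field names are all invented), a test against a composed environment might look like this:

    import unittest
    import requests

    # Hypothetical base URL of the per-branch test environment.
    BASE = "http://feature-new-checkout-flow.test.example.com"

    class OrderServiceIntegrationTest(unittest.TestCase):
        """Runs against a full composition of microservices, so the test
        can exercise real service-to-service calls instead of mocks."""

        def test_create_order_reserves_inventory(self):
            # The order service is expected to call the inventory service internally.
            resp = requests.post(f"{BASE}/orders", json={"sku": "ABC-123", "qty": 1})
            self.assertEqual(resp.status_code, 201)
            stock = requests.get(f"{BASE}/inventory/ABC-123").json()
            self.assertGreaterEqual(stock["reserved"], 1)

    if __name__ == "__main__":
        unittest.main()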

Once iteration is complete, the code should be ready to go straight into master (after a rebase), eliminating the group exercise that normally takes place around staging. Testing and iteration happen at the feature level, and code can then be deployed at the feature level.

That means no more staging.

About Dan Garfield and Eran Barlev

Dan Garfield is a full-stack web developer with Codefresh, a container lifecycle management platform that provides developer teams of any size advanced pipelines and testing built specifically for containers like Docker. Check them out at https://codefresh.io

Eran is an ISTQB Certified Tester with over 20 years of experience as a software engineer working primarily in compiled languages. He is the Founder of the Canadian Software Testing Board (www.cstb.ca) and an active member of the ISTQB (www.istqb.org – International Software Testing Qualifications Board).

The Companies That Support Linux and Open Source: VMware

VMware is a global leader in cloud infrastructure and business mobility and has been active in open source development for many years.

The company has steadily increased its open source involvement through Linux Foundation projects such as ONAP, Cloud Native Computing Foundation (CNCF), Cloud Foundry, Open vSwitch and others. And it has just increased its commitment to open source and The Linux Foundation by becoming a Gold member.

Dirk Hohndel is Chief Open Source Officer at VMware.

Open source software helps VMware accelerate its development processes and deliver even better solutions to its customers, said Dirk Hohndel, Chief Open Source Officer at VMware, in the Q&A below.

“We see open source components as vital ingredients to our products and are actively engaged in many upstream projects,” Hohndel said. “We also continue to create new and interesting open source projects of our own.”

Hohndel leads VMware’s Open Source Program Office, directing the efforts and strategy around use of and contribution to open source projects and driving common values and processes across the company for VMware’s interaction with the open source communities. Before joining VMware, he spent almost 15 years as Intel’s Chief Linux and Open Source Technologist and he’s been an active developer and contributor in Linux and open source since the early 1990s.

Here, Hohndel tells us more about VMware; how Linux and open source have become integral to their business; and how they participate in the open source community.

Linux.com: What does VMware do?

Dirk Hohndel: VMware is a global leader in cloud infrastructure and digital workspace technology. We help our customers to build and evolve scalable production IT environments delivered as an on-prem or hybrid cloud solution that meets their needs. Additionally, we provide customers with modern end-user computing solutions that enable users to access their critical applications, desktops and services using any device or platform.

Linux.com:  How and why do you use Linux and open source?

Hohndel: VMware uses many open source components as part of the solutions we deliver to our customers. Linux is a key guest (and host) OS that we support and the basis of many customer solutions that run on top of our infrastructure.

We see open source components as vital ingredients to our products and are actively engaged in many upstream projects. We also continue to create new and interesting open source projects of our own such as the Project Clarity design system or the Project Harbor container image registry.

Linux.com: Why did you increase your commitment to The Linux Foundation?

Hohndel: We see The Linux Foundation as one of the key consortia in the broader open source ecosystem. In parallel, we steadily increased our engagements with the various projects and foundations such as ONAP, CNCF, Cloud Foundry, and others under the LF in the past few years. It only made sense to increase our engagement in and support for The Linux Foundation, given the role its projects play in our business.

Linux.com: What interesting or innovative trends in technology are you witnessing and what role do Linux and open source play in them?  How is VMware participating in that innovation?

Hohndel: The IT infrastructure industry is constantly evolving. More and more of the relevant solution stacks are built around open source components, and many companies are collaborating on accelerating the transformation of entire industry verticals. The recently launched ONAP Project is an excellent example of this trend, and VMware was one of the founding Platinum sponsors of this project.

Linux.com: How has participating in the Linux and open source communities changed your company?

Hohndel: At its roots, VMware is an engineering driven company. Our engagement with the Linux and open source communities has helped us accelerate our development processes and allowed us to collaborate with other partners and customers in this space to deliver even better solutions.

Linux.com: Is there anything else important or upcoming that you’d like to share?

Hohndel: For VMware, the upgrade to a Gold sponsorship of the Linux Foundation is an integral part of our open source strategy and a key step on our journey to a more open and collaborative future. We look forward to working across many LF projects in order to create solutions that delight our customers.

Learn more about Linux Foundation corporate membership and see a full list of members at https://www.linuxfoundation.org/members/join.

Why Open Source Is Like a Team Sport

As director for Open Platform for NFV (OPNFV) — a role she alternatively describes as coach, nerd matchmaker and diplomat — Heather Kirksey oversees and provides guidance for all aspects of the project, from technology to community and marketing. At the recent Linux Foundation Open Source Leadership Summit, she headed up a session titled “Open Source as a Team Sport” with OPNFV’s Chris Price and OpenStack’s Jonathan Bryce. …

Superuser sat down with Kirksey to ask her more about the parallels between hockey and open source. She tells us why the brutality of hockey is a good metaphor for open source, about leveling the open source playing field for women and how you can get involved with OPNFV.

Of all the team sports, hockey is one of the most violent, right?

Why do you think I like hockey? I like my sports with a side of brutality…In most sports there are tensions that flare up and sometimes it can get raw and there are fisticuffs. At the end of the day, you need to come together because you’re trying to accomplish a goal.

Read more at Superuser

What If Mesos Metrics Collection Was a Snap?

This talk covers the basics of Mesos metrics collection, introduces Snap — a powerful, open source telemetry framework for modern infrastructures — as well as an open source plugin for Snap developed specifically to collect Mesos cluster metrics.

Roger Ignazio, tech lead at Mesosphere, introduces the Snap plugin for Apache Mesos at MesosCon Asia 2016. Snap is an open source telemetry framework that simplifies the collection, processing, and publishing of system data through a single API. It collects hundreds of metrics from Mesos masters and agents and helps you make sense of this mass of data so that you can monitor your cluster operations.

Ignazio presents Snap in the context of day two operations. “Day two operations is everything that comes after day one,” says Ignazio. “So, what does that mean? Everything that happens after you provision a Mesos or a DC/OS cluster falls into day two operations. That’s logging, that’s debugging, that’s metrics collection. Really, anything that you need to do to operate and ensure the health of a cluster.”

There are many Mesos metrics APIs that you can potentially use, and Ignazio describes some of them. “The first one is the redirect endpoint. In a Mesos cluster you commonly have three, five, or seven masters for a production highly available deployment. The redirect endpoint returns an HTTP 307 redirect to the leading master, and this is important because you never want to query the non-leading master for its metrics.”

“The next is the metrics snapshot endpoint, and that’s a summary of the master’s metrics. Kind of a high-level, operations view. It’s things like how long it’s taking to query the Mesos internal registry, and how many messages are being sent back and forth between the frameworks.” Other metrics APIs include the state and state-summary endpoints, which provide either a high-level or a detailed view of cluster state, as well as metrics about running containers: container IDs, CPU, memory, and disk usage.
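Putting those endpoints together, a minimal Python sketch (the master hostname is hypothetical) would first follow the redirect to the leading master, then pull the snapshot and summary data:

    import requests

    MASTER = "http://mesos-master-1:5050"  # hypothetical hostname; any master will do

    # /master/redirect returns an HTTP 307 pointing at the leading master,
    # so follow it instead of trusting whichever node you happened to query.
    resp = requests.get(f"{MASTER}/master/redirect", allow_redirects=False)
    leader = resp.headers["Location"]
    if leader.startswith("//"):  # Mesos may omit the scheme in the Location header
        leader = "http:" + leader

    # /metrics/snapshot is the high-level operational summary described above.
    snapshot = requests.get(f"{leader}/metrics/snapshot").json()
    print(len(snapshot), "metrics in snapshot")

    # /state-summary is the condensed cluster view; /state is the detailed one.
    summary = requests.get(f"{leader}/state-summary").json()
    print(len(summary.get("slaves", [])), "agents registered")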

Snap separates the collection of all of this data from publishing it. “You can filter, you can add context, you can type metrics, you can aggregate them, and then ultimately publish them onto your message queue or to a time-series database, and you can visualize them with your tools of choice.” The Grafana dashboard is a popular choice for creating a visual representation of your Snap data.
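As a schematic only (this mirrors the concept from the talk, not Snap's actual task-manifest format), that separation can be pictured as a collect -> process -> publish pipeline:

    # A sketch of the collect -> process -> publish flow as plain data.
    # NOTE: plugin names, metric names, and config keys are all hypothetical,
    # not Snap's real schema.
    task = {
        "schedule": {"type": "simple", "interval": "10s"},
        "workflow": {
            "collect": {"metrics": ["/mesos/master/cpus_used"]},  # gather raw data
            "process": [{"plugin": "filter-and-tag"}],            # add context, aggregate
            "publish": [{"plugin": "influxdb",                    # time-series database
                         "config": {"host": "metrics-db", "database": "mesos"}}],
        },
    }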

Watch the full presentation (below) to learn more about Snap’s architecture and to see some examples of how to use it.

https://www.youtube.com/watch?v=DbBC5zkwLd4?list=PLbzoR-pLrL6pLSHrXSg7IYgzSlkOh132K

Interested in speaking at MesosCon Asia on June 21 – 22? Submit your proposal by March 25, 2017. Submit now>>

Not interested in speaking but want to attend? Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the attendee registration price. Register now to save over $125!

Litebook Launches $249 Linux Laptop

A company called Litebook has released a new Linux laptop that is priced to compete with Chromebooks — if not as cheap as the $89 Pinebook. That’s because the Pinebook is bare-boned when it comes to specs, using an ARM CPU, 2GB of RAM, and 16GB of built-in storage. The Litebook, on the other hand, uses an Intel Celeron processor (the N3150), twice as much memory, and a 512GB hard drive. (An extra $20 gets you a 32GB SSD in addition to the hard drive to help speed up boot-ups.) 

The Litebook ships with the Elementary OS flavor of Linux, though you can install an alternate distribution; the stock build uses Linux kernel 4.8.

Read more at ZDNet

802.Eleventy What? A Deep Dive into Why Wi-Fi Kind of Sucks

Just as everybody got used to the idea that 802.11b sucked, 802.11g came along. Promising 54 screaming Mbps, 802.11g was still only half the speed of Fast Ethernet, but five times faster than original Ethernet! Right? Well, no. Just like 802.11b, the advertised speed was really the maximum physical layer data rate, not anything you could ever expect to see on a progress bar. And also like 802.11b, your best case scenario tended to be about a tenth of that—5 Mbps or so—and you’d be splitting that 5 Mbps or so among all the computers on the network, not getting it for each one of them like you would with a switched network.

802.11n was introduced to the consumer public around 2010, promising six hundred Mbps. Wow! Okay, so it’s not as fast as the gigabit wired Ethernet that just started getting affordable around the same time, but six times faster than wired Fast Ethernet, right? Once again, a reasonable real-life expectation was around a tenth of that. Maybe. On a good day. To a single device.
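The arithmetic behind that rule of thumb is easy to reproduce:

    # Back-of-envelope Wi-Fi throughput, using the article's rule of thumb
    # that real-world throughput is roughly a tenth of the advertised PHY
    # rate, shared among all clients on the network.
    def per_client_mbps(phy_rate_mbps, clients, overhead_factor=10):
        real_throughput = phy_rate_mbps / overhead_factor  # protocol overhead, retries, etc.
        return real_throughput / clients                   # airtime is shared, not switched

    for standard, rate in [("802.11b", 11), ("802.11g", 54), ("802.11n", 600)]:
        print(f"{standard}: ~{per_client_mbps(rate, clients=4):.1f} Mbps per client (4 clients)")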

Read more at Ars Technica

Top 4 JavaScript Code Editors

JavaScript is everywhere, and its ubiquitous presence on the web is undeniable. Every app uses it in one form or another. And any developer who is serious about the web should learn JavaScript. If you already know it, be sure to continue learning new frameworks, libraries, and tools, because JavaScript is a living, evolving language.

The JavaScript community has a great open source environment, and that has led to some excellent open source JavaScript IDEs (Integrated Development Environments). The open source movement is strong, and there are many IDEs that you can use to code your JavaScript program.

Read more at OpenSource.com

Google’s Microservices Protocol Joins Kubernetes in Cloud Foundation

Google’s gRPC protocol was originally developed to speed up data transfer between microservices, proving faster and more efficient than passing around data encoded in JSON.
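For a sense of what that looks like from the client side, here is the stock Greeter example from the gRPC Python quickstart; the stub modules are generated from a .proto file with grpcio-tools:

    import grpc

    # Generated from helloworld.proto; this is the canonical Greeter
    # example from the gRPC Python quickstart.
    import helloworld_pb2
    import helloworld_pb2_grpc

    channel = grpc.insecure_channel("localhost:50051")
    stub = helloworld_pb2_grpc.GreeterStub(channel)

    # The request travels as a typed protobuf message in a compact binary
    # wire format, rather than as a JSON string parsed on every hop.
    reply = stub.SayHello(helloworld_pb2.HelloRequest(name="microservice"))
    print(reply.message)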

Yesterday the Cloud Native Computing Foundation (CNCF), which oversees the development of Kubernetes, announced it would also become the home for gRPC’s development.

Read more at InfoWorld