
Can You Do Serious Microservices Without a PaaS?

Application servers are dead. At least, that seems to be the message being pushed by most evangelists and even a few enterprise software providers. In their place will rise self-contained microservices, running much the same kind of technology but now deployed in containers or as standalone applications.

I really like the idea of microservices. They force you to communicate via well-defined interfaces, provide a small surface area for updates and testing, and look really good on a resume. Unfortunately, without a PaaS, I just don’t see how self-contained, self-hosting microservices would actually work in practice.

Read more at DZone

Real World Microservices: When Services Stop Playing Well and Start Getting Real

Microservices allow engineering teams to move quickly to grow a product… assuming they don’t get bogged down by the complexity of operating a distributed system. In this post, I’ll show you how some of the hardest operational problems in microservices—staging and canarying of deep services—can be solved by introducing the notion of routing to the RPC layer.

Looking back at my time as an infrastructure engineer at Twitter (from 2010 to 2015), I now realize that we were “doing microservices”, though we didn’t have that vocabulary at the time. (We used what I now understand to be a bad word—SOA).

Buzzwords aside, our motivations were the same as those doing microservices today. We needed to allow engineering teams to operate independently—to control their own deploy schedules, on call rotations, availability, and scale. These teams needed the flexibility to iterate and scale quickly and independently—without taking down the site.

Read more at Buoyant Blog

What Does DevOps 2.0 Look Like?

In 2008, Patrick Debois laid the foundations for DevOps at an Agile conference in Toronto.  He was trying to come up with a solution for the inherent conflicts between developers and system admins.   Both disciplines seemed to be at odds: developers wanted to release software more frequently, but system admins wanted to ensure stability, performance, and scalability.  While this conflict isn’t necessarily black and white, it highlighted the need for developers and system admins to no longer consider themselves as mutually exclusive roles, but rather as cross-functional partners.

A year later, Paul Hammond and John Allspaw gave a talk at the Velocity ‘09 conference that highlighted the necessity for cooperation between Dev and Ops.  This inspired Debois to coin the term “DevOps” (#DevOps)…

Read more at DevOps Zone

 

Integrating Docker Hub In Your Application Build Process

This tutorial will be the first in a series that will help you create continuous integration and deployment (CI/CD) pipelines for your applications.

CI/CD is vital in an Agile organization. We are going to discover how adding Docker tools to our toolkit can greatly improve the way applications are updated, tested, and shipped.

Depending on your resources and constraints, there are of course multiple ways of achieving such a goal. Here, we are going to start simple and discover the free tools that Docker has made available to us.

You are going to create a Dockerfile for a simple web application, build it, get it to run locally, store it online, and create your first automated build on Docker Hub.

Requirements

  • A valid account on Docker Hub
  • A valid account on either GitHub or Bitbucket. Both platforms have free plans to get you started.
  • Docker Toolbox installed and running on your local machine.

Create a local repo for your app

For this tutorial you will use a very simple Flask application, written in Python, and create a production-ready Docker image.

Create a folder for your application code and initiate a git repo in it with:

mkdir flask-demo-app
cd flask-demo-app
git init

Add the following code in a file named app.py:

from flask import Flask

app = Flask(__name__)
app.debug = True

@app.route('/')
def main():
    return "Hi ! I'm a Flask application."

if __name__ == '__main__':
    # Bind to all interfaces so the app is reachable from outside the container
    app.run(host='0.0.0.0')

Docker helps us stay as DRY as possible by giving us the ability to extend an existing image. Here, I propose to use the jazzdd/alpine-flask image, as it is based on Alpine Linux and is thus very lightweight.

Create a Dockerfile with the following content in it:

FROM jazzdd/alpine-flask
COPY . /app

Build your image locally:

docker build -t demo .
docker run -d --name app demo:latest

Test it by running curl in a temporary container with:

docker run --rm --link app:app byrnedo/alpine-curl http://app

You should see in your terminal something like:

Hi ! I'm a Flask application.

Stop and remove your container with:

docker rm -vf app

Add and commit your files to your local repo:

git add app.py Dockerfile
git commit -a -m "Initial commit"

Upload your repo online

Here, you can choose either GitHub or Bitbucket. Both are very well supported by Docker Hub. If it’s the first time you’re doing this, here are the links to their documentation:

  • GitHub
  • Bitbucket

Once this is done, upload your code to your platform of choice with:

git remote add origin git@<platform>.org:<username>/<repository>.git
git push -u origin master

If you navigate through the web interface, you should now see the code of your application.

Create an automated build on Docker Hub

In the top right part of your Docker Hub account web interface, click on the Create Automated Build button and follow the process, which is really self-explanatory.

As the documentation is not exactly up to date, you will need to trigger the first build manually by clicking on the Trigger button in the Build settings tab.

Test your Build

Once you’ve created your automatic build and triggered its first run, you should now be able to test it with:

docker run -d --name app -p 80:80 <your_docker_hub_id>/<your_project_name>:latest

Stop and remove this container once you're done with it.
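
As before, assuming you kept the container name app from the run command above, this can be done with:

docker rm -vf app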

Version tagging

One of the most interesting features of the Hub is the ability to tag your Docker images. This way you can easily differentiate between them and, in the case of a problem with a specific version, perform a rollback.

In most cases the default configuration should suffice. It is very well integrated with Github and Bitbucket repositories.

When using a specific git branch name, Docker Hub will create your images with the matching tags.
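
For example, with the branch names used in this tutorial and the default build rules (the exact mapping can be adjusted in the Build settings tab of your automated build), pushes would roughly result in:

master branch  ->  <your_docker_hub_id>/<your_project_name>:latest
1.1.0 branch   ->  <your_docker_hub_id>/<your_project_name>:1.1.0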

Create a branch for a newer version of your code:

git checkout -b 1.1.0

Modify your app.py code with:

def main():
    return "Hi ! I'm a Flask application in a Docker container."

Upload your new branch:

git push -u origin 1.1.0

You should see the new image being built on Docker Hub.

A new tagged build is now available.

Run it with:

docker run -d --name app -p 80:80 <your_docker_hub_id>/<your_project_name>:1.1.0
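
Assuming port 80 is free on your machine, you can check the new version with the same curl approach as before, for example:

curl http://localhost

which should return:

Hi ! I'm a Flask application in a Docker container.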

Bravo, you just connected to version 1.1.0 of your application!

Conclusion

As I mentioned earlier, using Docker Hub to build your images each time the code is modified and pushed to the repository has some limitations. As of today, it will build your images at most every 5 minutes. But it is certainly a very cheap way of doing so, as you don’t have to maintain your own registry. Later on, you might want to look into using base images of your own making and hosting your own private Docker registry, for obvious privacy and performance reasons.
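
If you do eventually go the self-hosted route, a minimal sketch using the official registry image might look like the following (the port, names, and tags below are only placeholders, and a production setup would also need TLS and authentication in front of the registry):

# Start a local registry container
docker run -d --name registry -p 5000:5000 registry:2

# Tag the demo image built earlier and push it to your own registry
docker tag demo localhost:5000/demo:latest
docker push localhost:5000/demo:latest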

Next time, we will explore how to run unit tests on your application using Travis.

Plasma 5.6 Makes KDE Plasma Desktop Environment More Powerful, Beautiful And Useful

Plasma has always been the talk of the town for its sleek and cutting-edge look. Among Linux desktop environments, KDE Plasma has always stood out for its continuous development. The latest release, KDE Plasma 5.6, includes some new features, tweaks, and fixes. The Plasma desktop is also highly customizable, so you can tailor it the way you need it to be.

Read more at LinuxAndUbuntu

Business Transformation Matures Beyond Cloud: Red Hat

While cloud is still top of mind for many chief information officers, Red Hat office of technology vice president and chief technologist Chris Wright believes an increasing area of focus for businesses today is trying to figure out how to automate their infrastructure stack.

Speaking to ZDNet, Wright said a key goal of many businesses today is to improve their speed to market, and part of making that happen has been through automation so that engineers and developers can focus on application development, rather than be concerned with running the backend.

“It’s all about the application at the end of the day; we’re not just building infrastructure to just sit there … we actually want to do real work and that work is generated from application developers being able to quickly build apps,” he said.

Read more at ZDNet

 

Project Calico, Flannel Join Forces for Policy-Secured Networking

Traditional approaches to network management and security are ill-suited for the fluid nature of container and microservices-based architectures. Containers appear, disappear and are moved around to different compute nodes far too frequently to be assigned static IP addresses, much less be protected by firewalls and IP tables at the network’s perimeter.

With this in mind, two open source projects, CoreOS’ Flannel virtual networking technology and Project Calico, another network overlay technology with strong security controls, have joined forces on a single package, called Canal, that will offer policy-based secure networking for the container and microservices era.

A new San Francisco-based company has been formed, Tigera, that will manage both projects. Tigera is a spin-off of network solutions provider Metaswitch, and many of the San Francisco Metaswitch engineers who worked on Calico have moved over to Tigera. CoreOS is donating the Flannel codebase to Tigera.

Read more at The New Stack

Top 5 Business Benefits of SDN & NFV

When combined, SDN and NFV promise to bring an unprecedented level of digital control to global IT networks.

While software-defined networking (SDN) shuns proprietary hardware for an open, programmable global network infrastructure that can be centrally managed, network functions virtualization (NFV) enables features, such as firewall/proxy and acceleration, to be virtualized and delivered either from the network or from customer premise equipment (CPE), enabling zero touch provisioning when additional functionality is required.

Together these technologies make it possible for enterprises to access network capacity on demand via a self-service portal. In addition, routing and security policies can automatically adapt to address real-time congestion, security threats or network outages.

Read more at The New IP.

On the Rise: Six Unsung Apache Big Data Projects

Countless organizations around the world are now working with data sets so large and complex that traditional data processing applications can no longer drive optimized analytics and insights. That’s the problem that the new wave of Big Data applications aims to solve, and the Apache Software Foundation (ASF) has recently graduated a slew of interesting open source Big Data projects to Top-Level status. That means that they will get active development and strong community support.

Most people have heard of Apache Spark, a Big Data processing framework with built-in modules for streaming, SQL, machine learning and graph processing. IBM and other companies are pouring billions of development dollars into Spark initiatives, and NASA and the SETI Institute are collaborating to analyze terabytes of complex deep space radio signals using Spark’s machine learning capabilities in a hunt for patterns that might betray the presence of intelligent extraterrestrial life.

However, several other recently elevated Apache Big Data projects deserve attention, too. In fact, some of them may produce ecosystems of activity and development that will rival Spark’s. In conjunction with this week’s ApacheCon North America conference and Apache: Big Data events, this article will round up the Apache Big Data projects that you should know about.

Here are six projects on the rise:

Kylin  

Apache recently announced that its Kylin project, an open source Big Data project born at eBay, has graduated to Top-Level status. Kylin is an open source Distributed Analytics Engine designed to provide an SQL interface and multi-dimensional analysis (OLAP) on Apache Hadoop, supporting extremely large datasets. It is still widely used at eBay and at a few other organizations.

“Apache Kylin’s incubation journey has demonstrated the value of Open Source governance at ASF and the power of building an open-source community and ecosystem around the project,” said Luke Han, Vice President of Apache Kylin. “Our community is engaging the world’s biggest local developer community in alignment with the Apache Way.”

As an OLAP-on-Hadoop solution, Apache Kylin aims to fill the gap between Big Data exploration and human use, “enabling interactive analysis on massive datasets with sub-second latency for analysts, end users, developers, and data enthusiasts,” according to developers. “Apache Kylin brings back business intelligence (BI) to Apache Hadoop to unleash the value of Big Data,” they added.

Lens

Apache also recently announced that Apache Lens, an open source Big Data and analytics tool, has graduated from the Apache Incubator to become a Top-Level Project (TLP). According to the announcement: “Apache Lens is a Unified Analytics platform. It provides an optimal execution environment for analytical queries in the unified view. Apache Lens aims to cut the Data Analytics silos by providing a single view of data across multiple tiered data stores.”

“By providing an online analytical processing (OLAP) model on top of data, Lens seamlessly integrates Apache Hadoop with traditional data warehouses to appear as one. It also provides query history and statistics for queries running in the system along with query life cycle management.”

“Incubating Apache Lens has been an amazing experience at the ASF,” said Amareshwari Sriramadasu, Vice President of Apache Lens. “Apache Lens solves a very critical problem in Big Data analytics space with respect to end users. It enables business users, analysts, data scientists, developers and other users to do complex analysis with ease, without knowing the underlying data layout.”

Ignite

The ASF has also announced that Apache Ignite has become a top-level project. It’s an open source effort to build an in-memory data fabric.

“Apache Ignite is a high-performance, integrated and distributed In-Memory Data Fabric for computing and transacting on large-scale data sets in real-time, orders of magnitude faster than possible with traditional disk-based or flash technologies,” according to Apache community members. “It is designed to easily power both existing and new applications in a distributed, massively parallel architecture on affordable, industry-standard hardware.”

Brooklyn

The foundation announced that Apache Brooklyn is now a Top-Level Project (TLP), “signifying that the project’s community and products have been well-governed under the ASF’s meritocratic process and principles.” Brooklyn is an application blueprint and management platform used for integrating services across multiple data centers, as well as a wide range of software in the cloud.

According to the Brooklyn announcement: “With modern applications being composed of many components, and increasing interest in micro-services architecture, the deployment and ongoing evolution of deployed apps is an increasingly difficult problem. Apache Brooklyn’s blueprints provide a clear, concise way to model an application, its components and their configuration, and the relationships between components, before deploying to public Cloud or private infrastructure. Policy-based management, built on the foundation of autonomic computing theory, continually evaluates the running application and makes modifications to it to keep it healthy and optimize for metrics such as cost and responsiveness.”

Brooklyn is in use at some notable organizations. Cloud service providers Canopy and Virtustream have created product offerings built on Brooklyn. IBM has also made extensive use of Apache Brooklyn in order to migrate large workloads from AWS to IBM Softlayer.

Apex

In April, the Apache Software Foundation elevated its Apex project to Top-Level status. It is billed as “a large scale, high throughput, low latency, fault tolerant, unified Big Data stream and batch processing platform for the Apache Hadoop ecosystem.” Apex works in conjunction with Apache Hadoop YARN, a resource management platform for working with Hadoop clusters.

Tajo

Finally, Apache Tajo, an advanced open source data warehousing system in Apache Hadoop, is another new Big Data project to know about. Apache claims that Tajo provides the ability to rapidly extract more intelligence for Hadoop deployments, third party databases, and commercial business intelligence tools.

Clearly, although Apache Spark draws the bulk of the headlines, it is not the only Big Data tool from Apache to keep your eyes on. As this year continues, Apache likely will graduate even more compelling Big Data projects to Top-Level status, where they will benefit from optimized development resources and more.

 

Enterprise networkers have organized: Here are their demands

A user group for enterprise IT managers is taking on software-defined networking, calling for new technologies they say would better serve enterprise needs.

On Tuesday, the Open Networking User Group (ONUG) announced initiatives behind four technologies that it says would help enterprises build and run their networks better.

There’s no shortage of platforms and protocols for software-defined infrastructure, including things like OpenFlow, OpenStack and ONOS (Open Network Operating System). But they were developed around the needs of vendors and service providers more than those of enterprises, ONUG founder Nick Lippis said. His group wants to push along a few more pieces that aren’t there yet.

SDN (software-defined networking) has been around since 2009 and found its way into some major products and some enterprise and carrier implementations. SD-WAN, a variant for wide-area networks, is expected to grow quickly in the next few years.

Both place more control of infrastructure into software, which can bring new capabilities and let less expensive commodity hardware take the place of proprietary gear.

Read more at PCWorld.