
Nov. 7 Webinar on Taking the Complexity Out of Hadoop and Big Data

The Linux Foundation’s Hadoop project, ODPi, and Enterprise Strategy Group (ESG) are teaming up on November 7 for a can’t-miss webinar for Chief Data Officers and their big data teams.

Join ESG analyst Nik Rouda and ODPi Director John Mertic for “Taking the Complexity out of Hadoop and Big Data” to learn:

  1. How ODPi pulls complexity out of Hadoop, freeing enterprises and their vendors to innovate in the application space

  2. How CDOs and app vendors port apps easily across cloud, on-prem, and Hadoop distros. Nik reveals ESG’s latest research on where enterprises are deploying net-new Hadoop installs across on-premises, public, private, and hybrid cloud

  3. What big data industry leaders are focusing on in the coming months

Removing Complexity

As ESG’s Nik Rouda observes, “Hadoop is not one thing, but rather a collection of critical and complementary components. At its core are MapReduce for distributed analytics jobs processing, YARN to manage cluster resources, and the HDFS file system. Beyond those elements, Hadoop has proven to be marvelously adaptable to different data management tasks. Unfortunately, too much variety in the core makes it harder for stakeholders (and in particular, their developers) to expand their Hadoop-enhancing capabilities.”
The ODPi Compliant certification program ensures greater simplicity and predictability for everyone downstream of Hadoop Core – SIs, app vendors, and end users.

Application Portability

ESG reveals its latest findings on how enterprises are deploying Hadoop, and you may be surprised at the percentage moving to the cloud. Find out who’s deploying on-premises (dedicated and shared), who’s using pre-configured on-prem infrastructure, and what percentage is moving to private, public, and hybrid cloud.

Where Industry Leaders are Headed

ESG interviewed leaders like Capgemini, VMware, and more as part of this ODPi research – let their thinking light your way as you develop your Hadoop and big data strategy.

Reserve your spot for this informative webinar. 

As a bonus, all registrants will receive a free copy of Nik’s latest Big Data report.

Managing Production Systems with Kubernetes in Chinese Enterprises

Kubernetes has rapidly evolved from running production workloads at Google to deployment in an increasing number of global enterprises. Interestingly, US and Chinese enterprises have different expectations when it comes to requirements, platforms, and tools. In his upcoming talk at KubeCon, Xin Zhang, CEO of Caicloud, will describe his company’s experiences using Kubernetes to manage production systems in large-scale Chinese enterprises. We spoke with him to learn more.

Linux.com: Is there anything holding back Kubernetes adoption and/or successful Kubernetes deployments in China?

Xin Zhang: There are several pain points of Kubernetes adoption we have encountered during Chinese enterprise deployment. Some examples are listed below:

  • The most obvious one, which people may immediately stumble onto, is that certain Docker images hosted outside the Chinese network are inaccessible. Some traditional industries even require no outbound network accessibility at all (no traffic going out of the enterprise intranet), so being able to deploy Kubernetes without outside network access is a must.
  • Currently, most mutating operations in Kubernetes require using the command line and writing YAML or JSON files, whereas a considerable number of Chinese enterprise users are more familiar and comfortable with UI operations.
  • Many of the networking and storage plugins of Kubernetes are based on US cloud providers such as AWS, GCE, or Azure, which may not be always available or satisfactory (performance-wise) to Chinese enterprise users.
  • The complexity of Kubernetes (both its concept and its operations manual) may seem a burden to certain users.
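To illustrate the second point, even a modest deployment means hand-authoring a manifest like the one below. This is a generic nginx example (not taken from the interview), using current Kubernetes API field names:

```yaml
# A minimal Deployment manifest -- the kind of YAML users must write by hand
# before a UI layer exists. All names and values here are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
```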

Linux.com: Are there certain required features of a production system that are unique to Chinese enterprises?

Xin: When working with our customers, we did observe a set of commonly requested features that are missing from, or not yet mature in, the official upstream releases. While these patterns are summarized from our Chinese customers, they may have broader applicability elsewhere. We sketch some of them below:

  • A better logging mechanism is required. The default logging model expects applications to write their logs to stdout or stderr, where system components like fluentd can pick them up and do the right thing. However, Chinese enterprise applications are usually old-school: they write logs to local files, and some use separate files for fine-grained log classification. Sometimes enterprises even want to send logs into their existing, separate log store and processing pipeline instead of using the EFK plugins.

  • Monitoring: There are several customized monitoring requests complementing the upstream solution:

    • Some customers consider running the somewhat heavyweight monitoring components in the same cluster as their applications a potential risk, and we did observe cases where monitoring components eat up system resources and affect user applications. Hence, being able to run monitoring components separately from the application cluster represents a common request.
    • While Kubernetes monitors applications running in it, a follow-up question is who monitors Kubernetes itself (its system components) and makes sure even the master is highly available.
    • Chinese enterprises tend to have existing monitoring infrastructure and tools (Zabbix is extremely popular!), and they’d like to have a unified monitoring panel that includes both Kubernetes container-level monitoring and existing metrics.
  • Network separation: While the default Kubernetes networking model allows any point-to-point network access within a cluster, complex enterprise usage scenarios require network policies, isolation, access control, or QoS among pods or services. Some enterprises even require Kubernetes to manage or cope with underlying SDN devices such as Huawei SDN controller.
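As a sketch of the kind of isolation such enterprises ask for, Kubernetes NetworkPolicy objects (enforced by a supporting network plugin) can restrict pod-to-pod traffic. The labels below are hypothetical, not from the interview:

```yaml
# Hypothetical policy: only pods labeled role=frontend may reach pods
# labeled role=db; all other ingress to the db pods is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-frontend
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
```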

Linux.com: What are the most common pitfalls you’ve seen when running Kubernetes in the wild?

Xin: We did encounter a handful of pitfalls during production usage in large-scale enterprise workloads. Some of them are summarized below:

  • Resource quotas and limits: While resource quotas and limits are intended to perform resource isolation and allocation, a good percentage of Chinese enterprise users have little idea what values are appropriate to set. As a result, users may set an inappropriate min or max resource range for applications, which results either in tasks being OOM-killed or in very low resource utilization.

  • Monitoring instability: We found that, in our setting, the default Heapster + InfluxDB monitoring solution is not very stable in large-scale deployments, which can cause missed alerts or instability of the whole system.

  • Running out of disk: As there is little limit on disk usage in certain scenarios, an application that writes excessive logs may exhaust the local disk and cause other tasks to fail.

  • Updating the cluster: We provide commercial distributions of Kubernetes to customers and update our version every three months, roughly aligned with the upstream release schedule, and updating a live Kubernetes cluster is still cumbersome.
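To make the first pitfall concrete, this is what a container’s resource stanza looks like. The numbers are purely illustrative: a limit set too low gets the container OOM-killed, while a request set too high strands unused capacity on the node.

```yaml
# Illustrative values only -- choosing them badly produces exactly the
# OOM-kills or low utilization described above.
resources:
  requests:
    cpu: "250m"      # scheduler reserves a quarter of a CPU core
    memory: "256Mi"  # scheduler reserves 256 MiB of RAM
  limits:
    cpu: "500m"      # container is throttled above half a core
    memory: "512Mi"  # container is OOM-killed above 512 MiB
```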

Linux.com: What well-known Chinese enterprises currently run Kubernetes in production today? What are they using it for? 

Xin: Our Kubernetes users include leaders in a variety of industries. Some example customers are:

  • Jinjiang Travel International is one of the top five OTA (online travel agency) and hotel companies, selling hotels, travel packages, and car rentals. They use Kubernetes containers to cut their software release cycle from hours to just minutes, and they leverage Kubernetes to increase the scalability and availability of their online workloads.
  • China Mobile is one of the largest carriers in China. They use containers to replace VMs to run various applications on their platform in a lightweight fashion, and they leverage Kubernetes to increase resource utilization.
  • State Power Grid is the state-owned power supply company in China. They use containers and Kubernetes to provide failure resilience and fast recovery.

Linux.com: How can Kubernetes be used more effectively in global environments?

Xin: To us, the most pressing needs that will enable wider Kubernetes adoption globally are the following:

  • Ease of deployment in more diverse IaaS settings, in the parts of the world where GCE, AWS, etc. are not the best choices.

  • More performance tuning and optimization: Production systems have stringent performance requirements, hence continuing to push the boundary of Kubernetes performance is of great value.

  • Better documentation and education: We have received customer complaints that the official documentation is still hard to follow and contains too many cross-references. We hope more effort can be devoted to better documentation, and to more educational events around the globe (such as training, certification, and technical meetups/conferences).

Registration for this event is sold out, but you can still watch the keynotes via livestream and catch the session recordings on CNCF’s YouTube channel. Sign up for the livestream now.

Microsoft Open Sources Its Next-Gen Cloud Hardware Design

Microsoft today open sourced its next-gen hyperscale cloud hardware design and contributed it to the Open Compute Project (OCP). Microsoft joined the OCP, which also includes Facebook, Google, Intel, IBM, Rackspace, and many other cloud vendors, back in 2014. Over the last two years, it has already contributed a number of server, networking, and data center designs.

With this new contribution, Project Olympus, it’s taking a slightly different approach to open source hardware, however. Instead of contributing designs that are already finalized, which is the traditional approach to open sourcing this kind of work, the Project Olympus designs aren’t production-ready yet. The idea here is to ensure that the community can actually collaborate in the design process.

Read more at TechCrunch

Node.js Is Helping Developers Get the Most Out of JavaScript

Node.js, the JavaScript runtime of choice for high-performance, low-latency apps, continues to gain popularity among developers on the strength of JavaScript.

When a small startup decided to launch its technological foundation on top of Microsoft’s .NET platform, it needed a .NET expert to provide a master view. Being lean and distributed, the company chose .NET guru Carl Franklin to serve remotely as CTO to oversee things.

However, at the DEVintersection conference in Las Vegas last week, Franklin, now executive vice president of App vNext and co-host and founder of .NET Rocks!, said he held the CTO position for all of two days before someone whispered in the CEO’s ear and convinced him that hot, new Node.js—not shriveled old .NET—was the way to go.

“Node.js is rapidly replacing Java and .NET due to the agility of the Node.js software development life cycle,” said Dan Shaw, CTO and co-founder of NodeSource, a provider of support services for Node.js shops. “Building a Java app typically takes six to 24 months from start to finish. In contrast, Node.js applications take two to six months.”

Read more at eWeek

Let’s Automate Let’s Encrypt

HTTPS is a small island of security in this insecure world, and in this day and age, there is absolutely no reason not to have it on every Web site you host. Up until last year, there was just a single last excuse: purchasing certificates was kind of pricey. That probably was not a big deal for enterprises; however, if you routinely host a dozen Web sites, each with multiple subdomains, and have to pay for each certificate out of your own dear pocket—well, that quickly could become a burden.

Now you have no more excuses. Enter Let’s Encrypt — a free Certificate Authority that officially left Beta status in April 2016. 

Read more at Linux Journal

‘Thanks for Using Containers!’ … Said No CEO Ever

“We think we’re going to get magical powers when we use other people’s servers,” said Casey West, Principal Technologist for Pivotal’s Cloud Foundry platform, during his OSCON Europe talk, in which he provided a humorous and insightful look at how the CEO sees (or doesn’t see, or honestly doesn’t care about) the vast majority of the work that IT professionals do in the cloud.

IT pros work across pretty much every industry these days. But the expectations are largely the same across all of them, no matter if the projects they work on are “greenfield projects” designed to break into new areas of business, or “brownfield projects,” which is a nice way of saying you are updating legacy systems.

With greenfield systems, “all you have to do is create something from thin air and compete with billion-dollar companies. No big deal.” The requirements are basically twofold: All you have to do is…

  • Deliver faster than everyone else.
  • Never make a mistake.

With brownfield systems, “all you have to do is modernize an existing application that makes all our revenue in order to compete with companies theoretically valued at a billion dollars.” No big deal. Oh, and…

  • Deliver faster than everyone else.
  • Never make a mistake.

Read more at The New Stack

How DNS Works: A Primer

The Domain Name System is critical to fundamental IP networking. Learn DNS basics in this primer.

DNS has been in the news a great deal as of late. First, there was the controversy over the United States government essentially handing over control of the Internet’s root domain naming system. Then DNS made headlines when cybercriminals performed three separate distributed denial of service (DDoS) attacks on a major DNS service provider by leveraging a botnet army of millions of compromised IoT devices. Yet with all the hoopla surrounding DNS, it surprises me how many IT pros don’t fully understand DNS and how it actually works.

DNS stands for Domain Name System. Its purpose is to resolve human-readable domain names to IPv4 or IPv6 addresses. 
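As a minimal sketch of that resolution step (not from the article), Python’s standard `socket` module exposes the system resolver, covering both IPv4 and IPv6 lookups:

```python
import socket

def resolve(name):
    """Return the sorted, de-duplicated IP addresses that `name` resolves to."""
    # getaddrinfo consults the system resolver; each result tuple's
    # fifth field is a sockaddr whose first element is the address.
    infos = socket.getaddrinfo(name, None)
    return sorted({info[4][0] for info in infos})

# "localhost" is used so the sketch works without network access;
# a real DNS lookup would pass a public hostname instead.
print(resolve("localhost"))
```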

Read more at Network Computing

Hyperledger Eyes Mobile Blockchain Apps With ‘Iroha’ Project

A blockchain project developed by several Japanese firms, including startup Soramitsu and IT giant Hitachi, has been accepted into the Hyperledger blockchain initiative.

Developed by Hyperledger member and blockchain startup Soramitsu, Iroha was first unveiled during a meeting of the project’s Technical Steering Committee last month. Iroha is being pitched as a supplement to other Hyperledger infrastructure projects such as IBM’s Fabric (on which it is based) and Intel’s Sawtooth Lake.

Read more at CoinDesk

Deployment Automation: The Linchpin of DevOps Success

Deployment Automation is the linchpin of DevOps transformation. I cannot put it more simply:
To accelerate your DevOps adoption, and get the biggest bang for your buck: FOCUS ON DEPLOYMENTS.

The previous State of DevOps reports have shown a pretty straightforward equation:

Deployment frequency is THE indicator for success
and deployment pain is a predictor of failure.

More Deploys

We all remember the impressive hockey-stick graph from the 2015 report, comparing the number of deploys/day/developer between high-performing IT organizations (in Orange) and low-performing ones.

That difference became even more staggering in the 2016 research.

The 2016 State of the DevOps report showed that high-performing IT organizations deploy 200 times more frequently than low performers, with 2,555 times faster lead times.
 

Less Pain


The 2015 report found that deployment automation (along with CI, testing and version control practices) predicted lower levels of deployment pain, higher IT performance, and lower change failure rates.

The reports show that those high-performing IT organizations also have higher employee loyalty and engagement.

On the other end of the spectrum: deployment pain is correlated with employee churn.

This makes deployment automation one of the clearest examples of the convergence of the two axes of DevOps: the technology/process axis and the culture/people one.

If your talent is running for the hills, and your developers are tired of banging their heads against the wall trying to get stuff to work — you should look at how you deploy.

ARA

Not only the State of DevOps reports, but ALL analyst research confirms how critical deployments are. For example, in the recent Gartner Magic Quadrant for Application Release Automation, Gartner notes that ARA – of which deployment automation and release coordination are key tenets – is the most important technology for an organization’s adoption of DevOps.

Bottom Line

Electric Cloud focuses primarily on Application Release Automation and on the Ops-side of large-scale deployment automation because we understand the bottom line is pretty simple:
To accelerate your DevOps transformation, and keep employee satisfaction high – focus on deployments.

Read the full article here. 

Best Practices for Process as Code

We recently hosted another episode of our Continuous Discussion (#c9d9) video podcast, featuring expert panelists discussing process as code.

Our expert panel included: David Blank-Edelman, technical evangelist for Apcera; Juni Mukherjee, author of “Continuous Delivery Pipeline – Where Does It Choke?”; J. Paul Reed, managing partner of Release Engineering Approaches; Mark Chassy, product director at Orson Software; Michael Wittig, author of “Amazon Web Services in Action”; and our very own Anders Wallgren and Sam Fell.

During the episode, panelists discussed their definitions of process-as-code as well as use cases and best practices for defining your automation processes as code, and ensuring your automation is versionable, testable, and repeatable.

To read the full list of best practices, go here.