
Data Center Networking Performance: New Apps Bring New Requirements

With machine learning, big data, cloud, and network functions virtualization (NFV) initiatives invading the data center, there are implications for data center networking performance.

Large cloud services providers such as Amazon, Google, Baidu, and Tencent have reinvented the way IT services can be delivered, with capabilities that go beyond sheer size to include speed and agility. That’s put traditional carriers on notice: John Donovan, chief strategy officer and group president of AT&T Technology and Operations, for instance, said last year that AT&T wants to be the “most aggressive IT company in the world.” He noted that in a world where over-the-top (OTT) offerings have become commonplace, application and services development can no longer be defined by legacy processes.

“People that were suppliers are now competitors,” he said. “People that were competitors are now partners in areas such as open source development. The way the whole industry worked is changing. …”

Read more at SDxCentral

Stricter Immigration Policies Crimp U.S. Open Source Development

What do Linus Torvalds, Dirk Hohndel, Michael Widenius, Solomon Hykes, Nithya Ruff, Sam Ramji, Lennart Poettering, Boris Renski, Madhura Maskasky, Theodore Ts’o, Wim Coekaerts, and Mark Shuttleworth all have in common? Each of them has founded or led major open source projects.

Also, each of them is an immigrant to the U.S., the child of an immigrant, or a non-U.S. national.

“Linux, the largest cooperatively developed software project in history, is created by thousands of people from around the world and made available to anyone to use for free,” noted Jim Zemlin, executive director of The Linux Foundation, in a blog post earlier this year…

Read more at The New Stack

The Mainframe vs. the Server Farm: A Comparison

Let’s take a look at what the mainframe really is and consider its use cases.

Mainframe Workloads

What do you use a mainframe for? Complex, data-intensive workloads, both batch and high-volume online transaction processing (OLTP). Banks, for example, do a lot of both. When customers access their accounts online, that is OLTP: real-time and interactive. After hours, banks typically run batch jobs: sending out customer statements, billing, daily totals, interest calculations, reminders, marketing emails, and reporting. This can mean processing terabytes of data in a short time, which is what mainframes excel at. Health care, schools, government agencies, electric utilities, factory operations, enterprise resource planning, and delivering online entertainment are all good candidates for mainframes. The Internet of Things (PCs, laptops, smartphones, vehicles, security systems, “smart” appliances, and utility grids) is also well served by mainframes.
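For readers who want to see the distinction in code, here is a minimal, illustrative sketch of the two workload styles described above: an interactive OLTP lookup versus an after-hours batch sweep over every account. It uses Python and SQLite purely for illustration (nothing mainframe-specific), and the table and column names are invented for the example.

```python
# Illustrative only: OLTP-style lookup vs. batch-style aggregation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (account_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?)",
    [(1, 120.00), (1, -40.50), (2, 300.00), (2, -12.75), (3, 55.25)],
)

# OLTP: one customer checks one balance, interactively and in real time.
balance = conn.execute(
    "SELECT SUM(amount) FROM transactions WHERE account_id = ?", (1,)
).fetchone()[0]
print(f"Account 1 balance: {balance:.2f}")

# Batch: after hours, compute totals for every account in a single pass.
for account_id, total in conn.execute(
    "SELECT account_id, SUM(amount) FROM transactions GROUP BY account_id"
):
    print(f"Account {account_id} daily total: {total:.2f}")
```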

Read more at DZone

Cloud Foundry Foundation CTO Chip Childers to Host Twitter Q&A

On Thursday, June 1, The Linux Foundation will continue its series of Twitter chats entitled #AskLF, featuring leaders at the organization. Previous chats were hosted by The Linux Foundation’s Arpit Joshipura, GM of Networking & Orchestration, and Clyde Seepersad, Manager of Training and Certifications. June’s #AskLF host is Chip Childers, CTO of the Cloud Foundry Foundation.

#AskLF was created to broaden access to thought leaders, community organizers, and expertise within The Linux Foundation. While there are many opportunities to interact with staff at Linux Foundation global events, which bring together over 25,000 open source influencers, a live Twitter Q&A will give participants a direct line of communication to designated hosts.

Chip Childers, Cloud Foundry CTO.
Chip Childers is an open source and large-scale computing veteran, having spent 18 years in the field. He co-founded the Cloud Foundry Foundation as Technology Chief of Staff in 2015, coming from a VP of Product Strategy role at CumuLogic. Before that, he was the inaugural VP of Apache CloudStack while leading Enterprise Cloud Services at SunGard. Childers led the rebuild of pivotal applications for organizations such as IRS.gov, USMint.gov, and Merrill Lynch.

This “Cloud Foundry 101” #AskLF session will take place in advance of Cloud Foundry Summit Silicon Valley, where Childers will present a talk titled “A Platform for the Enterprise: Where Maturity & Innovation Intersect.” @linuxfoundation followers are encouraged to ask Childers questions related to the Cloud Foundry platform and the foundation’s community.

Sample questions might include:

  • What is the Cloud Foundry Foundation Developer Training and Certification Program and how do I get started?

  • Why do developers choose Cloud Foundry over other platforms and competitors?

  • How does The Cloud Foundry Foundation grow its community of contributors? How can I get involved? 

  • What will I get out of attending Cloud Foundry Summit?

Here’s how you can participate in this #AskLF session:

  • Follow @linuxfoundation on Twitter: Hosts will take over The Linux Foundation’s account during the session.

  • Save the date: June 1, 2017 at 10 a.m. PT.

  • Use the hashtag #AskLF: To ask Childers your questions while he hosts, simply tweet them with the hashtag #AskLF on June 1 between 10:00 a.m. and 10:45 a.m. PDT. We can’t guarantee that he will have time to answer every inquiry, but every attempt will be made!

  • Consider attending Cloud Foundry Summit Silicon Valley in Santa Clara next month: This #AskLF session will prepare you to engage with the topics at the Summit, and you’ll get a chance to hear Childers speak live. Registration and schedule details are available on the Cloud Foundry Summit site.

More dates and details for future #AskLF sessions to come! We’ll see you on Twitter, June 1 at 10 a.m. PT.

Read blogs by Chip Childers here: 

https://www.cloudfoundry.org/author/cchilders/

*Note: Unlike Reddit-style AMAs, #AskLF is not focused on general topics that might pertain to the host’s personal life. To participate, please focus your questions on open source, Cloud Foundry, and Chip Childers’s career.

Google’s New Home for All Things Open Source Runs Deep

Google is not only one of the biggest contributors to the open source community but also has a strong track record of delivering open source tools and platforms that give birth to robust technology ecosystems. Just witness the momentum that Android and Kubernetes now have. Recently, Google launched a new home for its open source projects, processes, and initiatives. The site runs deep and has several avenues worth investigating. Here is a tour and some highlights worth noting.

Will Norris, a software engineer at Google’s Open Source Programs Office, writes: “One of the tenets of our philosophy towards releasing open source code is that ‘more is better.’ We don’t know which projects will find an audience, so we help teams release code whenever possible. As a result, we have released thousands of projects under open source licenses ranging from larger products like TensorFlow, Go, and Kubernetes to smaller projects such as Light My Piano, Neuroglancer, and Periph.io. Some are fully supported while others are experimental or just for fun. With so many projects spread across 100 GitHub organizations and our self-hosted Git service, it can be difficult to see the scope and scale of our open source footprint.”

Projects. The new directory of open source projects, which is rapidly expanding, is one of the richest parts of the Google Open Source site. If you investigate many of the projects, you can find out how they are used at Google. A pull-down menu conveniently categorizes the many projects, so that you can investigate, for example, cloud, mobile, or artificial intelligence tools. Animated graphics also shuffle between projects that you may not be aware of but might be interested in. Here is an example of one of these graphics:

TensorFlow project.

Do you know about Cloud Network Monitoring Agent or Minimal Configuration Manager? The Projects section of Google’s site is where you can discover tools like these.

Docs. One of the most compelling components of Google’s new home for all things open source is a section called Docs, which is billed as “our internal documentation for how we do open source at Google.” From open source contributors and developers to companies implementing open source programs, this section of Google’s site has a motherlode of tested and hardened information. There are three primary sections of the docs:

  • Creating covers how Google developers release code that they’ve written, either in the form of a new project or as a patch to an external project.

  • Using explains how Google brings open source code into the company and uses it. It delves into maintaining license compliance, and more.

  • Growing describes some of the programs Google runs inside and outside the company to support open source communities.

According to Norris: “These docs explain the process we follow for releasing new open-source projects, submitting patches to others’ projects, and how we manage the open-source code that we bring into the company and use ourselves. But in addition to the how, it outlines why we do things the way we do, such as why we only use code under certain licenses or why we require contributor license agreements for all patches we receive.”

Blog. The Google Open Source site also includes a tab for the Google Open Source blog, which has steadily remained a good avenue for finding new tools and open source news. The site houses blog posts from people all around Google, and includes collections of links that can take you to other useful blogs, such as the Google Developers Blog and the official Google Blog.

Community. Not only does Google run outreach programs such as Google Summer of Code and Google Code-in, it also sponsors and contributes projects to organizations like the Apache Software Foundation. The Community section on the Google Open Source site is dedicated to these outreach programs and is a good place to start if you want to get involved; it highlights several community-centric affiliations Google has that you may not know about.

It’s no accident that Google is evolving and improving its home for all things open source. The company’s CEO, Sundar Pichai, came up at Google as chief of products and helped drive the success of open source-centric tools ranging from Chrome to Android. Pichai knows that these tools have improved enormously as a result of community involvement. Now, more than ever, Google’s own success is tied to the success of open source.

Are you interested in how organizations are bootstrapping their own open source programs internally? You can learn more in the Fundamentals of Professional Open Source Management training course from The Linux Foundation. Download a sample chapter now.

Usage Patterns and the Economics of the Public Cloud

Illustrating the huge diversity of topics covered at WWW, following yesterday’s look at recovering mobile user trajectories from aggregate data, today’s choice studies usage variation and pricing models in the public cloud. The basis for the study is data from ‘a major provider’s public cloud datacenters.’ Unless Google or Amazon are sending their data to three researchers from Microsoft, it’s a fair bet we’re looking at Azure.

Research in economics and operations management posits that dynamic pricing is critically important when capacity is fixed (at least in the short run) and fixed costs represent a substantial fraction of total costs.

Read more at The Morning Paper

Puppet IT Automation Wades into Enterprise Containers

Puppet and its enterprise customers are in the same boat, afloat through the early phases of support for Docker containers. Puppet introduced products and updates this week that include new support for containers, helping enterprise customers move to the new technology.

For sophisticated IT shops where containers are already in use, configuration management can be seen as passé. In such bleeding-edge environments, container infrastructures are immutable — destroyed and recreated continually — rather than updated with tools such as Puppet or Chef.

Read more at TechTarget

AI Toolkits: A Primer

If you’re not an AI specialist, but trying to understand the area, it helps to know the major tools that data scientists use to create AI systems. I thought I would survey the common toolkits, highlight which are the most popular, and explain which ecosystems they connect to.

Machine learning
Contemporary AI workloads divide into two classes. The first of these classes, and the overwhelming majority, is machine learning. This class incorporates the most common algorithms used by data scientists: linear models, k-means clustering, decision trees, and so on. Though we now talk of them as part of AI, this is what data scientists have been doing for a long time!
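To make that concrete, here is a minimal sketch of those classic machine-learning workloads on synthetic data. It uses scikit-learn, which is assumed to be installed, and is purely illustrative rather than tied to any particular toolkit the article surveys.

```python
# Illustrative sketch of classic machine-learning workloads: a linear model,
# k-means clustering, and a decision tree, all on small synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))

# Linear model: recover the coefficients of y = 3*x0 - 2*x1 + noise.
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=100)
print("linear coefficients:", LinearRegression().fit(X, y).coef_)

# k-means: group the points into three clusters and show the centers.
print("cluster centers:", KMeans(n_clusters=3, n_init=10, random_state=0).fit(X).cluster_centers_)

# Decision tree: classify points by the sign of the first feature.
labels = (X[:, 0] > 0).astype(int)
print("tree accuracy:", DecisionTreeClassifier(max_depth=3).fit(X, labels).score(X, labels))
```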

Read more at Medium

4 Cool Kubernetes Tools for Mastering Clusters

Kubernetes, the cluster manager for containerized workloads, is a hit. With the Big K doing the heavy lifting in load balancing and job management, you can turn your attention to other matters.

But like nearly every open source project, it’s a work in progress, and almost everyone who works with Kubernetes will find shortcomings, rough spots, and annoyances. Here are four projects that lighten the load that comes with administering a Kubernetes cluster.

Kube-applier

A key part of the Kubernetes success story is its uptake by IT brands other than Google. Cloud storage firm Box has picked up on Kubernetes and open-sourced some of the bits it has used to aid its internal deployment; kube-applier is one such project.
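To give a feel for the pattern kube-applier automates, here is a minimal sketch (not kube-applier's actual code) of a loop that applies every manifest in a local checkout of a Git repository with kubectl apply. It assumes kubectl is on the PATH and configured for the target cluster; the repository path and interval are hypothetical.

```python
# Illustrative sketch of the "continuously apply a repo of manifests" pattern.
# Not kube-applier itself; the path and interval are placeholders.
import subprocess
import time
from pathlib import Path

REPO_DIR = Path("/srv/k8s-manifests")  # hypothetical local checkout of the manifest repo
INTERVAL_SECONDS = 300                 # time between full apply sweeps

def apply_all(repo_dir: Path) -> None:
    for manifest in sorted(repo_dir.rglob("*.yaml")):
        # kubectl apply is declarative: it creates or updates resources to match the file.
        result = subprocess.run(
            ["kubectl", "apply", "-f", str(manifest)],
            capture_output=True, text=True,
        )
        status = "OK" if result.returncode == 0 else "FAILED"
        print(f"{status}: {manifest}: {result.stdout.strip() or result.stderr.strip()}")

if __name__ == "__main__":
    while True:
        apply_all(REPO_DIR)
        time.sleep(INTERVAL_SECONDS)
```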

Read more at InfoWorld

Bringing Interactive BI to Big Data

SQL on Hadoop is continuously improving, but it’s still common to wait minutes to hours for a query to return. In this post, we will discuss the open source distributed analytics engine Apache Kylin and examine, specifically, how it speeds up big data queries, and what some of the features in version 2.0—including snowflake schema support and streaming cubing—mean for interactive BI.

What is Apache Kylin?

Kylin is an OLAP engine on Hadoop. It sits on top of Hadoop and exposes relational data to upper-layer applications via a standard SQL interface.
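As a rough illustration of that SQL interface, below is a minimal sketch of submitting a query to a Kylin instance through its REST query API from Python. The host, credentials, project, and table names follow Kylin's sample setup and are assumptions to adapt to your own deployment; Kylin also exposes SQL over JDBC and ODBC for BI tools.

```python
# Illustrative sketch: send standard SQL to Kylin's REST query endpoint.
# Host, credentials, project, and table names are assumptions for the example.
import requests

KYLIN_QUERY_URL = "http://localhost:7070/kylin/api/query"  # default Kylin port, assumed
AUTH = ("ADMIN", "KYLIN")                                   # Kylin's default sample credentials

payload = {
    "sql": (
        "SELECT part_dt, SUM(price) AS revenue "
        "FROM kylin_sales GROUP BY part_dt ORDER BY part_dt"
    ),
    "project": "learn_kylin",  # hypothetical project name from the sample setup
    "limit": 10,
}

resp = requests.post(KYLIN_QUERY_URL, json=payload, auth=AUTH, timeout=30)
resp.raise_for_status()

# The response contains column metadata plus a list of result rows.
for row in resp.json().get("results", []):
    print(row)
```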

Read more at O’Reilly