
6 Ways to Work with Database Admins in the DevOps World

DevOps is defined as “unifying the operations and engineering teams,” in order to foster a culture of cross-team collaboration, codify how infrastructure is built, and become a more data-driven organization. But it seems databases and the teams that care for them are treated as an exception to this environment. In most companies, databases are still treated like walled gardens, with the database hosts tended to like delicate flowers and the database administrators (DBAs) guarding any and all access to them.

This walled-garden attitude invariably affects the rest of the organization, from tech ops to delivery engineering all the way to product planning, as everyone tries to work around the datastore. Ultimately this erodes the benefits of an agile approach to software development. That is a problem for companies that have been running for a few years and have reached a solid financial footing with loyal paying customers, but are having a hard time shedding the startup skin (the one that flies by the seat of its pants) while feeling the pressure to achieve a sense of stability in existing and future offerings.

Read more at OpenSource.com

Intel Takes First Steps To Universal Quantum Computing

Someone is going to commercialize a general purpose, universal quantum computer first, and Intel wants to be the first. So does Google. So does IBM. And D-Wave is pretty sure it already has done this, even if many academics and a slew of upstart competitors don’t agree. What we can all agree on is that there is a very long road ahead in the development of quantum computing, and it will be a costly endeavor that could nonetheless help solve some intractable problems.

The big news this week is that Intel has been able to take a qubit design that its engineers created alongside those working at QuTech and scale it up to 17 qubits on a single package. A year ago, the Intel-QuTech partnership had only a few qubits on its initial devices, Jim Clarke, director of quantum hardware at Intel, tells The Next Platform, and two years ago it had none. That is a pretty impressive roadmap in a world where Google is testing a 20 qubit chip and hopes to have one running at 49 qubits before the year is out.

“We are trying to build a general purpose, universal quantum computer,” says Clarke. “This is not a quantum annealer, like the D-Wave machine. There are many different types of qubits, which are the devices for quantum computing, and one of the things that sets Intel apart from the other players is that we are focused on multiple qubit types. …”

Read more at The Next Platform

Why Linux Works

The Linux community works, it turns out, because the Linux community isn’t too concerned about work, per se. As much as Linux has come to dominate many areas of corporate computing – from HPC to mobile to cloud – the engineers who write the Linux kernel tend to focus on the code itself, rather than their corporate interests therein.

Such is one prominent conclusion that emerges from Dawn Foster’s doctoral work, examining collaboration on the Linux kernel. Foster, a former community lead at Intel and Puppet Labs, notes, “Many people consider themselves a Linux kernel developer first, an employee second.”

As Foster writes, “Even when they enjoy their current job and like their employer, most [Linux kernel developers] tend to look at the employment relationship as something temporary, whereas their identity as a kernel developer is viewed as more permanent and more important.”

Because of this identity as a Linux kernel developer first, and corporate citizen second, Linux kernel developers can comfortably collaborate even with their employer’s fiercest competitors. This works because the employers ultimately have limited ability to steer their developers’ work…

Read more at Datamation

Sneak Peek: ODPi Webinar on Data Governance – The Why and the How

We all use metadata every day. You may have found this blog post through a search, leveraging metadata tags and keywords. Metadata allows data practitioners to use data outside the application that created it, find the right data sets, and automate governance processes. Metadata has proven value today, yet many data platforms still lack metadata support.

Furthermore, where metadata management exists, it uses proprietary formats and APIs. Proprietary tools support a limited range of data sources and governance actions, and combining their metadata to create an enterprise data catalogue can be an expensive effort. In an ideal world, metadata should be able to move with the data and be augmented and processed through open APIs for permitted uses.

Enter Open Metadata, which enables various tools to connect to data and metadata repositories to exchange metadata.

Open Metadata has two major parts:

  1. OMRS – Open Metadata Repository Services makes it possible for various metadata repositories to exchange metadata. These repositories can come from different vendors or be concerned with specific subject areas.

  2. OMAS – Open Metadata Access Services provides specialized services to various types of tools and applications, enabling out-of-the-box connections to metadata. These tools include, but are not limited to:

    1. BI and Visualization tools

    2. Governance tools

    3. Integration tools and engines such as ETL and information virtualisation

The OMAS enables subject matter experts to collaborate around the data, feeding back their knowledge about the data and the uses they have made of it, to help others and to support economic evaluation of the data.
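
To make the division of labour concrete, here is a minimal, hypothetical Python sketch of how the two layers relate. Every class and method name below is invented for illustration; this shows only the shape of the architecture, not the actual Open Metadata APIs:

class MetadataRepository:
    """One vendor's metadata store, joined to the cohort via OMRS."""
    def __init__(self, name):
        self.name = name
        self.assets = {}  # asset id -> metadata record

    def publish(self, asset_id, record):
        self.assets[asset_id] = record

class OpenMetadataRepositoryServices:
    """OMRS role: federate metadata queries across every registered repository."""
    def __init__(self):
        self.cohort = []

    def register(self, repo):
        self.cohort.append(repo)

    def find(self, asset_id):
        for repo in self.cohort:
            if asset_id in repo.assets:
                return repo.assets[asset_id]
        return None

class AssetCatalogOMAS:
    """OMAS role: a task-specific view for one kind of tool (here, a BI tool
    looking up an asset's owner and lineage)."""
    def __init__(self, omrs):
        self.omrs = omrs

    def describe_asset(self, asset_id):
        record = self.omrs.find(asset_id)
        if record is None:
            return asset_id + ": not catalogued"
        return "%s: owner=%s, lineage=%s" % (asset_id, record["owner"], record["lineage"])

# Two vendors' repositories, one federated catalogue.
omrs = OpenMetadataRepositoryServices()
sales = MetadataRepository("sales-warehouse")
sales.publish("orders.csv", {"owner": "finance", "lineage": "ERP export"})
omrs.register(sales)
omrs.register(MetadataRepository("hr-lake"))
print(AssetCatalogOMAS(omrs).describe_asset("orders.csv"))

The value of the split is that any number of tools can share one OMAS-style view, while the OMRS layer hides which vendor's repository actually holds a given record.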


Open Metadata aims to provide data practitioners with an enterprise data catalogue that lists all of their data, where it is located, its origin (lineage), owner, structure, meaning, classification and quality, no matter where the data resides. Furthermore, new tools from any vendor would be able to connect to your data catalogue out of the box: no vendor lock-in, and no expensive population of yet another proprietary, siloed metadata repository. Additionally, metadata would be added automatically to the catalogue as new data is created.

But how do you ensure the consistency, freedom from vendor lock-in, and cost effectiveness of Open Metadata? The answer is Open Governance.

Open Governance enables automated capture of metadata and governance of data. It includes three frameworks:

  1. Open Connector Framework (OCF) for metadata driven access to data assets.

  2. Open Discovery Framework (ODF) for automated analysis of data and advanced metadata capture.

  3. Governance Action Framework (GAF) for automated governance enforcement, verification, exception management and logging.

Open Metadata and Open Governance together allow metadata to be captured when the data is created, to move with the data, and to be augmented and processed by any vendor's tools.
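
As a rough illustration of the kind of automation these frameworks describe, here is a hypothetical Python sketch in the spirit of the GAF: a governance rule consults metadata classifications, enforces an access decision, and logs the outcome for later verification. All names, rules, and data are invented for illustration; this is not the ODPi API:

import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

# Metadata captured when the data was created; the classification travels with it.
CLASSIFICATIONS = {
    "customers.csv": {"classification": "PII", "owner": "marketing"},
    "clickstream.parquet": {"classification": "public", "owner": "web"},
}

def governance_action(user_clearance, asset):
    """GAF-style enforcement: allow or deny based on metadata, and always log."""
    meta = CLASSIFICATIONS.get(asset)
    if meta is None:
        logging.warning("no metadata for %s - denying by default", asset)
        return False
    allowed = meta["classification"] != "PII" or user_clearance == "pii-approved"
    logging.info("access to %s (%s): %s", asset, meta["classification"],
                 "granted" if allowed else "denied")
    return allowed

# An OCF-style connector would run this check before opening the asset.
governance_action("pii-approved", "customers.csv")    # granted
governance_action("standard", "customers.csv")        # denied, and logged

In a real deployment the classifications would be captured automatically by ODF-style discovery rather than hard-coded as they are here.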


Open Metadata and Governance consists of:

  • Standardized, extensible set of metadata types

  • Metadata exchange APIs and notifications

  • Frameworks for automated governance

Open Metadata and Governance will allow you to have:

  • An enterprise data catalogue that lists all of your data, where it is located, its origin (lineage), owner, structure, meaning, classification and quality

  • New data tools (from any vendor) that connect to your data catalogue out of the box

  • Metadata added automatically to the catalogue as new data is created and analysed

  • Subject matter experts collaborating around the data

  • Automated governance processes that protect and manage your data

Dive into this topic further on Oct. 12 in a free webinar, as John Mertic, Director of ODPi at The Linux Foundation, hosts Srikanth Venkat, Senior Director of Product Management at Hortonworks; Ferd Scheepers, Chief Information Architect at ING; and Mandy Chessell, Distinguished Engineer and Master Inventor at IBM.

Register for this free webinar now.

 

Cloud Foundry Adds Native Kubernetes Support for Running Containers

Cloud Foundry, the open-source platform as a service (PaaS) offering, has become somewhat of a de facto standard in the enterprise for building and managing applications in the cloud or in their own data centers. The project, which is supported by the Linux Foundation, is announcing a number of updates at its annual European user conference this week. Among these are support for container workloads and a new marketplace that highlights the growing Cloud Foundry ecosystem.

Cloud Foundry made an early bet on Docker containers, but with Kubo, which Pivotal and Google donated to the project last year, the project gained a new tool for allowing its users to quickly deploy and manage a Kubernetes cluster (Kubernetes being the Google-backed open-source container orchestration tool that itself is becoming the de facto standard for managing containers).

Read more at TechCrunch

We’re Just on the Edge of Blockchain’s Potential

No one could have seen blockchain coming. Now that it’s here, blockchain has the potential to completely reinvent the world of financial transactions, as well as other industries. In this interview, we talked to JAX London speaker Brian Behlendorf about the past, present, and future of this emerging technology.

JAXenter: Open source is crucial for the success of a lot of projects. Could you talk about why blockchain needs open collaboration from an engaged community?

Brian Behlendorf: I believe we are heading towards a future full of different blockchain ecosystems for different purposes. Many will be public, many private, some unpermissioned, some permissioned — and they’ll differ in their choice of consensus mechanism, smart contract platform, security protocols, and other attributes, and many will talk to each other. To keep this from becoming a confusing mess, or a platform war, collaboration on common software infrastructure is key. The Open Source communities behind Linux, Apache, and other successful platform technologies have demonstrated how to do this successfully.

Read more at JaxEnter

Measure Your Open Source Program’s Success

Open source programs are proliferating within organizations of all types, and if yours is up and running, you may have arrived at the point where you want to measure the program’s success. Many open source program managers are required to demonstrate the ROI of their programs, but even if there is no such requirement, understanding the metrics that apply to your program can help optimize it. That is where the free Measuring Your Open Source Program’s Success guide comes in. It can help any organization measure program success and can help program managers articulate exactly how their programs are driving business value.

Once you know how to measure your program’s success, publicizing the results — including the good, the bad, and the ugly — increases your program’s transparency, accountability, and credibility in open source communities. To see this in action, check out example open source report cards from Facebook and Google.

Read more at The Linux Foundation

Europe Pledges Support for Open Source Government Solutions

European Union & EFTA nations recognize open source software as a key driver of government digital transformation.

It was thus fitting that Estonia, the current EU presidency, brought together Ministers from 32 countries (under the umbrellas of the EU and European Free Trade Association) to adopt the Tallinn Declaration on E-Government, creating a renewed political dynamism coupled with legal tools to accelerate the implementation of a range of existing EU policy instruments (e.g., the e-Government Action Plan and ISA² program).

Perhaps the most significant development for open source supporters is the explicit recognition of open source software (OSS) as a key driver towards achieving ambitious governmental digitisation goals by 2020.

Read more at OpenSource.com

What’s Next in DevOps: 5 Trends to Watch

The term “DevOps” is typically credited to a 2008 presentation on agile infrastructure and operations. Now ubiquitous in IT vocabulary, the mashup word is less than 10 years old: We’re still figuring out this modern way of working in IT.

Sure, people who have been “doing DevOps” for years have accrued plenty of wisdom along the way. But most DevOps environments – and the mix of people and culture, process and methodology, and tools and technology – are far from mature.

More change is coming. That’s kind of the whole point. “DevOps is a process, an algorithm,” says Robert Reeves, CTO at Datical. “Its entire purpose is to change and evolve over time.”

What should we expect next? Here are some key trends to watch, according to DevOps experts.

Read more at Enterprisers Project

Examining Network Connections on Linux Systems

There are a lot of commands available on Linux for looking at network settings and connections. In today’s post, we’re going to run through some very handy commands and see how they work.

ifquery command

One very useful command is the ifquery command. This command should give you a quick list of network interfaces. However, you might only see something like this, showing only the loopback interface:

$ ifquery --list
lo

If this is the case, your /etc/network/interfaces file doesn’t include information on network interfaces except for the loopback interface. You can add lines like the last two in the example below — assuming DHCP is used to assign addresses — if you’d like it to be more useful.
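
A typical /etc/network/interfaces file with those lines added looks like the following; the last two lines are the ones to add, assuming your interface is named eth0 and gets its address over DHCP (substitute your own interface name if it differs):

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

After adding that stanza, ifquery --list should report eth0 alongside lo.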

Read more at NetworkWorld