
Intel Takes First Steps To Universal Quantum Computing

Someone is going to commercialize a general purpose, universal quantum computer first, and Intel wants to be the first. So does Google. So does IBM. And D-Wave is pretty sure it already has done this, even if many academics and a slew of upstart competitors don’t agree. What we can all agree on is that there is a very long road ahead in the development of quantum computing, and it will be a costly endeavor that could nonetheless help solve some intractable problems.

The big news this week is that Intel has been able to take a qubit design that its engineers created alongside researchers at QuTech and scale it up to 17 qubits on a single package. A year ago, the Intel-QuTech partnership had only a few qubits on its initial devices, Jim Clarke, director of quantum hardware at Intel, tells The Next Platform, and two years ago it had none. That is a pretty impressive roadmap in a world where Google is testing a 20 qubit chip and hopes to have one running at 49 qubits before the year is out.

“We are trying to build a general purpose, universal quantum computer,” says Clarke. “This is not a quantum annealer, like the D-Wave machine. There are many different types of qubits, which are the devices for quantum computing, and one of the things that sets Intel apart from the other players is that we are focused on multiple qubit types. …”

Read more at The Next Platform

Why Linux Works

The Linux community works, it turns out, because the Linux community isn’t too concerned about work, per se. As much as Linux has come to dominate many areas of corporate computing – from HPC to mobile to cloud – the engineers who write the Linux kernel tend to focus on the code itself, rather than their corporate interests therein.

Such is one prominent conclusion that emerges from Dawn Foster’s doctoral work, examining collaboration on the Linux kernel. Foster, a former community lead at Intel and Puppet Labs, notes, “Many people consider themselves a Linux kernel developer first, an employee second.”

As Foster writes, “Even when they enjoy their current job and like their employer, most [Linux kernel developers] tend to look at the employment relationship as something temporary, whereas their identity as a kernel developer is viewed as more permanent and more important.”

Because of this identity as a Linux kernel developer first, and corporate citizen second, Linux kernel developers can comfortably collaborate even with their employer’s fiercest competitors. This works because the employers ultimately have limited ability to steer their developers’ work…

Read more at Datamation

Sneak Peek: ODPi Webinar on Data Governance – The Why and the How

We all use metadata every day. You may have found this blog post through a search, leveraging metadata tags and keywords. Metadata allows data practitioners to use data outside the application that created it, find the right data sets, and automate governance processes. Metadata has proven value today, yet many data platforms still do not have metadata support.

Furthermore, where metadata management does exist, it relies on proprietary formats and APIs. Proprietary tools support a limited range of data sources and governance actions, and combining their metadata to create an enterprise data catalogue can be an expensive effort. In an ideal world, metadata should move with the data and be augmented and processed through open APIs for permitted usages.

Enter Open Metadata, which enables various tools to connect to data & metadata repositories to exchange metadata.

Open Metadata has two major parts:

  1. OMRS – Open Metadata Repository Services makes it possible for various metadata repositories to exchange metadata. These repositories can come from different vendors or focus on specific subject areas.

  2. OMAS – Open Metadata Access Services provides specialized services to various types of tools and applications, enabling out-of-the-box connection to metadata. These tools include, but are not limited to:

    1. BI and Visualization tools

    2. Governance tools

    3. Integration tools and engines such as ETL and information virtualisation

OMAS enables subject matter experts to collaborate around the data, feeding back their knowledge about the data and the uses they have made of it, both to help others and to support economic evaluation of the data.


Open Metadata aims to provide data practitioners with an enterprise data catalogue that lists all of their data, where it is located, its origin (lineage), owner, structure, meaning, classification and quality, no matter where the data resides. Furthermore, new tools from any vendor would be able to connect to your data catalogue out of the box: no vendor lock-in and no expensive population of yet another proprietary, siloed metadata repository. Additionally, metadata would be added automatically to the catalogue as new data is created.
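To make that concrete, here is a minimal sketch in Python of the kind of record such a catalogue might hold for each data set. The field and function names are purely illustrative and are not the actual Open Metadata APIs:

from dataclasses import dataclass, field
from typing import List

@dataclass
class CatalogEntry:
    # Illustrative fields mirroring the attributes listed above
    name: str                                           # logical name of the data set
    location: str                                       # where the data physically resides
    owner: str
    lineage: List[str] = field(default_factory=list)    # origin / upstream sources
    structure: str = ""                                 # e.g. a schema reference
    meaning: str = ""                                   # business glossary term
    classification: str = ""                            # e.g. "PII", "public"
    quality: float = 0.0                                # data quality score

def find_by_classification(catalog: List[CatalogEntry], label: str) -> List[CatalogEntry]:
    # Hypothetical query that any compliant tool could issue against the catalogue
    return [entry for entry in catalog if entry.classification == label]

The point of an open approach is that any vendor's tool could read and write entries like this through the same open APIs, instead of each tool maintaining its own proprietary copy of the metadata.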

But how do you ensure the consistency, vendor neutrality, and cost-effectiveness of Open Metadata? The answer is Open Governance.

Open Governance enables automated capture of metadata and governance of data. It includes three frameworks:

  1. Open Connector Framework (OCF) for metadata driven access to data assets.

  2. Open Discovery Framework (ODF) for automated analysis of data and advanced metadata capture.

  3. Governance Action Framework (GAF) for automated governance enforcement, verification, exception management and logging.

Open Metadata and Open Governance together allow metadata to be captured when the data is created, to move with the data, and to be augmented and processed by any vendor's tools.


Open Metadata and Governance consists of:

  • Standardized, extensible set of metadata types

  • Metadata exchange APIs and notifications

  • Frameworks for automated governance

Open Metadata and Governance will allow you to have:

  • An enterprise data catalogue that lists all of your data, where it is located, its origin (lineage), owner, structure, meaning, classification and quality

  • New data tools (from any vendor) connecting to your data catalogue out of the box

  • Metadata added automatically to the catalogue as new data is created and analysed

  • Subject matter experts collaborating around the data

  • Automated governance processes protecting and managing your data

Dive deeper into this topic on Oct. 12 in a free webinar, as John Mertic, Director of ODPi at The Linux Foundation, hosts Srikanth Venkat, Senior Director of Product Management at Hortonworks; Ferd Scheepers, Chief Information Architect at ING; and Mandy Chessell, Distinguished Engineer and Master Inventor at IBM.

Register for this free webinar now.

 

Cloud Foundry Adds Native Kubernetes Support for Running Containers

Cloud Foundry, the open-source platform as a service (PaaS) offering, has become something of a de facto standard in the enterprise for building and managing applications in the cloud or in their own data centers. The project, which is supported by the Linux Foundation, is announcing a number of updates at its annual European user conference this week. Among these are support for container workloads and a new marketplace that highlights the growing Cloud Foundry ecosystem.

Cloud Foundry made an early bet on Docker containers, but with Kubo, which Pivotal and Google donated to the project last year, the project gained a new tool for allowing its users to quickly deploy and manage a Kubernetes cluster (Kubernetes being the Google-backed open-source container orchestration tool that itself is becoming the de facto standard for managing containers).

Read more at TechCrunch

We’re Just on the Edge of Blockchain’s Potential

No one could have seen blockchain coming. Now that it’s here, blockchain has the potential to completely reinvent the world of financial transactions, as well as other industries. In this interview, we talked to JAX London speaker Brian Behlendorf about the past, present, and future of this emerging technology.

JAXenter: Open source is crucial for the success of a lot of projects. Could you talk about why blockchain needs open collaboration from an engaged community?

Brian Behlendorf: I believe we are heading towards a future full of different blockchain ecosystems for different purposes. Many will be public, many private, some unpermissioned, some permissioned — and they’ll differ in their choice of consensus mechanism, smart contract platform, security protocols, and other attributes, and many will talk to each other. To keep this from becoming a confusing mess, or a platform war, collaboration on common software infrastructure is key. The Open Source communities behind Linux, Apache, and other successful platform technologies have demonstrated how to do this successfully.

Read more at JaxEnter

Measure Your Open Source Program’s Success

Open source programs are proliferating within organizations of all types, and if yours is up and running, you may have arrived at the point where you want to measure the program’s success. Many open source program managers are required to demonstrate the ROI of their programs, but even if there is no such requirement, understanding the metrics that apply to your program can help optimize it. That is where the free Measuring Your Open Source Program’s Success guide comes in. It can help any organization measure program success and can help program managers articulate exactly how their programs are driving business value.

Once you know how to measure your program’s success, publicizing the results — including the good, the bad, and the ugly — increases your program’s transparency, accountability, and credibility in open source communities. To see this in action, check out example open source report cards from Facebook and Google.

Read more at The Linux Foundation

Europe Pledges Support for Open Source Government Solutions

European Union & EFTA nations recognize open source software as a key driver of government digital transformation.

It was thus fitting that Estonia, which currently holds the EU presidency, brought together ministers from 32 countries (under the umbrellas of the EU and the European Free Trade Association) to adopt the Tallinn Declaration on E-Government, creating renewed political dynamism coupled with legal tools to accelerate the implementation of a range of existing EU policy instruments (e.g., the e-Government Action Plan and the ISA² program).

Perhaps the most significant development for open source supporters is the explicit recognition of open source software (OSS) as a key driver towards achieving ambitious governmental digitisation goals by 2020.

Read more at OpenSource.com

What’s Next in DevOps: 5 Trends to Watch

The term “DevOps” is typically credited to a 2008 presentation on agile infrastructure and operations. Now ubiquitous in IT vocabulary, the mashup word is less than 10 years old: We’re still figuring out this modern way of working in IT.

Sure, people who have been “doing DevOps” for years have accrued plenty of wisdom along the way. But most DevOps environments – and the mix of people and culture, process and methodology, and tools and technology – are far from mature.

More change is coming. That’s kind of the whole point. “DevOps is a process, an algorithm,” says Robert Reeves, CTO at Datical. “Its entire purpose is to change and evolve over time.”

What should we expect next? Here are some key trends to watch, according to DevOps experts.

Read more at Enterprisers Project

Examining Network Connections on Linux Systems

There are a lot of commands available on Linux for looking at network settings and connections. In today’s post, we’re going to run through some very handy commands and see how they work.

ifquery command

One very useful command is the ifquery command. This command should give you a quick list of network interfaces. However, you might only see something like this — showing only the loopback interface:

$ ifquery --list
lo

If this is the case, your /etc/network/interfaces file doesn’t include information on network interfaces except for the loopback interface. You can add lines like the last two in the example below — assuming DHCP is used to assign addresses — if you’d like it to be more useful.
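For example, a minimal /etc/network/interfaces along those lines might look like the following, assuming the interface is named eth0 and gets its address via DHCP (substitute the interface name your system actually uses):

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

With the last two lines in place, ifquery --list should report eth0 in addition to lo.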

Read more at NetworkWorld

In Device We Trust: Measure Twice, Compute Once with Xen, Linux, TPM 2.0 and TXT

Is it a small tablet or large phone? Is it a phone or broadcast sensor? Is it a server or virtual desktop cluster? Is x86 emulating ARM, or vice-versa? Is Linux inspiring Windows, or the other way around? Is it microcode or hardware? Is it firmware or software? Is it microkernel or hypervisor? Is it a security or quality update? Is anything in my device the same as yesterday? When we observe our evolving devices and their remote services, what can we question and measure?

General Purpose vs. Special Purpose Ecosystems

The general-purpose computer now lives in a menagerie of special-purpose devices and information appliances. Yet software and hardware components within devices are increasingly flexible, blurring category boundaries. With hardware virtualization on x86 and ARM platforms, the ecosystems of multiple operating systems can coexist on a single device. Can a modular and extensible multi-vendor architecture compete with the profitability of vertically integrated products from a single vendor?

Operating systems evolved alongside applications for lucrative markets. PC desktops were driven by business productivity and media creation. Web browsers abstracted OS differences, as software revenue shifted to e-commerce, services, and advertising. Mobile devices added sensors, radios and hardware decoders for content and communication. Apple, now the most profitable computer company, vertically integrates software and services with sensors and hardware. Other companies monetize data, increasing demand for memory and storage optimization.

Some markets require security or safety certifications: automotive, aviation, marine, cross domain, industrial control, finance, energy, medical, and embedded devices. As software “eats the world,” how can we modernize vertical markets without the economies of scale seen in enterprise and consumer markets? One answer comes from device architectures based on hardware virtualization, Xen, disaggregation, OpenEmbedded Linux and measured launch. OpenXT derivatives use this extensible, open-source base to enforce policy for specialized applications on general-purpose hardware, while reusing interoperable components.

OpenEmbedded Linux supports a range of x86 and ARM devices, while Xen isolates operating systems and unikernels. Applications and drivers from multiple ecosystems can run concurrently, expanding technical and licensing options. Special-purpose software can be securely composed with general-purpose software in isolated VMs, anchored by a hardware-assisted root of trust defined by customer and OEM policies. This architecture allows specialist software vendors to share platform and hardware support costs, while supporting emerging and legacy software ecosystems that have different rates of change.

On the Shoulders of Hardware, Firmware and Software Developers

Figure: System Architecture, from NIST SP800-193 (Draft), Platform Firmware Resiliency

By the time a user-facing software application begins executing on a powered-on hardware device, an array of firmware and software is already running on the platform.  Special-purpose applications’ security and safety assertions are dependent on platform firmware and the developers of a computing device’s “root of trust.”

If we consider the cosmological “Turtles All The Way Down” question for a computing device, the root of trust is the lowest-level combination of hardware, firmware and software that is initially trusted to perform critical security functions and persist state. Hardware components used in roots of trust include the TCG’s Trusted Platform Module (TPM), ARM’s TrustZone-enabled Trusted Execution Environment (TEE), Apple’s Secure Enclave co-processor (SEP), and Intel’s Management Engine (ME) in x86 CPUs. TPM 2.0 was approved as an ISO standard in 2015 and is widely available in 2017 devices.

TPMs enable key authentication, integrity measurement and remote attestation. TPM key generation uses a hardware random number generator, with private keys that never leave the chip. TPM integrity measurement functions ensure that sensitive data like private keys are only used by trusted code. When software is provisioned, its cryptographic hash is used to extend a chain of hashes in TPM Platform Configuration Registers (PCRs). When the device boots, sensitive data is only unsealed if measurements of running software can recreate the PCR hash chain that was present at the time of sealing. PCRs record the aggregate result of extending hashes, while the TPM Event Log records the hash chain.  
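The extend operation itself is simple. Here is a rough Python sketch of the idea, assuming a SHA-256 PCR bank and using made-up component names in place of real code measurements:

import hashlib

def extend(pcr_value: bytes, measurement: bytes) -> bytes:
    # New PCR value = hash of (old PCR value || measurement)
    return hashlib.sha256(pcr_value + measurement).digest()

# A PCR starts at all zeroes and accumulates every measurement extended into it.
pcr = bytes(32)  # 32-byte PCR in a SHA-256 bank
for component in [b"boot-block", b"bios", b"bootloader", b"kernel"]:
    pcr = extend(pcr, hashlib.sha256(component).digest())

# Data sealed to this PCR value will only unseal if the same measurements
# are extended in the same order on a later boot.
expected_pcr = pcr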

Measurements are calculated by hardware, firmware and software external to the TPM. There are Static (SRTM) and Dynamic (DRTM) Roots of Trust for Measurement. SRTM begins at device boot, when the BIOS boot block measures the BIOS before execution. The BIOS then executes, extending configuration and option ROM measurements into static PCRs 0-7. TPM-aware boot loaders like TrustedGrub can extend a measurement chain from the BIOS up to the Linux kernel. These software identity measurements enable relying parties to make trusted decisions within specific workflows.
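A relying party makes that decision by replaying the event log and comparing the result with the PCR values the TPM reports. Below is a minimal sketch of that check, with the same simplifications as the sketch above and the event log modeled as a list of (pcr_index, digest) pairs:

import hashlib

def extend(pcr_value: bytes, measurement: bytes) -> bytes:
    return hashlib.sha256(pcr_value + measurement).digest()

def replay_event_log(event_log, num_pcrs=8):
    # Recompute the expected static PCRs 0-7 from the logged measurements.
    pcrs = [bytes(32) for _ in range(num_pcrs)]
    for index, digest in event_log:
        pcrs[index] = extend(pcrs[index], digest)
    return pcrs

def verify(event_log, quoted_pcrs) -> bool:
    # The log is consistent only if the replay matches what the TPM reports.
    return replay_event_log(event_log) == list(quoted_pcrs)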

DRTM enables “late launch” of a trusted environment from an untrusted one at an arbitrary time, using Intel’s Trusted Execution Technology (TXT) or AMD’s Secure Virtual Machine (SVM). With Intel TXT, the CPU instruction SENTER resets CPUs to a known state, clears dynamic PCRs 17-22 and validates the Intel SINIT ACM binary to measure Intel’s tboot MLE, which can then measure Xen, Linux or other components. In 2008, Carnegie Mellon’s Flicker used late launch to minimize the Trusted Computing Base (TCB) for isolated execution of sensitive code on AMD devices, during the interval between suspend/resume of untrusted Linux.  

If DRTM enables launch of a trusted Xen or Linux environment without reboot, is SRTM still needed? Yes, because attacks are possible via privileged System Management Mode (SMM) firmware, UEFI Boot/Runtime Services, Intel ME firmware, or Intel Active Management Technology (AMT) firmware. Measurements for these components can be extended into static PCRs, to ensure they have not been modified since provisioning. In 2015, Intel released documentation and reference code for an SMI Transfer Monitor (STM), which can isolate SMM firmware on VT-capable systems. As of September 2017, an OEM-supported STM is not yet available to improve the security of Intel TXT.

Can customers secure devices while retaining control over firmware?  UEFI Secure Boot requires a signed boot loader, but customers can define root certificates. Intel Boot Guard provides OEMs with validation of the BIOS boot block. Verified Boot requires a signed boot block and the OEM’s root certificate is fused into the CPU to restrict firmware. Measured Boot extends the boot block hash into a TPM PCR, where it can be used for measured launch of customer-selected firmware. Sadly, no OEM has yet shipped devices which implement ONLY the Measured Boot option of Boot Guard.

Measured Launch with Xen on General Purpose Devices

OpenXT 7.0 has entered release candidate status, with support for Kaby Lake devices, TPM 2.0, OE meta-measured, and forward seal (upgrade with pre-computed PCRs).  

OpenXT 6.0 on a Dell T20 Haswell Xeon microserver, after adding a SATA controller, a low-power AMD GPU and a dual-port Broadcom NIC, can be configured with measured launch of Windows 7 (GPU passthrough), FreeNAS 9.3 (SATA passthrough), pfSense 2.3.4, Debian Wheezy, OpenBSD 6.0, and three NICs, one per passthrough driver VM.

Does this demonstrate a storage device, build server, firewall, middlebox, desktop, or all of the above? With architectures similar to Qubes and OpenXT derivatives, we can combine specialized applications with best-of-breed software from multiple ecosystems. A strength of one operating system can address the weakness of another.

Measurement and Complexity in Software Supply Chains

While ransomware trumpets cryptocurrency demands to shocked users, low-level malware often emulates Sherlock Holmes: the user sees no one. Malware authors modify code behavior in response to “our method of questioning”, simulating heisenbugs. As system architects pile abstractions, self-similarity appears as hardware, microcode, emulator, firmware, microkernel, hypervisor, operating system, virtual machine, namespace, nesting, runtime, and compiler expand onto neighboring territory. There are no silver bullets to neutralize these threats, but cryptographic measurement of source code and stateless components enables whitelisting and policy enforcement in multi-vendor supply chains.

Even for special-purpose devices, the user experience bar is defined by mass-market computing. Meanwhile, Moore’s Law is ending, ARM remains fragmented, x86 PC volume is flat, new co-processors and APIs multiply, threats mutate and demand for security expertise outpaces the talent pool. In vertical markets which need usable, securable and affordable special-purpose devices, Xen virtualization enables innovative applications to be economically integrated with measured, interoperable software components on general-purpose hardware. OpenXT is an open-source showcase for this scalable ecosystem. Further work is planned on reference architectures for measured disaggregation with Xen and OpenEmbedded Linux.

If you are interested in virtualization and security, watch my presentation from the 2017 Xen Project Summit and join the OpenXT and OpenEmbedded communities! If you are attending the 2017 Embedded Linux Conference Europe, visit the OpenXT measured launch demo at the Technical Showcase on October 23 and attend Matthew Garrett’s talk, “Making Trusted Boot Practical on Linux” on October 24.