Blockchain, with its ability to “embed trust,” can help restore trust at a time when it is low, according to Sally Eaves, a chief technology officer and strategic advisor to the Forbes Technology Council, speaking at The Linux Foundation’s Open FinTech Forum in New York City.
People’s trust in business, media, government, and non-governmental organizations (NGOs) is at a 17-year low, and businesses are suffering as a result, Eaves said.
Additionally, Eaves said, 87 percent of millennials believe business success should be measured by more than just financial performance. People want jobs with real meaning and purpose, she added.
To provide further context, Eaves noted the following urgent global challenges:
1.5 billion people cannot prove their identity (which has massive implications not just for banking but for education as well)
2 billion people worldwide do not have a bank account or access to a financial institution
Identity fraud is estimated to cost the UK millions of euros annually.
One of the most critical pieces of software on a shared cluster is the resource manager, commonly called a job scheduler, which allows users to share the system in an efficient and cost-effective way. The idea is fairly simple: Users write small scripts, commonly called “jobs,” that define what they want to run and the resources required, then submit them to the resource manager. When the resources are available, the resource manager executes the job script on behalf of the user. Typically this approach is used for batch jobs (i.e., jobs that are not interactive), but it can also be used for interactive jobs, for which the resource manager gives you a shell prompt on the node that is running your job.
Some resource managers are commercially supported and some are open source, either with or without a support option. The list of candidates is fairly long, but the one I talk about in this article is Slurm. …The Slurm architecture is very similar to that of other job schedulers. Each node in the cluster runs a daemon, which in this case is named slurmd. The resources are referred to as nodes. The daemons can communicate in a hierarchical fashion that accommodates fault tolerance. On the Slurm master node, the daemon is slurmctld, which also has failover capability.
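To make that workflow concrete, here is a rough Go sketch (not from the article; in practice users simply submit such scripts from the shell with sbatch). The job name, resource requests, and file name are placeholders, and the program assumes Slurm is installed on the machine where it runs.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	// A small batch "job": ask for one node for ten minutes and run a
	// single command when the scheduler starts the job on our behalf.
	script := `#!/bin/bash
#SBATCH --job-name=example
#SBATCH --nodes=1
#SBATCH --time=00:10:00
hostname
`
	if err := os.WriteFile("example.job", []byte(script), 0o755); err != nil {
		log.Fatal(err)
	}

	// Hand the script to the resource manager; Slurm queues it and runs
	// it when the requested resources become available.
	out, err := exec.Command("sbatch", "example.job").CombinedOutput()
	if err != nil {
		log.Fatalf("sbatch failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}
```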
Many new gRPC users are surprised to find that Kubernetes’s default load balancing often doesn’t work out of the box with gRPC. For example, here’s what happens when you take a simple gRPC Node.js microservices app and deploy it on Kubernetes:
While the voting service displayed here has several pods, it’s clear from Kubernetes’s CPU graphs that only one of the pods is actually doing any work—because only one of the pods is receiving any traffic. Why?
In this blog post, we describe why this happens, and how you can easily fix it by adding gRPC load balancing to any Kubernetes app with Linkerd, a CNCF service mesh and service sidecar.
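In short, gRPC is built on HTTP/2: a client dials once and then multiplexes every request over that single long-lived connection, so a balancer that only acts when a connection is established (as kube-proxy does) pins all of a client's traffic to one pod. As a hedged illustration in Go (the demo app above is Node.js, but the behavior is the same in any language; the service name and port here are placeholders):

```go
package main

import (
	"log"

	"google.golang.org/grpc"
)

func main() {
	// The client dials once; every subsequent RPC is multiplexed over
	// this single long-lived HTTP/2 connection. Connection-level load
	// balancing therefore sends all of this client's requests to
	// whichever pod accepted the connection. A service mesh like Linkerd
	// sidesteps this by proxying the connection and balancing individual
	// requests across the backend pods.
	conn, err := grpc.Dial("voting-svc:50051", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	defer conn.Close()

	// ... generated service stubs would be created from conn here ...
}
```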
We’re running out of time to tackle climate change. Could an open source, distributed approach build the necessary momentum? The executive director of LF Energy tells Techworld about the new initiative, which already has some enterprises on board.
The prospects from the UN’s most recent climate report are bleak. There are fewer than two decades until the point of no return for the planet’s climate, and the leaders of major countries appear to be backing away from the political will needed to address the existential threat.
But the roadblocks might not be as daunting as they first appear. Shuli Goodman, executive director of the newly created LF Energy group, hopes to fundamentally transform the way energy is distributed, reduce waste, and build new models that could be scaled out with an open source framework.
Organizations looking to better locate, understand, manage, and gain value from their data have a new industry standard to leverage. ODPi, a nonprofit Linux Foundation organization focused on accelerating the open ecosystem of big data solutions, recently announced ODPi Egeria, a new project that supports the free flow of metadata between different technologies and vendor offerings.
Recent data privacy regulations such as GDPR have brought data governance and security concerns to the forefront for enterprises, driving the need for a standard to ensure that data provenance and management are clear and consistent—supporting the free flow of metadata between different technologies and vendor offerings. Egeria enables this as the only open source-driven solution designed to set a standard for leveraging metadata in line-of-business applications and enabling metadata repositories to federate across the enterprise.
The first release of Egeria focuses on creating a single virtual view of metadata. It can federate queries across different metadata repositories and can synchronize metadata between repositories. The synchronization protocol controls what is shared and with which repositories, and it ensures that updates to metadata can be made with integrity.
Most Linux users have heard about the open source RISC-V ISA and its potential to challenge proprietary Arm and Intel architectures. Most are probably aware that some RISC-V based CPUs, such as SiFive’s 64-bit Freedom U540 found on its HiFive Unleashed board, are designed to run Linux. What may come as a surprise, however, is how quickly Linux support for RISC-V is evolving.
“This is a good time to port Linux applications to RISC-V,” said Comcast’s Khem Raj at an Embedded Linux Conference Europe presentation last month. “You’ve got everything you need. Most of the software is upstream so you don’t need forks,” he said.
By adopting an upstream-first policy, the RISC-V Foundation is accelerating Linux-on-RISC-V development both now and in the future. Early upstreaming helps avoid forked code that needs to be sorted out later. Raj offered specifics on the different levels of RISC-V support, from the Linux kernel to major Linux distributions, as well as related software from glibc to U-Boot (see further below).
The road to RISC-V Linux has also been further accelerated thanks to the enthusiasm of the Linux open source community. Penguinistas see the open source computing architecture as a continuation of the mission of Linux and other open source projects. Since IoT is an early RISC-V target, the interest is particularly keen in the open source Linux SBC community. The open hardware movement recently expanded to desktop PCs with System76’s Ubuntu-driven Thelio system.
Processors remain the biggest exception to open hardware. RISC-V is a step in the right direction for CPUs, but it lacks a spec for graphics, which, with the rise of machine vision, edge AI, and multimedia applications, is becoming increasingly important in embedded systems. There’s progress on this front as well, with an emerging project to create an open RISC-V based GPU called Libre RISC-V. More details can be found in this Phoronix story.
SiFive launches new Linux-driven U74 core designs
RISC-V is also seeing new developments on the CPU front. Last week, SiFive, which is closely associated with the UC Berkeley team that developed the architecture, announced a second-generation set of RISC-V CPU core designs called the IP 7 Series. The IP 7 Series features the Linux-friendly U74 and U74-MC cores. These quad-core, Cortex-A55-like processors, which should appear in SoCs in 2019, are faster and more power efficient than the U540.
The new U74 cores will support future SoC designs of up to eight cores that mix and match the U74 with SiFive’s new next-gen MCU cores: the Cortex-M7-like E76 and the Cortex-R8-like S76. The U74-MC model even features its own built-in S7 MCU for real-time processing.
Although much of the early RISC-V business has been focused on MCUs, SiFive is not alone in building Linux-driven RISC-V designs. Earlier this summer, the Shakti Project, backed by the Indian government, demonstrated Linux booting on a homegrown 400MHz Shakti RISC-V processor.
A snapshot of Linux support for RISC-V
In his ELC presentation, called “Embedded Linux on RISC-V Architecture — Status Report,” Raj, who is an active contributor to RISC-V as well as the OpenEmbedded and Yocto projects, revealed the latest updates on RISC-V support in the Linux kernel and related software. The report has a rather short shelf life, Raj admitted: “The software is developing very fast, so what I say today may be obsolete tomorrow — we’ve already seen a lot of basic tools, compilers, and toolchain support landing upstream.”
Raj started with a brief overview of RISC-V, explaining how it supports 32-, 64-, and even future 128-bit instruction sets. Attached to these base ISAs are extensions such as integer multiply/divide, atomic memory access, single- and double-precision floating point, and compressed instructions.
The initial Linux kernel support adopts the most commonly used profile for Linux: RV64GC (LP64 ABI). The G and the C at the end of the RV64 name stand for general-purpose and compressed, respectively.
RISC-V has had a stable ABI (application binary interface) in the upstream Linux kernel since release 4.15. According to Raj, the recent 4.19 release added QEMU virt board drivers “thanks to major contributions from UC Berkeley, SiFive, and Andes Technology.”
You can now run many other Linux-related components on a SiFive U540 chip, including binutils 2.28, gcc 7.0, glibc 2.27 and 2.28 (32-bit), and newlib 3.0 (for bare metal bootstrapping). For the moment, gdb 8.2 is available only for bare-metal development.
In terms of bootloaders, Coreboot offered early support, and U-Boot 2018.11 recently added RISC-V virt board support upstream. PK/BBL is now upstream on the RISC-V GitHub page.
The OpenEmbedded/Yocto Project (OE/Yocto) was the first official Linux development platform port, with core support upstreamed in the 2.5 release. Among full-fledged Linux distributions, Fedora is the farthest along. Fedora, which has done a lot of the “initial heavy lifting,” finished its bootstrap back in March, said Raj. In addition, its “Koji build farm is turning out RISC-V RPMs like any other architecture,” he added. Fedora 29 (Rawhide) offers specific support for the RISC-V version of QEMU.
Debian still lacks a toolchain for cross-build development on RISC-V, but cross building is already possible, said Raj. Buildroot now has a 64-bit RISC-V port, and a 32-bit port was recently submitted.
Raj went on to detail RISC-V porting progress for the LLVM compiler and the musl C library. Further behind, but in full swing, are ports for OpenOCD, UEFI, GRUB, V8, Node.js, Rust, and Golang, among others. For the latest details, see the RISC-V software status page, as well as the other URLs displayed toward the end of Raj’s ELC video below.
Addressing the rapidly growing user base around GraphQL, The Linux Foundation has launched the GraphQL Foundation to build a vendor-neutral community around the query language for APIs (application programming interfaces).
“Through the formation of the GraphQL Foundation, I hope to see GraphQL become industry standard by encouraging contributions from a broader group and creating a shared investment in vendor-neutral events, documentation, tools, and support,” said Lee Byron, co-creator of GraphQL, in a statement.
“GraphQL has redefined how developers work with APIs and client-server interactions,” said Chris Aniszczyk, Linux Foundation vice president of developer relations…
Maybe you’ve heard of Go. It was first introduced in 2009, but like any new programming language, it took a while for it to mature and stabilize to the point where it became useful for production applications. Nowadays, Go is a well-established language that is used for network and database programming, web development, and writing DevOps tools. It was used to write Docker, Kubernetes, Terraform and Ethereum. Go is accelerating in popularity, with adoption increasing by 76% in 2017, and now there are Go user groups and Go conferences. Whether you want to add to your professional skills, or are just interested in learning a new programming language, you may want to check it out.
Why Go?
Go was created at Google by a team of three programmers: Robert Griesemer, Rob Pike, and Ken Thompson. The team decided to create Go because they were frustrated with C++ and Java, which over the years had become cumbersome and clumsy to work with. They wanted to bring enjoyment and productivity back to programming.
…The idea of Go’s design is to have the best parts of many languages. At first, Go looks a lot like a hybrid of C and Pascal (both of which are successors to Algol 60), but looking closer, you will find ideas taken from many other languages as well.
Go is designed to be a simple compiled language that is easy to use, while allowing concisely-written programs that run efficiently. Go lacks extraneous features, so it’s easy to program fluently, without needing to refer to language documentation while programming. Programming in Go is fast, fun, and productive.
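As a small taste of that simplicity, here is a generic illustration (not an example from the article): a complete program that fans work out to several goroutines and collects the results over a channel, using nothing beyond the standard library.

```go
package main

import (
	"fmt"
	"sync"
)

// square computes n*n and sends the result on the results channel.
func square(n int, results chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	results <- n * n
}

func main() {
	results := make(chan int, 5) // buffered, so the workers never block
	var wg sync.WaitGroup

	// Launch one lightweight goroutine per input value.
	for i := 1; i <= 5; i++ {
		wg.Add(1)
		go square(i, results, &wg)
	}

	wg.Wait()      // wait for every worker to finish
	close(results) // then close the channel so the range loop ends

	for r := range results {
		fmt.Println(r)
	}
}
```

Concurrency primitives like goroutines and channels are built into the language itself, which is part of what makes Go a popular choice for network services and DevOps tooling.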
DevOps and cloud computing have become two of the main ways companies can achieve this needed transformation, though the relationship between the two is not easily reconciled—DevOps is about process and process improvement, while cloud computing is about technology and services. It’s important to understand how the cloud and DevOps work together to help businesses achieve their transformation goals.
Different organizations define DevOps in different ways. This article does not debate which definition is correct, but rather presents both to focus on the cloud’s benefit to DevOps. That said, DevOps definitions generally fall into two camps:
In some organizations, DevOps means developer-friendly operations—IT operations are still run separately, but in a way that is much friendlier to developers (e.g., developers get self-service catalogs for provisioning infrastructure, or technology-enabled pipelines for deploying new code).
In other organizations, DevOps is practiced as a single consolidated team—developers take on operations responsibilities and vice versa.
Companies that focus on developer-friendly operations often use cloud computing to boost developer productivity and efficiency. Cloud computing gives developers more control over their own components, resulting in shorter wait times. This application-specific architecture makes it easy for developers to own more components. By using cloud tools and services to automate the process of building, managing, and provisioning through code, service teams speed up the development process, eliminate possible human error, and establish repeatability.
There are times when you want to browse the Internet privately, access geo-restricted content or bypass any intermediate firewalls your network might be enforcing.
One option is to use a VPN, but that requires installing client software on your machine and either setting up your own VPN server or subscribing to a VPN service.
A simpler alternative is to route your local network traffic through an encrypted SOCKS proxy tunnel. This way, all of your applications using the proxy will connect to the SSH server, and the server will forward all the traffic to its actual destination. Your ISP (internet service provider) and other third parties will not be able to inspect your traffic or block your access to websites.
This tutorial will walk you through the process of creating an encrypted SSH tunnel and configuring the Firefox and Google Chrome web browsers to use the SOCKS proxy.
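Browsers are not the only clients that can use such a tunnel. As a rough sketch under a couple of assumptions—that a SOCKS proxy is already listening on 127.0.0.1:1080 (for example, one opened with ssh -D 1080 user@your-server) and that the target URL is reachable from the SSH server—here is how a Go program could send its HTTP traffic through the same proxy.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"

	"golang.org/x/net/proxy"
)

func main() {
	// Connect to the local SOCKS5 endpoint exposed by the SSH tunnel.
	// The address is an assumption; adjust it to match your ssh -D port.
	dialer, err := proxy.SOCKS5("tcp", "127.0.0.1:1080", nil, proxy.Direct)
	if err != nil {
		log.Fatalf("cannot create SOCKS5 dialer: %v", err)
	}

	// Route all of this client's connections through the proxy, so the
	// remote site only ever sees the SSH server's address.
	client := &http.Client{
		Transport: &http.Transport{Dial: dialer.Dial},
	}

	resp, err := client.Get("https://example.com")
	if err != nil {
		log.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("fetched %d bytes via the tunnel\n", len(body))
}
```

The browser configuration covered in the tutorial works the same way: point the application's SOCKS settings at the local end of the tunnel and let SSH carry the traffic.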