
Meet Hyperledger: An “Umbrella” for Open Source Blockchain & Smart Contract Technologies

It’s hard to believe I’ve been working at The Linux Foundation on Hyperledger for four months already. I’ve been blown away by the amount of interest and support the project has received since the beginning of the year. As things really start to take off, I think it’s important to take a step back to reflect and recapitulate why and what we’re doing with Hyperledger. Simply put, we see Hyperledger as an “umbrella” for software developer communities building open source blockchain and related technologies. In this blog post, I’m going to try to define what we mean by “umbrella,” that is, the rationale behind it and how we expect that model to work towards building a neutral, foundational community.

The Hyperledger Project was initially seeded with various blockchain-supporting commercial members, some of whom had interesting internal or nascent open source efforts that needed the kind of home that the Linux Foundation could provide. It emerged at a time when it was clear that three points needed to be made to the market:

  1. Open, transparent governance of the software development process for blockchain technologies matters
  2. Intellectual property provenance and safeguards of the software matters
  3. Key use cases are driving permissioned or “consortium” chain models

Read more at The Hyperledger Project

OpenStack API Benchmarking and Scaling — 3 Test Cases

Have you ever been curious how much of a workload the OpenStack control plane can handle before needing to scale horizontally? Based on the load, how will API performance change? How much overhead will load on the OpenStack APIs add to my application deployment timeline? What behavior should I look for to determine when it’s time to add more control plane resources?

These are just a few of the questions I have been asked about operating OpenStack clouds. While API benchmarking can be approached in many different ways, it is a good idea to have some high-level frame of reference.

There are about a million ways to measure performance, and your approach may differ from what I have done. 
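As a rough first frame of reference (my own sketch, not from the Rackspace post), you can simply time individual API calls with the OpenStack command-line client before reaching for a dedicated benchmarking tool such as Rally. This assumes the client is installed and your credentials are already sourced:

$ time openstack server list

$ time openstack network list

Repeating the same calls while the cloud is under increasing load gives a crude but useful picture of how API response times degrade before it’s time to add more control plane resources.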

Read more at Rackspace Blog

Keynote: Open Source is a Positive-Sum Game – Sam Ramji, CEO, Cloud Foundry Foundation

https://www.youtube.com/watch?v=qvvwAUZYdNk?list=PLGeM09tlguZTvqV5g7KwFhxDlWi4njK6n

Sam Ramji wants to get as many people as possible to come and play this amazing positive-sum game that we call open source software.

Who Needs the Internet of Things?

This week, the Raspberry Pi Foundation announced it has sold more than 10 million Raspberry Pi boards and celebrated the milestone by releasing a new Raspberry Pi Starter Kit. While many of these Linux-driven hacker boards were used for the foundation’s original purpose — creating a low-cost computer for computer education — a large percentage have been sold to hobbyists and commercial developers working on Internet of Things (IoT) projects ranging from home automation to industrial sensor networks.

Linux-driven open source and commercial single board computers and modules sit at the heart of the IoT phenomenon. They are usually found in the form of gateways or hubs that aggregate sensor data from typically wireless, sensor-equipped endpoints. Sometimes these endpoints run Linux as well, but more often they are simpler, lower-power MCU-driven devices such as Arduino-based boards. Linux and Windows run the show in the newly IoT-savvy cloud platforms that are emerging to monitor systems and analyze data fed from the gateways in so-called fog ecosystems.

Over the next few weeks, I’ll be analyzing the IoT universe, with a special focus on Linux and other open source technologies used in home and industrial automation. I’ll look at major open source products and projects, IoT-oriented hacker boards, security and privacy issues, and future trends.

An Expanding Definition

Much has changed since 2013, when the Internet of Things emerged as the next shiny tech bauble. The challenge in understanding IoT is not only that the technology is constantly evolving, but that the definition keeps expanding beyond its original focus on machine-to-machine (M2M) applications that communicate without human intervention.

Many analyses include a whole range of what we used to call embedded computing gear, including high-end networking and digital signage equipment. Other definitions more logically include drones, robots, automotive computers, and wearables. No wonder market estimates and forecasts are all over the map, ranging up to McKinsey’s projection that IoT will be a $6.2 trillion industry by 2025.

In this series, we will focus primarily on automation solutions that rely on numerous wireless, low-power sensor endpoints. However, we will also explore the many concepts and technologies IoT shares with emerging mobile autonomous devices and wearables. A drone, for example, can be used as a flying sensor array that is integrated within a larger IoT network.

Almost all the forecasts agree that IoT is going to be huge, and will substantially change our lives. Theoretically, IoT will make business more efficient, a claim that is backed up by some early anecdotal evidence. Farmers are more closely monitoring crops with the help of sensor networks to ensure a better yield, and factory owners are monitoring operations to spot maintenance issues without requiring costly shutdowns.

Electric utilities have been early IoT users, using sensors to monitor equipment and help customers reduce energy bills. If and when we start taking energy consumption and climate change seriously, IoT will be one of the key tools for documenting the problem and helping to solve it.

Major contractors have begun to add sensors to buildings and other large infrastructure as they’re being built, hooking them up to simulation engines to spot flaws, inefficiencies, and costly over-engineering before the problems are baked into the design. A few forward-looking cities such as Singapore are using IoT to monitor water networks for leaks, and the shipping industry is beginning to add sensors to crates of perishable food or medicines.

In the home, the payoff is muddier, but that hasn’t stopped millions from buying commercial home automation hubs and ecosystems such as Nest or SmartThings, or hacking together their own networks using open source boards. Energy savings lead the way here, followed by security and remote monitoring, and automation gizmos such as a timed sprinkler system.

Some home applications, such as automated window shades, are mere conveniences that will only make it harder to leave the couch and get some modest indoor exercise. For many hobbyists, however, the cost of setting up such IoT gizmos is more than compensated by the joy of invention and control.

Security and Privacy Challenges

Video surveillance is often considered to be an IoT application, and cameras are bundled with many home automation systems. Video, after all, is nothing but a visual sensor, even if it’s one that requires more processing power and higher-bandwidth communications.

The controversial role of surveillance, which extends beyond video to other forms of home automation monitoring, has led some to decry IoT as the enemy of privacy. Home surveillance may make it easier to keep tabs on pets, small children, and the elderly, but it also makes it easier to spy on each other, threatening traditional bonds of trust.

The privacy issue is also intertwined with legitimate fears about the security vulnerabilities of IoT gear. Most commercial home automation systems offer a cloud component, which is useful for external communications, updates, video storage, and increasingly voice response and self-learning analytics. Yet a cloud connection also expands the potential for corporate information harvesting, or even worse, black hats gaining access to the cloud platform to steal personal information or attack systems such as security and heating systems. In response to these vulnerabilities, many open source automation projects promote a localized approach where you control your own cloud, even if it’s at the expense of extended functionality.

Privacy and security aren’t the only challenges facing IoT. Even within the home, let alone a factory, the complexity of integration and interoperability can be mind-boggling. Standards organizations try to bridge the gaps between different commercial and open source ecosystems, but that leaves the question of who will bridge the gaps between the standards. In the open source world, two major players are IoTivity and AllSeen, but it’s too early to say how the standards question will shake out.

Technical challenges also remain before IoT will reach its true potential. Yet all the key technologies have passed the thresholds required for substantial ROI. Sensors, wireless radios, and processors are getting smaller, cheaper, and more power efficient. The hard part is hooking it all up.

Read the next article in this series, 21 Open Source Projects for IoT.

Interested in learning how to adapt Linux to an embedded system? Check out The Linux Foundation’s Embedded Linux Development course.

Everyone Wins With Open Source Software

As open source software matures and is used by more and more major corporations, it is becoming clear that the enterprise software game has changed. Sam Ramji, CEO of the Cloud Foundry Foundation, believes that open source software is a positive sum game, as reflected in his keynote at ApacheCon in Vancouver in May.

Invoking his love of game theory, Ramji stated emphatically that open source software is a positive-sum game, where the more contributors there are to the common good, the more good there is for everyone. This idea is the opposite of a zero-sum game, where if someone benefits or wins, then another person must suffer, or lose.

Legacy software and hardware used to be a zero-sum game, thanks to the lock-in of choosing a single provider. But with the dawn of the age of cloud computing, things have begun to change.

“With a software-driven economy, the old ideas of zero-sum games no longer make sense,” Ramji said. “We need to create trust more broadly, so we can have more positive sum games. We need to update our thinking. With open source as a positive-sum game, we each have the responsibility to share what we’re learning, and bring more players into the game.”

Ramji himself said this idea is not profound for individuals; the whole open source software idea is built around many contributors adding to the common good. But when you take that to a corporate level, with major competitors contributing code to projects that could end up being used against them, you’re looking at a major shift in business thinking.

“It’s one thing for denizens of a non-software industry, like transportation or telecommunications, to decide to make software a free shared good,” Ramji said. “Software companies have always competed with each other based on software.”

What has caused that shift? More and more people who can create good software.

“The specialness of software and the price buyers are willing to pay historically has deteriorated,” Ramji said. “It used to be that software was so hard that the only way you could get it was to pay for proprietary software, and you’d be investing in a long-term relationship with the company that had the only engineers who could change it, with phrases like ‘lock in’ and ‘the devil you know.’

“This has put significant price pressure on software industry leaders. Their corporate structure is based on the high margins that customers were willing to pay. Proprietary software, for a long time, enjoyed margins of 85 or 90 percent. When that margin gets threatened, the company’s structure gets challenged.”

The way to overcome that challenge and ensure the future of the software industry, Ramji said, is trust. And trust is built from good governance, clear sets of boundaries and rules, and enforcement of those rules.

“I think that trust, its absence, and the process for bootstrapping trust are at the heart of the current shift in the structure of the software industry,” Ramji said. “I think that shift is to non-profit foundations.”

That is why the work of foundations like Apache Software and Cloud Foundry is so important: They help large companies who have the most to contribute to open source software understand they also have the most to gain.

“Our calling here is to understand the phenomenon as best as possible, share our knowledge of the higher order, the meta structure of open source so we can get as many people as possible into this amazing game we call open source software,” Ramji said.

Watch the complete presentation below:

https://www.youtube.com/watch?v=qvvwAUZYdNk?list=PLGeM09tlguZTvqV5g7KwFhxDlWi4njK6n


Port Binding in Cloud-Native Apps

This is an excerpt from Kevin Hoffman’s free ebook, Beyond the Twelve-Factor App.

In Beyond the Twelve-Factor App, I present a new set of guidelines that builds on Heroku’s original 12 factors and reflects today’s best practices for building cloud-native applications. I have changed the order of some to indicate a deliberate sense of priority, and added factors such as telemetry, security, and the concept of “API first” that should be considerations for any application that will be running in the cloud.

These new 15-factor guidelines are:

  1. One codebase, one application
  2. API first
  3. Dependency management
  4. Design, build, release, and run
  5. Configuration, credentials, and code
  6. Logs
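
Although this excerpt stops at the first six factors, the port-binding factor behind the article’s title comes down to the application binding to whatever port the platform hands it through the environment, rather than hard-coding one. Here is a minimal sketch, with the PORT variable and Python’s built-in HTTP server standing in as assumed examples rather than anything from Hoffman’s book:

$ export PORT=8080

$ python3 -m http.server "$PORT"

The platform, not the code, decides the port, so the same build can run unchanged across environments.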

Read more at O’Reilly

Dig Into DNS: Part 2

In the first article in this series, I provided an introduction to the powerful dig utility and its uses in performing DNS lookups. For those who haven’t used the command before, these articles will give a useful overview of its features and capabilities. This time, I’ll explain how dig syntax differs from other packages and offer some time-saving examples.

To TCP or Not to TCP

Most DNS traffic travels over UDP, but many name servers on the Internet today also answer over TCP, especially for larger packet sizes (e.g., for IPv6 or DNSSEC). And, if my memory serves, one of the first times I ever used the dig utility was to query a name server over TCP. The command might look something like this, querying the popular DNS resolver 8.8.8.8:

# dig +tcp @8.8.8.8 chrisbinnie-linux.tld

Get the Syntax Right

Although the dig syntax is a little different from that of other DNS lookup packages, there’s a very simple "+" or "no" structure to the options it offers, which I particularly like.

For example, if you wanted to perform UDP lookups only, then you would simply add “no” to the beginning of the TCP parameter, as follows:

# dig +notcp @8.8.8.8 chrisbinnie.tld

It’s very simple; I’m sure that you’ll agree.

Additionally, a significant number of features are available with the dig utility. Let’s look at displaying the Time To Live (TTL) setting on a DNS record — in other words, the countdown until a record should be refreshed to prevent it from going stale. The option is written as "+[no]ttlid". An example of this option in use might be:

# dig +ttlid chrisbinnie.tld

The standard output from the dig utility is very verbose and — certainly in comparison to "host" — I find it a little unwieldy. I will start off by looking at the "short" output version of the dig utility. The output is shown below, with an example IP address of "1.2.3.4".

# dig chrisbinnie.tld +nocomment +nostats +short

1.2.3.4

Without the “+short” addition, we see the following:

# dig chrisbinnie.tld +nocomment +nostats

; <<>> DiG 9.8.1-P1 <<>> chrisbinnie.tld +nocomment +nostats

;; global options: +cmd

;chrisbinnie.tld.            IN    A

chrisbinnie.tld.        37765    IN    A    1.2.3.4

We can now do exactly the same but include the default comments and statistics parameters. You can see the default output offers significantly more detail:

# dig chrisbinnie.tld



; <<>> DiG 9.8.1-P1 <<>> chrisbinnie.tld

;; global options: +cmd

;; Got answer:

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58982

;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0


;; QUESTION SECTION:

;chrisbinnie.tld.            IN    A


;; ANSWER SECTION:

chrisbinnie.tld.        35636    IN    A    1.2.3.4


;; Query time: 15 msec

;; SERVER: 127.0.0.1#53(127.0.0.1)

;; WHEN: Sat Nov  8 20:22:27 2014

;; MSG SIZE  rcvd: 44

I hope the majority of the output here is self-explanatory. The question and answer sections map closely onto the DNS query-and-response model. It’s important to note the line stating which server was used (under "SERVER" we see the localhost address 127.0.0.1, because we didn’t query a name server directly with the "@server" parameter), with the standard DNS port shown after the "#" as "#53". The "Query time" figure can also offer insight into server load issues and connectivity bottlenecks.
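
As a brief aside (a hedged example of my own), if you do point dig at a resolver directly with the "@server" parameter, the SERVER line reflects that resolver rather than localhost. For example, with the rest of the output trimmed:

# dig @8.8.8.8 chrisbinnie.tld

;; SERVER: 8.8.8.8#53(8.8.8.8)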

Fill Your Boots

One time-saving feature that would undoubtedly help within scripts is the ability to ingest a potentially lengthy list of hostnames to query from a text file. We can use the "-f" switch to read the queries from a file.

# dig -f file_full_of_hostnames

You simply list one hostname to query on each line of the file and then run the command over it as above.
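
As a quick sketch (the file name and hostnames here are placeholders of my own), the file contains nothing more than one hostname per line, and the usual query options such as "+short" can be combined with the "-f" switch:

# cat hostnames.txt
chrisbinnie.tld
example.com

# dig -f hostnames.txt +short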

Respect My Authority!

If, for example, you wanted to check all of the name servers authoritatively responsible for a domain name, then you could swiftly do so with a command like this:

# dig ultradns.com NS +noall +answer


ultradns.com.        236 IN NS pdns196.ultradns.org.

ultradns.com.        236 IN NS pdns196.ultradns.com.

ultradns.com.        236 IN NS pdns196.ultradns.info.

ultradns.com.        236 IN NS pdns196.ultradns.net.

ultradns.com.        236 IN NS pdns196.ultradns.biz.

ultradns.com.        236 IN NS pdns196.ultradns.co.uk.

For simplicity, I have pruned the output a little, but you can see that the domain name we queried has healthy geographical coverage when it comes to its authoritative name servers. This is by design, to provide optimal levels of resilience.

The Future Has A Number Six

We couldn’t possibly expect IPv6 to be ignored by the superb dig utility, even though the excellent DNS tool hails from a time well before IPv6 was popularly used. And because it’s part of one of the most widely used DNS servers on the planet — namely BIND — we can safely say that IPv6 functionality is present in the dig utility.

This example of querying IPv6 is relatively intuitive. To check an “AAAA” record, you would do the following:

# dig google.com AAAA +short

2a00:1450:400c:c00::71

Next Time

In part three of this series, I’ll take a closer look at my favorite dig utility feature — the "trace" option.
 

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

Learn more about network and system management with the Essentials of System Administration course from The Linux Foundation.

Assessing the Current State of Container Security

Any rational organization that wishes to run mission-critical services on containers will at some point ask the question: “But is it secure? Can we really trust containers with our data and applications?”

Amongst tech folks, this often leads to a containers versus virtual machines (VMs) debate and a discussion of the protection provided by the hypervisor layer in VMs. While this can be an interesting and informative discussion, containers versus VMs is a false dichotomy; concerned parties should simply run their containers inside VMs, as currently happens on most cloud providers.

Read more at The New Stack

Meet Workload Diversity with Server-Side Storage

The more the enterprise gravitates toward software-defined architectures, the more it is confronted with diverse data loads and increasingly complex application requirements. This is leading to a Catch-22 in that while overall hardware requirements are diminishing, the enterprise still needs to field a wide variety of solutions in order to provide optimal support for emerging workloads.

One of these challenges is increasing the availability of in-memory and on-server storage, which goes a long way toward removing latency in high-speed applications. According to a new study by storage subsystem provider Crucial, two-thirds of IT decision-makers say they need to expand their server-side memory capacity in order to support the increasing number of virtual machines under management.

Read more at IT Business Edge

4 Big Ways Companies Benefit from Having Open Source Program Offices

In the first article in my series on open source program offices, I took a deep dive into what an open source program office is and why your company might need one. Next I looked at how Google created a new kind of open source program office. In this article, I’ll explain a few benefits of having an open source program office.

At first glance, one big reason why a company not in the business of software development might more enthusiastically embrace an open source program office is that they have less to lose. After all, they’re not gambling with software products that are directly tied to revenue. Facebook, for example, can easily unleash a distributed key-value datastore as an open source project because they don’t sell a product called “enterprise key-value datastore.” That answers the question of risk, but it still doesn’t answer the question of what they gain from contributing to the open source ecosystem.

Read more at OpenSource.com