How to Run Commands at Shutdown on Linux

Linux and Unix systems have long made it pretty easy to run a command on boot. Just add your command to /etc/rc.local and away you go. But as it turns out, running a command on shutdown is a little more complicated.

Why would you want to run a command as the computer shuts down? Perhaps you want to de-register a machine or service from a database. Maybe you want to copy data from a volatile storage system to a permanent location. Want your computer to post “#RIP me!” on its Twitter account before it shuts down?
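
On systemd-based distributions, one common pattern is a oneshot service whose ExecStop= command runs when systemd stops the unit during shutdown. Here is a minimal sketch (the unit name and script path are hypothetical placeholders), saved as something like /etc/systemd/system/my-shutdown-task.service:

    [Unit]
    Description=Run a custom command at shutdown
    # Ordering the unit after network.target means it is stopped
    # before the network goes down, so the command can still use it
    After=network.target

    [Service]
    Type=oneshot
    RemainAfterExit=true
    # Hypothetical script path; substitute your own command
    ExecStop=/usr/local/bin/my-shutdown-task.sh

    [Install]
    WantedBy=multi-user.target

Enable and start it once (systemctl enable --now my-shutdown-task.service). Because RemainAfterExit=true keeps the oneshot unit “active” after boot, systemd runs the ExecStop= command when it stops the unit as part of shutdown.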

Read more at OpenSource.com

Microservices and Smart Networks Will Save the Internet

Imagine smart cars talking directly to each other so they don’t crash. Imagine hooking your smartphone into a giant mesh of phone video streams at a stadium event, so you can watch the event from multiple perspectives. Imagine smart factory devices that manage themselves for better safety and efficiency. Imagine intelligent phones and other intelligent devices communicating directly at close range so they don’t bog down the Internet or cell phone networks. It doesn’t take much imagination to see how this would benefit disaster management, for example, which is traditionally hampered by overloaded phone networks.

What makes all of this possible? At LinuxCon North America, Robert Shimp of Oracle paints a picture of a fascinating future full of specialized distributed services and devices. We grew up with smart, powerful client-server computing over dumb networks. Now the networks are getting smart, and the endpoints are getting smaller, smarter, and more distributed.

It’s a huge, disruptive shift that is going to leave IT professionals scrambling to figure out what to do. Shimp opens his presentation with some numbers that are interesting or scary, depending on your perspective. He says, “It’s a fact that over the next few years, about 50% of all the corporate data centers, the privately held corporate data centers in large companies, are going to go away. For small or mid-sized businesses, the percentage is going to be dramatically higher. That leaves the IT ops person with a couple of choices: either you’re going to go to work for some intergalactic infrastructure provider or you’re going to find something else interesting for your company to do. I think the answer is that there are a lot of very interesting things to do.”

Really, Really Interesting Things

“In fact, I will make the forecast that no more than one-third of the business applications out there are going to run in some giant hyperscale data center in Chicago or wherever. Two-thirds of all the business applications are going to be distributed computing types of applications…there are, roughly speaking, 20-some-odd billion intelligent devices out there on the edge of the network. That’s going to grow to 80 billion by 2025. It’s a very dramatic shift. There is a lot more intelligence in smartphones today, and that’s only going to increase over the coming years. That’s going to create a lot of computing capacity at the edge of the network to do really, really interesting things.”

We’ve seen predictions for many years that all of these billions of devices coming online will create massive Internet congestion. That is true, and the solution is to distribute everything. “Most of these types of devices are incredibly chatty, and if you allow them to simply connect up to the hyperscale clouds and do whatever they’re doing, it’s going to bring the Internet to its knees over time. Rather than go with that approach, which is going to be incredibly expensive, the idea is to move to distributed applications in which we push the computing capacity, all the data, and the applications as close to the edge of the network as we can.”

So just how, exactly, do we do this? How do we implement security and updates? How do we manage increased complexity? Watch Shimp’s talk (below) to learn what is happening out on the bleeding edge of computing, some of the tools and architecture already being developed, and a lot of fascinating details on what the future looks like.

Watch 25+ keynotes and technical sessions from open source leaders at LinuxCon + ContainerCon Europe. Sign up now to access all videos!

Research from OpenStack Summit Shows Deployments Ramping Up

If you already have OpenStack administration skills, or are considering pursuing them, you’ll want to take note of some surprising research reported in conjunction with OpenStack Summit in Barcelona last week. Specifically, a study commissioned by The OpenStack Foundation found that enterprises have moved squarely away from the OpenStack evaluation stage that was prevalent last year, and are managing deployments and serious workloads. Directly on the heels of that study, a Red Hat survey produced similar results, and showed that as containerized applications emerge as a new workload type, OpenStack is a prime deployment environment.

The OpenStack Foundation commissioned analysts at 451 Research to study enterprise private cloud users. You can find many related user stories spanning workloads, organization sizes, and geography on the OpenStack User Stories page. The 451 analysts found that about 72 percent of OpenStack-based clouds are between 1,000 and 10,000 cores in size, and that roughly three-fourths of users choose OpenStack to increase operational efficiency and app deployment speed. Increasingly, enterprises are running OpenStack at scale, and that is going to create enormous opportunities for OpenStack administrators, developers, and container technologists.

At the same time, OpenStack is arriving not just at huge enterprises but also at smaller businesses. Some of the findings from the 451 Research include:

  • Mid-market adoption shows that OpenStack use is not limited to large enterprises. Roughly two-thirds of respondents (65 percent) are in organizations of between 1,000 and 10,000 employees.

  • OpenStack-powered clouds have moved beyond small-scale deployments. Approximately 72 percent of OpenStack enterprise deployments are between 1,000 and 10,000 cores in size. Additionally, five percent of enterprise OpenStack clouds top the 100,000-core mark.

  • OpenStack users are adopting containers at a faster rate than the rest of the enterprise market, with 55 percent of OpenStack users also using containers, compared to 17 percent across all respondents.

  • OpenStack supports workloads that matter to enterprises, not just test and dev. These include infrastructure services (66 percent), business applications and big data (60 percent and 59 percent, respectively), and web services and ecommerce (57 percent).

  • OpenStack users can be found in a diverse cross section of industries. While 20 percent cited the technology industry, the majority come from manufacturing (15 percent), retail/hospitality (11 percent), professional services (10 percent), healthcare (7 percent), insurance (6 percent), transportation (5 percent), communications/media (5 percent), wholesale trade (5 percent), energy & utilities (4 percent), education (3 percent), financial services (3 percent), and government (3 percent).

  • Increasing operational efficiency and accelerating innovation/deployment speed are the top business drivers for enterprise adoption of OpenStack, at 76 and 75 percent, respectively. Supporting DevOps follows closely, at 69 percent, with reducing cost and standardizing on OpenStack APIs further behind, at 50 and 45 percent, respectively.

“Our research in aggregate indicates enterprises globally are moving beyond using OpenStack for science projects and basic test and development to workloads that impact the bottom line,” said Al Sadowski, research vice president with 451 Research. “This is supported by our OpenStack Market Monitor which projects an overall market size of over $5 billion in 2020 with APAC, namely China, leading the way in terms of growth.”

Red Hat Bolsters the Case

Also in conjunction with OpenStack Summit in Barcelona, Red Hat is out with notable results from polling its OpenStack user base. It found that production OpenStack deployments roughly doubled over the last year, according to a survey of 150 information technology decision makers and professionals.

The results stand in sharp contrast to Red Hat’s results from last year, which showed that many enterprises were still in the evaluation stage. According to the company:

[Beyond indications] of a doubling of OpenStack production deployments from a year ago, trendlines indicate that:

  • OpenStack is critical infrastructure for application development, especially with containers

  • Built-in management tools aren’t doing the job by themselves

  • Customers want workload portability across OpenStack and other infrastructures

  • Organizations are looking for strong technical support

In short, there is a pronounced need for OpenStack expertise at many organizations.

Not only have production deployments increased, Red Hat reported, but the use cases are growing as well. The bulk of respondents (66 percent) are now using, or planning to use, Platform-as-a-Service (PaaS) with their OpenStack deployments. That is a jump over last year’s survey, when just 54 percent of respondents were considering PaaS and OpenStack together, and it points to growing interest in these complementary technologies.

In other Red Hat news, the company said that Swisscom has selected Red Hat as its technology partner to help deliver a modern, agile, and highly scalable OpenStack cloud platform.

If you have been considering picking up OpenStack administration skills or certification, now is clearly the time. How can you get trained and certified? This article lists some great choices, including some free options.

Additionally, The Linux Foundation offers an OpenStack Administration Fundamentals course, which serves as preparation for certification. The course is available bundled with the COA exam, enabling students to learn the skills they need to work as an OpenStack administrator and get the certification to prove it. The course’s most distinctive feature is that it provides each learner with a live OpenStack lab environment that can be rebooted at any time (to reduce the pain of troubleshooting what went wrong). Customers have access to the course and the lab environment for a full 12 months after purchase.

Learn everything you need to know to create and manage private and public clouds with The Linux Foundation Training’s online, self-paced OpenStack Administration Fundamentals course.

DDoS Defenses Emerging from Homeland Security

Government, academic, and private-sector officials are collaborating on new ways to prevent and mitigate distributed denial-of-service (DDoS) attacks, based on research years in the making but kicked into high gear by this month’s massive takedown of Domain Name System (DNS) provider Dyn.

The largest attacks in summer 2015 were about 400Gbps, but September 2016 saw an attack of more than 600Gbps on security blogger Brian Krebs, while Dyn said the attack on its own infrastructure may have exceeded 1.2Tbps. Government-led research is focusing on the 1-terabit range, but with systems that can scale higher; that extra headroom is already needed given the proliferation of vulnerable Internet of Things devices too easily commandeered by malicious hackers.

Read more at TechRepublic

DTrace for Linux 2016

With the final major capability for BPF tracing (timed sampling) merging in Linux 4.9-rc1, the Linux kernel now has raw capabilities similar to those provided by DTrace, the advanced tracer from Solaris. As a long-time DTrace user and expert, I find this an exciting milestone! On Linux, you can now analyze the performance of applications and the kernel using production-safe, low-overhead custom tracing, with latency histograms, frequency counts, and more.

There have been many tracing projects for Linux, but the technology that finally merged didn’t start out as a tracing project at all: it began as enhancements to Berkeley Packet Filter (BPF). At first, these enhancements allowed BPF to redirect packets to create software-defined networks. Later on, support for tracing events was added, enabling programmatic tracing in Linux.
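
To get a sense of what this looks like in practice, here is a minimal sketch using the bcc front end for BPF (this assumes bcc is installed on a recent kernel; it is an illustration, not code from the article). It attaches a kprobe to the kernel’s vfs_read() and prints a log2 histogram of requested read sizes, the kind of in-kernel summary that previously required DTrace:

    #!/usr/bin/env python
    # Minimal bcc sketch: histogram of vfs_read() request sizes.
    # Assumes the bcc toolkit (github.com/iovisor/bcc) is installed.
    from bcc import BPF
    from time import sleep

    prog = """
    #include <uapi/linux/ptrace.h>

    BPF_HISTOGRAM(dist);

    // The kprobe__ prefix auto-attaches to the named kernel function.
    int kprobe__vfs_read(struct pt_regs *ctx, struct file *file,
                         char __user *buf, size_t count)
    {
        // Aggregate in kernel context: log2 histogram of read sizes.
        dist.increment(bpf_log2l(count));
        return 0;
    }
    """

    b = BPF(text=prog)
    print("Tracing vfs_read()... Hit Ctrl-C to end.")
    try:
        sleep(99999999)
    except KeyboardInterrupt:
        pass
    b["dist"].print_log2_hist("bytes")

The histogram is aggregated in kernel context, and only the summary crosses into user space, which is what keeps the overhead low enough for production use.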

Read more at Brendan Gregg’s Blog

The (Updated) History of Android

Android has been with us in one form or another for more than eight years. During that time, we’ve seen an absolutely breathtaking rate of change unlike any other development cycle that has ever existed. When it came time for Google to dive into the smartphone wars, the company took its rapid-iteration, Web-style update cycle and applied it to an operating system, and the result has been an onslaught of continual improvement. Lately, Android has even been running on a previously unheard-of six-month development cycle, and that’s slower than it used to be. For the first year of Android’s commercial existence, Google was putting out a new version every two-and-a-half months.

Looking back, Android’s existence has been a blur. It’s now a historically big operating system. Almost a billion total devices have been sold, and 1.5 million devices are activated per day—but how did Google get here? With this level of scale and success, you would think there would be tons of coverage of Android’s rise from zero to hero. However, there just isn’t. Android wasn’t very popular in the early days, and until Android 4.0, screenshots could only be taken with the developer kit. These two factors mean you aren’t going to find a lot of images or information out there about the early versions of Android.

Read more at Ars Technica

3 Reasons Hyperledger Has Blockchain’s Best Development Model

Zaki Manian is the founder of Skuchain, a startup seeking to bring cryptographic trust to the supply chain.

In this opinion piece, Manian argues that Hyperledger offers the best development model for the permissioned blockchain industry, and that attempts to use public networks for business-to-business use cases are perhaps misguided.

The nascent business-to-business permissioned ledger industry is moving rapidly from pilots to real product development and deployment.

But as this novel application space opens up, it is important for technologists to start answering hard questions about which software development processes will provide the blockchain layer of these application stacks.

Read more at CoinDesk

An Introduction to Linux Filesystems

This article is intended to be a very high-level discussion of Linux filesystem concepts. It is not intended to be a low-level description of how a particular filesystem type, such as EXT4, works, nor is it intended to be a tutorial of filesystem commands.

Every general-purpose computer needs to store data of various types on a hard disk drive (HDD) or some equivalent, such as a USB memory stick. There are a couple of reasons for this. First, RAM loses its contents when the computer is switched off. There are non-volatile types of RAM that can maintain the data stored there after power is removed (such as the flash RAM used in USB memory sticks and solid-state drives), but flash RAM is much more expensive than standard, volatile RAM like DDR3 and other similar types.

Read more at OpenSource.com

Collaboration Yields Open Source Technology for Computational Science

The gap between the computational science and open source software communities just got smaller, thanks to an international collaboration among national laboratories, universities, and industry.

The Eclipse Science Working Group (SWG), a global community of individuals and organizations that collaborate on commercially friendly open source software, recently released five projects aimed at expediting scientific breakthroughs by simplifying and streamlining computational science workflows. The open source projects, which represent years of development and thousands of users, are the product of intense collaboration among SWG members including ORNL, Diamond Light Source, Itema AS, iSencia, Kichwa Coders, Lablicate GmbH, and others.

Read more at Phys.org