Dr. Web researchers have spotted a Linux trojan, dubbed Linux.Proxy.10, that has been used to infect thousands of Linux devices.
The trojan infiltrates computers and devices that either have standard settings or are already infected by other Linux malware, and it is distributed by the threat actor logging into vulnerable devices via the SSH protocol, according to a Jan. 24 blog post.
Dilbert Comic Strip on 2009-05-25. Copyright 2009 Scott Adams, Inc./Distributed by UFS, Inc.
If you’re like me, a term of art called the “Frozen Middle” meant nothing to you, except perhaps as a substance found inside a Klondike bar. It happens to be a business school term referring to middle management, specifically middle management that is “frozen,” or not performing well. I first heard about it through a recent Twitter discussion and then looked up its source. The earliest reference I can find is from a 2005 HBS blog post, and that post references case studies from the automobile industry. I found the term intriguing because I am, in fact, middle management. And I have, as a matter of course, dealt with many, many middle managers in the past. Some of those middle managers were exceptional, some were competent, and some were… hey, is that a Klondike bar??? “Great,” I can hear you say. “But what does this have to do with Open Source?”
The curse of broken middle management has plagued numerous large organizations throughout the years, and you can find an entire cottage industry of books, training materials and other things designed to “fix” the problem. Either these fixes are inadequate, people don’t listen or just don’t retain knowledge, or, as I believe is usually the case, middle management is really hard to do well. Upper executives have a luxury that middle managers don’t – they can spend all day looking at reports, creating PowerPoint decks, and thinking up crazy stuff for middle managers to do, sometimes by outright edict with little input from anyone else. Red Hat CEO Jim Whitehurst has touched on this subject in his book, “The Open Organization.” “OK!” I can hear you shout in your best internal monologue voice, “But what does this have to do with open source???!!!” I’m glad you asked, but there’s no need to shout!
If you’ve regularly read my articles – and really, why *wouldn’t* you? – you know that I’ve said that open source leveled the playing field of customers, partners and vendors (and individual developers and users). The old hierarchy of vendors and customers led to software that was bulky and overpriced, and it usually left the customer with little choice in the matter. They could either buy the bulky, overpriced thing from vendor A, or they could spend a lot of time and effort switching to the other bulky, overpriced thing from vendor B. The great innovation and “killer app” of open source was that it upended that model entirely and allowed customers greater choice and flexibility. As I think about the frozen middle, I wonder if open source principles could be used to ameliorate that problem as well. This works best in engineering organizations, where the tooling already exists, but similar tactics could work in other types of collaborative settings as well.
When you dive into the frozen middle problem and what can cause organizational dysfunction, the topic invariably turns to the subject of “bureaucratic inertia.” At this point people usually shrug their shoulders, roll their eyes, and express exasperation, recommending that no one should beat their head against a wall in search of a solution. What is bureaucratic inertia, and what causes it? There are a few primary causes, some of which I’ve personally witnessed, and others I’ve read about in business case studies:
Whiplash. How often do executives issue strategy edicts? How often do those change? I have a hunch that the more edicts issued per year, the less effective middle management tends to be. At some point, middle management begins to realize that their success at a particular job has more to do with managing day-to-day operations and performance metrics than paying close attention to any particular edict from upper management.
Ownership. When issuing a new strategy edict, what was the process for determining the planned execution of this edict, the goal of the edict, and the construction of said edict itself? Specifically, what role did middle management play in shaping a particular edict? What about individual contributors? “None,” you say? Oops.
Communication. How were the goals, execution and other aspects of the edict communicated to the rank and file? And how were their expectations and feedback communicated back up the bureaucratic food chain? To make the question simpler, let’s make the answer “yes” or “no”. Were these things adequately communicated up and down the org chart? No? Good luck, chumps!
Here’s where it gets interesting. What if internal projects were run just like any other open source project? The tooling for software engineering projects is probably more mature for this purpose, but the same principles could apply anywhere. It would serve the same objectives as open source did for the customer-vendor relationship. Now, instead of a strict hierarchy with little information ending up where it should, everyone in the organization collaborates, with information flowing where it should, whenever it’s requested. Think of it as equal parts Innersource, DevOps, and open source community management. Lest anyone fear that I’m suggesting an “Ark B” strategy for middle management, I believe that everyone in the organization has a role to play, including middle management. It’s not that there shouldn’t be hierarchy, it’s that hierarchy should be designed to function optimally.
In an open source collaboration scenario, edicts are not issued without close collaboration with those implementing the strategy. Strategies and implementation plans are developed transparently, with input from interested parties at every stage. This way, everyone owns it. In larger organizations, such initiatives can suffer from a glut of input, but open source projects have found ways to solve that problem. This is where a hierarchy comes in. In open source projects, these hierarchies are voted in by project members, with maintainers attaining their positions according to their contributions. Elections may not be practical in many organizations, but managers can serve the same purpose. In an open source project, the manager’s job is to facilitate communication, unlock bureaucratic roadblocks faced by individual contributors, review their work, and use their influence to prevent stupid decisions both from upper management and from individuals. Upper management benefits because they’re no longer divorced from the day-to-day experience, seeing more clearly the challenges faced by everyone else. Individual contributors benefit because they have more access to project leaders and, yes, can bypass middle management when appropriate. Yes, there’s a hierarchy, but it’s not rigid, and that’s important.
I’m going to stop here for this first post on the subject. There are several different directions I could go in a subsequent post or series, but I’d like to hear from the readers first. What are your experiences with middle management? Upper executives? Individual contributors? How did you push through bureaucratic obstacles? Did you have to “hack the hierarchy”? Let’s use this as the starting point of a conversation.
Vault is the leading technical event dedicated to Linux storage and filesystems where developers and operators in the filesystems and storage space can advance computing for data storage. Linux has been at the center of the advances in data, filesystems and storage with its widespread use in cloud computing, big data and other data-intensive computing workloads. At Vault, hardware vendors collaborate within the Linux community to develop cutting-edge storage hardware, helping transform Linux into a leader in the storage industry.
Haoyuan Li, CEO of Alluxio (formerly Tachyon), will present a keynote on the San Mateo-based startup’s journey thus far and the road ahead. The open source, software-only storage company, which focuses on big data analytics jobs with Apache Spark, recently struck up a partnership with Dell EMC’s private cloud. And last year Alluxio announced an integration with Huawei’s big data storage solution.
Facebook’s Josef Bacik, Oracle’s Martin Petersen, and Red Hat’s Rik van Riel will also give a keynote recap of the invitation-only Linux Storage, Filesystem & Memory Management Summit, which will be held directly preceding Vault in the same venue. The summit gathers foremost development and research experts and kernel subsystem maintainers to map out and implement improvements to the Linux filesystem, storage and memory management subsystems that will make their way into the mainline kernel in the coming years.
Other speakers at Vault include:
Ahmed El-Shimi from Minima will provide insight into using machine learning to predict storage failures.
Felix GV from LinkedIn will explore how they refresh 100TB of data per day across multiple datacenters using Project Voldemort.
Kernel hacker Christoph Hellwig will illuminate filesystem and block storage optimizations in his talk, “Improving block discard support throughout the Linux Storage Stack.”
Kevin Vigor from Facebook will examine how NFS is critical infrastructure and lessons they’ve learned from running it at very large scale in his talk, “NFS @ scale: worst. protocol. evar. (except for all the others).”
Sage Weil from Red Hat will discuss a new storage backend for Ceph named “Bluestore.”
Registration for Vault is discounted to $500 through February 4. Discounted academic rates are also available. Applications for diversity scholarships are currently being accepted. For information on eligibility and how to apply, please click here.
Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the registration price. Save $225 by registering before February 4.
IPv6 has been around for a long time. The first IPv6 RFC was released more than 20 years ago, and we began exhausting the IPv4 address space in 2011. Thiago Macieira from the Intel Open Source Technology Center began his talk at LinuxCon Europe by saying that he didn’t think he would still need to be talking about this today, and he wished we had already solved this problem. But, many people have not yet made the switch to IPv6, so his talk contained a brief introduction to IPv6 and some of the differences compared to IPv4.
Macieira says that one of the reasons things haven’t completely fallen apart yet is because we’ve created some workarounds, but they are just workarounds, and we still need to plan our move to IPv6. The biggest difference is the address size, with IPv4 containing only 32 bits of address space compared to 128 bits for IPv6. To put this in perspective, IPv6 would give us many trillions of addresses for every square centimeter on the planet. Another difference is that multicasting goes from optional in IPv4 to mandatory in IPv6, allowing the one-to-many communication commonly used for things like Multicast DNS, which is what Apple uses, for example, to discover other devices on your home network. Other differences include a higher minimum Maximum Transmission Unit (MTU) and maximum packet size, fragmentation at the origin (instead of at the router), and privacy extensions.
StateLess Address Auto-Configuration (SLAAC) in IPv6 provides a way for devices to configure themselves, even over the Internet, and communicate with each other without requiring a DHCP server. This is accomplished securely, while still protecting privacy, by using temporary addresses or stable but opaque addresses, Macieira explained. However, you still need to make sure that your firewall is correctly configured using ip6tables, not iptables, keeping in mind that being addressable from the world does not mean that your device is reachable from the world.
Macieira points out that when using an API, there are two things to keep in mind. First, always use an IPv6 API; IPv6 APIs are actually simpler to use and still support IPv4, so if an API doesn’t support IPv6 already, stop using it. Second, don’t assume anything about the address.
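Both points can be illustrated with a short sketch in Python. The helper below is a hypothetical example, not from the talk: it uses getaddrinfo() with AF_UNSPEC, which returns candidate addresses for IPv6 and IPv4 alike, so the caller never hard-codes an address family or makes assumptions about the address format.

```python
import socket

def connect_any(host, port):
    """Connect to host over whichever protocol it supports (IPv6 or IPv4).

    getaddrinfo() with AF_UNSPEC yields candidates for both address
    families; we simply try them in order until one succeeds.
    """
    last_err = OSError("no addresses found for %r" % host)
    for family, socktype, proto, _name, addr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        try:
            sock.connect(addr)
            return sock  # caller is responsible for closing it
        except OSError as err:
            sock.close()
            last_err = err
    raise last_err
```

Code written this way needed no changes when a host gained an IPv6 address; the same loop simply tries the IPv6 candidates first.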
The talk concluded with a number of interesting things that you can do with IPv6. Watch the entire video of the talk to learn more about how and why to use IPv6.
Interested in speaking at Open Source Summit North America on September 11-13? Submit your proposal by May 6, 2017. Submit now>>
Not interested in speaking but want to attend? Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the all-access attendee registration price. Register now to save over $300!
In the cloud, open source tools and applications produce many kinds of DevOps efficiencies, and that’s especially true for logging and monitoring solutions. Monitoring cloud platforms, applications and components — along with processing and analyzing logs — is essential for ensuring high availability, top performance, low latency, and more. In fact, RightScale’s most recent State of the Cloud Survey reports that the most common cloud optimization action, focused on by 45 percent of enterprises and SMBs, is monitoring.
However, proprietary logging and monitoring solutions are expensive. Even worse, they are often bundled into even more expensive managed service offerings.
Enter the new wave of powerful open logging and monitoring solutions. Some of these focus on targeted tasks, such as container cluster monitoring and performance analysis, while others qualify as holistic monitoring and alerting toolkits, capable of multi-dimensional data collection and querying.
The Linux Foundation recently announced the release of its report Guide to the Open Cloud: Current Trends and Open Source Projects. This third annual report provides a comprehensive look at the state of open cloud computing, and includes a section on logging and monitoring for the DevOps community. The report, which you can download now, aggregates and analyzes research, illustrating how trends in containers, monitoring, and more are reshaping cloud computing. The report provides descriptions and links to categorized projects central to today’s open cloud environment. It takes special note of the fact that DevOps has emerged as the most effective method for application delivery and maintenance in the cloud.
In a series of posts appearing here, we are calling out many of these projects from the guide, by category, providing extra insights on how the overall category is evolving. Below, you’ll find a collection of several important DevOps tools for logging and monitoring and the impact that they are having, along with links to their GitHub repositories, all gathered from the Guide to the Open Cloud:
Fluentd is an open source data collector for a unified logging layer, sponsored by Treasure Data. It structures data as JSON to unify all facets of processing log data: collecting, filtering, buffering, and outputting logs across multiple sources and destinations. Fluentd on GitHub
Heapster is a container cluster monitoring and performance analysis tool in Kubernetes. It supports Kubernetes and CoreOS natively and can be adapted to run on OpenShift. It also supports a pluggable storage backend: InfluxDB with Grafana, Google Cloud Monitoring, Google Cloud Logging, Hawkular, Riemann and Kafka. Heapster on GitHub
Logstash is Elastic’s open source data pipeline to help process logs and other event data from a variety of systems. Its plugins can connect to a variety of sources and stream data at scale to a central analytics system. Logstash on GitHub
Prometheus is an open source systems monitoring and alerting toolkit, originally built at SoundCloud and now a Cloud Native Computing Foundation project at The Linux Foundation. It fits both machine-centric and microservices architectures and supports multi-dimensional data collection and querying. Prometheus on GitHub
Weave Scope is Weaveworks’ open source tool to monitor distributed applications and their containers in real time. It integrates with Kubernetes and AWS ECS. Weave Scope on GitHub
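To make Fluentd’s “unified logging layer” concrete, here is a minimal configuration sketch (the log path and tag names are hypothetical, chosen only for illustration). It tails a JSON-formatted application log and routes matching events to stdout, using Fluentd’s classic source/match directives:

```
# Tail a JSON-formatted application log (path and tag are illustrative)
<source>
  @type tail
  path /var/log/app/access.log
  tag app.access
  format json
</source>

# Route every event tagged under app.* to stdout for inspection
<match app.**>
  @type stdout
</match>
```

In a real deployment, the stdout output would typically be swapped for a buffered output plugin pointing at a store such as Elasticsearch or S3.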
Docker 1.13 introduced a new version of Docker Compose. The main feature of this release is that it allows services defined using Docker Compose files to be deployed directly to a Docker Engine running in swarm mode. This enables simplified deployment of multi-container applications on multi-host setups.
This article will use a simple Docker Compose file to show how services are created and deployed in Docker 1.13.
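As a sketch of what such a file looks like, here is a minimal Compose file in the version 3 format introduced alongside Docker 1.13 (the service names and images are illustrative):

```yaml
# docker-compose.yml -- a minimal two-service stack
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    deploy:
      replicas: 2   # deploy-time options are honored by swarm mode
  cache:
    image: redis:alpine
```

With the Engine in swarm mode (after running `docker swarm init`), this stack can be deployed with `docker stack deploy -c docker-compose.yml demo`, and `docker stack services demo` shows the running services and their replica counts.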
DevOps, once a small cultural movement, is driving demand for experienced professionals who can improve IT agility as they try to move at “cloud speed”. Mark Hinkle, VP at The Linux Foundation, talks about what’s next for the effort to improve coordination between software developers and operations personnel.
Nowadays, we see an almost desperate call for DevOps engineers. Indeed.com, a job search site, shows that DevOps job postings have more than doubled year-over-year. A quick perusal of these job posts indicates a degree of confusion around the definition: do we all mean the same thing when we talk about DevOps? Is it a job, a methodology, a trend, or just a buzzword?
Google today announced that in March it will open-source its Google Earth Enterprise software, which lets organizations deploy Google Maps and Google Earth in their on-premises data center infrastructure.
Google unveiled the software back in 2006 and stopped selling it nearly two years ago. Since then, Google has released updates and provided support to organizations with existing licenses. Once it pops up online — on GitHub, under an Apache 2.0 license — organizations will be free to collaboratively or independently modify it for their own needs as open-source software.
The last few years have seen a massive jump in the frequency of reports about digital security breaches and personal privacy issues, and no doubt this trend will continue. We hear about scammers moving to social media, nations using cyberattacks as part of coordinated offensive strategies, and the rise of companies making millions tracking our online behavior.
Feeling apathetic about these events is all too easy, but you can do a great deal to improve your online security so that when you are caught up in a security event, you can mitigate the risk to yourself and quickly protect yourself from further harm. Security consciousness is surprisingly easy to learn, and many open source projects exist that can help you.