Many Linux users, especially those working from the terminal, have run into situations where they need to search for multiple files with different extensions at once. There are…
Prometheus is an open-source monitoring system and time-series database. Written in Go, Prometheus is a natural member of the ecosystem around the Cloud Native Computing Foundation. Prometheus is not just for monitoring Kubernetes applications; it also works for applications running on Mesos, Docker, OpenStack, and other platforms.
In the following article, Treasure Data explains how you can collect Docker logs into a Prometheus server using Fluentd, an open source data collector tool.
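The article walks through the Fluentd configuration itself; as general background, here is a minimal sketch of the Prometheus side of the picture – an application exposing a /metrics endpoint for the Prometheus server to scrape, using the official Go client library (github.com/prometheus/client_golang). The metric name and port are illustrative assumptions, not details from the article.

    package main

    import (
        "log"
        "net/http"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    // requestsTotal is an illustrative counter; the name is an
    // assumption for this sketch, not taken from the article.
    var requestsTotal = prometheus.NewCounter(prometheus.CounterOpts{
        Name: "app_requests_total",
        Help: "Total number of requests handled.",
    })

    func main() {
        prometheus.MustRegister(requestsTotal)

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            requestsTotal.Inc() // count each incoming request
            w.Write([]byte("ok\n"))
        })

        // Prometheus scrapes this endpoint on its configured interval.
        http.Handle("/metrics", promhttp.Handler())
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

A Prometheus server would then be pointed at this host and port as a scrape target. Fluentd enters the picture when, as the article describes, you want log-derived data such as Docker container logs turned into metrics, rather than instrumenting the application directly.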
Gareth Rushgrove is known to many as the creator and editor of the popular DevOps Weekly email newsletter, and he spent several years working for the U.K. Government Digital Service (GDS) on GOV.UK and other projects. Now a Senior Software Engineer at Puppet, he builds some of the latest infrastructure automation products when he isn’t speaking at events on a wide variety of DevOps and related topics.
Linux.com: Why are so many organizations embracing DevOps?
Gareth Rushgrove: It really does vary. Sometimes it’s a matter of individual teams adopting new practices to get things done. Sometimes it’s a strategic initiative from IT management. Often it’s both at the same time, meeting in the middle. At this point the data and the success stories, for instance from the State of DevOps Report, make it clear that not doing so puts you at a disadvantage, both for hiring skilled development and operations professionals and from a competitive standpoint, regardless of what your organization does.
Linux.com: Why are individuals interested in participating?
Gareth: It’s becoming clearer to everyone that there are better ways of running complex systems, and that the impact of engaged employees plays a big part in that. DevOps as a movement has been very practitioner driven, so many of the conversations center around improving the quality of life for individual developers and operators – whether that’s talk of alert fatigue, burnout, or improving monitoring to make visibility of the running system a first-class concern. Individuals get involved because it can make their jobs more impactful, but also because it humanizes operations as a craft.
Linux.com: What’s the primary advantage of DevOps?
Gareth: I don’t think there is a single advantage. DevOps has always been a banner under which many different practices co-exist; it’s not a framework or a prescriptive approach to a single problem. The reality is that different types and sizes of organizations benefit from different practices. It’s all about context. Most organizations are under pressure to get software out to users more quickly and can benefit from reducing the cost of keeping the lights on. But you might also simply be trying to get more out of your existing IT estate, staff, and budget, or trying to hire and retain qualified staff in a competitive market. You can’t really ‘do’ DevOps, but you can embrace various associated practices depending on the particular problems your organization is trying to solve.
Linux.com: What is the biggest hurdle?
Gareth: It’s easy to say the main hurdle many organizations face is finding people, and it’s true the market for people with experience of a DevOps transformation is hot at the moment. But lots of organizations already have great people; it’s the processes and constraints they operate within that pose the problem. DevOps can provide the impetus to break down old silos and adopt more effective ways of working. The formal literature is starting to catch up too – Effective DevOps from O’Reilly, the Puppet State of DevOps Report, and the upcoming DevOps Handbook from Gene Kim et al. provide a really good introduction.
Linux.com: What advice would you give to people who want to get started in DevOps?
Gareth: If you can, head to a Devopsdays event. These are local conferences that attract people like you who are interested in what better operations looks like. Listen to the speakers, ask lots of questions, and have lots of conversations. Then take that back and apply it to your organization’s problems. Realize that DevOps is an area where you’ll often feel out of your comfort zone on either the technical side or the human-factors side, but that’s what makes it interesting. Try to lean toward the topics you find more uncomfortable: if you’re generally regarded as a bit of a geek, explore the people side; if you’re in a management position, explore the technical topics. It’s not that you can ignore one side or the other; successfully adopting DevOps practices is a classic sociotechnical systems problem – you need equal measures of technical know-how and empathy for other practitioners to get the best results.
Hadoop was born into a world begging to better utilize data, says project co-founder Doug Cutting, in this keynote presentation from Apache: Big Data North America 2016.
Looking back at 10 years of Hadoop, project co-founder and Cloudera Chief Architect Doug Cutting can see two primary factors in the success of open source big data technology: a heap of luck and the Apache Foundation’s unique support.
Cutting delivered a keynote at the Apache Big Data conference in Vancouver in May. In that talk, he said Hadoop was the right technology at the right time, but the reason it was able to capitalize on that position was the work from the Apache community.
“What really has made this happen is people using software, people contributing to software, people encouraging contributions from others; that core Apache capability is what drives things forward,” Cutting said.
Once Hadoop’s utility, flexibility, and scalability became apparent to enterprise IT departments, the open source community expanded the ecosystem quickly. Cutting said that compared to the era of databases directly preceding it – proprietary software and expensive hardware controlled by a very small group of huge companies – the pace of innovation is rapidly accelerating.
“The hallmark of this ecosystem that’s emerged is the way that it’s evolving,” Cutting said. “We’re seeing not just new projects added, but some of the old projects being replaced over time by things that are better. In the end, nothing is sacred. Any component can be replaced by something that is better.
“This is really exciting,” Cutting continued. “The pace of change in the big data ecosystem is astronomically greater than we saw in the 20 preceding years. The way this change is happening is the key: It’s a decentralized change. There is no one organization, or handful of organizations, that are deciding what are the next components in the stack.
“Rather, we’ve got this process where there are random mutations sprouting up all over. Some make it into the incubator and become top level projects at Apache, but mostly what matters is that people start using them. They decide which ones work, and start to invest further in those, and there is this very organic process in selecting and improving the next thing.
“It’s leading not only to faster change, but change that is more directed towards the problems that people really care about, and where they need solutions.”
Cutting, who first worked with Apache as founder of the Lucene project, also acknowledged that Hadoop happened to be the beneficiary of being in the right place at the right time. Cutting was working on Hadoop’s predecessor, Nutch, when Google released papers about its filesystem, GFS, and MapReduce. This helped solve some of the Nutch project’s scalability issues, and soon Hadoop was born.
The world it was born into was begging for a way to better utilize data, Cutting said.
“Industry was ripe for harnessing the data it was generating,” said Cutting. “A lot of data was being just discarded. People saw the possibility of capturing it but they didn’t have the tools, so they were ready to jump on something which gave them the tools.”
Hadoop was a first mover, and once the Apache-backed project started to grow and prove itself, the old guard found they had lost their ability to lock clients into their proprietary systems. The community gained momentum, and the rest is 10 years of history.
“It’s really hard to fight Apache open source with something that isn’t,” Cutting said. “It’s much easier to join than to fight.”
Splice Machine, the relational SQL database system that uses Hadoop and Spark to provide high-speed results, is now available in an open source edition.
Version 2.0 of Splice Machine added Spark to speed up OLAP-style workloads while still processing conventional OLTP workloads with HBase. The open source version, distributed under the Apache 2.0 license, supplies both engines and most of Splice Machine’s other features, including Apache Kafka streaming support. However, it omits a few enterprise-level options like encryption, Kerberos support, column-level access control, and backup/restore functionality.
A new environment for business-to-business networks announced by IBM last week will allow companies to test performance, privacy, and interoperability of their blockchain ecosystems within a secure environment, the company said. Based on IBM’s LinuxONE, a Linux-only server designed for high-security projects, the new cloud environment will let enterprises test and run blockchain projects that handle private data for their customers.
The service is still in limited beta, so IBM clients will not be able to get their hands on it just yet. Once it launches, however, the company said clients will be able to run blockchain in production environments that let them quickly and easily access secure, partitioned blockchain networks.
To compete in today’s marketplace, companies are increasingly turning to public cloud architectures. They build out robust cloud infrastructures to achieve efficiency, reduce costs, and improve flexibility, with the ability to dial resources up and down according to business needs.
Adopting a cloud strategy often takes time; however, it doesn’t end with migration. Even when companies feel they have developed an effective cloud environment, managing it can be a challenge. At any given moment, companies with fully formed clouds might struggle to understand just how their instances are working—how much they’re spending on the cloud, who in their organization is spinning up cloud resources and how people are using those resources.
The Korora distribution is a desktop-oriented operating system built on Fedora. The Korora project has announced the availability of Korora 24, which is based on Fedora 24. The new version of Korora is available in four editions: Cinnamon, GNOME, MATE, and Xfce.
“Changes in Korora 24: Images are 64-bit only; 32-bit users can still upgrade. Over the last few versions the demand for 32-bit ISOs has markedly decreased to the point where we feel it’s no longer necessary to provide install images for the platform. Starting with Korora 24, images will be 64-bit (x86_64) only; however, those who have 32-bit systems already are still able to upgrade to Korora 24”…
To start, we’ll look at the ‘what’ of Serverless, where I try to remain as neutral as I can about the benefits and drawbacks of the approach – we’ll look at those topics later.