Kubernetes is an open source container management platform designed to run enterprise-class, cloud-enabled and web-scalable IT workloads. It is built on the foundation laid by Google's 15 years of experience running containerized applications.
Though their popularity is a mostly recent trend, the concept of containers has existed for over a decade. Mainstream Unix-based operating systems (OS), such as Solaris, FreeBSD and Linux, had built-in support for containers, but it was Docker that truly democratized containers by making them manageable and accessible to both the development and IT operations teams. Docker has demonstrated that containerization can drive the scalability and portability of applications. Developers and IT operations are turning to containers for packaging code and dependencies written in a variety of languages. Containers are also playing a crucial role in DevOps processes. They have become an integral part of build automation and continuous integration and continuous deployment (CI/CD) pipelines.
While container runtimes center on the life cycle of individual containers, production applications typically involve workloads with dozens of containers running across multiple hosts. Orchestrating that many hosts and containers in production demands a new set of management tools. Some of the popular solutions include Docker Datacenter, Kubernetes, and Mesosphere DC/OS.
The open source Kubernetes container management project is probably the most popular of the various competing container management services available today. The Cloud Native Computing Foundation, which plays host to the open source side of Kubernetes, is hosting its first Kubernetes conference this week, and unsurprisingly, we’ll see quite a bit of container-related news in the next few days.
First up is Microsoft, which is not only making the source code of the engine at the core of its Azure Container Service (ACS) available, but also launching a preview of its native integration of Kubernetes for ACS. In addition, Microsoft is continuing to bet on Mesosphere’s DC/OS and updating that service to the latest release of DC/OS.
It’s easy to think we’ve reached peak Bitcoin, but the blockchain at the heart of cryptocurrencies contains the seeds of something revolutionary.
The blockchain is a decentralised electronic ledger with duplicate copies on thousands of computers around the world. It cannot be altered retrospectively, allowing asset ownership and transfer to be recorded without external verification.
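To make that tamper-evidence property concrete, here is a minimal sketch of a hash-linked ledger in Python (the function and field names are invented for illustration; real blockchains add networking, consensus, and proof-of-work on top of this idea). Each entry commits to the hash of its predecessor, so altering any past record invalidates every hash that follows it.

    import hashlib
    import json

    def entry_hash(payload):
        # Hash the canonical JSON form of an entry's payload,
        # including the link to its predecessor.
        return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

    def append(ledger, record):
        # Each new entry commits to the hash of the previous entry.
        prev = ledger[-1]["hash"] if ledger else "0" * 64
        entry = {"record": record, "prev": prev}
        entry["hash"] = entry_hash({"record": record, "prev": prev})
        ledger.append(entry)

    def verify(ledger):
        # Recompute every hash and back-link; one altered record
        # breaks the chain from that point forward.
        prev = "0" * 64
        for entry in ledger:
            payload = {"record": entry["record"], "prev": entry["prev"]}
            if entry["prev"] != prev or entry["hash"] != entry_hash(payload):
                return False
            prev = entry["hash"]
        return True

    ledger = []
    append(ledger, {"from": "alice", "to": "bob", "amount": 5})
    append(ledger, {"from": "bob", "to": "carol", "amount": 2})
    print(verify(ledger))                  # True
    ledger[0]["record"]["amount"] = 500    # retrospective edit
    print(verify(ledger))                  # False: tampering detected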
Investors have now realised the blockchain is bigger than Bitcoin. In the first quarter of 2016, venture-capital investment in blockchain startups overtook that in pure-play Bitcoin companies for the first time, according to industry researcher CoinDesk, which has tallied $1.1 billion (£840m) in deals to date.
Some companies cannot afford to have their services go down. A server outage at a cellular operator, for example, might take the billing system offline and cut off connectivity for all of its clients. Acknowledging the potential impact of such situations leads to the idea of always having a plan B.
In this article, we shed light on different ways of protecting against server failures, as well as the architectures used to deploy VMmanager Cloud, a control panel for building a High Availability cluster.
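As a taste of the underlying idea before diving into those architectures, here is a minimal sketch of heartbeat-based failover in Python (the node addresses and the /health endpoint are hypothetical, and VMmanager Cloud's actual mechanism is more involved): a watchdog polls the primary node and promotes a standby only after several consecutive failed health checks.

    import time
    import urllib.request

    # Hypothetical primary and standby nodes exposing a /health endpoint.
    NODES = ["http://10.0.0.1:8080", "http://10.0.0.2:8080"]
    FAILURES_BEFORE_FAILOVER = 3

    def healthy(node, timeout=2):
        # A node is healthy if its health endpoint answers HTTP 200.
        try:
            with urllib.request.urlopen(node + "/health", timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    def watchdog(nodes):
        primary, strikes = 0, 0
        while True:
            if healthy(nodes[primary]):
                strikes = 0
            else:
                strikes += 1
                # Require consecutive failures so a transient network
                # blip does not trigger a spurious failover.
                if strikes >= FAILURES_BEFORE_FAILOVER:
                    primary = (primary + 1) % len(nodes)
                    strikes = 0
                    print("failing over to", nodes[primary])
            time.sleep(5)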
The open spec Orange Pi PC 2 runs Linux or Android on a quad-core Cortex-A53 Allwinner H5 SoC, and offers GbE, a 40-pin RPi interface, and three USB host ports.
Shenzhen Xunlong is keeping up its prolific pace in spinning off new Allwinner SoCs into open source SBCs, and now it has released its first 64-bit ARM model, and one of the cheapest quad-core Cortex-A53 boards around. The Orange Pi PC 2 runs Linux or Android on a new Allwinner H5 SoC featuring four Cortex-A53 cores and a more powerful Mali-450 GPU. The Orange Pi PC 2, which sells at Aliexpress for $19.98, or $23.33 including shipping to the U.S., updates the quad-core Cortex-A7 Allwinner H3 based Orange Pi PC, which came in 14th out of 81 SBCs in our hacker boards reader survey.
In December 2009, Google was the target of a series of highly coordinated, sophisticated advanced persistent threat (APT) attacks in which state-sponsored hackers from China stole intellectual property and sought to access and potentially modify Google source code, the company's crown jewels. Dubbed Operation Aurora, the attack proved to be a referendum at Google on the layered, perimeter-based security model.
Five years later, in 2014, Google published a paper titled “BeyondCorp: A New Approach to Enterprise Security,” which detailed the company's radical security overhaul, transitioning to a trustless model where all applications live on the public Internet. Google wrote:
Virtually every company today uses firewalls to enforce perimeter security. However, this security model is problematic because, when that perimeter is breached, an attacker has relatively easy access to a company's privileged intranet. As companies adopt mobile and cloud technologies, the perimeter is becoming increasingly difficult to enforce. Google is taking a different approach… We are removing the requirement for a privileged intranet and moving our corporate applications to the Internet.
Yet while much of the world is in the throes of adopting the open, on-demand IT paradigm characterized by agility and elasticity that Google helped define, security has yet to be reimagined in the image of cloud and DevOps, much less Google.
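The core shift BeyondCorp describes is that access decisions stop depending on where a request comes from. Here is a toy sketch of that idea in Python (the attribute names, device inventory, and policy tiers are invented for illustration, not Google's implementation): every request is evaluated on the authenticated user and the state of the device, and the source network never enters the decision.

    from dataclasses import dataclass

    @dataclass
    class Request:
        user: str             # identity proven by SSO, not by network location
        device_id: str        # identity proven by a device certificate
        device_patched: bool  # posture reported by a device inventory
        app: str

    # Hypothetical device inventory and per-application access tiers.
    MANAGED_DEVICES = {"laptop-1234", "laptop-5678"}
    APP_TIERS = {"payroll": "high", "wiki": "low"}

    def authorize(req):
        # Note what is absent: no source-IP or "internal network" check.
        # Trust derives entirely from user identity and device state.
        if req.device_id not in MANAGED_DEVICES:
            return False
        if APP_TIERS.get(req.app, "high") == "high" and not req.device_patched:
            return False
        return True

    print(authorize(Request("alice", "laptop-1234", True, "payroll")))  # True
    print(authorize(Request("alice", "laptop-9999", True, "wiki")))     # False: unmanaged device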
In the second Kali Linux article, the network tool known as ‘nmap’ will be discussed. While nmap isn’t a Kali-only tool, it is one of the most useful network mapping tools in Kali.
Nmap, short for Network Mapper, is maintained by Gordon Lyon (more about Mr. Lyon here: http://insecure.org/fyodor/) and is used by many security professionals all over the world. The utility works in both Linux and Windows and is command-line (CLI) driven. However, for those a little more timid of the command line, there is a wonderful graphical front end for nmap called zenmap.
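For readers who would rather script scans than use zenmap, here is a minimal sketch that drives nmap from Python and parses its XML report (the scan helper is hypothetical; -sV, -p, and -oX are standard nmap flags, and scanme.nmap.org is the host nmap's maintainers provide for test scans; only scan hosts you are authorized to probe).

    import subprocess
    import xml.etree.ElementTree as ET

    def scan(target, ports="1-1024"):
        # -sV probes open ports for service and version info;
        # -oX - writes the XML report to stdout for parsing.
        result = subprocess.run(
            ["nmap", "-sV", "-p", ports, "-oX", "-", target],
            capture_output=True, text=True, check=True,
        )
        for port in ET.fromstring(result.stdout).iter("port"):
            state = port.find("state").get("state")
            service = port.find("service")
            name = service.get("name") if service is not None else "?"
            print(port.get("portid"), state, name)

    scan("scanme.nmap.org")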
The Linux Foundation today released its third annual “Guide to the Open Cloud” report on current trends and open source projects in cloud computing.
The report aggregates and analyzes industry research to provide insights on how trends in containers, microservices, and more shape cloud computing today. It also defines the open source cloud and cloud native computing and discusses why the open cloud is important to just about every industry.
“From banking and finance to automotive and healthcare, companies are facing the reality that they’re now in the technology business. In this new reality, cloud strategies can make or break an organization’s market success. And successful cloud strategies are built on Linux and open source software,” according to the report.
A list of 75 projects at the end of the report serves as a directory for IT managers and practitioners looking to build, manage, and monitor their cloud resources. These are the projects to know about, try out, and contribute to in order to ensure your business stays competitive in the cloud.
The projects are organized into key categories of cloud infrastructure including IaaS, PaaS, virtualization, containers, cloud operating systems, DevOps, configuration management, logging and monitoring, software-defined networking (SDN), software-defined storage, and networking for containers.
New this year is the addition of a section on container management and automation tools, which is a hot area for development as companies race to fill the growing need to manage highly distributed, cloud-native applications. Traditional DevOps CI/CD tools have also been collected in a separate category, though functionality can overlap.
These additions reflect a movement toward the use of public cloud services and microservices architectures, which is changing the nature of open source cloud computing.
“A whole new class of open source cloud computing projects has now begun to leverage the elasticity of the public cloud and enable applications designed and built to run on it,” according to the report.
To learn more about current trends in cloud computing and to see a full list of the most useful, influential, and promising open source cloud projects, download the report now.
Twitter runs a massively complex infrastructure comprising thousands of services, so small efficiencies result in large gains. But figuring out how to measure performance is a giant problem in a system this complex, as is giving Twitter’s teams the incentive and tools to improve resource allocation. Vinu Charanya and Michael Benedict’s talk at LinuxCon North America goes into fascinating detail on the metering and chargeback system Twitter engineers built to solve this problem, using both a technical and social approach.
One of the events responsible for the creation of this system was the 2010 World Cup. Twitter engineers anticipated several times greater demand and scaled up to meet it. But the scale-up was not entirely successful. This resulted in a fundamental architecture change, breaking down functionality into multiple independent microservices.
In 2014, Ellen DeGeneres tweeted a selfie from the Oscars podium, which exposed additional weaknesses in the system. It was retweeted so many times and so fast that the original tweet became inaccessible for over an hour. Diagnosing exactly what went wrong was not easy. Benedict says, “Given the scale and size of Twitter, it’s important to really understand what is really the overall use of infrastructure platform resources across all of these services. How do you know who’s really using what? Given these number of services and number of teams at Twitter, it’s extremely important to understand how we can start capturing the utilization of resources per team, per project, per hour. Finally, how do you really incentivize the right behavior for these engineers, the team leads, the managers, to do the right thing in using our resources?”
Four Challenges
Vinu Charanya describes the Chargeback system that they built to address these problems. She says, “Chargeback provides the ability to track and measure infrastructure usage on a per engineering team basis and charge each owner their usage cost accordingly. Keeping this in mind as we started designing the system, we identified the top four challenges.
“Number one: service identity. We designed a generic service identification abstraction that provides a canonical way to identify a service across infrastructures.
“Number two: resource catalog. We worked with the infrastructure teams to identify and abstract resources that can be published for developers to consume and build.
“Number three: metering. Each infrastructure graphs the consumption of resources by each service through their service identifiers. We built a classic ETL data pipeline to collect all the usage metrics to aggregate and process them in a central location.
“Number four: service metadata. We also built a service metadata system that keeps track of ops and other service-related metadata.”
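As a rough sketch of how those four pieces might fit together (plain Python with invented service names and unit rates; Twitter's actual pipeline operates at a vastly larger scale): metered usage records tagged with a canonical service identifier are mapped to owning teams via service metadata and priced against the resource catalog.

    from collections import defaultdict

    # Resource catalog: unit prices per resource type (hypothetical rates).
    CATALOG = {"cpu_core_hours": 0.05, "gb_ram_hours": 0.01}

    # Service metadata: canonical service identifier -> owning team.
    SERVICE_OWNERS = {"tweet-service": "timelines", "media-service": "media"}

    # Metering output: usage records keyed by service identifier,
    # as an ETL pipeline might aggregate them per hour.
    USAGE = [
        {"service": "tweet-service", "resource": "cpu_core_hours", "amount": 1200},
        {"service": "tweet-service", "resource": "gb_ram_hours", "amount": 4800},
        {"service": "media-service", "resource": "cpu_core_hours", "amount": 300},
    ]

    def chargeback(usage):
        # Aggregate metered usage per owning team and price it
        # against the catalog to produce each team's bill.
        bills = defaultdict(float)
        for record in usage:
            team = SERVICE_OWNERS[record["service"]]
            bills[team] += record["amount"] * CATALOG[record["resource"]]
        return dict(bills)

    print(chargeback(USAGE))  # {'timelines': 108.0, 'media': 15.0}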
The end result of Chargeback is three reports for users: a Chargeback bill, an infrastructure profit-and-loss report, and a budgeting report.
Chargeback not only gives Twitter teams measurements of their resource usage and real-world costs, it is also an amazing tool for understanding exactly what is happening inside this huge, fast-moving, interdependent system. Watch Charanya and Benedict’s talk (below) to learn more about the tools and architecture of this most bleeding-edge of technologies.