Edge computing, like public cloud at scale, requires a convenient, powerful cloud software stack that can be deployed in a unified, efficient and sustainable way. Open source is leading the way.
When we think of cloud computing, most of us envision large-scale, centralized data centers running thousands of physical servers. As powerful as that vision sounds, it actually misses the biggest new opportunity: distributed cloud infrastructure.
Today, almost every company in every industry sector needs near-instant access to data and compute resources to be successful. Edge computing pushes applications, data and computing power services away from centralized data centers to the logical extremes of a network, close to users, devices and sensors. It enables companies to put the right data in the right place at the right time, supporting fast and secure access. The result is an improved user experience and, oftentimes, a valuable strategic advantage. The decision to implement an edge computing architecture is typically driven by the need for location optimization, security, and most of all, speed.
Linux kernel maintainer Willy Tarreau announced that the Linux 3.10 kernel series has reached end of life and will no longer receive maintenance updates patching critical security vulnerabilities.
The end of life was reached this past weekend with the release of Linux kernel 3.10.108, which is the last maintenance update for the Linux 3.10 branch. Therefore, users and OEMs are now urged to upgrade to a more recent, long-term supported Linux kernel, such as the Linux 4.4 LTS series.
This tutorial will show you how to install and secure an Nginx web server on Debian 9 with a TLS certificate issued for free by the Let’s Encrypt Certificate Authority. Furthermore, we will configure automatic renewal of Let’s Encrypt TLS certificates using a cron job before the certificates expire.
TLS, or Transport Layer Security, is a network protocol that uses SSL certificates to encrypt the traffic flowing between a server and a client, such as between an Nginx web server and a browser. All data exchanged between the two endpoints is secured, and the connection cannot be decrypted even if the traffic is intercepted through a man-in-the-middle attack or packet sniffing. The certbot package is the official client utility provided by the Let’s Encrypt CA, and it can be used to generate and download free Let’s Encrypt certificates on Debian.
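As a rough sketch of the workflow the tutorial covers (the package names reflect Debian 9, where the Nginx plugin may come from backports, and the domain name is a placeholder, not a value from this article):

```shell
# Install certbot and its Nginx plugin (on Debian 9 these may need
# the stretch-backports repository enabled)
sudo apt-get install certbot python-certbot-nginx

# Request a certificate and let certbot configure Nginx to use it;
# example.com is an illustrative domain, substitute your own
sudo certbot --nginx -d example.com -d www.example.com

# Cron entry (e.g. in /etc/cron.d/certbot) that attempts renewal daily;
# certbot only actually renews certificates that are close to expiry
0 3 * * * root /usr/bin/certbot renew --quiet
```

The `renew` subcommand is safe to run frequently because it is a no-op until a certificate approaches its expiry date.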
How can I see the content of a log file in real time in Linux? There are many utilities that can output the content of a file while the file is changing or continuously updating. The best-known and most heavily used utility for displaying file content in real time on Linux is the tail command.
1. tail Command – Monitor Logs in Real Time
As mentioned, the tail command is the most common way to display a log file in real time. The command can be used in two ways, as illustrated in the examples below.
In the first example, tail needs the -f flag to follow the content of a file as it grows.
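A minimal sketch of both forms, using a throwaway file under /tmp rather than a real system log:

```shell
# Write two lines to a sample log file
printf 'first line\nsecond line\n' > /tmp/demo.log

# Static form: print only the last line of the file and exit
tail -n 1 /tmp/demo.log

# Follow form: keep the file open and print new lines as they are
# appended (press Ctrl+C to stop); commented out here so the
# sketch terminates on its own
# tail -f /tmp/demo.log
```

With -f, tail does not exit; it blocks and streams every line appended to the file, which is what makes it useful for watching logs.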
Microsoft has recently increased its stake in the Kubernetes community through a variety of actions. For example, it acquired Deis, a company that specializes in Kubernetes container management technologies. And, Microsoft became a member of the Cloud Native Computing Foundation, which is the home of the Kubernetes project.
Microsoft also continues to increase its engagement with the Kubernetes community through a talented team of engineers. In 2016, Brendan Burns, one of the three co-founders of Kubernetes (along with Joe Beda and Craig McLuckie), left Google and joined Microsoft as a distinguished engineer.
Brendan Burns, Distinguished Engineer at Microsoft
We spoke with Burns at DockerCon Europe to find out more about Microsoft’s engagement with the Kubernetes community. Here is an edited version of that discussion:
Linux.com: First things first, why would an ex-Googler and Kubernetes co-founder join Microsoft?
Brendan Burns: Microsoft is a company with a history that’s unique in the world of computing. Microsoft is a company that has been enabling developer productivity. It has been helping people who may not have thought of themselves as application builders or developers in the first place. But Microsoft enabled them to become people who are capable of building applications.
I have seen this with my friends who I went to college with. They used products like Visual Basic and Access to build businesses or to do consulting work. These technologies empowered those people. I think cloud misses that. There is a gap where it’s hard to build reliable, scalable applications on the cloud.
I think that history of enabling developer productivity combined with a really great public cloud is an incredible opportunity to empower a whole new generation, a broader group of users to build these distributed applications. That’s why I am at Microsoft. I think it’s unique because this combination just doesn’t exist anywhere else in any other company.
Linux.com: What’s your role at Microsoft?
Burns: My role is to lead the teams that focus on containers and open source container orchestration within Microsoft. That includes managing the teams and making sure that we get the right people with the right skills, and it includes helping to set direction. It also involves writing some of the code myself. It’s a mix of everything that you would expect from engineering and technical leadership.
I’m really excited about trying to help Azure chart a direction into this new world and figuring out how to marry all of the skills that brought us really great developer tools like Visual Studio Code, with the skills of someone who is building a distributed application and who knows what it takes to deploy, manage, and operate a distributed application at scale.
I think there are a lot of people who work on development environments and a lot of people who build distributed systems, but there are fewer people who think about how they can come together, and that’s something that I’m pretty excited about as well. So, I’m trying to set that direction.
Linux.com: How is Microsoft consuming Kubernetes?
Burns: There are people who are building systems on top of Kubernetes. In fact, our Azure Container Service itself is deployed on Kubernetes. We also offer it as a service. In my capacity, I focus more on building a service for Azure users. The fact is, as big as Microsoft is, the world of public cloud is way bigger, so I want to build services that are useful and empowering to external users. I hope that by doing that, I build things that are useful for internal users as well.
Linux.com: What kind of engagement does Microsoft have with the Kubernetes community?
Burns: We contribute a lot of code. Some of this code is to make Azure work really well with Kubernetes. Some of it is code like Helm, an upstream open source project that is maintained primarily by Microsoft. It makes packaging easy, and it eases the deployment and management of containerized applications on top of Kubernetes.
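To give a sense of what Helm simplifies, here is a minimal sketch using the Helm v2 CLI that was current at the time of this interview (the chart and release names are illustrative, not taken from the interview):

```shell
# Install a chart from the stable repository as a named release;
# a chart bundles all the Kubernetes manifests an app needs
helm install stable/mysql --name my-db

# Roll the release forward when the chart or its values change
helm upgrade my-db stable/mysql

# Tear down the release and the Kubernetes resources it created
helm delete my-db
```

A single `helm install` replaces hand-applying many individual Kubernetes manifests, which is the packaging convenience Burns describes.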
We recently open sourced a project called Draft that is aimed at the developer side. We are trying to make it extremely easy for a developer, who may not have learned about containers or Kubernetes, to get started with those technologies and also go beyond that.
We participate in the leadership of a lot of open source governance and steering committees. Michelle Noorali, one of the Microsoft engineers, from my team was recently elected to the Community Steering Committee. I was on the bootstrap steering committee and continue to be on the Kubernetes steering committee. We also have representatives on the boards of the Open Container Initiative, the Cloud Native Computing Foundation, and we also contribute to Docker. Microsoft’s John Howard is the number four contributor of all time to the Docker project. So, as you can see, there are a lot of different ways in which Microsoft contributes its expertise and knowledge in this space.
At the OpenStack Summit in Australia, the open-source cloud effort announced a series of new initiatives to help improve integration across a variety of complementary cloud-native technologies.
On the first day of the event, several initiatives designed to improve and promote integration between OpenStack and other open-source cloud efforts were announced. Among the announcements were the Open Infrastructure Integration effort, the launch of the OpenLab testing tools program, the debut of the public cloud passport program, and the formation of a financial services team.
“We’ve really put some focus into the strategy for the OpenStack Foundation for the next five years,” Jonathan Bryce, executive director of the OpenStack Foundation, told eWEEK.
One of the most important pieces of any modern web application is the network. As applications become more distributed, it becomes crucial to reason about the network and its behavior in order to understand how a system will behave. Service meshes are more and more frequently proposed as a means of tackling this problem. If you’re not familiar with meshes, Matt Klein has a great intro to them, and Christian Posta has a great series on Patterns with Envoy.
Fundamentally, modern apps benefit from networking patterns like meshes for three reasons:
Scale: At the scale of most modern web applications, your traffic is a thing you manage. …
Five questions for Bryan Liles on the complexities of tracing, recommended tools and skills, and how to learn more about monitoring.
The first thing that makes tracing complex is understanding how it fits into your application monitoring stack. I like to break monitoring down into metrics, logs, and tracing. Tracing allows you to understand how your application’s components interact with one another and with any potential consumers. Secondly, finding a good toolset that works across a diverse application infrastructure is also complex. This is why I’m hoping to see OpenTracing become more successful, since it provides a good interface based on real-world work at Google and Twitter. Finally, tracing is complex because of the number of components involved. If you’re working on a large microservice-based application, you could have scores of microservices coupled with databases of many types and other applications as well. Combined with the tracing infrastructure, this leads to a large number of items to consider. OpenTracing helps again by providing standards and clients that simplify integration for developer and operations teams.
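The core idea behind the tracing Liles describes is the span: a timed unit of work that records its parent, so separate operations can be stitched into a request tree. The following is a toy illustration of that idea only; it is not the OpenTracing API, and all class and operation names are invented for the example:

```python
import time
import uuid


class Span:
    """A toy span: one timed unit of work, optionally linked to a parent."""

    def __init__(self, operation, parent=None):
        self.operation = operation
        self.parent = parent
        self.span_id = uuid.uuid4().hex[:8]
        self.start = time.time()
        self.end = None

    def finish(self):
        self.end = time.time()


class ToyTracer:
    """Tracks the active span stack and records finished spans."""

    def __init__(self):
        self.finished = []
        self._stack = []

    def start_span(self, operation):
        # A new span's parent is whatever span is currently active
        parent = self._stack[-1] if self._stack else None
        span = Span(operation, parent)
        self._stack.append(span)
        return span

    def finish_span(self, span):
        span.finish()
        self._stack.pop()
        self.finished.append(span)


tracer = ToyTracer()
request = tracer.start_span("handle_request")
db = tracer.start_span("query_database")  # child of handle_request
tracer.finish_span(db)
tracer.finish_span(request)

for span in tracer.finished:
    parent = span.parent.operation if span.parent else None
    print(span.operation, "parent:", parent)
# query_database parent: handle_request
# handle_request parent: None
```

A real tracer additionally propagates span context across process boundaries, which is exactly the standardization problem OpenTracing's interface addresses.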
Blockchain makes running an organization less costly. In the process, it introduces revolutionary degrees of transparency, inclusivity, and adaptability.
In an effort to not only understand blockchain itself, but also to discover the ways adopting it could change our approach to organizing today, I read several books on it. Blockchain Revolution, by father and son collaborators Don Tapscott and Alex Tapscott, is one of the most thoroughly researched I’ve encountered so far…
In the book, the authors raise two particularly interesting issues:
the impact of blockchain on organizational formation, and
the impact of blockchain on the ways we accomplish certain tasks
Pondering the first issue made me wonder: Why should organizations be formed in the first place, and how would blockchain technology “revolutionize” them according to open organization characteristics? I’ll explore that question in the first part of this two-part book review.
The second issue prompted me to think: How would our approaches to various tried-and-true organizational tasks change with the introduction of blockchain technology? I’ll address that one next time.
Containerization is changing how organizations deploy and use software. You can now deploy almost any software reliably with just the docker run command. And with orchestration platforms like Kubernetes and DC/OS, even production deployments are easy to set up.
You may have already experimented with Docker, and have maybe run a few containers. But one thing you might not have much experience with is understanding how Docker containers behave under different loads.
Because Docker containers, from the outside, can look a lot like black boxes, it’s not obvious to a lot of people how to go about getting runtime metrics and doing analysis.
In this post, we will set up a small CrateDB cluster with Docker and then go through some useful Docker commands that let us take a look at performance.
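The basic shape of that workflow looks like the following sketch (the container name is illustrative; a full CrateDB cluster setup involves more flags than shown here):

```shell
# Start a single CrateDB node in the background from the official image
docker run -d --name crate01 crate

# One-shot snapshot of CPU, memory, network, and block I/O usage
# (omit --no-stream for a continuously updating view)
docker stats --no-stream crate01

# List the processes running inside the container
docker top crate01
```

`docker stats` is usually the first stop for runtime metrics, since it reads the container's cgroup accounting without requiring any agent inside the container.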