Developers want to change things as soon as they can, while operations teams remain apprehensive that changes will break stuff. To reconcile these two drives, Google forged the path of site reliability engineering (SRE), an emerging practice for maintaining complex computing systems that need to run with high reliability. As the founder of Google’s SRE team, Ben Treynor, put it: SRE is “what happens when a software engineer is tasked with what used to be called operations.”
SRE dates back to 2003, when Treynor joined Google to manage a team of engineers running a production environment. The practice proved to be a success, and the company now has 1,500 engineers working in SRE. Apple, Oracle, Microsoft, Twitter, Dropbox, IBM, and Amazon have all implemented their own SRE teams as well.
Serious distributions try to protect their repositories cryptographically against tampering and transmission errors. Arch Linux, Debian, Fedora, openSUSE, and Ubuntu all take different, complex, but conceptually similar approaches.
Many distributions develop, test, build, and distribute their software via a heterogeneous zoo of servers, mirrors, and workstations that makes central management and protection of the end product almost impossible. In terms of personnel, distributions also depend on the collaboration of a severely limited number of international helpers. This technical and human diversity opens a massive door to external and internal attackers who seek to infect popular distribution packages with malware. During updates, hundreds of thousands of Linux machines would then download and install the poisoned software with root privileges. The damage could hardly be greater.
The danger is less abstract than some might think: repeatedly in the past, projects have had to take down one or more servers after hacker attacks. Accordingly, all the major distributions (at least) are strongly motivated to protect themselves against planted packages, and their defenses boil down to two actions: one simple and one cryptographic.
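As a rough sketch of what those two actions typically look like in practice (checksums guard against transmission errors, signatures against deliberate tampering), a user or mirror operator might verify a download like this; the SHA256SUMS file names follow Ubuntu's convention and are just one example, not a universal standard:

```
# Simple: check the download against published SHA-256 checksums
# (catches transmission errors and accidental corruption)
sha256sum -c SHA256SUMS

# Cryptographic: verify the checksum file against the distribution's
# signing key (catches deliberate manipulation of the packages)
gpg --verify SHA256SUMS.gpg SHA256SUMS
```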
The only non-Linux systems left were a pair of Chinese IBM POWER computers running AIX near the bottom of the list, at positions 493 and 494. Since the November 2016 Top500, these supercomputers have dropped by over 100 places. At this rate, Linux will score a clean sweep in the next biannual Top500 competition.
Localization plays a central role in the ability to customize an open source project to suit the needs of users around the world. Besides coding, language translation is one of the main ways people around the world contribute to and engage with open source projects.
There are tools specific to the language services industry (surprised to hear that’s a thing?) that enable a smooth localization process with a high level of quality. Categories that localization tools fall into include:
The developers behind the open-source Hyperledger Fabric blockchain project have issued the software’s official release candidate.
Announced yesterday via the project’s public mailing list, the release effectively moves the software, one of several separate enterprise distributed ledger code bases being incubated under the Hyperledger umbrella, one step closer to a formal version 1.0 launch.
Jonathan Levi, founder of blockchain startup HACERA and a release manager for the project, sought to portray the announcement as a call to action to those seeking to leverage the software, framing it as evidence that Fabric is “getting serious” and moving steadily and pragmatically toward launch.
Docker can build an image by reading the build instructions from a file generally referred to as a Dockerfile. So, first, check your connectivity with the “dockerhost” and then create a folder called nginx. In that folder, we have created a file called Dockerfile, and in the Dockerfile, we have used different instructions, such as FROM, RUN, EXPOSE, and CMD.
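As a minimal sketch of what such a Dockerfile might contain (the base image and package chosen here are illustrative, not necessarily the ones used in the course):

```
# Start from an official Ubuntu base image
FROM ubuntu:16.04

# Install the nginx web server inside the image
RUN apt-get update && apt-get install -y nginx

# Document the port the web server listens on
EXPOSE 80

# Run nginx in the foreground so the container keeps running
CMD ["nginx", "-g", "daemon off;"]
```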
To build an image, we’ll need to use the docker build command. With the -t option, we can specify the image name, and with a “.” at the end, we tell Docker to look in the current folder for the Dockerfile and then build the image.
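For example, from inside the nginx folder created above (the image name and tag here are arbitrary choices):

```
# Build an image named "nginx" with the tag "latest" from the
# Dockerfile in the current directory (note the trailing ".")
docker build -t nginx:latest .

# Confirm the new image exists locally
docker image ls nginx
```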
On Docker Hub, we also see repositories for images such as nginx, redis, and busybox. For a given repository, there can be different tags, which define individual images. On the repository page, we can also see the Dockerfile from which an image is created; for example, you can see the Dockerfile of the nginx image.
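For instance, images are addressed by repository and tag (these particular tags exist for the official nginx repository, but check the repository page for the current list):

```
# Pull the default tag of the official nginx repository
docker image pull nginx:latest

# Pull a different tag from the same repository
docker image pull nginx:alpine
```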
If you don’t have an account on Docker Hub, I recommend creating one at this time. After logging in, you can see all the repositories we’ve created. Note that the repository name is prefixed with our username.
To push an image to Docker Hub, make sure that the image name is prefixed with the username used to log into the Docker Hub account. With the docker image push command, we can push the image to a Docker registry, which by default is Docker Hub.
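A sketch of the full push sequence, assuming the image built earlier; the username “myuser” and repository “mynginx” are placeholders:

```
# Log in to Docker Hub (the default registry)
docker login

# Re-tag the local image so its name is prefixed with the username
docker image tag nginx:latest myuser/mynginx:latest

# Push the image; by default it goes to Docker Hub
docker image push myuser/mynginx:latest
```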
Docker Hub has a feature called automated builds that can trigger a build on Docker Hub as soon as you commit code to your GitHub repository. On GitHub, we have a repository called docker-automated-build, containing a Dockerfile from which the image will be created. In a real-world example, we would have our application code alongside the Dockerfile.
To create the automated build, we need to first log into our Docker Hub account and then link our GitHub account with Docker Hub. Once the GitHub account is linked, we click on “Create” and then on “Create Automated Build.”
Next, we provide a short description and then click on “Create.” Then, we select the GitHub repository that we want to link with this Docker Hub automated build. Now, we can go to our GitHub repository and change something there. As soon as we commit the change, a Docker build process starts on our Docker Hub account.
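For instance, any small committed change is enough to kick off the build. The repository name comes from the example above; “<username>” is a placeholder:

```
# Clone the linked repository and make a trivial change
git clone https://github.com/<username>/docker-automated-build.git
cd docker-automated-build
echo "# trigger rebuild" >> Dockerfile

# Committing and pushing starts the automated build on Docker Hub
git add Dockerfile
git commit -m "Trigger automated build"
git push origin master
```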
Our image build is currently queued and will be scheduled eventually, at which point our image will be created. After that, anybody will be able to download the image.
This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.
We often talk about being on call as being a bad thing. For example, the night before I wrote this my phone woke me up in the middle of the night because something went wrong on a computer. That’s no fun! I was grumpy.
In this post, though, we’re going to talk about what you can learn from being on call and how it can make you a better software engineer! And to learn from being on call, you don’t necessarily need to get woken up in the middle of the night. By “being on call”, here, I mean “being responsible for your code when it breaks”. It could mean waking up to issues that happened overnight and needing to fix them during your workday!
The Internet Archive is a nonprofit digital library based in San Francisco. It provides free public access to collections of digitized materials, including websites, books, documents, papers, newspapers, music, video and software.
This article describes how we made full-text organic search faster, without scaling horizontally, allowing our users to search in just a few seconds across our collection of 35 million documents containing books, magazines, newspapers, scientific papers, patents, and much more.
In the last few years, we’ve been seeing some significant changes in the suggestions that security experts are making for password security. While previous guidance increasingly pushed complexity, in terms of password length, the mix of characters used, controls over password reuse, and forced periodic changes, specialists have been questioning whether all that complexity was actually working against security rather than promoting it.
Security specialists have also argued that forcing complexity down users’ throats has led to them writing passwords down or forgetting them and having to get them reset. They argued that replacing a password character with a digit or an uppercase character might make a password look complicated, but does not actually make it any less vulnerable to compromise. In fact, when users are forced to include a variety of characters in their passwords, they generally do so in very predictable ways. Instead of “password”, they might use “Passw0rd” or even “P4ssw0rd!”, but the variations don’t make the passwords significantly less guessable. People are just not very good at generating anything that’s truly random.
Do you have an Intel Skylake or Kaby Lake processor under your computer’s hood? Have you experienced unexplained application and system hiccups, data corruption, or data loss? It could be because your processor has hyper-threading enabled and is malfunctioning.
Henrique de Moraes Holschuh, a Debian Linux developer, revealed the Intel chip problem on the Debian developer list. Officially, Intel hasn’t acknowledged the problem, but engineers at Dell and Intel have told me that both the problem and its fix exist.
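One quick way to check whether a machine is even in the suspect category is to look at the CPU model name and whether hyper-threading is active; a “Thread(s) per core” value of 2 usually means hyper-threading is on. This only narrows things down, since the affected steppings vary:

```
# Show the CPU model name and threads per core
lscpu | grep -E 'Model name|Thread\(s\) per core'

# Alternatively, inspect the raw CPU identification directly
grep -m1 'model name' /proc/cpuinfo
```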