The only two non-Linux systems were a pair of Chinese IBM POWER computers running AIX near the bottom of the list, coming in at 493 and 494. Since the November 2016 Top500, these supercomputers have dropped by over 100 places. At this rate, Linux will score a clean sweep in the next biannual Top500 competition.
Localization plays a central role in the ability to customize an open source project to suit the needs of users around the world. Besides coding, language translation is one of the main ways people contribute to and engage with open source projects.
There are tools specific to the language services industry (surprised to hear that’s a thing?) that enable a smooth localization process with a high level of quality. Categories that localization tools fall into include:
The developers behind the open-source Hyperledger Fabric blockchain project have issued the software’s official release candidate.
Announced yesterday via the project’s public mailing list, the release effectively moves the software, one of several separate enterprise distributed ledger code bases being incubated under the Hyperledger umbrella, one step closer to a formal version 1.0 launch.
Jonathan Levi, founder of blockchain startup HACERA and a release manager for the project, sought to portray the announcement as a call to action to those seeking to leverage the software, framing it as evidence that Fabric is “getting serious” and moving steadily and pragmatically toward launch.
Docker can build an image by reading the build instructions from a file that’s generally referred to as a Dockerfile. So, first, check your connectivity with the “dockerhost” and then create a folder called nginx. In that folder, we have created a file called Dockerfile, and in it, we have used different instructions, like FROM, RUN, EXPOSE, and CMD.
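A minimal sketch of what such a Dockerfile might look like (the base image and package commands here are illustrative, not necessarily the course’s exact file):

    # Start from an Ubuntu base image
    FROM ubuntu:16.04
    # Install nginx from the distribution repositories
    RUN apt-get update && apt-get install -y nginx
    # Document the port the web server listens on
    EXPOSE 80
    # Run nginx in the foreground so the container keeps running
    CMD ["nginx", "-g", "daemon off;"]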
To build an image, we’ll need to use the docker build command. With the -t option, we can specify the image name, and with a “.” at the end, we are requesting Docker to look at the current folder to find the Dockerfile and then build the image.
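For example, assuming the Dockerfile above sits in the current folder (the image name mynginx is just a placeholder):

    # Build an image from the Dockerfile in the current directory;
    # -t names (tags) the resulting image
    docker build -t mynginx .
    # Verify the new image now exists locally
    docker image ls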
On the Docker Hub, we also see the repositories — for example, for nginx, redis, and busybox. For a given repository, you can have different tags, which define the individual images. On the repository page, we can also see the respective Dockerfile from which an image is created — for example, you can see the Dockerfile of the nginx image.
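To fetch a specific image, you refer to it as repository:tag; for instance (the tag shown is illustrative):

    # Pull a specific tagged image from the nginx repository
    docker image pull nginx:1.13
    # Omitting the tag pulls the "latest" tag by default
    docker image pull redis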
If you don’t have an account on Docker Hub, I recommend creating one at this time. After logging in, you can see all the repositories we’ve created. Note that the repository name is prefixed with our username.
To push an image to Docker Hub, make sure that the image name is prefixed with the username used to log into the Docker Hub account. With the docker image push command, we can push the image to the Docker Registry, which, by default, is the Docker Hub.
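A sketch of that flow, with <dockerhub-username> standing in for your actual account name:

    # Log in with your Docker Hub credentials
    docker login
    # Re-tag the local image so its name is prefixed with the account name
    docker tag mynginx <dockerhub-username>/mynginx
    # Push it; with no registry host specified, it goes to Docker Hub
    docker image push <dockerhub-username>/mynginx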
Docker Hub has a feature called automated builds, which can trigger a build on Docker Hub as soon as you commit code to your GitHub repository. On GitHub, we have a repository called docker-automated-build, containing a Dockerfile from which the image will be created. In a real-world example, we would have our application code alongside the Dockerfile.
To create the automated build, we need to first log into our Docker Hub account and then link our GitHub account with Docker Hub. Once the GitHub account is linked, we click on “Create” and then on “Create Automated Build.”
Next, we select the GitHub repository that we want to link with this automated build, provide a short description, and then click on “Create.” Now, we can go to our GitHub repository and change something there. As soon as we commit the change, a Docker build process will start on our Docker Hub account.
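The trigger can be as simple as this (the repository URL and the file edited are illustrative):

    # Clone the linked repository, change something, and push it back
    git clone https://github.com/<github-username>/docker-automated-build.git
    cd docker-automated-build
    echo "# trivial change" >> README.md
    git add README.md
    git commit -m "Trigger an automated build"
    git push origin master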
Our image build is currently queued; it will be scheduled eventually, and our image will be created. After that, anybody will be able to download the image.
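Once the build completes, pulling it works like any other public image (username placeholder again):

    # Anyone can now pull the automatically built image from Docker Hub
    docker image pull <dockerhub-username>/docker-automated-build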
This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.
We often talk about being on call as being a bad thing. For example, the night before I wrote this, my phone woke me up in the middle of the night because something went wrong on a computer. That’s no fun! I was grumpy.
In this post, though, we’re going to talk about what you can learn from being on call and how it can make you a better software engineer! And to learn from being on call, you don’t necessarily need to get woken up in the middle of the night. By “being on call”, here, I mean “being responsible for your code when it breaks”. It could mean waking up to issues that happened overnight and needing to fix them during your workday!
The Internet Archive is a nonprofit digital library based in San Francisco. It provides free public access to collections of digitized materials, including websites, books, documents, papers, newspapers, music, video and software.
This article describes how we made the full-text organic search faster — without scaling horizontally — allowing our users to search in just a few seconds across our collection of 35 million documents containing books, magazines, newspapers, scientific papers, patents, and much more.
In the last few years, we’ve been seeing some significant changes in the suggestions that security experts are making for password security. While previous guidance increasingly pushed complexity in terms of password length, the mix of characters used, controls over password reuse, and forced periodic changes, specialists have been questioning whether making passwords complex wasn’t actually working against security rather than promoting it.
Security specialists have also argued that forcing complexity down users’ throats has led to them writing passwords down or forgetting them and having to get them reset. They argued that replacing a password character with a digit or an uppercase character might make a password look complicated, but does not actually make it any less vulnerable to compromise. In fact, when users are forced to include a variety of characters in their passwords, they generally do so in very predictable ways. Instead of “password”, they might use “Passw0rd” or even “P4ssw0rd!”, but the variations don’t make the passwords significantly less guessable. People are just not very good at generating anything that’s truly random.
Do you have an Intel Skylake or Kaby Lake processor under your computer’s hood? Have you experienced unexplained application and system hiccups, data corruption, or data loss? It could be because your processor has hyper-threading enabled and is malfunctioning.
Henrique de Moraes Holschuh, a Debian Linux developer, revealed the Intel chip problem on the Debian developer list. Officially, Intel hasn’t acknowledged the problem, but engineers at Dell and Intel have told me that the problem, and its fix, exists.
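If you want a quick check on Linux, a rough sketch (the exact affected model numbers are documented in Intel’s errata, so treat this as a starting point, not a diagnosis):

    # Identify the CPU; Skylake and Kaby Lake parts show up in the model name
    grep -m 1 'model name' /proc/cpuinfo
    # Hyper-threading is enabled if there is more than one thread per core
    lscpu | grep 'Thread(s) per core'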
Type “devops” into any job search site today and the overwhelming majority of results will be for some variation of “DevOps Engineer”. The skills required will centre on tools like Puppet/Chef/Ansible, AWS/Azure, scripting in Python/Perl/Bash/PowerShell, and so on. Essentially, they’ve taken a deployment automation engineer role, crossed out “deployment automation”, and written “DevOps” in its place.
There’s nothing wrong with hiring deployment automation (or, if you must, DevOps) engineers if you don’t have enough people with the right skills to deliver the deployment automation part of your DevOps strategy. The real problem is when hiring DevOps engineers is your DevOps strategy.
Deployment automation is an ancient art compared to DevOps. How ancient? Here’s an abbreviated history (feel free to skip to the tl;dr if you’re not a history buff):
I had intended to turn this list into some kind of monster article or split it into several. I suck at getting around to writing things, so rather than let this fester unpublished, I’m going to publish it as is.
This experience comes from two-and-a-bit years evaluating and using Docker for actual work. Some of it is probably rubbish. The largest deployment I’ve managed is 50 nodes, so not uber scale. Some things will be different beyond that scale, but I’m pretty sure most of it applies below it.