Although often used interchangeably, performance testing and load testing are not exactly the same. Performance testing is the more general practice of testing an application’s responsiveness and stability under real-life scenarios.
Load testing is a specific subset of performance testing that is meant to determine the application’s quality of service when it is being used by a specific number of users simultaneously. Load testing software simulates numerous simultaneous users throughout the application, allowing you to identify bottlenecks caused by heavy traffic or by massive concurrency.
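To make this concrete, here is a rough sketch of what a minimal load test can look like from the command line, using the ApacheBench (ab) tool; the URL is a placeholder for your own application:

# Send 1,000 requests, 50 at a time, and report latency and throughput
ab -n 1000 -c 50 https://staging.example.com/

Dedicated load testing suites offer far richer scenarios, but even a one-liner like this can surface obvious concurrency bottlenecks.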
Load testing may seem like an obvious practice, but it can be overlooked. When you test your own application each day, it is easy to forget that what you see and what your users see may be very different. Any client-server application needs load testing in order to determine its limits and improve the user experience.
Staging environments
Testing in a staging environment is key to performance testing of all stripes. Having one gives you a buffer between your development environment and the client’s production environment. This allows you to catch errors and slowdowns before you push to production, thus keeping your clients happy. There are many tools and services that can help your team set up a staging environment for your site, including SiteGround and Vagrant. Use them to create a staging environment that closely resembles real-world use.
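For example, Vagrant can stand up a disposable staging VM in a couple of commands. The box name below is only an example, and this assumes Vagrant plus a provider such as VirtualBox are already installed:

# Create a Vagrantfile for an example Ubuntu box, then boot and log in to it
vagrant init hashicorp/bionic64
vagrant up
vagrant ssh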
Open source is the new normal in tech today, with open components and platforms driving mission-critical processes at organizations everywhere. As open source has become more pervasive, it has also profoundly impacted the job market. Across industries, the skills gap is widening, making it ever more difficult to hire people with much-needed job skills. That’s why open source training and certification are more important than ever, and this series aims to help you learn more and achieve your own certification goals.
In the first article in the series, we explored why certification matters so much today. In the second article, we looked at the kinds of certifications that are making a difference. This story will focus on preparing for exams, what to expect during an exam, and how testing for open source certification differs from traditional types of testing.
Clyde Seepersad, General Manager of Training and Certification at The Linux Foundation, stated, “For many of you, if you take the exam, it may well be the first time that you’ve taken a performance-based exam and it is quite different from what you might have been used to with multiple choice, where the answer is on screen and you can identify it. In performance-based exams, you get what’s called a prompt.”
As a matter of fact, many Linux-focused certification exams literally prompt test takers at the command line. The idea is to demonstrate skills in real time in a live environment, and the best preparation for this kind of exam is practice, backed by training.
Know the requirements
“Get some training,” Seepersad emphasized. “Get some help to make sure that you’re going to do well. We sometimes find folks have very deep skills in certain areas, but then they’re light in other areas. If you go to the website for Linux Foundation training and certification, for the LFCS and the LFCE certifications, you can scroll down the page and see the details of the domains and tasks, which represent the knowledge areas you’re supposed to know.”
Once you’ve identified the skills you need, “really spend some time on those and try to identify whether you think there are areas where you have gaps. You can figure out what the right training or practice regimen is going to be to help you get prepared to take the exam,” Seepersad said.
Practice, practice, practice
“Practice is important, of course, for all exams,” he added. “We deliver the exams in a bit of a unique way — through your browser. We’re using a terminal emulator on your browser and you’re being proctored, so there’s a live human who is watching you via video cam, your screen is being recorded, and you’re having to work through the exam console using the browser window. You’re going to be asked to do something live on the system, and then at the end, we’re going to evaluate that system to see if you were successful in accomplishing the task.”
What if you run out of time on your exam, or simply don’t pass because you couldn’t perform the required skills? “I like the phrase, exam insurance,” Seepersad said. “The way we take the stress out is by offering a ‘no questions asked’ retake. If you take either exam, LFCS or LFCE, and you do not pass on your first attempt, you are automatically eligible to have a free second attempt.”
The Linux Foundation intentionally maintains separation between its training and certification programs and uses an independent proctoring solution to monitor candidates. It also requires that all certifications be renewed every two years, which gives potential employers confidence that skills are current and have been recently demonstrated.
Free certification guide
Becoming a Linux Foundation Certified System Administrator or Engineer is no small feat, so the Foundation has created this free certification guide to help you with your preparation. In this guide, you’ll find:
Critical things to keep in mind on test day
An array of both free and paid study resources to help you be as prepared as possible
A few tips and tricks that could make the difference at exam time
A checklist of all the domains and competencies covered in the exam
With certification playing a more important role in securing a rewarding long-term career, careful planning and preparation are key. Stay tuned for the next article in this series that will answer frequently asked questions pertaining to open source certification and training.
Kubernetes security has come a long way since the project’s inception, but still contains some gotchas. Starting with the control plane, building up through workload and network security, and finishing with a projection into the future of security, here is a list of handy tips to help harden your clusters and increase their resilience if compromised.
Part One: The Control Plane
The control plane is Kubernetes’ brain. It has an overall view of every container and pod running on the cluster, can schedule new pods (which can include containers with root access to their parent node), and can read all the secrets stored in the cluster. This valuable cargo needs protecting from accidental leakage and malicious intent: when it’s accessed, when it’s at rest, and when it’s being transported across the network.
1. TLS Everywhere
TLS should be enabled for every component that supports it to prevent traffic sniffing, verify the identity of the server, and (for mutual TLS) verify the identity of the client.
Note that some components and installation methods may enable local ports over HTTP and administrators should familiarize themselves with the settings of each component to identify potentially unsecured traffic.
This network diagram by Lucas Käldström demonstrates some of the places TLS should ideally be applied: between every component on the master, and between the Kubelet and API server. Kelsey Hightower’s canonical Kubernetes The Hard Way provides detailed manual instructions, as does etcd’s security model documentation.
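As a rough illustration of what “TLS everywhere” can look like in practice, these are some of the kube-apiserver flags that secure client, kubelet, and etcd traffic. The certificate paths are assumptions; where your certificates actually live depends on how your cluster was provisioned:

# Serve the API over TLS, verify client certificates, and talk to etcd over mutual TLS
kube-apiserver \
  --tls-cert-file=/etc/kubernetes/pki/apiserver.crt \
  --tls-private-key-file=/etc/kubernetes/pki/apiserver.key \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --kubelet-certificate-authority=/etc/kubernetes/pki/ca.crt \
  --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt \
  --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key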
Google has not had any of its 85,000+ employees successfully phished on their work-related accounts since early 2017, when it began requiring all employees to use physical Security Keys in place of passwords and one-time codes, the company told KrebsOnSecurity.
Security Keys are inexpensive USB-based devices that offer an alternative approach to two-factor authentication (2FA), which requires the user to log in to a Web site using something they know (the password) and something they have (e.g., a mobile device).
A Google spokesperson said Security Keys now form the basis of all account access at Google.
The basic idea behind two-factor authentication is that even if thieves manage to phish or steal your password, they still cannot log in to your account unless they also hack or possess that second factor.
In this video from ISC 2018, John Bent and Jay Lofstead describe how the IO500 benchmark measures storage performance in HPC environments. The second IO500 list was revealed at ISC 2018 in Frankfurt, Germany.
Following the success of the Top500 in collecting and analyzing historical trends in supercomputer technology and evolution, the IO500 was created in 2017 and published its first list at SC17. The need for such an initiative has long been known within High Performance Computing; however, defining appropriate benchmarks had long been challenging. Despite this challenge, the community, after long and spirited discussion, finally reached consensus on a suite of benchmarks and a metric for resolving the scores into a single ranking.
The multi-fold goals of the benchmark suite are as follows:
Maximizing simplicity in running the benchmark suite
Encouraging complexity in tuning for performance
Allowing submitters to highlight their “hero run” performance numbers
Forcing submitters to simultaneously report performance for challenging IO patterns
When I published the highlights of my journey switching from Windows to Linux on my everyday laptop, I was floored at the engagement it received across all corners of the web. I also voiced an admittedly wrong assumption within the article itself that it wouldn’t attract many eyeballs, and yet it became one of my most viewed pieces this year. From where I’m sitting, that tells me a ton of people are interested in, or at least actively curious about, ditching Windows and making the jump to Linux. Read more at Forbes.
Timers add yet another way of starting services, based on… well, time. Although similar to cron jobs, systemd timers are slightly more flexible. Let’s see how they work.
So you will “improve” your Minetest setup by creating a timer that will run the game’s server 1 minute after boot up has finished instead of right away. The reason for this could be that you want your service to do other stuff, like sending emails to the players telling them the game is available, so you want to make sure other services (like the network) are fully up and running before doing anything fancy.
Jumping in at the deep end, your minetest.timer unit will look like this:
# minetest.timer
[Unit]
Description=Runs the minetest.service 1 minute after boot up
[Timer]
OnBootSec=1 m
Unit=minetest.service
[Install]
WantedBy=basic.target
Not hard at all.
As usual, you have a [Unit] section with a description of what the unit does. Nothing new there. The [Timer] section is new, but it is pretty self-explanatory: it contains information on when the service will be triggered and the service to trigger. In this case, the OnBootSec is the directive you need to tell systemd to run the service after boot has finished.
Other directives you could use are listed below; a short example combining two of them follows the list:
OnActiveSec=, which tells systemd how long to wait after the timer itself is activated before starting the service.
OnStartupSec=, on the other hand, tells systemd how long to wait after systemd was started before starting the service.
OnUnitActiveSec= tells systemd how long to wait after the service the timer is activating was last activated.
OnUnitInactiveSec= tells systemd how long to wait after the service the timer is activating was last deactivated.
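Here is a sketch of how two of these directives can work together in one timer. The backup.service unit is hypothetical; the point is that OnBootSec= fires once after boot, and OnUnitActiveSec= then re-triggers the service an hour after each activation:

# backup.timer (hypothetical example)
[Unit]
Description=Runs backup.service 15 minutes after boot, then every hour

[Timer]
OnBootSec=15 min
OnUnitActiveSec=1 h
Unit=backup.service

[Install]
WantedBy=basic.target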
Continuing down the minetest.timer unit, the basic.target is usually used as a synchronization point for late boot services. It makes minetest.timer wait until local mount points and swap devices are mounted and until sockets, timers, path units, and other basic initialization processes are running, before letting minetest.timer start. As we explained in the second article on systemd units, targets are like the old run levels and can be used to put your machine into one state or another or, as here, to tell your service to wait until a certain state has been reached.
The minetest.service you developed in the first two articles ended up looking something like this:
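A minimal sketch, assuming a dedicated minetest user and a hypothetical mtsendmail.sh notification script; the exact paths and helper in your unit may differ:

# minetest.service
[Unit]
Description=Runs the Minetest game server

[Service]
Type=simple
User=minetest
ExecStart=/usr/games/minetest --server
# Hypothetical helper that emails players once the server is up
ExecStartPost=/home/minetest/bin/mtsendmail.sh "Minetest is up"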
What you are doing is stripping out those hacky pauses in the Bash script. Systemd does the waiting now.
Making it work
To make sure things work, disable minetest.service:
sudo systemctl disable minetest
so it doesn’t get started when the system starts; and, instead, enable minetest.timer:
sudo systemctl enable minetest.timer
Now you can reboot your server machine and, when you run sudo journalctl -u "minetest.*", you will see how first the minetest.timer unit gets executed and then the minetest.service starts up after a minute… more or less.
Figure 1: The minetest.service gets started one minute after the minetest.timer… more or less.
A Matter of Time
A couple of clarifications about why the minetest.timer entry in systemd’s journal shows its start time as 09:08:33 while the minetest.service starts at 09:09:18, that is, less than a minute later: First, remember we said that the OnBootSec= directive calculates when to start a service from the moment boot is complete. By the time minetest.timer comes along, boot finished a few seconds earlier.
The other thing is that systemd gives itself a margin of error (by default, 1 minute) to run stuff. This helps distribute the load when several resource-intensive processes are running at the same time: by giving itself a minute, systemd can wait for some processes to power down. It also means that minetest.service will start somewhere between the 1-minute and the 2-minute mark after boot is completed, but exactly when within that range is anybody’s guess. (If you need tighter scheduling, the AccuracySec= directive in the [Timer] section shrinks that margin.)
Another thing you can do is check when all the timers on your system are scheduled to run and the last time they ran:
systemctl list-timers --all
Figure 2: Check when your timers are scheduled to fire or when they fired last.
The final thing to take into consideration is the format you should use to express the periods of time. Systemd is very flexible in that respect: 2 h, 2 hours or 2hr will all work to express a 2 hour delay. For seconds, you can use seconds, second, sec, and s, the same way as for minutes you can use minutes, minute, min, and m. You can see a full list of time units systemd understands by checking man systemd.time.
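As a small aside, recent versions of systemd can also tell you how they parse a given time span, which is handy for checking a format before putting it in a unit file (this assumes your systemd ships the timespan verb):

# Show how systemd normalizes a human-readable time span
systemd-analyze timespan "2 h 30 min"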
Next Time
You’ll see how to use calendar dates and times to run services at regular intervals and how to combine timers and device units to run services at a defined point in time after you plug in some hardware.
See you then!
Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.
This month saw the release of a fascinating oral history, in which 76-year-old Brian Kernighan remembers the origins of the Unix command grep.
Kernighan is already a legend in the world of Unix — recognized as the man who coined the term Unix back in 1970. His last initial also became the “k” in awk — and the “K” when people cite the iconic 1978 “K&R book” about C programming. The original Unix Programmer’s Manual calls Kernighan an “expositor par excellence,” and since 2000 he’s been a computer science professor at Princeton University — after 30 years at the historic Computing Science Research Center at Bell Laboratories.
The original Unix Programmer’s Manual calls grep one of “the more memorable mini-revolutions” that Unix experienced, saying it irrevocably ingrained the “tools” outlook into Unix. “Already visible in utilities such as wc, cat, and uniq, the stream-transformation model was deliberately followed in the design of later programs such as tr, m4, sed, and a flurry of language preprocessors.” grep, of course, is used when searching for text patterns — whether that text is coming from input files or from “piped” text (output from another command).
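Both styles of use are everyday one-liners. For instance (the file name and patterns here are just placeholders):

# Search a file for a pattern, printing matching line numbers
grep -n 'mini-revolutions' manual.txt
# Search the piped output of another command, ignoring case
dmesg | grep -i usb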
Learn how to write for an international audience in this article from our archives.
Writing in English for an international audience does not necessarily put native English speakers in a better position. On the contrary, they tend to forget that the document’s language might not be the first language of the audience. Let’s have a look at the following simple sentence as an example: “Encrypt the password using the ‘foo bar’ command.”
Grammatically, the sentence is correct. Given that “-ing” forms (gerunds) are frequently used in the English language, most native speakers would probably not hesitate to phrase a sentence like this. However, on closer inspection, the sentence is ambiguous: The word “using” may refer either to the object (“the password”) or to the verb (“encrypt”). Thus, the sentence can be interpreted in two different ways:
Encrypt the password that uses the ‘foo bar’ command.
Encrypt the password by using the ‘foo bar’ command.
As long as you have previous knowledge about the topic (password encryption or the ‘foo bar’ command), you can resolve this ambiguity and correctly decide that the second reading is the intended meaning of this sentence. But what if you lack in-depth knowledge of the topic? What if you are not an expert but a translator with only general knowledge of the subject? Or, what if you are a non-native speaker of English who is unfamiliar with advanced grammatical forms?
Know Your Audience
Even native English speakers may need some training to write clear and straightforward technical documentation. Raising awareness of usability and potential problems is the first step. This article, based on my talk at Open Source Summit EU, offers several useful techniques. Most of them are useful not only for technical documentation but also for everyday written communication, such as writing email or reports.
1. Change perspective. Step into your audience’s shoes. Step one is to know your intended audience. If you are a developer writing for end users, view the product from their perspective. The persona technique can help to focus on the target audience and to provide the right level of detail for your readers.
2. Follow the KISS principle. Keep it short and simple. The principle can be applied to several levels, like grammar, sentences, or words. Here are some examples:
Words: Uncommon and long words slow down reading and might be obstacles for non-native speakers. Use simpler alternatives:
“utilize” → “use”
“indicate” → “show”, “tell”, “say”
“prerequisite” → “requirement”
Grammar: Use the simplest tense that is appropriate. For example, use present tense when mentioning the result of an action: “Click OK. The Printer Options dialog appears.”
Sentences: As a rule of thumb, present one idea in one sentence. However, restricting sentence length to a certain number of words is not useful in my opinion. Short sentences are not automatically easy to understand (especially if they are a cluster of nouns). Sometimes, trimming down sentences to a certain word count can introduce ambiguities, which can, in turn, make sentences even more difficult to understand.
3. Beware of ambiguities. As authors, we often do not notice ambiguity in a sentence. Having your texts reviewed by others can help identify such problems. If that’s not an option, try to look at each sentence from different perspectives: Does the sentence also work for readers without in-depth knowledge of the topic? Does it work for readers with limited language skills? Is the grammatical relationship between all sentence parts clear? If the sentence does not meet these requirements, rephrase it to resolve the ambiguity.
4. Be consistent. This applies to choice of words, spelling, and punctuation as well as phrases and structure. For lists, use parallel grammatical construction. For example:
Why white space is important:
It focuses attention.
It visually separates sections.
It splits content into chunks.
5. Remove redundant content. Keep only information that is relevant for your target audience. On a sentence level, avoid fillers (basically, easily) and unnecessary modifications:
“already existing” → “existing”
“completely new” → “new”
As you might have guessed by now, writing is rewriting. Good writing requires effort and practice. But even if you write only occasionally, you can significantly improve your texts by focusing on the target audience and by using basic writing techniques. The better the readability of a text, the easier it is to process, even for an audience with varying language skills. When it comes to localization especially, good quality of the source text is important: Garbage in, garbage out. If the original text has deficiencies, it will take longer to translate the text, resulting in higher costs. In the worst case, the flaws will be multiplied during translation and need to be corrected in various languages.
Tanja Roth, Technical Documentation Specialist at SUSE Linux GmbH
Driven by an interest in both language and technology, Tanja has been working as a technical writer in mechanical engineering, medical technology, and IT for many years. She joined SUSE in 2005 and contributes to a wide range of product and project documentation, including High Availability and Cloud topics.