
Troubleshooting Node.js Issues with llnode

The llnode plugin lets you inspect Node.js processes and core dumps; it adds the ability to inspect JavaScript stack frames, objects, source code and more. At Node+JS Interactive, Matheus Marchini, Node.js Collaborator and Lead Software Engineer at Sthima, will host a workshop on how to use llnode to find and fix issues quickly and reliably, without bloating your application with logs or compromising performance. He explains more in this interview.

Linux.com: What are some common issues that happen with a Node.js application in production?

Matheus Marchini: One of the most common issues Node.js developers might experience, either in production or during development, is an unhandled exception. It happens when your code throws an error, and that error is not properly handled. There’s a variation of this issue with Promises, and in that case the problem is worse: if a Promise is rejected but there’s no handler for the rejection, the application might enter an undefined state and start to misbehave.

The application might also crash when it’s using too much memory. This usually happens when there’s a memory leak in the application, although we usually don’t have classic memory leaks in Node.js. Instead of unreferenced objects, we might have objects that are not used anymore but are still retained by another object, leading the Garbage Collector to ignore them. If this happens with several objects, we can quickly exhaust our available memory.

Memory is not the only resource that might get exhausted. Given the asynchronous nature of Node.js and how it scales for a large number of requests, the application might start to run out of other resources, such as open file descriptors or the number of concurrent connections a database allows.

Infinite loops are not that common because we usually catch those during development, but every once in a while one manages to slip through our tests and get into our production servers. These are pretty catastrophic because they will block the main thread, rendering the entire application unresponsive.

The last issues I’d like to point out are performance issues. Those can happen for a variety of reasons, ranging from unoptimized functions to I/O latency.

Linux.com: Are there any quick tests you can do to determine what might be happening with your Node.js application?

Marchini: Node.js and V8 have several tools and features built-in which developers can use to find issues faster. For example, if you’re facing performance issues, you might want to use the built-in V8 CpuProfiler. Memory issues can be tracked down with V8 Sampling Heap Profiler. All of these options are interesting because you can open their results in Chrome DevTools and get some nice graphical visualizations by default.
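For example, here is a sketch of how those built-in profilers can be driven from the command line (assuming a script named app.js, which stands in for your application):

```shell
# Sample the running script with V8's built-in tick profiler
node --prof app.js
# Post-process the isolate-*.log it produces into a readable summary
node --prof-process isolate-0x*.log > profile.txt
# Alternatively, start with the inspector enabled and attach Chrome DevTools
# (open chrome://inspect) to use the graphical CPU and heap profilers
node --inspect app.js
```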

If you are using native modules in your project, V8 built-in tools might not give you enough insight, since they focus only on JavaScript metrics. As an alternative to the V8 CpuProfiler, you can use system profiler tools, such as perf on Linux and DTrace on FreeBSD / OS X. You can grab the results from these tools and turn them into flamegraphs, making it easier to find which functions are taking the most time to process.
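As a sketch of that workflow (assuming Linux perf and Brendan Gregg’s FlameGraph scripts are on the PATH, and an app.js standing in for your application):

```shell
# --perf-basic-prof makes V8 write a /tmp/perf-<pid>.map file so that
# perf can resolve the names of JIT-compiled JavaScript functions
perf record -F 99 -g -- node --perf-basic-prof app.js
# Fold the recorded stacks and render an interactive SVG flamegraph
perf script | stackcollapse-perf.pl | flamegraph.pl > flame.svg
```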

You can use third-party tools as well: node-report is an amazing first failure data capture which doesn’t introduce a significant overhead. When your application crashes, it will generate a report with detailed information about the state of the system, including environment variables, flags used, operating system details, etc. You can also generate this report on demand, and it is extremely useful when asking for help in forums, for example. The best part is that, after installing it through npm, you can enable it with a flag — no need to make changes in your code!
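A minimal sketch of that workflow (app.js is a placeholder; node-report’s functionality was later folded into Node.js core as the diagnostic report feature):

```shell
# Install the module locally, then preload it with -r; no code changes needed
npm install node-report
node -r node-report app.js
# A node-report-*.txt file is written on fatal errors and uncaught
# exceptions, or on demand (by default via SIGUSR2 on Linux)
```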

But one of the tools I’m most amazed by is llnode.

Linux.com: When would you want to use something like llnode, and what exactly is it?

Marchini: llnode is useful when debugging infinite loops, uncaught exceptions or out of memory issues since it allows you to inspect the state of your application when it crashed. How does llnode do this? You can tell Node.js and your operating system to take a core dump of your application when it crashes and load it into llnode. llnode will analyze this core dump and give you useful information such as how many objects were allocated in the heap, the complete stack trace for the process (including native calls and V8 internals), pending requests and handlers in the event loop queue, etc.

The most impressive feature llnode has is its ability to inspect objects and functions: you can see which variables are available for a given function, look at the function’s code and inspect which properties your objects have with their respective values. For example, you can look up which variables are available for your HTTP handler function and which parameters it received. You can also look at headers and the payload of a given request.
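A short session sketch of what that looks like (assuming a core file named `core` produced by the same `node` binary; the address and the `Request` constructor name are illustrative):

```shell
# Load the core dump into lldb with the llnode plugin
llnode node -c ./core
(llnode) v8 bt                        # backtrace: JS frames, native frames, V8 internals
(llnode) v8 findjsobjects             # heap object counts grouped by constructor
(llnode) v8 findjsinstances Request   # list instances of a given constructor
(llnode) v8 inspect 0x2a5b3e4c1189    # properties and values of one object
(llnode) v8 source list               # source code of the current frame's function
```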

llnode is a plugin for lldb, and it uses lldb features alongside hints provided by V8 and Node.js to recreate the process heap. It uses a few heuristics, too, so the results might not be entirely accurate sometimes. But most of the time the results are good enough, and way better than not using any tool at all.

This technique — which is called post-mortem debugging — is not something new, though, and it has been part of the Node.js project since 2012. This is a common technique used by C and C++ developers, but not many dynamic runtimes support it. I’m happy we can say Node.js is one of those runtimes.

Linux.com: What are some key items folks should know before adding llnode to their environment?

Marchini: To install and use llnode you’ll need to have lldb installed on your system. If you’re on OS X, lldb is installed as part of Xcode. On Linux, you can install it from your distribution’s repository. We recommend using LLDB 3.9 or later.

You’ll also have to set up your environment to generate core dumps. First, remember to pass the `--abort-on-uncaught-exception` flag when running a Node.js application; otherwise, Node.js won’t generate a core dump when an uncaught exception happens. You’ll also need to tell your operating system to generate core dumps when an application crashes. The most common way to do that is by running `ulimit -c unlimited`, but this will only apply to your current shell session. If you’re using a process manager such as systemd, I suggest looking at the process manager docs. You can also generate on-demand core dumps of a running process with tools such as gcore.
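A minimal sketch of the shell-level setup (Linux; the `node` line is illustrative and assumes an app.js):

```shell
# Allow core dumps of unlimited size; applies to this shell session only
ulimit -c unlimited
ulimit -c    # should now print "unlimited" (if the hard limit allows it)
# On Linux, this pattern controls where the kernel writes core files
cat /proc/sys/kernel/core_pattern 2>/dev/null || true
# Then run the app so an uncaught exception aborts the process and dumps core:
#   node --abort-on-uncaught-exception app.js
```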

Linux.com: What can we expect from llnode in the future?

Marchini: llnode collaborators are working on several features and improvements to make the project more accessible for developers less familiar with native debugging tools. To accomplish that, we’re improving the overall user experience as well as the project’s documentation and installation process. Future versions will include colorized output, more reliable output for some commands and a simplified mode focused on JavaScript information. We are also working on a JavaScript API which can be used to automate some analysis, create graphical user interfaces, etc.

If this project sounds interesting to you and you would like to get involved, feel free to join the conversation in our issue tracker or contact me on social media at @mmarkini. I would love to help you get started!

Learn more at Node+JS Interactive, coming up October 10-12, 2018 in Vancouver, Canada.

How to Monitor Network Traffic with Linux and vnStat

If you’re a network or a Linux admin, sometimes you need to monitor network traffic to and from your Linux servers. As there are a number of tools with which to handle this task, where do you turn? One very handy tool is vnStat. With vnStat, you get a console-based network traffic monitor that is capable of monitoring and logging traffic on selected interfaces for specific dates, times, and intervals. Along with vnStat comes a PHP script that allows you to view the network traffic of your configured interface via a web-based interface.

I want to show you how to install and use both vnStat and vnStat-PHP on Linux. I’ll demonstrate on Ubuntu Server 18.04, but the tool is available for most distributions.
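As a quick sketch of the basics (Ubuntu package names; `eth0` is a placeholder for your interface):

```shell
sudo apt-get install -y vnstat       # install from the distribution repository
sudo systemctl enable --now vnstat   # start the traffic-logging daemon
vnstat -i eth0        # summary of traffic on one interface
vnstat -i eth0 -d     # daily totals
vnstat -i eth0 -m     # monthly totals
vnstat -i eth0 -l     # live view of current transfer rates
```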

Read more at TechRepublic

How to Land an IT Job: Showcase your Adaptability

Where do you see your career in 10 years? This classic interview question is getting harder to answer. It’s likely that many of the jobs people will hold in the year 2028 haven’t been invented yet. It’s even more likely that all jobs, especially those in IT, will be different in some way – altered, improved, extinguished, or created as a result of technology.  

That’s one reason why adaptability is becoming a must-have skill in IT. In a new report from Harvard Business Review Analytic Services, CIOs stress that in this era of agile work styles and digital disruption, every single person in IT must be able to cope with changing roles and responsibilities, learn new skills, and work with a wider range of colleagues. 

“The nature of work is changing,” says Malhotra. “Job descriptions are starting to become hybrid in nature, and the millennial workforce is taking on positions that require multiple skills from several disciplines. IT hires who are unable to make clever transitions will be at a distinct disadvantage.” 

Read more at Enterprisers

Top 3 Benefits of Company Open Source Programs

Many organizations, from Red Hat to internet-scale giants like Google and Facebook, have established open source programs, often run by an open source program office (OSPO). The TODO Group, a network of open source program managers, recently performed the first annual survey of corporate open source programs, and it revealed some interesting findings on the actual benefits of open source programs. According to the survey, the top three benefits of managing an open source program are:

  • awareness of open source usage/dependencies
  • increased developer agility/speed
  • better and faster license compliance

Corporate open source programs on the rise

The survey also found that 53% of companies have an open source program or plan to establish one in the near future:

Read more at OpenSource.com

Yubico Launches New Lineup of Multifactor FIDO2 Security Keys

It’s an open secret that passwords aren’t the most effective way to protect online accounts. Alarmingly, three out of four people use duplicate passwords, and 21 percent of people use codes that are over 10 years old. (In 2014, among the five most popular passwords were “password,” “123456,” and “qwerty.”) Two-factor SMS authentication adds a layer of protection, but it isn’t foolproof — hackers can fairly easily redirect text messages to another number.

A much more secure alternative is hardware authentication keys, and there’s good news this week for folks looking to pick one up. During Microsoft’s Ignite conference in Orlando, Florida, Yubico unveiled the YubiKey 5 Series: The YubiKey 5C, YubiKey 5 NFC, YubiKey 5 Nano, and YubiKey 5C Nano. The company claims they’re the first multi-protocol security keys to support the FIDO2 (Fast IDentity Online 2) standard.

Read more at Venture Beat

Redefining Security Technology in Zephyr and Fuchsia

If you’re the type of person who uses the word “vuln” as a shorthand for code vulnerabilities, you should check out the presentation from the recent Linux Security Summit called “Security in Zephyr and Fuchsia.” In the talk, two researchers from the National Security Agency discuss their contributions to the nascent security stacks of two open source OS projects: Zephyr and Fuchsia.

If you’re worried about the fact that Edward Snowden’s old employer is helping to write next generation OSes that could run our lives in 10 years, consider the upsides. First, since these are open source projects, any nefarious backdoors would be clearly visible. Second, the NSA knows a thing or two about security. Stephen Smalley and James Carter, who discussed security in Zephyr and Fuchsia, respectively, are computer security researchers at the NSA’s Information Assurance Research group, which developed and maintains Security-Enhanced Linux (SELinux) and SE for Android. Smalley leads the NSA’s Security Enhancements (SE) for the Internet of Things project and is a kernel and userspace maintainer for SELinux.

The Linux Foundation-hosted Zephyr Project, which is creating the IoT-oriented Zephyr RTOS, is the more mature of the two projects. Google’s Fuchsia OS has a longer way to go, especially if you believe that Fuchsia will replace Android and Chrome OS over the next decade.

The developers of Zephyr and Fuchsia have a rare opportunity to develop novel, up-to-date security stacks from scratch. One of the main reasons Google chose to build Fuchsia from a new microkernel was that it could avoid the hodgepodge of legacy code layered on top of Linux, thereby improving security. Attempts to boost security in Linux are always going to be like patching holes in a boat. Zephyr and Fuchsia aim to be the OS equivalents of hovercraft.

Zephyr and Fuchsia are very different OSes, and they implement security in different ways. Zephyr is designed for constrained devices running on microcontrollers, such as Cortex-M4 chips, whereas Fuchsia will target phones and desktops running on applications processors, such as Cortex-A53 and Intel Core.

“Zephyr and Fuchsia were both open sourced in 2016, but they have been developed for very different use cases,” said Smalley. “Their architectures are very different, and each is also very different from Linux.”

Zephyr security

Like Linux and Fuchsia, Zephyr has RO/NX memory protection, stack depth overflow prevention, and stack buffer overflow detection. However, there’s still no kernel or user space ASLR (address space layout randomization), which “will likely move to a build time randomization and a small boot time relocation,” said Smalley.

Among other architectural differences with Linux, “There’s no process isolation in Zephyr, only a userspace thread model,” explained Smalley. “The process abstraction model has yet to be implemented, and the kernel/user boundary is still being fleshed out.”

In Zephyr, “you’re generally working with a single application, and security is highly dependent on particular SoCs and kernel configurations,” said Smalley. By comparison, “In Linux, there are a number of core OS security features that are neutral and independent.”

The original Zephyr release had a single executable with a single address space with all threads in supervisor mode and no memory protection or virtual memory, said Smalley. “As Zephyr added OS protections, it sought to minimize changes to kernel APIs in order to be backward compatible,” he added. “A key Zephyr design philosophy is to do as much as possible at build time, and then as much as possible at last view time, thereby minimizing runtime overheads and ensuring bounded latency for real-time.”

Zephyr security is complicated by the fact that some of the MCUs it targets include memory protection units (MPUs) while others do not. Beginning in releases 1.8 and 1.9, Zephyr began to provide memory protections, with allowances for both types of MCUs.

The NSA team developed a set of kernel memory protection tests modeled on lkdtm tests from the Kernel Self Protection Project (KSPP) for Linux. “The tests were helpful in catching bugs in Zephyr MPU drivers, and they are now used for regression testing,” said Smalley.

Zephyr added userspace support in versions 1.10 and 1.11 that provided basic support for user mode threads with isolated memory. Smalley’s team developed a set of userspace tests “that sought to validate that the security properties for user mode threads were being enforced.” Zephyr’s userspace memory model is still limited to a single executable and address space, and there’s no virtual memory. “It can support user mode threads but not full processes,” explained Smalley.

Zephyr security features include an object permissions model in which user threads must first be granted permissions to an object to enable access. “A kernel mode thread can grant access to a user mode thread, and an inheritance mechanism allows those permissions to be propagated down,” explained Smalley. “It’s an all or nothing model — all user threads can access all app global variables.”

This all-or-nothing approach “poses a high burden on the application developer, who has to manually organize the application global variable memory layout to meet MPU restrictions,” said Smalley. To help compensate, the NSA team developed a feature due in release 1.13 that “supports a slightly more developer friendly way of grouping application globals based on desired protections. It’s a small step forward, not a panacea.”

Future Zephyr security work includes adding MPU virtualization, which “would allow us to support a larger number of regions, instead of just eight, that can be swapped in and out of the MPU on demand,” said Smalley. “We also hope to provide full support for multiple applications and program loading.”

In Zephyr, kernel code is fully trusted. “We would like to see Linux-like mitigations for kernel vulns using KSPP kernel self-protection features while minimizing runtime overheads,” said Smalley. Other wish-list items include leveraging armv8-m for Cortex-M MCUs, thereby enabling TrustZone security. There’s also a long-term plan to “develop a MAC suited to RTOSes that’s more oriented to build-time app partitioning.”

Fuchsia security

Fuchsia differs from Linux and Zephyr in that it’s a microkernel OS with object-capability security. Like Linux, it offers process isolation. In addition, “The plumbing for kernel or user space ASLR is there,” said the NSA’s James Carter.

Compared to the “large and monolithic” Linux, Fuchsia has “a small, decomposed TCB (trusted computing base),” said Carter. “It also uses object capabilities instead of DAC and MAC.”

Fuchsia is based on the Zircon microkernel, which is derived from the little kernel (lk), “an RTOS used in the Android bootloader,” explained Carter. Fuchsia extends lk to support 64-bit hardware, user mode, processes, IPC, and other advanced features. “lk is the only thing that runs in supervisor mode. Drivers, filesystems, and networking all run in user mode.”

Fuchsia security mechanisms include regular handles and resource handles using Zircon object capabilities. “Regular handles are usually the only way that userspace can access kernel objects,” said Carter. “Fuchsia differs from most OSes in that it uses a push model in which a client creates the handle and pushes it to a server. Handles are per-process and unforgeable, and they identify both the object and a set of access rights to the object. Access rights include duplicating them with equal or lesser rights, passing them across IPC, or using them to obtain handles to child objects with equal or lesser rights.”

Fuchsia handles “are good because they separate rights for propagation vs. use and separate rights for different operations,” said Carter. “You can also reduce rights through handle duplication.”

Handles still pose some problems, however. For example, “with object_get_child(), if you have a handle to a job, you can acquire a handle to anything in that job or any child jobs,” said Carter. “Also, a leak of the root job handle is fatal to security. We’d like to see more work on making everything run with least privilege, and more control over handle propagation and revocation. Not all operations currently check access rights, and some rights are unimplemented.”

Resource handles, which are the type of handle used for platform resources like memory mapped I/O, I/O ports, and IRQs, let developers specify the type of resource and optional range. On the plus side, they offer “fine-grained, hierarchical resource restrictions,” said Carter. “However, right now the root resource check isn’t very granular, and as with regular handles, leaks can be fatal. We need to work on propagation, revocation, and refining to least privilege.”

Zircon security primitives include job policy and vDSO enforcement. “In Fuchsia everything is part of a job,” said Carter. “Processes don’t have child processes – jobs have child jobs. Jobs can be nested, containing jobs and other processes, and job policy is applied to all processes within the job. Policies are inherited from the parent and can only be made more restrictive.”

On the pro side, “you can create fine-grained object creation policies, as well as hierarchical job policies that are mixed,” explained Carter. “However, the W^X policy is not yet implemented, and when it is, it will cause problems with strict hierarchical policies, because if a child needs to map something W^X, then all ancestors would need to be able to map it W^X as well.”

In Fuchsia, the vDSO (virtual dynamic shared object) primitive “is only meant to invoke system calls,” said Carter. “It’s fully read-only, and its mapping is constrained by the kernel.”

Fuchsia’s vDSO makes the OS more secure by “limiting the kernel attack surface, enforcing the use of the public API, and supporting per process system call restrictions,” said Carter. “It’s also good that vDSO is not trusted by the kernel so its system call arguments are fully validated.” On the other hand, the current version offers the potential for tampering with or bypassing vDSO, added Carter.

Carter went on to explain Fuchsia namespaces and sandboxes. Advantages of the namespaces implementation include “the lack of a global namespace and the fact that object reachability is determined by initial namespace,” said Carter. “But we’d like to see more granularity.” For sandboxes, which are used for isolating applications, “We’d like to see an expansion to system services. There’s also no independent validation of the sandbox configuration.”

As with Zephyr, the NSA team recommends that Fuchsia eventually add a MAC framework, which would help to “control propagation, support revocation, and apply least privilege,” said Carter. “A MAC could support finer-grained checks and generalize job policy, as well as validate namespaces and sandboxes. It could also provide a unified framework for defining, enforcing, and validating security goals.”

Options for integrating a MAC with Fuchsia start with building it entirely in user space with no microkernel support, said Carter. Alternatively, you could “extend the existing mechanism” by building it “mostly in user space with limited microkernel support.” A third choice would be to “create security policy logic in user space with full microkernel enforcement for its objects, as we did with DTMach in SELinux.”

In conclusion, Carter emphasized that Fuchsia’s security stack is a work in progress. “We’re just trying to evaluate the thing.”

Running a Container with a Non-Root User

One best practice when running a container is to launch the process as a non-root user. This is usually done with the USER instruction in the Dockerfile. But if this instruction is not present, it doesn’t necessarily mean the process runs as root.

The rationale

By default, root in a container is the same root (uid 0) as on the host machine. If a user manages to break out of an application running as root in a container, they may be able to gain access to the host as that same root user. This access is even easier to gain if the container was run with incorrect flags or with host folders bind-mounted read/write.
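A quick way to see the difference (a sketch; `alpine` is just a convenient small image):

```shell
# Without USER in the Dockerfile or --user on the command line, you get root
docker run --rm alpine id                    # uid=0(root) gid=0(root)
# Forcing a non-root uid/gid at run time, regardless of the image's default
docker run --rm --user 1000:1000 alpine id   # uid=1000 gid=1000
```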

Running a MongoDB container

If you haven’t tried it yet, I highly recommend giving Play With Docker a try.

Read more at Medium

Shared Storage with NFS and SSHFS

Up to this point, my series on HPC fundamentals has covered PDSH, to run commands in parallel across the nodes of a cluster, and Lmod, to allow users to manage their environment so they can specify various versions of compilers, libraries, and tools for building and executing applications. One missing piece is how to share files across the nodes of a cluster.

File sharing is one of the cornerstones of client-server computing, HPC, and many other architectures. You can perhaps get away without it, but life just won’t be easy any more. This situation is true for clusters of two nodes or clusters of thousands of nodes. A shared filesystem allows all of the nodes to “see” the exact same data as all other nodes. For example, if a file is updated on cluster node03, the updates show up on all of the other cluster nodes, as well.

Fundamentally, being able to share the same data with a number of clients is very appealing because it saves space (capacity), ensures that every client has the latest data, improves data management, and, overall, makes your work a lot easier. The price, however, is that you now have to administer and manage a central file server, as well as the client tools that allow the data to be accessed.

Although you can find many shared filesystem solutions, I like to keep things simple until something more complex is needed. A great way to set up file sharing uses one of two solutions: the Network File System (NFS) or SSH File System (SSHFS).
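As a sketch of what the two options look like from the client side (hostnames and paths are placeholders):

```shell
# NFS: the server exports a directory (via /etc/exports); the client mounts it
sudo mount -t nfs fileserver:/export/home /mnt/home
# SSHFS: no server-side setup beyond sshd; mounts over an ordinary SSH login
sshfs user@fileserver:/export/home /mnt/home
# Unmount an SSHFS mount with:
fusermount -u /mnt/home
```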

Read more at ADMIN Magazine

Tune In to the Free Live Stream of Keynotes at Open Networking Summit Europe, September 25-27

Open Networking Summit Europe is taking place in Amsterdam, September 25-27. Can’t make it? You’ll be missed, but you don’t have to miss out on the action. Tune in to the free livestream to catch all of the keynotes live from your desktop, tablet or phone! Sign Up Now >>

Live video streaming of the keynote sessions from Open Networking Summit Europe 2018 will take place during the following times:

Tuesday, September 25

13:15 – 14:55 (CEST)

Watch keynotes from Cloud Native Computing Foundation, Red Hat, China Mobile, Intel, Orange Group Network and The Linux Foundation.

Wednesday, September 26

9:00 – 10:30 (CEST)

Watch keynotes from Türk Telekom, IBM, IHS/Infonetics Research, Huawei, China Mobile, and Vodafone Group.

Thursday, September 27

9:00 – 10:35 (CEST)

Watch keynotes from Deutsche Telekom AG, Imperial College London, China Mobile, AT&T, Amdocs, Huawei, VMware, and The Linux Foundation.

View the full Keynote Session Schedule

Sign up for free live stream now >>

This article originally appeared at The Linux Foundation

Deepin Linux: As Gorgeous As It Is User-Friendly

Deepin Linux. You may not have heard much about this distribution, and the fact that it’s often left out of the conversation is a shame. Why? Because Deepin Linux is as beautiful as it is user-friendly. This distribution has plenty of “wow” factor and very little disappointment.

For the longest time, Deepin Linux was based on Ubuntu. But with the release of 15.7, that all changed. Now, Deepin’s foundation is Debian, but the desktop is still that beautiful Deepin Desktop. And when I say it’s beautiful, it truly is one of the most gorgeous desktop environments you’ll find on any operating system. That desktop uses a custom-built Qt 5 toolkit, which runs as smoothly and with as much polish as any I’ve ever used. Along with that desktop come a few task-specific apps, built with the same toolkit, so the experience is consistent and integrated.

What makes the 15.7 release special is that it comes just two short months after the 15.6 release and is focused primarily on performance. Not only is the ISO download size smaller, but many core components have also been optimized with laptop battery life in mind. To that end, the developers have achieved up to 20 percent better battery life and much-improved memory usage. Other additions to Deepin Linux are:

  • NVIDIA Prime support (for laptops with hybrid graphics).

  • On-screen notifications (for the likes of turning on or off the microphone and/or Wi-Fi).

  • New drag and drop animation.

  • Added power saving mode and auto-mode switching for laptops.

  • Application categories in mini mode.

  • Full disk installation.

For a full list of improvements and additions, check out the 15.7 Release notes.

Let’s install Deepin Linux and see just what makes this distribution so special.

Installation

In similar fashion to the desktop, the Deepin Linux installer is one of the most beautiful OS installers you will find (Figure 1). Not only is the installer a work of art, it’s incredibly simple. As with most modern Linux distributions, installing Deepin is only a matter of answering a few questions and clicking Next a few times.

Figure 1: Installing Deepin Linux is as easy as it is beautiful.

Installation shouldn’t take more than 10 minutes, tops. In fact, based on the download experience I had with the main download mirror, the installation will go faster than the ISO download. To that end, you might want to pick one of the following mirrors to snag a copy of Deepin Linux:

Once you’ve installed Deepin Linux, you can then log onto your new desktop.

First Steps

Upon first login, you’ll be greeted by a setup wizard that walks you through the configuration of the desktop (Figure 2).

Figure 2: The Deepin Desktop setup wizard.

In this wizard, you will be asked to configure the following:

  • Desktop Mode: Between Efficient (a more standard layout) and Fashion (a GNOME 3-like layout).

  • Window Effects: Enable or disable.

  • Icon theme.

Once you’ve selected those options, you’ll find yourself on the Deepin Desktop (Figure 3).

Figure 3: The Deepin Desktop is at the ready.

Applications

The application list might surprise some users, especially those who have grown accustomed to certain applications being installed by default. What you’ll find on Deepin Linux is a list of applications that includes:

  • WPS Office

  • Google Chrome

  • Spotify

  • Deepin Store

  • Deepin Music

  • Deepin Movie

  • Steam

  • Deepin Screenshot

  • Foxit Reader

  • Thunderbird Mail

  • Deepin Screen Recorder

  • Deepin Voice Recorder

  • Deepin Cloud Print

  • Deepin Cloud Scan

  • Deepin Font Installer

  • ChmSee

  • Gparted

What the developers have done is to ensure users have as complete a desktop experience as possible, out of the box. In other words, most users won’t have to bother installing any extra software for some time. And for those who question the choice of WPS Office, I’ve used it on plenty of occasions, and it is quite adept at not only creating stand-alone documents but also collaborating with those who work with other office suites. The one caveat is that WPS Office isn’t open source. However, Deepin Linux doesn’t promote itself as a fully open desktop, so having closed-source applications (such as the Spotify client and WPS Office) should surprise no one.

Control Center

Deepin takes a slightly different approach to the Control Center. Instead of it being a stand-alone, windowed application, the Control Center serves as a sidebar (Figure 4), where you can configure users, display, default applications, personalization, network, sound, time/date, power, mouse, keyboard, updates, and more.

Figure 4: The Deepin Linux Control Center.

Click on any one of the Control Center categories and you can see how well the developers have thought out this new means of configuring the desktop (Figure 5).

Figure 5: The Control Center in action.

Hot Corners

The Deepin Desktop also has a nifty hot corners feature on the desktop. With this feature, you can set each corner to a specific action, such that when you hover your mouse over a particular corner, the configured action will occur. Available actions are:

  • Launcher

  • Fast Screen Off

  • Control Center

  • All Windows

  • Desktop

  • None

To set the hot corners, right-click on the desktop and select Corner Settings from the pop-up menu. You can then hover your cursor over one of the four corners and select the action you want associated with that corner (Figure 6).

Figure 6: Setting hot corners on Deepin Desktop.

A Must-Try Distribution

If you’re looking for your next Linux desktop distribution, you’d be remiss if you didn’t give Deepin Linux 15.7 a try. Yes, it is beautiful, but it’s also very efficient, very user-friendly, and sits on top of a rock solid Debian foundation. It’s a serious win-win for everyone. In fact, Deepin 15.7 is the first distribution to come along in a while to make me wonder if there might finally be a contender to drag me away from my long-time favorite distro… Elementary OS.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.