
How to Install LDAP Account Manager on Ubuntu Server 18.04

Welcome back to this three-part journey to getting OpenLDAP up and running so that you can authenticate your Linux desktop machines to the LDAP server. In part one, we installed OpenLDAP on Ubuntu Server 18.04 and added our first LDAP entries to the directory tree via the command line interface (CLI).

The process of manually adding data can be cumbersome and isn’t for everyone. If you have staff members that work better with a handy GUI tool, you’re in luck, as there is a very solid web-based tool that makes entering new users a snap. That tool is the LDAP Account Manager (LAM).

LAM features:

  • Support for 2-factor authentication

  • Schema and LDAP browser

  • Support for multiple LDAP servers

  • Support for account creation profiles

  • File system quotas

  • CSV file upload

  • Automatic creation/deletion of home directories

  • PDF output for all accounts

  • And much more

We’ll be installing LAM on the same server on which we installed OpenLDAP, so make sure you’ve walked through the process from the previous article. With that taken care of, let’s get LAM up and running so you can more easily add users to your LDAP directory tree.

Installation

Fortunately, LAM is found in the standard Ubuntu repository, so installation is as simple as opening a terminal window and issuing the command:

sudo apt-get install ldap-account-manager -y

When the installation finishes, you can then limit connections to LAM to local IP addresses only (if needed), by opening a specific .conf file with the command:

sudo nano /etc/apache2/conf-enabled/ldap-account-manager.conf

In that file, look for the line:

Require all granted

Comment that line out (add a # character at the beginning of the line) and add the following entry below it:

Require ip 192.168.1.0/24

Make sure to substitute your network IP address scheme in place of the one above (should yours differ). Save and close that file, and restart the Apache web server with the command:

sudo systemctl restart apache2
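If you prefer to script that edit, the same change can be made non-interactively with sed. The sketch below runs against a scratch copy of a simplified conf file (the Alias and Directory lines are assumptions standing in for the real file's contents), so nothing under /etc is touched; on a real server you would point it at /etc/apache2/conf-enabled/ldap-account-manager.conf instead.

```shell
# Work on a scratch copy; on a real server, target the actual .conf instead.
conf=$(mktemp)
cat > "$conf" <<'EOF'
Alias /lam /usr/share/ldap-account-manager
<Directory /usr/share/ldap-account-manager>
  Require all granted
</Directory>
EOF

# Comment out "Require all granted" and add a subnet restriction below it
# (substitute your own network for 192.168.1.0/24).
sed -i 's|^\( *\)Require all granted|\1#Require all granted\n\1Require ip 192.168.1.0/24|' "$conf"

grep -n 'Require' "$conf"
```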

You are now ready to access the LAM web interface.

Opening LAM

Point your web browser to http://SERVER_IP/lam (where SERVER_IP is the IP address of the server hosting LAM). In the resulting screen (Figure 1), click LAM configuration in the upper right corner of the window.

Figure 1: The LAM login window.

In the resulting window, click Edit server profiles (Figure 2).

Figure 2: The LAM edit options.

You will be prompted for the default profile password, so type lam and click OK. You will then be presented with the Server settings page (Figure 3).

Figure 3: The LAM Server settings page.

In the Server Settings section, enter the IP address of your LDAP server. Since we’re installing LAM on the same server as OpenLDAP, we’ll leave the default. If your OpenLDAP and LAM servers are not on the same machine, make sure to enter the correct IP address for the OpenLDAP server here. In the Tree suffix entry, add the domain components of your OpenLDAP server in the form dc=example,dc=com.
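The tree suffix is simply your DNS domain rewritten as dc= components. A quick way to sanity-check the conversion (example.com is a placeholder for your own domain):

```shell
# Convert a DNS domain into the dc= suffix LAM expects.
# "example.com" is a placeholder -- substitute your own domain.
domain="example.com"
suffix=$(echo "$domain" | sed 's/\./,dc=/g; s/^/dc=/')
echo "$suffix"   # -> dc=example,dc=com
```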

Next, take care of the following configurations:

In the Security settings section (Figure 4), configure the list of valid users in the form cn=admin,dc=example,dc=com (make sure to use your LDAP admin user and domain components).

Figure 4: The Security settings section.

In the Account Types tab (Figure 5), configure the Active account types LDAP options. First, configure the LDAP suffix, which will be in the form ou=group,dc=example,dc=com. This is the suffix of the LDAP tree from which entries will be searched; only entries in this subtree will be displayed in the account list. In other words, use the group attribute if you have created a group on your OpenLDAP server that all of your users (who will be authenticating against the LDAP directory tree) will be a member of. For example, if all of your users who will be allowed to log in on desktop machines are part of the group login, use that group.

Figure 5: The Groups configuration for LAM.

Next, configure the List attributes. These are the attributes that will be displayed in the account list, and are predefined values, such as #uid, #givenName, #sn, #uidNumber, etc. Fill out both the LDAP suffix and List attributes for both Users and Groups.

After configuring both users and groups, click Save. This will also log you out of the Server profile manager and take you back to the login screen. You can now log into LAM using your LDAP server admin credentials. Select the user from the User name drop-down, type your LDAP admin password, and click Login. This will take you to the LAM Users tab (Figure 6), where you can start adding new users to the LDAP directory tree.

Figure 6: Our user listing in LAM.

Click New User and the New User window will open (Figure 7), where you can fill in the necessary blanks.

Figure 7: Adding a new user with LAM.

Make sure to click Set password, so you can create a password for the new user (otherwise the user won’t be able to log into their account). Also make sure to click on the Unix tab, where you can set the username, home directory, primary group, login shell, and more. Once you’ve entered the necessary information for the user, click Save and the user account can then be found in the LDAP directory tree.
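Behind the scenes, LAM is simply writing a standard LDAP entry. For comparison with the CLI approach from part one, an equivalent user record might look like the LDIF below; the DN, names, and ID numbers are all placeholder assumptions, and on a live server you would load the file with ldapadd.

```shell
# Hypothetical LDIF for a user similar to one LAM creates; every value here
# is a placeholder -- substitute your own dc= components, uid, and IDs.
cat > /tmp/newuser.ldif <<'EOF'
dn: uid=jdoe,ou=people,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
cn: Jane Doe
sn: Doe
uid: jdoe
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/jdoe
loginShell: /bin/bash
EOF

# On the LDAP server itself, this would be loaded with:
#   ldapadd -x -D cn=admin,dc=example,dc=com -W -f /tmp/newuser.ldif
grep -c '^objectClass:' /tmp/newuser.ldif
```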

Welcome to Simpler User Creation

The LDAP Account Manager makes working with OpenLDAP exponentially easier. Without using this tool, you’ll spend more time entering users to the LDAP tree than you probably would like. The last thing you need is to take more time than necessary out of your busy admin day to create and manage users in your LDAP tree via command line.

In the next (and final entry) in this three-part series, I will walk you through the process of configuring a Linux desktop machine, such that it can authenticate against the OpenLDAP server.

Linux Kernel 4.20 Reached End of Life, Users Urged to Upgrade to Linux 5.0

Renowned Linux kernel developer and maintainer Greg Kroah-Hartman announced the end of life of the Linux 4.20 kernel series, urging users to upgrade to a newer kernel series as soon as possible.

“I’m announcing the release of the 4.20.17 kernel. Note, this is the LAST release of the 4.20.y kernel. It is now end-of-life, please move to the 5.0.y kernel tree at this point in time. All users of the 4.20 kernel series must upgrade,” Greg Kroah-Hartman said in a mailing list announcement.

Read more at Softpedia

SREs Wish Automation Solved All Their Problems

Although the SRE job role is often defined as being about automation, the reality is that 59 percent of SREs agree there is too much toil (defined as manual, repetitive, tactical work that scales linearly) in their organization. Based on 188 survey responses from people holding SRE job roles, Catchpoint’s second annual SRE Report surprisingly found that almost half (49 percent) of the SREs believe their organization has not used automation to reduce toil.

Often inspired by DevOps, SREs have high expectations for automation. Yet there are key differences between the two, and SRE responsibilities are much closer to those associated with systems administrators. SREs have the capability to automate and innovate but are often burdened by IT operations' historical focus on incident management and reliability.

Read more at The New Stack

Handling Complex Memory Situations

Jérôme Glisse felt that the time had come for the Linux kernel to address seriously the issue of having many different types of memory installed on a single running system. There was main system memory and device-specific memory, and associated hierarchies regarding which memory to use at which time and under which circumstances. This complicated new situation, Jérôme said, was actually now the norm, and it should be treated as such.

The physical connections between the various CPUs and devices and RAM chips—that is, the bus topology—also was relevant, because it could influence the various speeds of each of those components.

Jérôme wanted to be clear that his proposal went beyond existing efforts to handle heterogeneous RAM. He wanted to take account of the wide range of hardware and its topological relationships to eke out the absolute highest performance from a given system.

Read more at Linux Journal

Solus 4 Linux Gaming Report: A Great Nvidia, Radeon And Steam User Experience

This article is the third in a series on Linux-powered gaming that aims to capture the various nuances in setup, as well as uncover potential performance variations between nine different desktop Linux operating systems. 

Solus is a fascinating Linux distribution. It’s built from scratch, falls under the category of rolling release and by default ships with the Budgie desktop environment — which was also developed by the Solus Project. Other desktop environment ISOs like GNOME and MATE are available. … Read more at Forbes

Kubernetes 1.14: Production-level support for Windows Nodes, Kubectl Updates, Persistent Local Volumes GA

We’re pleased to announce the delivery of Kubernetes 1.14, our first release of 2019!

Kubernetes 1.14 consists of 31 enhancements: 10 moving to stable, 12 in beta, and 7 net new. The main themes of this release are extensibility and supporting more workloads on Kubernetes with three major features moving to general availability, and an important security feature moving to beta.

More enhancements graduated to stable in this release than any prior Kubernetes release. This represents an important milestone for users and operators in terms of setting support expectations. In addition, there are notable Pod and RBAC enhancements in this release, which are discussed in the “additional notable features” section below.

Let’s dive into the key features of this release:

Production-level Support for Windows Nodes

Up until now Windows Node support in Kubernetes has been in beta, allowing many users to experiment and see the value of Kubernetes for Windows containers. Kubernetes now officially supports adding Windows nodes as worker nodes and scheduling Windows containers, enabling a vast ecosystem of Windows applications to leverage the power of our platform. Enterprises with investments in Windows-based applications and Linux-based applications don’t have to look for separate orchestrators to manage their workloads, leading to increased operational efficiencies across their deployments, regardless of operating system.

Read more at Kubernetes.io

Can Better Task Stealing Make Linux Faster?

Oracle Linux kernel developer Steve Sistare contributes this discussion on kernel scheduler improvements.

Load balancing via scalable task stealing

The Linux task scheduler balances load across a system by pushing waking tasks to idle CPUs, and by pulling tasks from busy CPUs when a CPU becomes idle. Efficient scaling is a challenge on both the push and pull sides on large systems. For pulls, the scheduler searches all CPUs in successively larger domains until an overloaded CPU is found, and pulls a task from the busiest group. This is very expensive, costing tens to hundreds of microseconds on large systems, so search time is limited by the average idle time, and some domains are not searched. Balance is not always achieved, and idle CPUs go unused.

I have implemented an alternate mechanism that is invoked after the existing search in idle_balance() limits itself and finds nothing. I maintain a bitmap of overloaded CPUs, where a CPU sets its bit when its runnable CFS task count exceeds 1. The bitmap is sparse, with a limited number of significant bits per cacheline. This reduces cache contention when many threads concurrently set, clear, and visit elements. There is a bitmap per last-level cache. When a CPU becomes idle, it searches the bitmap to find the first overloaded CPU with a migratable task, and steals it. This simple stealing yields a higher CPU utilization than idle_balance() alone, because the search is cheap, costing 1 to 2 microseconds, so it may be called every time the CPU is about to go idle. Stealing does not offload the globally busiest queue, but it is much better than running nothing at all.

Results

Stealing improves utilization with only a modest CPU overhead in scheduler code. In the following experiment, hackbench is run with varying numbers of groups (40 tasks per group), and the delta in /proc/schedstat is shown for each run, averaged per CPU, augmented with these non-standard stats:

  • %find – percent of time spent in the old and new functions that search for idle CPUs and tasks to steal, and that set the overloaded-CPUs bitmap.
  • steal – number of times a task is stolen from another CPU.

Elapsed time improves by 8 to 36%, costing at most 0.4% more find time.

CPU busy utilization is close to 100% for the new kernel (shown as the green curve in the graph accompanying the original article), versus the orange curve for the baseline kernel.

Stealing improves Oracle database OLTP performance by up to 9% depending on load, and we have seen some nice improvements for mysql, pgsql, gcc, java, and networking. In general, stealing is most helpful for workloads with a high context switch rate.

The code

As of this writing, this work is not yet upstream, but the latest patch series is at https://lkml.org/lkml/2018/12/6/1253. If your kernel is built with CONFIG_SCHED_DEBUG=y, you can verify that it contains the stealing optimization using


  # grep -q STEAL /sys/kernel/debug/sched_features && echo Yes
  Yes

If you try it, note that stealing is disabled for systems with more than 2 NUMA nodes, because hackbench regresses on such systems, as I explain in https://lkml.org/lkml/2018/12/6/1250. However, I suspect this effect is specific to hackbench and that stealing will help other workloads on many-node systems. To try it, reboot with the kernel parameter sched_steal_node_limit=8 (or larger).
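To see whether that override is active on a booted system, you can check the kernel command line. The sketch below greps a sample string so it runs anywhere; on a real machine, grep /proc/cmdline instead.

```shell
# Sample kernel command line; on a real system, read /proc/cmdline instead.
cmdline="BOOT_IMAGE=/vmlinuz-5.0 root=/dev/sda1 ro sched_steal_node_limit=8"
echo "$cmdline" | grep -o 'sched_steal_node_limit=[0-9]*'
```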

Future work

After the basic stealing algorithm is pushed upstream, I am considering the following enhancements:

  • If stealing within the last-level cache does not find a candidate, steal across LLCs and NUMA nodes.
  • Maintain a sparse bitmap to identify stealing candidates in the RT scheduling class. Currently pull_rt_task() searches all run queues.
  • Remove the core and socket levels from idle_balance(), as stealing handles those levels. Remove idle_balance() entirely when stealing across LLC is supported.
  • Maintain a bitmap to identify idle cores and idle CPUs, for push balancing.

This article originally appeared at Oracle Developers Blog.

Linux Release Roundup: Applications and Distros Released This Week

This is a continually updated article that lists various Linux distribution and Linux-related application releases of the week.

At It’s FOSS, we try to provide you with all the major happenings of the Linux and Open Source world. But it’s not always possible to cover all the news, especially the minor releases of a popular application or a distribution.

Hence, I have created this page, which I’ll be continually updating with the links and short snippets of the new releases of the current week. Eventually, I’ll remove releases older than 2 weeks from the page.

Read more at It’s FOSS

How to Install NTP Server and Client(s) on Ubuntu 18.04 LTS

NTP, or Network Time Protocol, is used to synchronize all system clocks in a network so they share the same time. When we use the term NTP, we are referring to the protocol itself as well as the client and server programs running on the networked computers. NTP belongs to the traditional TCP/IP protocol suite and can easily be classified as one of its oldest parts.

When a clock is initially being set, it takes six exchanges within 5 to 10 minutes before it is synchronized. Once the clocks in a network are synchronized, the clients update their clocks with the server once every 10 minutes, usually through a single message exchange (a transaction). These transactions use UDP port 123 of your system.

In this article, we will describe a step-by-step procedure on how to:

  • Install and configure the NTP server on an Ubuntu machine.
  • Configure the NTP client to be time-synced with the server.

We have run the commands and procedures mentioned in this article on an Ubuntu 18.04 LTS system.
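As a preview of what the configuration involves, a minimal server-side ntp.conf might contain lines like these. The sketch writes to a scratch file so it is harmless to run; the pool hostnames are Ubuntu's defaults, and the restrict subnet is an assumption to adjust for your network.

```shell
# Sketch of a minimal server-side /etc/ntp.conf, written to a scratch file.
# Pool hostnames are Ubuntu's defaults; the subnet below is an assumption.
cat > /tmp/ntp.conf.example <<'EOF'
pool 0.ubuntu.pool.ntp.org iburst
pool 1.ubuntu.pool.ntp.org iburst
# Let LAN clients query time but not modify server settings:
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
EOF
grep -c '^pool' /tmp/ntp.conf.example
```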

Read more at Vitux

Linux Foundation Welcomes LVFS Project

The Linux Foundation welcomes the Linux Vendor Firmware Service (LVFS) as a new project. LVFS is a secure website that allows hardware vendors to upload firmware updates. It’s used by all major Linux distributions to provide metadata for clients, such as fwupdmgr, GNOME Software and KDE Discover.

To learn more about the project’s history and goals, we talked with Richard Hughes, upstream maintainer of LVFS and Principal Software Engineer at Red Hat.

Linux Foundation: Briefly, what is Linux Vendor Firmware Service (LVFS)? Can you give us a little background on the project?

Richard Hughes: A long time ago I wanted to design and build an OpenHardware colorimeter (a device used to measure the exact colors on screen) as a weekend hobby. To update the devices, I also built a command line tool and later a GUI tool to update just the ColorHug firmware, downloading a list of versions as an XML file from my personal homepage. I got lots of good design advice from Lapo Calamandrei for the GUI (a designer from GNOME), but we concluded it was bad having to reinvent the wheel and build a new UI for each open hardware device.

A few months prior, Microsoft made UEFI UpdateCapsule a requirement for the “Windows 10 sticker.” This meant vendors had to start supporting system firmware updates via a standardized format that could be used from any OS. Peter Jones (a colleague at Red Hat) did the hard work of working out how to deploy these capsules on Linux successfully. The capsules themselves are just binary executables, so what was needed was the same type of metadata that I was generating for ColorHug, but in a generic format.

Some vendors like Dell were already generating some kinds of metadata and trying to support Linux. A lot of the tools for applying the firmware updates were OEM-specific, usually only available for Windows, and sometimes made dubious security choices. By using the same container file format as proposed by Microsoft (the reason we use a cabinet archive, rather than .tar or .zip) vendors could build one deliverable that worked on Windows and Linux.

Dell has been a supporter ever since the early website prototypes. Mario Limonciello (Senior Principal Software Development Engineer from Dell) has worked with me on both the lvfs-website project and fwupd in equal measure, and I consider him a co-maintainer of both projects. Now the LVFS supports firmware updates on 72 different devices, from about 30 vendors, and has supplied over 5 million firmware updates to Linux clients.

The fwupd project is still growing, supporting more hardware with every release. The LVFS continues to grow, adding important features like two-factor authentication, OAuth and various other tools designed to get high-quality metadata from the OEMs and integrate it into ODM pipelines. The LVFS is currently supported by donations, which fund the two server instances and some of the test hardware I use when helping vendors.

Hardware vendors upload redistributable firmware to the LVFS site packaged up in an industry-standard .cab archive along with a Linux-specific metadata file. The fwupd daemon allows session software to update device firmware on the local machine. Although fwupd and the LVFS were designed for desktops, both are also usable on phones, tablets, IoT devices and headless servers.

The LVFS and fwupd daemon are open source projects with contributions from dozens of people from many different companies. Plugins allow many different update protocols to be supported.

Linux Foundation: What are some of the goals of the LVFS project?

Richard Hughes: The short-term goal was to get 95% of updatable consumer hardware supported. With the recent addition of HP that’s now a realistic target, although you have to qualify the 95% with “new consumer non-enterprise hardware sold this year,” as quite a few vendors will only support hardware no older than a few years at most, and most still charge for firmware updates for enterprise hardware. My long-term goal is for the LVFS to be seen as a boring, critical part of Linux infrastructure, much like you’d consider an NTP server for accurate time, or a PGP keyserver for trust.

With the recent Spectre and Meltdown issues hitting the industry, firmware updates are no longer seen as something that just adds support for new hardware or fixes the occasional hardware issue. Now that the EFI BIOS is a fully fledged operating system with networking capabilities, companies and government agencies are realizing that firmware updates are as important as kernel updates, and many are now writing in “must support LVFS” as part of any purchasing policy.

Linux Foundation: How can the community learn more and get involved?

Richard Hughes: The LVFS is actually just a Python Flask project, and it’s all free code. If there’s a requirement that you need supporting, either as an OEM, ODM, company, or end user we’re really pleased to talk about things either privately in email, or as an issue or pull request on GitHub. If a vendor wants a custom flashing protocol added to fwupd, the same rules apply, and we’re happy to help.

Quite a few vendors are testing the LVFS and fwupd in private, and we agree to only make the public announcement when everything is working and the legal and PR teams give the thumbs up. From a user point of view, we certainly need to tell hardware vendors to support fwupd and the LVFS before the devices are sitting on shelves.

We also have a low-volume LVFS announce mailing list, or a user fwupd mailing list for general questions. Quite a few people are helping to spread the word, by giving talks at local LUGs or conferences, or presenting information in meetings or elsewhere. I’m happy to help with that, too.

This article originally appeared at Linux Foundation