
Questions about SysAdmin Training from The Linux Foundation? Join the Next #AskLF

Let the knowledge-sharing continue! On Monday, July 31 at 10 AM PT, The Linux Foundation will present another installment in its #AskLF program: a series of monthly Twitter chats hosted by The Linux Foundation thought leaders and experts. The program allows the open source community to ask a designated host questions about the organization’s offerings and strategies. Previous topics have included open networking, Linux Foundation Training and Certification programs, the basics of Cloud Foundry, and gender & diversity inclusion at Linux Foundation events. This fifth chat in the series will focus on The Linux Foundation’s SysAdmin Training offerings, hosted by Nate Kartchner, Training Marketing Manager at The Linux Foundation.

#AskLF began earlier this year as a way to shed light on the organization’s various experts on open source industry topics, as well as the organization’s strategy and vision. The series also gives @linuxfoundation followers a way to access its many resources — and engage with one another over shared interests. Nate Kartchner has been with The Linux Foundation since 2014, helping lead the evolution of The Linux Foundation’s growing role in training the open source pros of tomorrow.

His #AskLF chat will take place the Monday after SysAdmin Day: a professional holiday the organization has recognized for years.

From free introductory MOOCs on a handful of important open technology topics to mid-senior level open source training, The Linux Foundation has steadily expanded its training initiatives to include professional training on the open source projects that matter most. Students at various places in their career trajectories can find guidance in Linux Foundation Training offerings such as LFS201: Essentials of System Administration.

@linuxfoundation followers will have the chance to ask Nate questions about how Linux Foundation Training can help guide their burgeoning and existing SysAdmin career journeys.

Sample questions include:

  • How does LFS201 prepare students for real-world SysAdmin challenges?

  • How can an absolute beginner kick off their SysAdmin career?

  • How will a Linux Foundation SysAdmin certification help me get hired?

Here’s how you can participate in the #AskLF:

  • Follow @linuxfoundation on Twitter: Hosts will take over The Linux Foundation’s account during the session.

  • Save the date: July 31, 2017 at 10 a.m. PT.

  • Use the hashtag #AskLF to ask Nate your questions while he hosts. Click here to spread the news of #AskLF with your Twitter community.

More dates and details for future #AskLF sessions to come! We’ll see you on Twitter, July 31st at 10 a.m. PT.

Get more tips for SysAdmins considering a Linux Foundation certification here. 

The Truth About Sysadmins

You’ve probably heard many stereotypes about system administrators and the job itself. Like most stereotypes, they have varying levels of accuracy, so it’s worth digging a little deeper if you’re considering a career change.

Here’s the truth about some of the things you may have heard about network and system administration.

“Devotion to Duty,” xkcd, CC BY-NC 2.5

Read more at OpenSource.com

Linus Torvalds: Gadget Reviewer

If you know anything about Linus Torvalds, you know he’s the mastermind and overlord of Linux. If you know him at all well, you know he’s also an enthusiastic scuba diver and the author of Subsurface, a do-it-all dive log program. And, if you know him really well, you know that, like many other developers, he loves gadgets. Now, he’s starting his own gadget review site on Google+: Working Gadgets.

The title says it all, but Torvalds explained:

I was throwing out a lot of old gadgets that I no longer use. Because I love crazy gadgets, and not all of them are great or stay useful. It’s not always even computer stuff: my wife can attest to the addition of crazy kitchen gadgets I have tried.

Read more at ZDNet

The Difference Between SOA and Microservices Isn’t Size

For those who have been in the technology industry for some time, there is a tendency to compare or even equate the current microservices phenomenon with the more archaic Service Oriented Architecture (SOA) approach. This is done implicitly in many cases, but also quite explicitly with statements such as “microservices is nothing more than the new SOA” or “Amazon is the only company to get SOA right.”

This is unsurprising, because it’s rooted in fact. For all of its other faults, SOA was a vision of enterprises that looks remarkably like what progressive organizations are building today with cloud native architectures composed of, among other things, microservices. Stripped to its core, SOA was the idea that architectures should be composed of services rather than monolithic applications.

Read more at RedMonk

Sustainable Open Source – Where Are the Vendors?

Harvard Business Review has an article comparing old, crusty open source code to the Y2K ordeal. Go ahead and read it – it’s worth your time.

What if I told you that the entire NTP infrastructure relies on the sole effort of a 61-year-old who has pretty much volunteered his own time for the last 30 years? His name is Harlan Stenn…

For a number of years Stenn has worked on a shoestring budget. He is putting in 100 hours a week to put patches on code, including requests from big corporations like Apple… And this has led to delays in fixing security issues and complaints.

Read more at OSEN

Multi-Server Samba Installation to Protect Your Network Against Outages and Network Attacks

The recent outages of AWS and the attacks on the DNS network have shown the need to distribute critical infrastructure across multiple cloud providers. This distribution is particularly important for centralized authentication services, which provide users and permissions for various services and organizational offices. Building on the last tutorial, which covered connecting clients to cloud-based Samba 4 domain controllers, this article explains how to extend the network with an additional Samba 4 based site server. I will lead you through the process step by step and explain the best practices of running a multi-server installation.

This guide can also be used to connect two on-premises Samba 4 installations, or an on-premises installation with a cloud-based one, for example, Univention Corporate Server on Amazon.

Using a multi-server setup protects the network from the failure of a single data center and allows continued operation even if a single server is unavailable. At the same time, using locations that are closer to your workstations from a network perspective can speed up the login process and save time.

An incorrect setup, however, will at best slow down the network. At worst, it might not replicate the data correctly, and an outage of one server could then interrupt all systems.

 

Prerequisites

Server 1

This guide assumes that the first Samba 4 domain controller is already running without any issue.

Univention Corporate Server (UCS) provides a powerful, yet easy to use identity management solution that includes a preconfigured Samba 4. UCS’ unified user and rights management solution is based upon a domain concept that will allow you to skip some steps within this guide. If you are using this guide with a UCS-based system, server 1 should be a UCS master or backup.

If you are planning on using a Debian or Ubuntu based system, the Samba Wiki has an excellent Getting Started Guide with all the steps needed to get the domain provisioned. The server or virtual machine will need to use a fixed IP for this guide to work flawlessly.

Server 2

We will also assume that you have set up the second server at the target location. If using UCS, it will be considerably easier to finish the installation once the VPN connection has been established. Also disable the automatic join, as we want to change some site settings.

If you are using Debian or Ubuntu, please install Samba 4 from the package management system:

$ sudo apt-get install samba

 

VPN Endpoints

For simplicity and security, this guide assumes that the VPN is running on two dedicated servers, thus reducing the load on the domain controllers. It is, however, possible to run OpenVPN on the domain controllers.

VPN Connection

Samba uses multiple ports and protocols to connect two or more servers, including LDAP, Kerberos, and DNS. Using a VPN reduces the number of ports and protocols exposed to the Internet to two, making it considerably easier to secure the connection. As before, we will use OpenVPN to connect the two systems.

If not installed already, you can install it with the following command:

$ sudo apt-get install openvpn

Since port 1194 is used for client-server connections in most cases, this example uses 1195 for the site-to-site connection, which consequently needs to be opened in the firewall.

On UCS-based systems, the configuration registry can be used to open the port:

$ sudo ucr set security/packetfilter/udp/1195/all=ACCEPT
$ sudo service univention-firewall restart

On Debian and Ubuntu, you can manually add the port to your iptables configuration:

$ sudo iptables -A INPUT -p "udp"  --dport 1195 -j ACCEPT

 

Configuration

First, a secret key is needed to connect the two sites. OpenVPN can create it for you with the following command:

$ sudo openvpn --genkey --secret /etc/openvpn/static.key
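The key must be identical on both endpoints and must stay secret. A minimal way to handle that, assuming you can reach the second VPN server over SSH (vpn2.example.com is a placeholder for its real name), is to restrict the key’s permissions and copy it over:

```shell
# Only root should be able to read the shared secret
sudo chmod 600 /etc/openvpn/static.key

# Copy it to the second endpoint (replace the placeholder hostname)
sudo scp /etc/openvpn/static.key root@vpn2.example.com:/etc/openvpn/static.key
```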

Both sides will need nearly identical configuration files saved in /etc/openvpn:

## Topology and protocol settings
dev tun
proto udp
management /var/run/management-udp unix

## the shared secret for the connection
secret /etc/openvpn/static.key

## Encryption cipher to use for the VPN
cipher AES-256-CBC

## Compression algorithm to use
comp-lzo

## The port on which the VPN server should listen
port 1195

## The address used internally by OpenVPN
ifconfig 10.255.255.10 10.255.255.11

## Route traffic to remote network
## The network should be the one used by the remote server
route 10.200.10.0 255.255.255.0

## Additional server configuration
keepalive 10 120
persist-key
persist-tun

## Configure the logfile and the verbosity
verb 1  
mute 5
status /var/log/openvpn-status.log

For the second server, the route has to point to the other server’s network, and the addresses in the ifconfig statement have to be swapped. Additionally, the keyword remote has to be used to specify the endpoint. The full resulting config file thus looks like this:

## Topology and protocol settings
dev tun
proto udp
management /var/run/management-udp unix

## the shared secret for the connection
secret /etc/openvpn/static.key

## Encryption cipher to use for the VPN
cipher AES-256-CBC

## Compression algorithm to use
comp-lzo

## The external DNS name or IP of the other VPN endpoint
remote vpnserver.univention.com 1195

## The address used internally by OpenVPN
ifconfig 10.255.255.11 10.255.255.10

## Route traffic to remote network
## The network should be the one used by the remote server
route 10.210.237.0 255.255.255.0

## Additional server configuration
keepalive 10 120
persist-key
persist-tun

## Configure the logfile and the verbosity
verb 1  
mute 5
status /var/log/openvpn-status.log

 

Completing the Connection

Once the static key and configuration are copied to the correct location, establish the VPN connection by restarting OpenVPN on both systems:

$ sudo service openvpn restart
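To check that the tunnel is actually up, you can ping the remote tunnel address from server 1 (the addresses are the ifconfig pair from the configuration above) and inspect the status log the config writes:

```shell
# Server 2's tunnel address should answer once the VPN is established
ping -c 3 10.255.255.11

# The status log lists the connected peer and traffic counters
sudo tail /var/log/openvpn-status.log
```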

 

Changes to Server 1

Once the VPN is established, it is time to join the second server to the domain. For this, some small changes are needed on the first server.

First, server 1 should use server 2 as its backup name server. That way, even if there is an issue with DNS on the first server, the domain will still function correctly.

To set the name resolution on UCS, execute:

$ sudo ucr set nameserver2=10.200.10.11

On Debian/Ubuntu, add the following line to /etc/resolv.conf:

nameserver  10.200.10.11

 

Server 2 Join

Preparations

Similar to server 1, server 2 should also use the other system as a DNS fallback.

Again on UCS execute the following with the proper IP:

$ sudo ucr set nameserver2=10.210.237.171

On Debian/Ubuntu, add the following line to /etc/resolv.conf:

nameserver  10.210.237.171

You also need to ensure that NTP is getting its time from server 1, to guarantee that both systems have synchronized clocks.

On UCS, using the right IP, run:

$ sudo ucr set timeserver=10.210.237.171

On Debian/Ubuntu edit /etc/ntp.conf and add:

server 10.210.237.171

On a Debian or Ubuntu system, you will need to configure Kerberos before trying to join the domain. Overwrite /etc/krb5.conf with the following settings, changing the default_realm as needed:

[libdefaults]
    dns_lookup_realm = false
    dns_lookup_kdc = true
    default_realm = KORTE.UNIVENTION.COM

You can test the settings by running:

$ kinit administrator

 

Domain Join

Once all the previous steps have been taken, it is time to join the Samba domain, including setting a new AD site. Sites allow clients to prefer a particular DC or group of DCs. If no site is configured, the server will join the default site.

To define the site on UCS, run:

$ sudo ucr set samba4/join/site=my_secondsite

replacing my_secondsite with the actual name of your site.

The last step is to execute the Univention join.

$ sudo univention-join

On Debian/Ubuntu systems, you will need to issue the following command to join the server to a site:

$ sudo samba-tool domain join kevin.univention.com DC -U"kevinadministrator" --dns-backend=SAMBA_INTERNAL --site=secondsite
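Whichever route you took, it is worth verifying that the new DC actually replicates. A quick sanity check with samba-tool, run on either DC, is:

```shell
# Show inbound and outbound replication partners and their status
sudo samba-tool drs showrepl
```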

 

Additional Considerations on Debian/Ubuntu

If you are using UCS, please skip this section. The UCS domain join takes care of the following two tasks.

Verify the DNS Records

Some versions of Samba 4 do not create all needed DNS records when joining a second DC, so you need to verify that the host record and the objectGUID record have been created.

First, verify the host record with the following command on server 1:

$ host -t A server2.$(hostname -d)

Replace server2 with the actual name of your server.

If you do not get a result, you can create the entry on server 1 with the following command:

$ sudo samba-tool dns add server1 $(hostname -d) server2 A 10.200.10.11 -Uadministrator

Then determine the objectGUID using the Samba database:

$ sudo ldbsearch -H /var/lib/samba/private/sam.ldb '(invocationId=*)' --cross-ncs objectguid

Results will look similar to this:

# record 1
dn: CN=NTDS Settings,CN=SERVER2,CN=Servers,CN=second,CN=Sites,CN=Configuration,DC=kevin,DC=univention,DC=com
objectGUID: 1b6f180e-5bc2-471f-a029-8c078e58c656

# record 2
dn: CN=NTDS Settings,CN=SERVER1,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=kevin,DC=univention,DC=com
objectGUID: d7e12d36-2588-4d2c-b51c-3c762eab046b

# returned 2 records
# 2 entries
# 0 referrals

Select the entry whose DN contains the name of your server and use its objectGUID in the following command:

$ host -t CNAME 1b6f180e-5bc2-471f-a029-8c078e58c656._msdcs.$(hostname -d)

If not found, you can add it with the following command:

$ sudo samba-tool dns add server1 _msdcs.$(hostname -d) 1b6f180e-5bc2-471f-a029-8c078e58c656 CNAME server2.$(hostname -d) -Uadministrator

Please remember to replace the server names and objectGUID in all of these commands whenever appropriate.

Sysvol Synchronization

Lastly, the group policies need to be synchronized. This can be done in many different ways, from a simple cron job to sophisticated multi-server synchronization. The Samba wiki has an overview of the different approaches.
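As a minimal illustration of the cron-based approach, a one-way pull from server 1 could look like the following root cron entry on server 2. The paths are the Samba defaults and server1 is a placeholder; note that a plain rsync like this does not preserve the AD ACLs the way the more sophisticated methods described on the wiki do:

```shell
# /etc/cron.d/sysvol-sync -- pull the sysvol share from server 1 every 5 minutes
*/5 * * * * root rsync -a --delete root@server1:/var/lib/samba/sysvol/ /var/lib/samba/sysvol/
```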

Conclusion

Setting up multi-location server systems can be a daunting task that requires some consideration and planning. However, the result of having a more robust and faster network is often enough to justify placing domain controllers off-site or in the cloud. Additionally, a cloud component in your identity management solution, whether it is your first cloud DC or a second one, can often serve as the point to connect third-party services in a resilient manner.

The right tools, such as UCS, can accelerate many of the more complex tasks of setting up a multi-server domain. A professional and integrated domain ensures compliant and fail-safe authentication and policy services across locations and clouds, such as AWS.

Industry Experts from Yelp, IBM, Netflix, and More Will Speak at MesosCon in Los Angeles

Conference highlights for MesosCon North America — taking place Sept. 13-15 in Los Angeles, CA — include real-world experiences and insight from companies deploying Mesos in the datacenter.

This annual conference brings together users and developers to share and learn about the Mesos project and its growing ecosystem. The conference features two days of sessions focused on the Apache Mesos Core and related technologies, as well as a one-day hackathon.  

Session highlights include:

  • How Yelp.com Runs on Mesos in AWS Spot Fleet for Fun and Profit, Kyle Anderson, Yelp

  • Distributed Deep Learning on Mesos with GPUs and Gang Scheduling, Min Cai and Alex Sergeev, Uber

  • DataStax Enterprise on DC/OS – Yes, it’s Possible; Customer Case Studies, Kathryn Erickson and Ravi Yadav, DataStax

  • Introduction to Multi-tenancy in Mesos, Jay Guo, IBM

  • Real time event processing and handling stateful applications on Mesos, Balajee Nagarajan and Venkatesh Sivasubramanian, GE Digital

  • OpenWhisk as a Mesos Framework, Tyson Norris, Adobe

  • Practical container scheduling: juggling optimizations, guarantees, and trade-offs at Netflix, Sharma Podila, Netflix

  • Fault tolerant frameworks – making use of CNI without docker, Aaron Wood, Verizon

You can view the full schedule of sessions and activities and save $200 when you register by July 25. Register Now!

Remote Sessions Over IPv6 with SSH, SCP, and Rsync

Our familiar old file-copying friends SSH, SCP, and Rsync are all IPv6-ready, which is the good news. The bad news is that they have syntax quirks you must learn to make them work. Before we get into the details, though, you might want to review the previous installments in our meandering IPv6 series.

SSH and SCP

Like all good Linux admins, you know and use SSH and SCP. Both have some differences and quirks for IPv6 networks. These quirks are in the remote addresses, so once you figure those out, you can script SSH and SCP just like you’re used to, and use public key authentication.

By default, the sshd daemon listens for both IPv4 and IPv6 protocols. You can see this with netstat:

$ sudo netstat -pant|grep sshd
tcp   0  0 0.0.0.0:22  0.0.0.0:*  LISTEN   1228/sshd       
tcp6  0  0 :::22       :::*       LISTEN   1228/sshd

You may disable either one with the AddressFamily setting in sshd_config. This example disables IPv6:

AddressFamily inet

The default is any. inet6 means IPv6 only.

On the client side, logging in over IPv6 networks is the same as IPv4, except you use IPv6 addresses. This example uses a global unicast address (here from the 2001:db8::/32 documentation range, standing in for your LAN’s addresses):

$ ssh carla@2001:db8::2

Just like IPv4, you can log in, run a command, and exit all at once. This example runs a script to back up my files on the remote machine:

$ ssh carla@2001:db8::2 backup

You can also streamline remote root logins. Wise admins disable root logins over SSH, so you have to log in as an unprivileged user and then change to root. This is not so laborious, but we can do it all with a single command:

$ ssh -t  carla@2001:db8::2 "sudo su - root -c 'shutdown -h 120'" 
carla@2001:db8::2's password: 
[sudo] password for carla:

Broadcast message from carla@remote-server
        (/dev/pts/2) at 9:54 ...

The system is going down for halt in 120 minutes!

The shutdown example will stay open until it finishes running, so you can change your mind and cancel the shutdown in the usual way, with Ctrl+c.

Another useful SSH trick is to force IPv6 only, which is great for testing:

$ ssh -6 2001:db8::2

You can also force IPv4 with -4.

You may access hosts on your link-local network by using the link-local address. This has an undocumented quirk that will drive you batty unless you know about it: you must append your network interface name to the remote address with a percent sign.

$ ssh carla@fe80::ea9a:8fff:fe67:190d%eth0

scp is weird. For link-local addresses you have to specify the network interface with the percent sign, enclose the address in square brackets, and escape the brackets if your shell requires it:

$ scp filename [fe80::ea9a:8fff:fe67:190d%eth0]:
carla@fe80::ea9a:8fff:fe67:190d's password:
filename

You don’t need the interface name for global unicast addresses, but you still need the brackets:

$ scp filename [2001:db8::2]:
carla@2001:db8::2's password: 
filename

This example logs into a different user account on the remote host, specifies the remote directory to copy the file into, and changes the filename:

scp filename userfoo@[fe80::ea9a:8fff:fe67:190d%eth0]:/home/userfoo/files/filename_2

Rsync

rsync requires wrapping the remote IPv6 address in both quotes and square brackets. Global unicast addresses do not need the interface name:


$ rsync -av /home/carla/files/ 'carla@[2001:db8::2]':/home/carla/stuff
carla@2001:db8::2's password: 
sending incremental file list

sent 100 bytes  received 12 bytes  13.18 bytes/sec
total size is 6,704  speedup is 59.86

Link local addresses must include the interface name:


$ rsync -av /home/carla/files/ 'carla@[fe80::ea9a:8fff:fe67:190d%eth0]':/home/carla/stuff

As always, remember that the trailing slash on your source directory, for example /home/carla/files/, means that only the contents of the directory are copied. Omitting the trailing slash copies the directory and its contents. Trailing slashes do not matter on your target directory.
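The trailing-slash behavior is easy to verify locally with a throwaway directory (the paths here are arbitrary):

```shell
mkdir -p /tmp/demo/files && touch /tmp/demo/files/a.txt

# Trailing slash: only the contents are copied
rsync -a /tmp/demo/files/ /tmp/demo/dest1/   # dest1/a.txt

# No trailing slash: the directory itself is copied
rsync -a /tmp/demo/files /tmp/demo/dest2/    # dest2/files/a.txt
```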

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

The Ultimate Linux Workstation: The Dell 5720 AIO

Want a cheap Linux desktop? Look elsewhere. But, if you want a kick-rump-and-take-names desktop for serious graphics or development work, you want the Dell 5720 AIO workstation.

This take-no-prisoners workstation starts at $1,699, but the model I looked at costs over $3,200. It’s worth it.

This model came with a quad-core 3.8GHz Intel Xeon E3-1275 processor. In a word, it’s fast.

It also comes with 64GB of 2133MHz DDR4 ECC RAM. That’s fast, too. The main memory is backed by a 512GB M.2 PCIe SSD and a pair of 1TB 2.5-inch SATA (7,200 RPM) hard drives. Yes, they’re really fast, too.

Read more at ZDNet

Yandex Open Sources CatBoost, A Gradient Boosting Machine Learning Library

Artificial intelligence is now powering a growing number of computing functions, and today the developer community is getting another AI boost, courtesy of Yandex. The Russian search giant — which, like its US counterpart Google, has extended into a myriad of other business lines, from mobile to maps and more — announced the launch of CatBoost, an open source machine learning library based on gradient boosting — the branch of ML that is specifically designed to help “teach” systems when you have a very sparse amount of data, and especially when the data may not all be sensorial (such as audio, text, or imagery), but includes transactional or historical data, too.

CatBoost is making its debut in two ways today. (I think ‘Cat’, by the way, is a shortening of ‘category’, not your feline friend, although Yandex is enjoying the play on words. If you visit the CatBoost site you will see what I mean.)

Read more at TechCrunch