Tackling Unix security in large organisations, part 2


Author: Iain Roberts

Yesterday we began discussing techniques for securing Unix systems in large organisations. Systems are relatively easy; it is a security truism that the weakest link is the people.

In trying to get their job done, employees may bend security rules to make life easier. Why have obscure passwords when a simple one will do? Why keep passwords secret when sharing them saves time? Why not just enable ftp or rsh to get a job done, even if no one remembers to disable it again afterwards?

Administrators (and, if they log onto the servers, users) have to care about the security of the system. Why? Maybe you are concerned about attackers breaking in, or maybe auditors or insurers require you to maintain a certain level of security. Make sure that everyone understands what the drivers are. This seems obvious but is surprisingly rare. Security staff tend to see security as intrinsically a good thing that needs no justification, so they forget to give one. But administrators tend to see security as intrinsically an annoyance that makes it more difficult for them to do their jobs, so they very much need one. Of course, telling all your staff where the security weak points are in detail may be counterproductive, but an overview that includes real-life examples of security breaches can get people interested in the topic.

Having understood why security is important to your organisation, the administration staff need to know what the rules are. There are a couple of things you can be pretty certain of: Your administrators will think that some of the security rules are pointless, and they will think that some of the rules are completely unworkable. They will probably be entirely correct in some of their concerns. A rule that seems entirely workable when devised often turns out to have a major flaw in real life. For example, a rule might ban NFS, but if the company has just invested heavily in an application that requires it, security planners are going to have to give some ground.

Listen to your administrators. If they say that a security rule isn’t workable, don’t ignore them. Find out
their reasons. Ideally, everyone should end up in agreement (if grudging) that either the rule needs amending or there is a workable way to implement it. Not involving the administrators is just asking for the rules to be bypassed, and whilst it might seem easy to blame the staff for this, the security people are at least as much at fault. In some cases, making a rule workable might require having a vendor make software changes. If the vendor is one of the big boys and you're not a major customer, this may be unrealistic (but you can try). If the vendor is a smaller company or the software is open source, you should have a lot more luck getting changes put in, or at least getting a good explanation of why it can't be done that way with suggestions of alternatives.

Because of the lack of security updates, security rules that prevent unauthorised command-line access are especially important, as they protect the server perimeter (not to be confused with the network perimeter). On the technical side, this means rules that prevent passwords going across accessible networks in the clear, and rules that prevent a server being tricked into a trusted relationship with another host. They cover protocols such as telnet, rsh, ftp, DNS, NIS, and NFS. They also cover passwords: technical controls preventing weak passwords and enforcing password aging, for example. Most Unix/Linux flavours have other advanced account controls you can use as well, such as the ability to check passwords against a dictionary, prevent a password being reused within a time limit, and expire passwords after a set period (normally one month).
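Password aging can be audited straight from the shadow file. A minimal sketch, assuming the standard shadow(5) field layout (maximum age in field 5); the 30-day limit is an arbitrary example, not a recommendation:

```shell
#!/bin/sh
# Flag active accounts whose maximum password age (shadow field 5)
# is missing or exceeds a chosen limit. Locked accounts (password
# field beginning with '!' or '*') are skipped.
check_aging() {
    awk -F: -v max="${MAX_AGE:-30}" '
        $2 !~ /^[!*]/ && ($5 == "" || $5+0 > max) {
            print "WEAK AGING: " $1
        }'
}

# Typical use (needs root):  check_aging < /etc/shadow
# To fix an account:         chage -M 30 username
```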

Enforcing strong passwords is critical. Running a password cracker such as Crack or John the Ripper can be a real eye-opener: If you haven’t trained your users in choosing good passwords, it’s very likely that you’ll crack at least a few accounts within a minute.
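A typical John the Ripper session looks something like the following; the file names are examples only, and both commands need root access to read the shadow file:

```shell
# Combine passwd and shadow into a format John understands
# (run as root, since /etc/shadow is not world-readable).
unshadow /etc/passwd /etc/shadow > mypasswd

# Try a wordlist with John's word-mangling rules applied...
john --wordlist=password.lst --rules mypasswd

# ...then list any accounts that fell.
john --show mypasswd
```

Treat the cracked-password file with the same care as the shadow file itself, and delete it when the exercise is over.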

Ross Anderson at Cambridge University has done some research on different methods for choosing passwords. He looked at randomly chosen simple passwords to see how secure and how easy to remember (Post-it Note resistant) they were. He found one method that was easy to remember and difficult to crack. In this initial letter method, the password is made up of the initial letters of a phrase that is meaningful to the user, with some numbers and punctuation substituted for letters. For example, if the phrase was “Hit me baby one more time — Britney Spears,” the password might be *Mb1mtBs.

Encourage administrators to use the initial letter method of choosing passwords, and run regular cracking sessions to spot users who don’t. A quiet notification that
someone’s password has been cracked, along with a reminder of how to choose a strong password and the reasons for doing so, should be enough to get the user to change his ways.

With all this security in place across a large server environment, the risk goes up that either the security or the functionality of the system will get broken by someone either making a mistake or changing something on purpose. For example, file permissions may be changed, or a configuration file might be incorrectly amended. There has to be some ongoing check for compliance, and an audit every two years won’t be enough. There doesn’t have to be anything too
complex or expensive. A simple shell script that runs daily or weekly should be sufficient. It should check everything it can, report non-compliance, and,
if at all possible, automatically correct problems to bring the server in line with the standards. A great deal of caution is appropriate when having a script
make automatic corrections to a server, but the alternative — having someone manually fix the problems — probably will not stand the test of time.
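Such a script need not be elaborate. A minimal sketch of one check (the files and modes below are illustrative, not a real standard; `stat -c %a` is GNU coreutils, so adjust on other flavours):

```shell
#!/bin/sh
# Minimal compliance checker: verify file modes against a standard,
# report drift, and correct it only when FIX=1 is set explicitly.
check_mode() {
    file=$1 want=$2
    [ -e "$file" ] || { echo "MISSING: $file"; return 1; }
    have=$(stat -c %a "$file")   # GNU stat; use 'stat -f %Lp' on BSD
    if [ "$have" != "$want" ]; then
        echo "NON-COMPLIANT: $file is $have, expected $want"
        if [ "${FIX:-0}" = 1 ]; then chmod "$want" "$file"; fi
    fi
}

# Example checks -- extend with whatever your standards require.
check_mode /etc/shadow 640
check_mode /etc/passwd 644
```

Run it from cron daily or weekly, mail the report to the team, and leave `FIX=1` off until you trust the checks completely.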

Having everyone protect their user accounts is important, but it counts for little if there are accounts no one is looking after. These may be accounts of people who have left the organisation, or they could be accounts installed by an application that has never been used. Of course you have a well-documented process for deleting accounts when someone leaves the company or moves departments, but processes are never perfect, so you should have a backup method. In a script (like the script to check ongoing compliance), check for dormant accounts that have never been used, or haven't been used for a few months. Be careful: no one's going to thank you if the guy on call can't log onto a server at 3 a.m. because his account was automatically deleted the previous month. You can automatically delete accounts, but if you do, you have to be very confident that all your administrators and users log onto every server they need access to on a regular basis, and you need to exclude application accounts. One way of doing this is to have administrators manually run a script once a month that automatically logs onto every server they manage, just as a handshake to update the last login time (though you can use it to do other things as well, such as change passwords).
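A sketch of the dormant-account report, assuming GNU `lastlog` for the real run; the `EXCEPTIONS` list of application accounts is a hypothetical example that you would maintain by hand:

```shell
#!/bin/sh
# report_dormant reads lastlog-style output (username in the first
# field, one account per line) and prints deletion candidates,
# skipping a hand-maintained exception list of application accounts.
EXCEPTIONS="appsrv backupjob"   # hypothetical application accounts

report_dormant() {
    while read -r user _; do
        case " $EXCEPTIONS " in
            *" $user "*) continue ;;   # known application account
        esac
        echo "DORMANT: $user"
    done
}

# Typical use: accounts with no login in the last 90 days
#   lastlog -b 90 | tail -n +2 | report_dormant
```

Start by only reporting; move to locking, and eventually deleting, once the exception list has survived a few months without false alarms.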

Social engineering is an obvious way for an attacker to gain command-line access to a server. Many IT departments are large and geographically spread out, maybe even with teams in different countries. Some of your administrators may never have met each other. If a new person joins the team, some people may not find out
for some time. These sorts of departments are easy targets for a social engineering attack. An attacker with some minimal knowledge of how things work
in the organisation has a good chance of getting an account created or a password reset with a well-placed, convincing phone call.

It may sound odd, but make sure your administrators know who is on the team. When someone gets a phone call, he needs to know who is a legitimate administrator and who isn't. It's amazing how often new staff are introduced to the people in their own office, but no one else is even told they exist. (Even introductions within the office can be missed, until a few days later someone timidly asks who the guy in the corner is and what he's doing.) Following the same principle, the security team should be familiar to the administrators (and vice versa), with easy communication routes in both directions, whether that means just being contactable by telephone or sending out a regular security newsletter. This all seems simple, but in modern companies it can be hard enough to figure out which department you're in yourself after the latest reorganisation, let alone who is in other departments.

Recognise that keeping the servers secure is an achievement to be proud of, and the people who manage it should be shown appreciation. Keep
up the security training and awareness with boosters for existing staff as well as education for new people. Keep involving
the administrators and listening to them. Consider sending non-security staff on external security training courses as part of their personal development.

Be aware of security fatigue. Time goes by and there are no (known) security breaches. People naturally start dropping their guard. What was all that security business about anyway? We don’t have a security problem. Before you know it, you’re back to square one. There are a few strategies to counter this. Easiest of all, keep an eye on security break-ins reported in the media and pass on details to staff. This at least keeps awareness high that the criminals are out there, even if they aren’t in here right now. Repeat audits are important, but do the audit correctly. There is very little point in auditing the security
documentation: most attackers won't check your security rules before breaking in; they'll just see what is actually implemented. Better to
do spot checks on real servers, even if only a small number. Also, run some penetration tests. Give an external consultant access inside your perimeter and have him try to break into the computers. Involving external people is useful as they might spot problems you've missed, and they should be able to give you some insight into how you compare with other organisations.

All being well, you end up with security that is good enough to protect the business whilst being workable for the administrators and acceptable for the owners and users. All being really well, you’ll still have that security in place a year or two down the line.

Iain Roberts is a freelance IT consultant with 10 years’ experience working in large Unix environments.

Category:

  • Management