Configuration File Organization
I have said this to many people, but I will say it again here: the true power of running Linux comes from text-based configuration files. By default these files are organized into logical groups of options that can be batched for specific functions. In addition, you can comment out options and add notes, and those notes can save you from major issues when you later try to reconfigure a system. By combining logical organization with proper comments, you also build the foundation for easily tracking unauthorized or incorrect changes.
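As a minimal sketch of what this looks like in practice, here is a hypothetical config file (the file name, options, and values are illustrative, not from any real program) with options grouped by function and old values preserved as commented-out notes:

```shell
#!/bin/sh
# Write a hypothetical config file with logically grouped, commented options.
conf=$(mktemp)
cat > "$conf" <<'EOF'
## --- Logging options ---
log_level = info
# log_level = debug   # note: enable only when troubleshooting
log_file = /var/log/demo.log

## --- Network options ---
listen_port = 8080
# listen_port = 9090  # note: old port, changed after firewall update
EOF

# Because notes live on comment lines, the active options are easy to
# extract (and to diff against a known-good copy to spot changes):
grep -E '^[^#[:space:]]' "$conf"
```

The same `grep` run against yesterday's copy of the file, piped through `diff`, is a quick way to spot the unauthorized or incorrect changes mentioned above.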
File and Directory Organization
As many users of Linux-based distros have noted, the directory structure in Linux-based systems is different from Windows; this is because it follows guidelines for logical file containers. Specific directories are intended to hold only one type of file; for a description of these directories and their contents, please refer to http://www.tuxfiles.org/linuxhelp/linuxdir.html. This organization makes it easy to locate the files for your specific needs; for example, executables are kept in separate directories from their associated library files.
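You can see this separation on a typical Linux system with a couple of commands (the exact paths vary by distro, so treat the output below as illustrative):

```shell
#!/bin/sh
# Executables resolve to a dedicated binaries directory...
bindir=$(dirname "$(command -v ls)")
echo "ls lives in: $bindir"          # typically /bin or /usr/bin

# ...while configuration, logs, and libraries live in their own
# purpose-specific directories:
ls -d /etc /var/log 2>/dev/null
```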
As an administrator, you are responsible for ensuring that this level of organization is followed; if it is properly maintained, navigating your system(s) becomes very simple. In addition to following the default directory structure, you will often need custom file groups, which are easier to work with when they are organized and named by a consistent set of rules (I will give an example in a future posting about dual booting).
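As a sketch of such a rule, suppose you name custom directories by a `<client>/<environment>/<role>` convention (the client and role names here are made up for illustration):

```shell
#!/bin/sh
# Build a small tree in a scratch directory using a consistent
# <client>/<environment>/<role> naming rule.
base=$(mktemp -d)
for client in acme globex; do
    for env in prod test; do
        mkdir -p "$base/$client/$env/www" "$base/$client/$env/db"
    done
done

# A predictable layout makes targeted operations a one-line glob,
# e.g. every production web root at once:
ls -d "$base"/*/prod/www
```

The payoff is exactly the navigability described above: anyone who knows the rule can find (or script against) any file group without hunting.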
Next you will want to review the organization of the physical or virtual machines on your network; these systems will be easier to navigate and maintain if you break them into logical groups. The most common grouping methods are by physical location, IP address subnet, and physical wiring. When your systems are properly organized on the network, you simplify maintenance and troubleshooting: you have clear paths to specific hosts, address-based relationships for user tracking, and simple location-based groups of the physical machines (which is very important in server farms).
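A simple way to make subnet grouping pay off is a host inventory where each /24 maps to a location (the addresses, racks, and hostnames below are hypothetical):

```shell
#!/bin/sh
# A hypothetical inventory file, grouped by /24 subnet so an address
# immediately tells you the physical location of the machine.
inv=$(mktemp)
cat > "$inv" <<'EOF'
# 10.0.1.0/24 -- rack A, web servers
10.0.1.10  web01
10.0.1.11  web02
# 10.0.2.0/24 -- rack B, database servers
10.0.2.10  db01
EOF

# Listing every host in rack A becomes a one-line filter:
awk '$1 ~ /^10\.0\.1\./ { print $2 }' "$inv"
```

During an outage in rack A, that one-liner gives you the exact set of hosts to check, which is the clear troubleshooting path described above.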
What about hiding components and utilizing security through obscurity?
First of all, security through obscurity is not a true security principle; it only delays the inevitable, and it is better to address issues directly than to try to hide them behind a curtain. If you are bent on utilizing obscurity anyway, you can reorganize the files on your system through links and custom naming conventions so they remain navigable with the proper knowledge but are still mildly obscure. The same goes for the computers on your network. But remember that without clear logical organization it will take you longer to address and resolve issues, which will in turn hurt your productivity and uptime.
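To make the links-and-naming idea concrete, here is a minimal sketch (the paths and names are invented for the example): the real content stays in a logically organized location, while an obscurely named symlink provides the "hidden" entry point.

```shell
#!/bin/sh
# Keep the real files in a clearly organized directory...
base=$(mktemp -d)
mkdir -p "$base/srv/backup-scripts"
printf '#!/bin/sh\n' > "$base/srv/backup-scripts/nightly.sh"

# ...and expose them under an obscure alias via a symlink:
ln -s "$base/srv/backup-scripts" "$base/x9"

# The alias works only if you know it exists, but the real path is
# still one readlink away for the informed administrator:
ls "$base/x9"
readlink "$base/x9"
```

Note the trade-off this illustrates: the symlink costs an attacker almost nothing to follow once discovered, while the obscure name costs you clarity every day, which is exactly why the paragraph above recommends against relying on it.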