A version of this article was originally published in POWERlines Volume 3, No. 8.
This is intended as a general ramble about UNIX and UNIX-like operating systems, not as a specific list of instructions on how to select or configure a UNIX system.
Why is UNIX becoming an important part of the commercial computer market when it has been around for 20-plus years? The answer can be over-simplified and stated as "Open Systems".
But that raises the question, "What is an Open System?" An open system can be considered one where the purchaser is not tied to a single vendor for an upgrade path when their need for computing power exceeds that available from their current hardware. They can purchase a small machine from vendor X and, at a later date, when they need greater computing power, purchase a larger machine from vendor Y without having to retrain all their staff.
This is not a complete, or even necessarily accurate, summation of the concept of open systems. The number of definitions is probably twice that of the number of people with an opinion about open systems.
The first point that needs examination when looking at a UNIX system is "How big a machine do I need?" There are no simple answers to this question. It is not simply a matter of how many users you need to support now and in the foreseeable future; it also depends on what those users will be doing.
Manufacturers' guidelines are simply that, guidelines, but they can be useful when you take into account the usage mix they are normally based on. UNIX is a General Purpose Operating System. It is designed to allow a (possibly large) number of users to use a single computer to do their various tasks. How large that number is depends on what they are doing. A machine that is used for occasional data entry, end-of-week and end-of-month processing can support more users than the same machine used for high speed data entry.
UNIX machines have a traditional weakness in the area of transaction processing when compared to purpose-built and proprietary transaction processing systems, but this is changing with the ongoing development of techniques to increase the performance of UNIX systems, both at a hardware and software level.
UNIX systems traditionally perform best with a number of users doing different and unrelated tasks: one user is editing, compiling and testing programs; one is using a word processor; one is accessing a database; another is accessing a separate database; and so forth.
A rule of thumb I have found useful in dealing with the problem of "How big should it be?" is to halve the manufacturer's suggested user numbers. If the machine is claimed to support 20 users, I would expect it to support 10 happily, and 15 with some degradation. The problem with this rule of thumb, like all such generalised rules, is that it can fail when a system really will support the rated number of users. This is happening more frequently with the increasing market requirement for Open Systems.
UNIX systems like two things: a lot of memory, and very fast disks. How much memory is a lot? On a RISC workstation, 16 megabytes is considered a minimum, while a multi-user machine with the same amount of memory could happily support 10 users, and possibly more, depending on the usage mixture.
To determine how much memory is needed, the simplest way is the most direct: test the machine in question. Look at how many users it supports running a known application; use the system monitor tools that are provided to examine memory usage; and if the machine is swapping to and from disk, get more memory.
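As a sketch of that direct test, the paging columns of vmstat(1) can be watched for swap activity. The column positions below are an assumption -- they differ between UNIX variants (and some variants use sar(1) instead), so check your system's manual page and adjust the field numbers.

```shell
# check_swap: read vmstat-style output on stdin and report whether the
# machine paged to or from disk during the sample period.
# The si/so column positions ($7 and $8) are an assumption -- they vary
# between UNIX variants, so check your vmstat(1) manual page.
check_swap() {
    awk 'NR > 2 { si += $7; so += $8 }
         END {
             if (si + so > 0) print "swapping detected - get more memory"
             else             print "no swap activity in this sample"
         }'
}

# Typical use: three samples, five seconds apart.
# vmstat 5 3 | check_swap
```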
How fast a disk is fast enough? The fastest disk system that you can afford. Look at using a RAID (Redundant Array of Inexpensive Disks) system, as these allow disk striping techniques, which speed up access, and can be configured to allow the removal of a failed disk without any loss of data.
Use SCSI-II disk controllers, currently the fastest peripheral control method available for micro- and mini-computers. Use hardware-cached disk controllers, to get even more speed.
Raise the system buffers to 10 Megabytes or more, so that more disk data stays in memory. (Of course, you can only do this if you have enough memory.) Purchase an Uninterruptable Power Supply, so that you will not lose data when there is a power failure.
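The buffer setting is vendor-specific; by way of illustration, on a System V derivative the buffer-cache ceiling is a kernel tunable. The parameter name below (bufhwm, in kilobytes, as spelled on Solaris) is shown only as an example -- consult your own system's tuning guide for the equivalent.

```
* Hypothetical /etc/system entry (Solaris spelling) raising the
* buffer cache high-water mark to about 10 Megabytes (value in KB).
set bufhwm=10240
```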
Purchase a machine that uses a Journaled File System, or purchase an add-on Journaled File System for the machine. At a slight performance cost, these allow a file system to be recovered, after a power failure, to the state it was in just prior to the failure.
Examine disk-partition schemes, putting all the data on one disk, the programs on another, and your temporary file space on a third.
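As a sketch, such a split might look like the following mount layout. The device names and mount points are hypothetical, and the exact file-system table format varies between UNIX variants.

```
# Illustrative three-disk split (device names are hypothetical):
/dev/dsk/c0t0d0s0   /        # system and programs
/dev/dsk/c0t1d0s0   /data    # application data and databases
/dev/dsk/c0t2d0s0   /tmp     # temporary file space
```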
Terminals should be connected using intelligent terminal controllers or terminal servers; see the section "Terminal Controllers vs. Terminal Servers" for an examination of the pros and cons. Dumb terminal controllers increase the load on the CPU, and therefore reduce the performance of the system.
Kernel tuning, again, depends on what you are doing. Don't raise the system-wide limit on open files to 11,000 simply because that is the maximum the system allows; determine the greatest number that is likely to be used, and set it to that. Where you can raise the number of files a single process may open, set that to the likeliest value, given the number of users and what they will be doing.
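A back-of-the-envelope calculation of "the greatest number that is likely to be used" might look like the following; all three figures are assumptions chosen purely for illustration.

```shell
# Rough sizing for the system-wide open-file table: expected users,
# times the files each is likely to hold open, plus headroom for
# background daemons. All three figures are illustrative assumptions.
users=40
files_per_user=20
daemon_headroom=50
echo "suggested file table size: $((users * files_per_user + daemon_headroom))"
# prints: suggested file table size: 850
```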
Don't create a single user ID and have all the users of the system share that ID and password. Create multiple users and, if they all share a common environment, use a single home directory for them. This makes it much easier to tell who is using the system at any time, and to locate the people who leave at night without logging out.
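For instance, separate logins sharing one home directory and environment might appear in the password file like this; the names, IDs and paths are hypothetical.

```
# Hypothetical /etc/passwd entries: one login per person, a single
# shared home directory. Fields: name:passwd:uid:gid:comment:home:shell
alice:x:201:100:Alice Smith:/home/orders:/bin/sh
bob:x:202:100:Bob Jones:/home/orders:/bin/sh
```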
Make friends with the Computer Science department at your local university, college or the like; most CS departments use UNIX, as it is effectively free to them. Contact your vendor; they may be able to help. Obtain access to the Internet, the single largest collection of UNIX experts on Earth, because it covers almost the whole of Earth!
If you want to read more about UNIX systems, go to a university library, or to any large bookstore that caters to computer users, particularly academic users. If you really want to learn how UNIX works, I can recommend "The Design of the UNIX Operating System" by Maurice J. Bach (ISBN 0-13-201757-1), published by Prentice-Hall, but it is not what I would call light reading.
A terminal controller controls the terminals connected to it, reducing the load on the CPU by handling all the I/O to the terminals.
A terminal server is similar, in that it controls the terminals connected to it, but it communicates with the machine over a network connection, usually Ethernet, which requires intervention by the CPU at some point to move the data on and off the network. Controllers load the system the least, but require a cable run from the machine for each terminal; servers allow you to run a single cable from the machine to an area where you wish to install terminals, or to make use of an existing network.