With all this in mind we are now ready to embark on the layout. I have based this on my own method developed when I got hold of 3 old SCSI disks and boggled over the possibilities.
The tables in the appendices are designed to simplify the mapping process. They will help you work through the optimization process as well as provide a useful log in case of system repair. A few examples are also given.
Determine your needs and set up a list of all the parts of the file system you want on separate partitions. Sort them in descending order of speed requirement and note how much space you want to give each partition.
The table in Appendix A is a useful tool for selecting which directories you should put on different partitions. It is sorted in a logical order, with space for your own additions and notes about mount points and additional systems. It is therefore NOT sorted in order of speed; instead the speed requirements are indicated by bullets ('o').
If you plan to use RAID, make a note of the disks you want to use and which partitions you want to put on RAID. Remember that the various RAID solutions offer different speeds and degrees of reliability.
(Just to make it simple I'll assume we have a set of identical SCSI disks and no RAID)
Then we want to place the partitions onto physical disks. The point of the following algorithm is to maximise parallelism and bus capacity. In this example the drives are A, B and C and the partitions are 987654321, where 9 is the partition with the highest speed requirement. Starting at one drive we 'meander' the partition line back and forth over the drives in this way:
A : 9 4 3
B : 8 5 2
C : 7 6 1
This makes the 'sum of speed requirements' as equal as possible across the drives.
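To make the placement rule concrete, here is a minimal sketch in Python. The drive names and the nine numbered partitions are simply the assumptions from the example above; for drives of unequal speed you would adjust the result by hand, as in the slow-drive example further down.

# Sketch of the 'meander' placement described above.
drives = ["A", "B", "C"]
partitions = list(range(9, 0, -1))    # 9 has the highest speed requirement

layout = {d: [] for d in drives}
i, direction = 0, 1
for p in partitions:
    layout[drives[i]].append(p)
    if not 0 <= i + direction < len(drives):
        direction = -direction        # bounce at either end: the 'meander'
    else:
        i += direction

for d in drives:
    print(d, ":", *layout[d])         # prints the A/B/C table shown above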
Use the table in Appendix B to select which drives to use for each partition in order to optimize for parallelism.
Note the speed characteristics of your drives and enter each directory under the appropriate column. Be prepared to shuffle directories, partitions and drives around a few times before you are satisfied.
After that it is time to select the partition numbering for each drive.
Use the table in Appendix C to select partition numbers in order to optimize for track characteristics. At the end of this you should have a table sorted in ascending partition number. Fill these numbers back into the tables in Appendices A and B.
You will find these tables useful when running the partitioning program (fdisk or cfdisk) and when doing the installation.
After this there are usually a few partitions that have to be 'shuffled' over the drives, either to make them fit or because of special considerations regarding speed, reliability, special file systems etc. Nevertheless this gives what this author believes is a good starting point for the complete setup of the drives and the partitions. In the end it is actual use that will determine the real needs, since we have made so many assumptions. After commencing operations you should expect that a time will come when repartitioning will be beneficial.
For instance, if one of the 3 drives in the example above is very slow compared to the other two, a better plan would be as follows:
A : 9 6 5
B : 8 7 4
C : 3 2 1
Often drives can be similar in apparent overall speed, but some advantage can be gained by matching drives to the file size distribution and the frequency of access. Thus binaries are suited to drives with fast access that offer command queueing, while libraries are better suited to drives with higher transfer speeds, where IDE offers good performance for the money.
Avoid drive contention by looking at tasks: for instance if you are
accessing /usr/local/bin, chances are you will soon also need files
from /usr/local/lib, so placing these on separate drives reduces
seeking and allows parallel operation and drive caching. It is
quite possible that choosing what may appear to be less than ideal
drive characteristics will still be advantageous if you can gain
parallel operations. Identify common tasks, what partitions they use,
and try to keep these on separate physical drives.
Just to illustrate my point I will give a few examples of task analysis here.
Office software such as editing, word processing and spreadsheets are typical examples of low intensity software, both in terms of CPU and disk use. However, if you have a single server for a huge number of users you should not forget that most such software has an auto save facility which causes extra traffic, usually on the home directories. Splitting users over several drives will reduce contention.
News readers also feature auto save on home directories, so ISPs should consider separating home directories from news spools.
News spools are notorious for their deeply nested directories and
their large number of very small files. Loss of a news spool
partition is not a big problem for most people either, so spools are
good candidates for a RAID 0 setup with many small disks to distribute
the many seeks among multiple spindles. The manuals and FAQs for the
INN news server recommend putting the news spool and the .overview
files on separate drives for larger installations.
There is also a web page dedicated to INN optimisation that is well worth reading.
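To illustrate, such a split could be mounted along these lines. This is only a sketch: the device names, the file system type and the overview directory are assumptions for the example, not INN defaults, so check the INN documentation for the actual paths on your installation.

# Hypothetical /etc/fstab excerpt; devices and paths are assumptions.
/dev/sdc1  /var/spool/news            ext2  defaults  0  2
/dev/sdd1  /var/spool/news/overview   ext2  defaults  0  2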
Database applications can be demanding both in terms of drive usage and speed requirements. The details are naturally application specific; read the documentation carefully with disk requirements in mind. Also consider RAID, both for performance and reliability.
Mail reading and sending involves home directories as well as in- and outgoing spool files. If possible, keep home directories and spool files on separate drives. If you are running a mail server or a mail hub, consider putting in- and outgoing spool directories on separate drives.
Losing mail is an extremely bad thing if you are managing an ISP or major hub. Think about RAIDing your mail spool and consider frequent backups.
Software development can require a large number of directories for
binaries, libraries and include files as well as source and project
files. If possible, split these across separate drives. On small
systems you can place /usr/src and project files on the same drive
as the home directories.
WWW browsing is becoming more and more popular. Many browsers have a local cache which can expand to rather large volumes. As this cache is used when reloading pages or returning to the previous page, speed is quite important here. If however you are connected via a well configured proxy server you do not need more than typically a few megabytes per user for a session. See also the sections on Home Directories and WWW.
One way to avoid the aforementioned pitfalls is to assign fixed
partitions only to directories with a fairly well known size, such as
swap, /tmp and /var/tmp, and group the remainder into the remaining
partitions using symbolic links.
Example: a slow disk (slowdisk), a fast disk (fastdisk) and an
assortment of files. Having set up swap and /tmp on fastdisk, and
/home and root on slowdisk, we have the (fictitious) directories
/a/slow, /a/fast, /b/slow and /b/fast left to allocate on the
partitions /mnt.slowdisk and /mnt.fastdisk, which represent the
remaining partitions of the two drives.
Putting /a
or /b
directly on either drive gives the same
properties to the subdirectories. We could make all 4 directories
separate partitions but would lose some flexibility in managing
the size of each directory. A better solution is to make
these 4 directories symbolic links to appropriate directories on
the respective drives.
Thus we make
/a/fast point to /mnt.fastdisk/a/fast or /mnt.fastdisk/a.fast
/a/slow point to /mnt.slowdisk/a/slow or /mnt.slowdisk/a.slow
/b/fast point to /mnt.fastdisk/b/fast or /mnt.fastdisk/b.fast
/b/slow point to /mnt.slowdisk/b/slow or /mnt.slowdisk/b.slow
and we get all the fast directories on the fast drive without having to set up a partition for all 4 directories. The second (right hand) alternative gives us a flatter file system, which in this case can make it simpler to keep an overview of the structure.
The disadvantage is that it is a complicated scheme to set up and plan in the first place, and that all mount points and partitions have to be defined before the system installation.
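For completeness, here is a minimal sketch in Python that sets up the flatter (right hand) variant of the links above, assuming the two partitions are already mounted on /mnt.fastdisk and /mnt.slowdisk and that it is run with sufficient privileges. The same can of course be done by hand with mkdir and ln -s.

import os

# The flat (right hand) variant from the example above.
links = {
    "/a/fast": "/mnt.fastdisk/a.fast",
    "/a/slow": "/mnt.slowdisk/a.slow",
    "/b/fast": "/mnt.fastdisk/b.fast",
    "/b/slow": "/mnt.slowdisk/b.slow",
}

for link, target in links.items():
    os.makedirs(target, exist_ok=True)                 # real directory on the drive
    os.makedirs(os.path.dirname(link), exist_ok=True)  # /a and /b must exist
    os.symlink(target, link)                           # e.g. /a/fast -> /mnt.fastdisk/a.fast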