Chapter 7: Disks, Tapes, Partitions and Physical Directories
  1. Creating Filesystems on the Disks
  2. Advanced Filesystems
  3. Figure 1. Table of partitions to be mounted at boot time
  4. Allowing Other Machines Access To Local Disks
  5. Netgroups
  6. Suggested Directory Structure
  7. Small Computer System Interface (SCSI) Terminology
  8. Tapes and CDROMS

Creating Filesystems on the Disks

Partitions

These days, all disks on UNIX workstations are SCSI devices. All disks have to be formatted, but it is unlikely you will have to do this yourself as it is normally done by the vendor. Part of the format process involves writing a disk label which, among other things, contains a partition table. This defines areas (partitions) on the disk in terms of their starting cylinder and size in cylinders. The partitions become useful when the system administrator creates a UNIX filesystem in them. Raw partitions which do not have a file system may be used for swap space; a usual default is the second partition on the system disk. Conventions for names and
permissible uses of partitions vary between operating systems; see Figure 1 for some examples. Note that it is often necessary to increase swap space greatly to run certain software (such as ARC/INFO). As data in existing partitions is lost if the partitions are resized, it is much preferable to get the partitioning correct at the first attempt. We have also found that complicated partition tables with many small partitions are not a good idea; large, multi-purpose partitions give greater flexibility.

Making Unix File Systems

Once the appropriate partition table is on the disk then file systems have to be created within the desired partitions.

The command used to do this is usually mkfs (on SUN this is packaged as newfs), but the options vary, so check the man pages. Also, the default file system parameters are somewhat wasteful for large (> 1 GB) partitions: they create too many inodes, for example. If you are in any doubt about what you are doing with the newfs command, contact Systems Group for advice.
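As a sketch only (the device name is illustrative and the option letters vary between vendors; check your man page before running anything):

```shell
# Illustrative only -- requires root and destroys any data in the partition.
# On SunOS, newfs is the usual front end to mkfs and takes the raw device.
# For a large data partition, reduce the inode density, e.g. one inode
# per 8 KB of data space instead of the default:
newfs -i 8192 /dev/rsd0d
```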

Note: Once the newfs command is used, any existing files within the bounds of the partition will be destroyed! There are sometimes limits to the sizes of partitions which can be created under older operating systems; SunOS has a maximum partition size of 2 GB, whereas Solaris has effectively no such restriction. However, the size of the backup media should be considered when making huge partitions if a straight UNIX dump is to be used. Legato can dump large partitions as it streams across tapes; whether huge partitions are sensible even then is another question.

Details about making partitions and kinds of file system are included in all our operating-system installation scripts.


After running newfs, it is good practice to run fsck (the file system check program) on the new file system before mounting it. fsck is also run at boot time (it must not be run on a mounted file system) and is used to find and correct minor inconsistencies in a file system. It will, for example, remove files which are not in any directory, recover blocks which are not in any file, and the like. Such inconsistencies are normal if a machine is not shut down properly (e.g. after a kernel panic or a power failure).
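A minimal sketch, using an illustrative SunOS-style device name (the raw device of the new, still unmounted, file system):

```shell
# Check the new file system before its first mount -- never run fsck
# on a mounted file system. /dev/rsd0d is the raw device for /dev/sd0d.
fsck /dev/rsd0d
```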

Mounting the Partitions

Apart from the normal mount command there is a file (/etc/fstab, /etc/vfstab or /etc/checklist; see Figure 1) which defines which partitions should be mounted when a system boots. This file should be edited to include any new partitions. The format of the file, and the naming convention for the partitions, vary between architectures; examples of the most common architectures are given in Figure 1. Be careful when editing this file: if it gets corrupted, or worse still renamed, the workstation will not boot at all.
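Because a corrupted table stops the machine booting, it is worth sanity-checking a new table in a scratch file before installing it. A small sketch (device names are illustrative, in the sun4 style of Figure 1):

```shell
# Build the candidate table in /tmp first, never directly in /etc/fstab.
cat > /tmp/fstab.new <<'EOF'
/dev/sd0a  /        4.2  rw  1 1
/dev/sd0d  /local   4.2  rw  1 2
EOF
# Field 2 is the mount point; listing the mount points catches
# mangled or truncated lines early.
awk '{print $2}' /tmp/fstab.new
```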

Advanced Filesystems

These days, all the modern UNIX vendors offer some kind of advanced filesystem as an option. Examples are Online DiskSuite (SUN), XFS (SG) and PolyCenter (Digital). These are all rather similar and offer features such as:

 

 - logical volumes (filesystems spread over several physical volumes)

 - journalled filesystems, saving fsck at boot time

 - mirrored disks

 - software RAID support

 

We have used such advanced file systems successfully in NERC and now recommend them for large or critical file servers. Details of these are beyond the scope of this manual, however.


Figure 1. Table of partitions to be mounted at boot time

 
 ARCHITECTURE: sun4

File Used: /etc/fstab

Syntax: <partition name> <mount point> <filesystem type> <mount options> <dump marker> <fsck pass>

  Example: /dev/sd0a          /                4.2   rw   1 1
           /dev/sd0d          /local           4.2   rw   1 2
           swap               /tmp             tmp   rw   0 0
           wlcomms:/var/mail  /var/spool/mail  nfs   rw   0 0

Useful Commands:

# mount -a, umount -a  Mount, unmount all filesystems

# mount  Displays what filesystems are currently mounted

NOTE: swap is /dev/sd0b plus optionally other raw partitions.

ARCHITECTURE: solaris

File Used: /etc/vfstab  

Syntax: <partition name> <raw partition name> <mount point> <filesystem type> <fsck pass> <mount at boot time> <mount options>  
 

  Example: /dev/dsk/c0t3d0s0        /dev/rdsk/c0t3d0s0  /          ufs   1  no   -
           /dev/dsk/c0t2d0s7        /dev/rdsk/c0t2d0s7  /local     ufs   1  yes  -
           /dev/dsk/c0t2d0s1        -                   -          swap  -  no   -
           wlcomms:/var/spool/mail  -                   /usr/mail  nfs   -  yes  -

Useful Commands:

# mountall, umountall  Mount, unmount all filesystems

# mount  Displays what filesystems are currently mounted

NOTE: swap is s1 on the system disk by default.

ARCHITECTURE: ip12,irix

File Used: /etc/fstab  

Syntax: <partition name> <mount point> <filesystem type> <mount options and raw partition name> <0> <0>  
 

  Example: /dev/root                /          xfs  rw,raw=/dev/rroot          0 0
           /dev/dsk/dks0d1s5        /local     efs  rw,raw=/dev/rdsk/dks0d1s5  0 0
           wlcomms:/var/spool/mail  /usr/mail  nfs  rw,bg,intr                 0 0
  

Useful Commands: 

# mount -a, umount -a  Mount, unmount all filesystems.

# mount  Displays what filesystems are currently mounted

Note: filesystem can be either efs or xfs under IRIX 6.2 and later editions of 5.3  
Note: /dev/dsk/dks0d1s1 is swap by default. 
 

ARCHITECTURE: alpha

File Used: /etc/fstab  

Syntax: <partition name> <mount point> <filesystem type> <mount options and raw partition name> <0> <0>  

 
  Example: /dev/rz3a                 /                ufs     rw,raw=/dev/rroot  0 0
           /proc                     /proc            procfs  rw                 0 0
           /dev/rz3b                 swap             ufs     sw                 0 0
           /dev/rz3g                 /local           ufs     rw                 0 0
           mailhost:/var/spool/mail  /var/spool/mail  nfs     rw,bg,intr         0 0

Useful Commands: 

# /sbin/mount -a, /sbin/umount -a Mount, unmount all filesystems.  

# /sbin/mount Displays what filesystems are currently mounted  
Note: swap is /dev/rz3b by default, but is always specified.

ARCHITECTURE: hpux 9.0x

File Used: /etc/checklist  

Syntax: <partition name> <mount point> <filesystem type> <mount options > <0> <fsck pass>  
 

  Example: /dev/dsk/6s0       /         hfs  defaults  0 3
           /dev/dsk/c207d0s1  /local10  hfs  defaults  0 3
  

Useful Commands: 

# mount -a, umount -a Mount, unmount all filesystems.  

# mount Displays what filesystems are currently mounted  
 

ARCHITECTURE: hpux 10.20

File Used: /etc/fstab  

Syntax: <partition name> <mount point> <filesystem type> <mount options> <0> <fsck pass>  
 

  Example: /dev/dsk/c0t6d0          /          hfs  defaults    0 1
           /dev/dsk/c0t1d0          /local     hfs  defaults    0 2
           wlcomms:/var/spool/mail  /var/mail  nfs  rw,bg,intr  0 0
  

Useful Commands: 

# mount -a, umount -a Mount, unmount all filesystems.  

# mount Displays what filesystems are currently mounted 

 
Allowing Other Machines Access To Local Disks

By default, local disks/directories cannot be mounted by other machines, but most of the time we will want all the machines on the LAN to access each other's directory structures via NFS mounts, possibly using the automounter. Allowing this to happen is called exporting (or sharing) the directory tree. This is done in a file called (generally) /etc/exports. Each entry should define a directory and a list of hosts allowed to mount that directory. Again, the format of this file differs between architectures, so check the man page! Examples of the most common ones are given in Figure 2.

NOTE: Do not use unrestricted entries in /etc/exports because if your site is on JIPS then anyone on the Internet can mount those directories! At most, allow access by hosts in the NIS netgroup "allhosts". See discussion on netgroups below. This netgroup should contain a list of all hosts (not just those running UNIX) which need to do NFS mounts.

Once the /etc/exports file has been edited you must run the "exportfs -a" command (or its equivalent) to update the kernel tables. If this is the first time you have exported anything, the workstation will need to be rebooted to start the mount daemon.
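A hedged sketch of the whole sequence on a SunOS-style host (the directory and netgroup follow the example in Figure 2; adjust both for your site):

```shell
# Requires root. Add a restricted entry -- never export without an
# access list (see the NOTE above about JIPS).
echo '/local3/users -access=allhosts' >> /etc/exports
exportfs -a      # push the updated table into the kernel
showmount -e     # confirm what is now being exported
```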

Figure 2. Exporting filesystems

 
ARCHITECTURE: sun4, ip12, irix, hpux ,alpha

File Used: /etc/exports  

Syntax: <file system to be exported> -access=<host|netgroup>[:<host|netgroup>...]  

  

Example: /local3/users -access=allhosts,root=wlcomms   

  

Activating Changes:  

# exportfs -a except on alpha, where change is immediate. 

Useful commands:  

# exportfs or # showmount -e Both show what is actively being exported in different formats.  

# showmount -a Displays what local filesystems are currently being mounted by remote hosts 

ARCHITECTURE: solaris 

File Used: /etc/dfs/dfstab  

Syntax: share -F nfs -o rw=<host|netgroup>[:<host|netgroup>...] <file system to be exported>  

  

Example: share -F nfs -o rw=allhosts,root=wlcomms /local3/users   

  

Activating Changes:  

# shareall  

Useful Commands: 

# share or # showmount -e Both show what is actively being exported, in different formats.  

# showmount -a As sun4 etc. above.

 
Netgroups

Netgroups are network-wide groups of hosts and users defined in the file /nerc/<domain>/yp/netgroup and distributed through NIS. Netgroups can be used for permission checking during NFS mounts, logins, remote logins and remote shells. Their most common use is in the /etc/exports file or its equivalent.

A netgroup is defined in the file /nerc/<domainname>/yp/netgroup in the following manner...

 

<groupname> <member1> [<member2> ...]

 

...where a <member> is either another netgroup or a combination of a hostname, username and YP domain name in the following format...

([hostname],[username],[domainname])
 
 

Suggested Netgroup Use

You should create a set of netgroups which split your network hosts in a rational manner (remember that PCs can live on your network and NFS-mount file systems) and then amalgamate those groups into a local supergroup called allhosts.

Example:

If you have a network with 3 PCs, 2 Suns and 16 SGs you may consider these lines in your /nerc/<domainname>/yp/netgroup file...

allpc (pc1,,) (pc2,,) (pc3,,)
allsun (sun1,,) (sun2,,)
allsg (sg1,,) \
(sg2,,) (sg3,,)\
...etc...
(sg16,,)
allhosts allpc allsun allsg

 

This netgroup file defines the netgroups allpc, allsun, allsg and allhosts, which will be valid in any YP domain (this matters little on our single-domain sites). The netgroups allow any user on those machines to be valid if the netgroup is used in, say, a .rhosts file. Notice also the continuation characters (\) used in defining the allsg group.

The name allhosts is not significant; however it has been used at all sites in the South. In the North site-specific names such as itehost have been used for the same purpose.

Once you have edited your netgroup file, don't forget to make it, just like any other YP map. The exportfs command remembers the netgroup name and rechecks the netgroup to validate each new mount request, so new hosts will gain access to NFS file systems without your having to add their names to the export lists of all your serving machines and run exportfs -a.
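As a sketch (the domain name "wl" is illustrative, and the location of the YP Makefile varies by site, so check before running):

```shell
# Rebuild and push the netgroup map like any other YP map.
# /nerc/<domain>/yp is where this manual keeps the map source.
cd /nerc/wl/yp
make netgroup
```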

NOTE: Solaris 2.6 introduces a new feature of exports: instead of netgroups, the IP numbers of the permitted networks can be used. This is an advantage, since a corrupted netgroup map can prevent anything being exported at all!


Suggested Directory Structure

It is suggested that the directory structure on additional disks should follow the structure in Figure 3, and that as more disks are added to a particular host they should be mounted under /local1, /local2, etc. Do NOT mount a partition as /local/users or /local/packages, as this makes it awkward to put packages on the former or users or data on the latter; stick to the simple /local<n> mount points. A convention like this has been found to be an enormous time-saver when disks are being reorganised, e.g. when a new file server is installed.

Figure 3. Recommended physical directory structure

 
 
/local[n]-|-packages-|-<arch>--|-<package>
                     |         |-<package1>
                     |-<arch1>-|-<package>
                               |-<package1>
/local[n]-|-users-|-<group>--|-<userid>
                  |          |-<userid1>
                  |-<group1>-|-<userid>
/local[n]-|-users-|-<userid>
/local[n]-|-data-|-<datadir>
 

The structure, or more likely one or two branches of it on any one disk, is used in conjunction with the automounter to give the 'logical' (network-visible) directory structure shown in Figure 4. The 'logical' structure is used to reference all packages and user areas regardless of the physical location of the directories.

 

 

 

Figure 4. The 'logical' directory structure for disks.

 
 

     /users              /nerc/packages             /data
   ____|____                   |                 _____|_____
   |       |                   |                      |
<group>   <id>             <package>             <data dir.>
   |
  <id>

/local[n]/users

This is used for users' home directories and is accessed using the automount trigger /users/<group> or /users/<userid>, depending on how the automounter map auto.users has been set up. One option, favoured at Keyworth, is to trigger on the userid; this enables you to allocate users to particular partitions on particular machines, giving complete flexibility, but you have to add a line to the automounter map for every user, and a separate mount will be triggered for each user. On a large multiuser system this has implications for the NFS load and is not recommended for file servers.

The alternative, used in the Southern Area, is to group users under an additional subdirectory, e.g. /local/users/ncs/dr. The auto.users table then has a single entry for all those users, thus reducing the size of the automounter map. Note that the "group" here does not have to be an organisation in NERC or even a UNIX group (/etc/group). It is simply a trigger word for the automounter and a collection of userids whose home directories are in the same partition.

/local[n]/data

This directory structure is used to give users access to a personal directory on an alternative partition, using the directory path /data/[<group>/]<userid>, or to a common data area, using the directory path /data/<data dir.>. The partition can be on a disk anywhere in the domain.

The creation and administration of directories under /local[n]/data is the same as that for user login directories except that entries will be made in the auto.data automounter table.

For example, the ncs group may purchase a new disk for their own personal use but keep their login directories in their current partition. The whole disk can be used as one partition. A file system would be created with newfs in the partition which covers the entire disk (the c or s2 partition on SUN, s7 on Silicon Graphics), mounted under /local[n], and the /local[n]/data/ncs/<ncs user ids> directories created. One entry is then put into the auto.data map in the form...

<group> <host>:/local[n]/data/<group>

e.g.

ncs kwsa:/local3/data/ncs

Note: Don't forget to 'export' the partition and update /etc/exports and /etc/fstab so that the new partition is mounted at boot time and accessible over the network.
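Putting the whole ncs example together as a hedged sketch (device, host and group names are illustrative; the Solaris device naming of Figure 1 is assumed):

```shell
# Requires root; newfs destroys any existing data in the partition.
newfs /dev/rdsk/c0t2d0s2          # whole-disk (s2) partition
mkdir -p /local3
mount /dev/dsk/c0t2d0s2 /local3
mkdir -p /local3/data/ncs         # one directory per ncs userid goes here
# ...then add the partition to /etc/vfstab, an access-restricted entry
# to the exports file, and the line "ncs kwsa:/local3/data/ncs" to the
# auto.data map.
```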


/local[n]/packages/<architecture>

The additional complication here is the <architecture> directory level. This is to distinguish between, for example, the Silicon Graphics SAS and the SunOS SAS package. These would be loaded under /local[n]/packages/irix/sas and /local[n]/packages/sun4/sas but users on the relevant workstations will only see one, as /nerc/packages/sas. Beneath each architecture directory a subdirectory will be created for each package in the partition, and an entry will be put in the appropriate auto.packages<arch> automounter map file.

Small Computer System Interface (SCSI) Terminology

SCSI was defined (1986) as an ANSI standard in an attempt to define a "standard" computer peripheral interface. The standard was recognised as inadequate for disk drives even before the original ANSI approval, and all disk vendors agreed to use the Common Command Set (CCS).

There are two modes: asynchronous (handshaking), with a speed of less than 1 MHz, and synchronous (block transfers of data), with a speed of up to 5 MHz. The data path is 8 bits wide, so the maximum data rate of synchronous SCSI is 5 Mbytes/s. Each SCSI device is identified by a "target", which is set by jumpers or a small thumb-wheel device. Targets range from 0 to 7, but 7 is always the controller in the CPU (except on SG, where it is 0). The maximum cable length is nominally 6 m, but really the shorter the better. This is especially so if there are different kinds of cable or lots of devices (i.e. anything which will cause electrical reflections). You must also have a terminator, though some workstations can manage without one if there are no external devices.

SCSI-2 is a 1991 extension agreed by ANSI, and it incorporates CCS. There is also a new "fast" synchronous mode with a speed of up to 10 MHz, and new "wide" modes, 16 or 32 bits wide. Another important term is "differential", as opposed to the original "single-ended" SCSI. Differential SCSI was part of the 1986 standard and was intended for hostile environments and longer cables; it is now recommended for large, "fast" SCSI disk systems.

At present, the most common flavours are :-

10 MHz / 8 bits wide / single-ended (most workstations)

10 MHz / 16 bits wide / differential (recommended for file servers and other systems with a significant number of disks)

Be aware, however, that some current SUN disk products use 10 MHz / 16 bits wide / single-ended SCSI.

Wide SCSI (16 bits wide) targets range from 0 to 15 (the controller is still generally 7!). Also, wide SCSI usually (not always!) uses 68-pin connectors.

We expect to see even more, faster SCSI flavours in the next few years: ULTRASCSI (20 MHz) and ultimately serial SCSI (optical fibres).

It is also strongly recommended that you use an "active" (sometimes called "regulated") terminator for fast SCSI. The idea is to improve the immunity of the terminating voltage to supply fluctuations by using a few diodes instead of a resistive voltage divider. However, be warned that the very latest SUN products are self-terminating, and adding a terminator can cause them to stop working!

Connectors are the main cause of problems with SCSI. Unfortunately there are several types in common use. DB50 is used on Arrow and older SUN equipment. SCSI-2 (or micro-D) connectors are used on many SUNs and SG Indigo 2s. (This does not mean they support the SCSI-2 fast mode!) 68-pin SCSI (known also as SCSI-3) is the current SUN standard. Centronics connectors are used on SG Indigos and on some third-party disks. It is possible to purchase cables with the necessary combinations of connectors from many sources, although the 68-pin cables are restricted to SUN at the moment.

Until very recently nearly all UNIX systems had SCSI interfaces as standard; now some are being produced without, and a separate SCSI adapter is required if external disks are to be attached. Clearly this whole area is becoming a minefield for potential disk purchasers, and you are recommended to check carefully with vendors when specifying new disk systems.


SunOS SCSI target assignment

There are predefined values of the SCSI targets in the SunOS kernel which are less than obvious; they are as follows:

On the first or only SCSI bus:

  First (internal) disk   3
  Second disk             1
  Third disk              2
  Fourth disk             0
  First tape              4
  Second tape             5
  CD-ROM reader           6

It is therefore inadvisable to make any of the external disks SCSI target 3 unless, of course, there is no internal disk. If a second internal disk is installed this will be SCSI target 1. The same set of targets applies for all additional SCSI controllers which may be installed. If you can, keep to the four-disk, two-tape sequence; if not, let Systems Group know as a small change to the kernel configuration file will be necessary. It is usually possible to change the SCSI target of a disk, tape or CD, as mentioned above, but sometimes this requires the assistance of the vendor, so try to stay within the guidelines above. Keeping to standard numbers also assists visiting Applications or Systems people who bring their own CDs or tape drives, and will be unable to work if the numbers they expect to use have been reassigned.
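To see which targets are actually in use on a SUN, you can drop to the OpenBoot monitor; a sketch (note this stops UNIX, so warn your users first):

```shell
# Requires root; halts UNIX down to the OpenBoot monitor.
halt
# ...then at the "ok" prompt (monitor commands, not shell commands):
#   ok probe-scsi       list devices and targets on the built-in SCSI bus
#   ok boot             resume normal operation
```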

Solaris and IRIX kernels can accept any combination of disks and tapes, with the usual SCSI limitations per bus. However if a new device is added to a Solaris system it must be rebooted after issuing the command

touch /reconfigure

to make it construct the device files. IRIX will sense the new peripherals and build a new kernel and device files automatically. HPUX 10.x can also take any devices but requires a specific kernel rebuild if new drivers are required, e.g. for the first tape on a system. DEC Unix updates its kernel when new peripherals are found but may need the device files to be constructed by hand. For the latest information on all these operating systems see the iTSS 'install..' work instructions.
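The Solaris reconfiguration sequence mentioned above can be sketched as:

```shell
# As root, after attaching the new SCSI device:
touch /reconfigure          # flag the next boot to rebuild the device files
shutdown -y -g0 -i6         # reboot immediately (init state 6)
# Equivalently, from the OpenBoot "ok" prompt: boot -r
```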

Tapes and CD ROM Drives

While we are on SCSI topics, here is a collected list of the device files used to access tapes and CD ROM drives on the major architectures.
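A quick hedged example of exercising these device files with mt(1), using the sun4 names listed below:

```shell
# Check that the first tape drive (SCSI target 4 on sun4) responds:
mt -f /dev/nrst0 status
# Explicit rewind -- the nrst* devices do not rewind on close:
mt -f /dev/nrst0 rewind
```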

 

 
  ARCHITECTURE: sun4
Tape Special Device files: /dev/rst*, /dev/nrst* 
CD Rom Special Device files: /dev/sr0 
Example: /dev/rst0, /dev/nrst0    SCSI target 4, rewind and no rewind
         /dev/rst1, /dev/nrst1    SCSI target 5, rewind and no rewind
         /dev/sr0                 SCSI target 6
 
These are all on the internal SCSI controller - which is where we recommend all tape devices stay.
Further tapes or CD rom devices may be defined by editing the kernel configuration file to exchange a third disk for a tape: 
         /dev/rst2, /dev/nrst2    SCSI target 5, rewind and no rewind
However, SunOS is now a very out-of-date operating system, and serious consideration should be given to replacing any server systems, or at least upgrading them to Solaris.
 
ARCHITECTURE: solaris
 
Tape Special Device files: /dev/rmt/* 
CD Rom Special Device files: /dev/sr0 
Example: /dev/rmt/0, /dev/rmt/0n    First tape attached, rewind and no rewind
         /dev/rmt/1, /dev/rmt/1n    Second tape attached, rewind and no rewind
         /dev/sr0                   First CD Rom reader

Note: CD Roms are automatically mounted by the Volume Manager if they contain UNIX file systems; this device is only needed for High Sierra file systems.

ARCHITECTURE: ip12 and irix

Tape Special Device files: /dev/rmt/tps*, also /dev/tape 

CD Rom Special Device files: /dev/dsk/dks0d* 

  
 

Example: /dev/tape (or /dev/rmt/tps0d4)        Tape with SCSI target 4, rewind
         /dev/nrtape (or /dev/rmt/tps0d4nr)    Tape with SCSI target 4, no rewind
         /dev/dsk/dks0d5s7                     CD Rom reader with SCSI target 5
Note: the /dev/tape name is a link to one of the /dev/rmt/tps[n] devices added after installation, and may not correspond to the tape you want to use. Use the longer form for safety. 

Note also: CD-Rom readers look like disks; there is no special nomenclature for them.

ARCHITECTURE: alpha

Tape Special Device files: /dev/rmt*, /dev/nrmt*

CD Rom Special Device files: /dev/rz*

Example: /dev/rmt0h     First tape, high density (DDS2?), rewind
         /dev/nrmt1h    Second tape, high density, no rewind
         /dev/rz4c      CD Rom reader with SCSI target 4

Note: CD Roms look like disks and there is no special nomenclature for them.

 

ARCHITECTURE: hpux 9.0x

Tape Special Device files: /dev/rmt/* 

CD Rom Special Device files: /dev/dsk/* 
 

Example: /dev/rmt/0c     First tape, high density (DDS2?), rewind
         /dev/rmt/1cn    Second tape, high density, no rewind
         /dev/dsk/2s0    CD Rom reader with SCSI target 2
 
ARCHITECTURE: hpux 10.20

Tape Special Device files: /dev/rmt/* 

CD Rom Special Device files: /dev/dsk/* 

  
 

Example: /dev/rmt/0h        First tape, high density (DDS2), rewind
         /dev/rmt/1mn       Second tape, medium density, no rewind
         /dev/dsk/c0t5d0    CD Rom reader with SCSI target 5

Note: CD Roms look like disks and there is no special nomenclature for them.

 

 



This page last updated February 16th 1998 by rfcr@itss.nerc.ac.uk