Blade server

IBM HS20 blade server. Two bays for SCSI hard drives are visible in the upper-left area of the image.
Blade servers are self-contained computer servers, designed for high density. Whereas a standard rack-mount server can function with (at least) a power cord and network cable, blade servers have many components removed for space, power and other considerations while still having all the functional components to be considered a computer. A blade enclosure provides services such as power, cooling, networking, various interconnects and management - though different blade providers have differing principles around what should and should not be included in the blade itself (and sometimes in the enclosure altogether). Together these form the blade system.
In a standard server-rack configuration, 1U (one rack unit, 19" wide and 1.75" tall) is the minimum possible size of any equipment. The principal benefit of, and the reason behind the push towards, blade computing is that components are no longer restricted to these minimum size requirements. Since the most common computer rack form-factor is 42U high, the number of discrete computer devices mounted directly in a rack is limited to 42. Blades do not have this limitation; densities of 100 computers per rack and more are achievable with the current generation of blade systems.
Server blade
In the purest definition of computing (a Turing machine, simplified here), a computer requires only:
memory to read input commands and data
a processor to perform commands manipulating that data, and
memory to store the results.
Today (contrast with the first general-purpose computer) these are implemented as electrical components requiring (DC) power, and in operation produce heat. Other components such as hard drives, power supplies, storage and network connections, basic IO (such as KVM and serial) etc. only support the basic computing function, yet add bulk, heat and complexity, not to mention moving parts that are more prone to failure than solid-state components.
In practice, these components are all required if the computer is to perform real-world work. In the blade paradigm, most of these functions are removed from the blade computer, being either provided by the blade enclosure (e.g. DC power supply), virtualised (e.g. iSCSI storage, remote console over IP) or discarded entirely (e.g. serial ports). The blade itself becomes vastly simpler, hence smaller and (in theory) cheaper to manufacture.
Blade enclosure
The enclosure (or chassis) performs many of the non-core computing services found in most computers. Non-blade computers require components that are bulky, hot and space-inefficient, and duplicated across many computers that may or may not be performing at capacity. By locating these services in one place and sharing them between the blade computers, the overall utilisation is more efficient. The specifics of which services are provided and how vary by vendor.
Power
Computers operate over a range of DC voltages, yet power is delivered from utilities as AC, and at higher voltages than required within the computer. Converting this current requires power supply units (or PSUs). To ensure that the failure of one power source does not affect the operation of the computer, even entry-level servers have redundant power supplies, again adding to the bulk and heat output of the design.
The blade enclosure's power supply provides a single power source for all blades within the enclosure. This single power source may be in the form of a power supply in the enclosure or a dedicated separate PSU supplying DC to multiple enclosures [http://h18004.www1.hp.com/products/quickspecs/12330_div/12330_div.html]. This setup not only reduces the number of PSUs required to provide a resilient power supply, but it also improves efficiency because it reduces the number of idle PSUs.
Cooling
Operating the electrical and mechanical components of a computer produces heat, which must be displaced to ensure proper function of all these components. Fans are the most common method used to remove this heat in computers, but these add bulk and more moving parts. The blade enclosure typically provides fans to remove hot air from within the blades.
A frequently underestimated conflict in the design of a high-performance computer is the trade-off between design for density and the ability of the fans to move hot air away from the system. Since much of the bulk of a traditional server is removed from a blade, it can be designed to allow for excellent airflow.
Networking
Computers are increasingly being produced with high-speed, integrated network interfaces, and most are expandable to allow for the addition of connections that are faster, more resilient and run over different media (copper and fiber). Adding such interfaces requires extra engineering effort in the design and manufacture of the blade, consumes space (both for the installed interfaces and for the empty expansion slots held in reserve), and hence adds complexity. High-speed network topologies require expensive, high-speed integrated circuits and media, while most computers do not utilise all the bandwidth available.
The blade enclosure provides one or more network buses to which the blade will connect, and either presents these ports individually in a single location (versus one in each computer chassis), or aggregates them into fewer ports, reducing the cost of connecting the individual devices. These may be presented in the chassis itself, or in networking blades.
Storage
While computers typically need hard-disks to store the operating system, application and data for the computer, these are not necessarily required locally. Many storage connection methods (e.g. FireWire, SATA, SCSI, DAS, Fibre Channel and iSCSI) are readily moved outside the server, though not all are used in enterprise-level installations. Implementing these connection interfaces within the computer presents similar challenges to the networking interfaces (indeed iSCSI runs over the network interface), and similarly these can be removed from the blade and presented individually or aggregated either on the chassis or through other blades.
In particular, the ability to boot the blade from a Storage Area Network (SAN) allows for an entirely disk-free blade, resulting in exceptional reliability and space utilisation.
Other blades
Since the blade enclosure provides a standard method for delivering basic services to computer devices, these can be leveraged by other types of devices. Blades providing switching, routing, SAN and fibre-channel access can be inserted into the enclosure to provide these services to all members of the enclosure.
Uses

A pile of IBM HS20 blade servers. Each "blade" has two 2.8 GHz Xeon CPUs, two 36 GB Ultra-320 SCSI hard drives and 2 GB RAM.
Blade servers are ideal for specific purposes such as web hosting and cluster computing. Individual blades are typically hot-swappable.
Although blade server technology in theory allows for open, cross-vendor solutions, at this stage of development of the technology, users find there are fewer problems when using blades, racks and blade management tools from the same vendor.
Eventual standardization of the technology might result in more choices for consumers; increasing numbers of third-party software vendors are now entering this growing field.
Blade servers are not, however, the answer to every computing problem. They may best be viewed as a form of productized server farm that borrows from mainframe packaging, cooling, and power supply technology. For large problems, server farms of blade servers are still necessary, and because of blade servers' high power density, can suffer even more acutely from the HVAC problems that affect large conventional server farms.


An IBM bladecenter, with an HS20 server partially removed. The top media tray can be switched between all servers.
History
Complete microcomputers were placed on cards and packaged in standard 19-inch racks in the 1970s soon after the introduction of 8-bit microprocessors. This architecture was used in the industrial process control industry as an alternative to minicomputer control systems. Programs were stored in EPROM on early models and were limited to a single function with a small realtime executive.
The name blade server appeared when cards included small hard disks or flash memory program storage. This allowed complete server operating systems to be packaged on the blade.
The architecture of blade servers is expected to move closer to mainframe architectures. Although current systems act as a cluster of independent computers, future systems may add resource virtualization and higher levels of integration with the operating system to increase reliability.
Houston-based RLX Technologies, staffed largely by former Compaq Computer Corp. employees, is generally credited as the first company to produce a blade server, though the claim is disputed. RLX was acquired by Hewlett-Packard (HP) in 2005.
At present, IBM remains the global leader in blade servers in terms of market share and revenue, with its BladeCenter system and the http://www.blade.org industry collaboration initiative. IBM also supports an open architecture called the "Blade Open Specification".
Other major players in the blade server market include Hewlett-Packard (HP), Dell, Rackable (Hybrid Blade) and Verari Systems.

SUID - The Sticky Bit

Sometimes, unprivileged users must be able to accomplish tasks that require privileges. An example is the passwd program, which allows you to change your password. Changing a user's password requires modifying the password field in the /etc/passwd file. However, you should not give a user access to change this file directly - the user could change everybody else's password as well! Likewise, the mail program requires that you be able to insert a message into the mailbox of another user, yet you should not give one user unrestricted access to another's mailbox.

To get around these problems, UNIX allows programs to be endowed with privilege. Processes executing these programs can assume another UID or GID when they're running. A program that changes its UID is called a SUID program (set-UID); a program that changes its GID is called a SGID program (set-GID). A program can be both SUID and SGID at the same time.

When a SUID program is run, its effective UID[22] becomes that of the owner of the file, rather than of the user who is running it. This concept is so clever that AT&T patented it.[23]

5.5.1 SUID, SGID, and Sticky Bits

If a program is SUID, the owner's execute x in the output of the ls -l command is changed to an s; if it is SGID, the group's execute x becomes an s. If the program is sticky, the last x changes to a t, as shown in Table 5.13 and Figure 5.3.

Figure 5.3: Additional file permissions

Table 5.13: SUID, SGID, and Sticky Bits

---s------ (SUID)
    A process that execs a SUID program has its effective UID set to be the UID of the program's owner.

------s--- (SGID)
    A process that execs a SGID program has its effective GID changed to the program's GID. Files created by the process can have their primary group set to this GID as well, depending on the permissions of the directory in which the files are created. Under Berkeley-derived UNIX, a process that execs an SGID program also has the program's GID temporarily added to the process's list of GIDs. Solaris and other System V-derived versions of UNIX use the SGID bit on data files to enable mandatory file locking.

---------t (sticky)
    This is obsolete with files, but is used for directories. See "The Origin of `Sticky' " sidebar later in this chapter.

In each of the cases above, the designator letter appears capitalized if the bit is set but the corresponding execute bit is not set. Thus, a file that has its sticky and SGID bits set, and is otherwise mode 444, would appear in an ls listing as

% ls -l /tmp/example
-r--r-Sr-T 1 root user 12324 Mar 26 1995 /tmp/example
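
A listing like this can be produced directly with chmod's octal modes: SUID is 4000, SGID is 2000, and the sticky bit is 1000, added in front of the ordinary permission digits. A minimal sketch (the filename is hypothetical, and the owner, size, and date will of course vary):

% touch /tmp/example
% chmod 3444 /tmp/example
% ls -l /tmp/example
-r--r-Sr-T 1 user user 0 Mar 26 12:05 /tmp/example

Here 3444 = 2000 (SGID) + 1000 (sticky) + 0444, so both designator letters appear capitalized because neither corresponding execute bit is set.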

An example of a SUID program is the su command:

% ls -l /bin/su 
-rwsr-xr-x 1 root user 16384 Sep 3 1989 /bin/su
%

5.5.2 Problems with SUID

Any program can be SUID, SGID, or both SUID and SGID. Because this feature is so general, SUID/SGID can open up some interesting security problems.

For example, any user can become the superuser simply by running a SUID copy of csh that is owned by root. Fortunately, you must be root already to create a SUID version of csh that is owned by root. Thus, an important objective in running a secure UNIX computer is to ensure that somebody who has superuser privileges will not leave a SUID csh on the system, directly or indirectly.

If you leave your terminal unattended, an unscrupulous passerby can destroy the security of your account simply by typing the commands:

% cp /bin/sh /tmp/break-acct 
% chmod 4755 /tmp/break-acct
%

These commands create a SUID version of the sh program. Whenever the attacker runs this program, the attacker becomes you - with full access to all of your files and privileges. The attacker might even copy this SUID program into a hidden directory so that it would only be found if the superuser scanned the entire disk for SUID programs. Not all system administrators do such scanning on any regular basis.

Note that the program copied need not be a shell. Someone with malicious intent can cause you misery by creating a SUID version of other programs. For instance, consider a SUID version of the editor program. With it, not only can he read or change any of your files, but he can also spawn a shell running under your UID.

Most SUID system programs are SUID root; that is, they become the superuser when they're executing. In theory, this aspect is not a security hole, because a compiled program can perform only the function or functions that were compiled into it. (That is, you can change your password with the passwd program, but you cannot alter the program to change somebody else's password.) But many security holes have been discovered by people who figured out ways of making a SUID program do something that it was not designed to do. In many circumstances, programs that are SUID root could easily have been designed to be SUID something else (such as daemon, or some UID created especially for the purpose). Too often, SUID root is used when something with less privilege would be sufficient.

5.5.3 SUID Shell Scripts

Under most versions of UNIX, you can create shell scripts[24] that are SUID or SGID. That is, you can create a shell script and, by setting the shell script's owner to be root and setting its SUID bit, you can force the shell script to execute with superuser privileges.

[24] Actually, any interpreted scripts.

You should never write SUID shell scripts.

Because of a fundamental flaw with the UNIX implementation of shell scripts and SUID, you cannot execute SUID shell scripts in a completely secure manner on systems that do not support the /dev/fd device. This flaw arises because executing a shell script under UNIX involves a two-step process: when the kernel determines that a shell script is about to be run, it first starts up a SUID copy of the shell interpreter, then the shell interpreter begins executing the shell script. Because these two operations are performed in two discrete steps, you can interrupt the kernel after the first step and switch the file that the shell interpreter is about to execute. In this fashion, an attacker could get the computer to execute any shell script of his or her choosing, which essentially gives the attacker superuser privileges. Although this flaw is somewhat mitigated by the /dev/fd device, even on systems that do support a /dev/fd device, SUID shell scripts are very dangerous and should be avoided.

Some modern UNIX systems ignore the SUID and SGID bits on shell scripts for this reason. Unfortunately, many do not. Instead of writing SUID shell scripts, we suggest that you use the Perl programming language for these kinds of tasks. A version of Perl called taintperl [25] will force you to write SUID scripts that check their PATH environment variable and that do not use values supplied by users for parameters such as filenames unless they have been explicitly "untainted." Perl has many other advantages for system administration work as well.
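
If you want to check whether any SUID or SGID interpreter scripts already exist on your system, a short Bourne shell sketch along the following lines will flag them. It assumes a file command that describes #! files with the word "script", and it should be run as the superuser:

#!/bin/sh
# Sketch: report SUID/SGID files that appear to be interpreter scripts.
find / \( -perm -004000 -o -perm -002000 \) -type f -print |
while read f
do
    t=`file "$f"`
    case "$t" in
    *script*) echo "SUID/SGID script: $f" ;;
    esac
done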

5.5.3.1 write: Example of a possible SUID/SGID security hole

The authors of SUID and SGID programs try to ensure that their software won't create security holes. Sometimes, however, a SUID or SGID program can create a security hole if the program isn't installed in the way the program author planned.

For example, the write program, which prints a message on another user's terminal, is SGID tty. For security reasons, UNIX doesn't normally let users read or write information to another's terminal; if it did, you could write a program to read another user's keystrokes, capturing any password that she might type. To let the write program function, every user's terminal is also set to be writable by the tty group. Because write is SGID tty, the write program lets one user write onto another user's terminal. It first prints a message that tells the recipient the name of the user who is writing onto her terminal.
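
On a typical system the relevant permissions look something like this (the paths, sizes, and dates are illustrative): write carries the s in its group-execute position and belongs to group tty, while each terminal device is group-writable by tty:

% ls -l /usr/bin/write
-rwxr-sr-x 1 root tty 11520 Apr 12 1994 /usr/bin/write
% ls -l /dev/ttyp1
crw--w---- 1 alice tty 20, 1 Mar 3 10:15 /dev/ttyp1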

But write has a potential security hole - its shell escape. By beginning a line with an exclamation mark, the person using the write program can cause arbitrary programs to be run by the shell. (The shell escape is left over from the days before UNIX had job control. The shell escape made it possible to run another command while you were engaged in a conversation with a person on the computer using write.) Thus, write must give up its special privileges before it invokes a shell; otherwise, the shell (and any program the user might run) would inherit those privileges as well.

The part of the write program that specifically takes away the tty group permission before the program starts up the shell looks like this:

setgid(getgid()); /* Give up effective group privs */
execl(getenv("SHELL"), "sh", "-c", arg, 0); /* run the command via the user's shell */

Notice that write changes only its GID, not its effective UID. If write is installed SUID root instead of SGID tty, the program will appear to run properly but any program that the user runs with the shell escape will actually be run as the superuser! An attacker who has broken the security on your system once might change the file permissions of the write program, leaving a hole that he or she could exploit in the future. The program, of course, will still function as before.

5.5.3.2 Another SUID example: IFS and the /usr/lib/preserve hole

Sometimes, an interaction between a SUID program and a system program or library creates a security hole that's unknown to the author of the program. For this reason, it can be extremely difficult to know if a SUID program contains a security hole or not.

One of the most famous examples of a security hole of this type existed for years in the program called /usr/lib/preserve (which is now given names similar to /usr/lib/ex3.5preserve). This program, which is used by the vi and ex editors, automatically makes a backup of the file being edited if the user is unexpectedly disconnected from the system before writing out changes to the file. The preserve program writes the changes to a temporary file in a special directory, then uses the /bin/mail program to send the user a notification that the file has been saved.

Because people might be editing a file that was private or confidential, the directory used by the older version of the preserve program was not accessible by most users on the system. Therefore, to let the preserve program write into this directory, and let the recover program read from it, these programs were made SUID root.

Three details of the /usr/lib/preserve implementation worked together to allow knowledgeable system crackers to use the program to gain root privileges:

  1. preserve was installed SUID root.

  2. preserve ran /bin/mail as the root user to alert users that their files had been preserved.

  3. preserve executed the mail program with the system() function call.

The problem was that the system function uses sh to parse the string that it executes. There is a little-known shell variable called IFS, the internal field separator, which sh uses to figure out where the breaks are between words on each line that it parses. Normally, IFS is set to the white space characters: space, tab, and newline. But by setting IFS to the slash character (/) then running vi, and then issuing the preserve command, it was possible to get /usr/lib/preserve to execute a program in the current directory called bin. This program was executed as root. (/bin/mail got parsed as bin with the argument mail.)

If a user can convince the operating system to run a command as root, that user can become root. To see why this is so, imagine a simple shell script which might be called bin, and run through the hole described earlier:[26]

[26] There is actually a small bug in this shell script; can you find it?

#
# Shell script to make an SUID-root shell
#
cd /homes/mydir/bin
cp /bin/sh ./sh
# Now do the damage!
chown root sh
chmod 4755 sh

This shell script would get a copy of the Bourne shell program into the user's bin directory, and then make it SUID root. Indeed, this is the very way that the problem with /usr/lib/preserve was exploited by system crackers.

The preserve program had more privilege than it needed - it violated a basic security principle called least privilege. Least privilege states that a program should have only the privileges it needs to perform the particular function it's supposed to perform, and no others. In this case, instead of being SUID root, /usr/lib/preserve should have been SGID preserve, where preserve would have been a specially created group for this purpose. Although this restriction would not have completely eliminated the security hole, it would have made its presence considerably less dangerous. Breaking into the preserve group would have only let the attacker view files that had been preserved.

Although the preserve security hole was a part of UNIX since the addition of preserve to the vi editor, it wasn't widely known until 1986. For a variety of reasons, it wasn't fixed until a year after it was widely publicized.

NOTE: If you are using an older version of UNIX that can't be upgraded, remove the SUID permission from /usr/lib/preserve to patch this security hole.

Newer editions of the UNIX sh ignore IFS if the shell is running as root or if the effective user ID differs from the real user ID. Many other shells have been similarly enhanced, but not all. It is interesting, and very depressing, that vendors were still shipping programs with this same IFS vulnerability in 1995: the general problem has been known for over 10 years, and people are still making the same (dumb) mistakes.
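
The defensive lesson for anyone writing privileged shell code is to establish the critical parts of the environment explicitly at the top of the script instead of trusting whatever the caller supplied. A minimal sketch (the "safe" values shown are illustrative):

#!/bin/sh
# Sketch: a privileged script sets its own environment before doing anything.
PATH=/bin:/usr/bin      # a known-safe search path, not the caller's
export PATH
IFS=" "                 # illustrative; ideally space, tab, and newline
umask 077               # files we create are not group- or world-accessible
# ... privileged work follows ...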

5.5.4 Finding All of the SUID and SGID Files

You should know the names of all SUID and SGID files on your system. If you discover new SUID or SGID files, somebody might have created a trap door that they can use at some future time to gain superuser access. You can list all of the SUID and SGID files on your system with the command:

# find / \( -perm -004000 -o -perm -002000 \) -type f -print

This find command starts in the root directory (/) and looks for all files that match mode 002000 (SGID) or mode 004000 (SUID). The -type f option causes the search to be restricted to files. The -print option causes the name of every matching file to be printed.

NOTE: If you are using NFS, you should execute find commands only on your file servers. You should further restrict the find command so that it does not try to search networked disks. Otherwise, use of this command may cause an excessive amount of NFS traffic on your network. To restrict your find command, use the following:

# find / \( -local -o -prune \) \
  \( -perm -004000 -o -perm -002000 \) -type f -print

NOTE: Alternatively, if your find command has the -xdev option, you can use it to prevent find from crossing filesystem boundaries. To search the entire filesystem using this option means running the command multiple times, once for each mounted partition.
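
For example, if /, /usr, and /var are separate partitions (the mount points here are hypothetical), you would run the search once per partition:

# find / -xdev \( -perm -004000 -o -perm -002000 \) -type f -print
# find /usr -xdev \( -perm -004000 -o -perm -002000 \) -type f -print
# find /var -xdev \( -perm -004000 -o -perm -002000 \) -type f -print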

Be sure that you are the superuser when you run find, or you may miss SUID files hidden in protected directories.

5.5.4.1 The ncheck command

The ncheck command is an old UNIX command that prints a list of each file on your system and its corresponding inode number. When used with the -s option, ncheck restricts itself to listing all of the "special" inodes on your system - such as the devices and SUID files.

The ncheck command runs on a filesystem-by-filesystem basis. For example:

# ncheck -s | cat -ve -
/dev/dsk/c0t3d0s0:
125 /dev/fd/0
513 /dev/fd/1
514 /dev/fd/2

...

533 /dev/fd/21
534 /dev/fd/22
535 /dev/fd/23
3849 /sbin/su
3850 /sbin/sulogin

(The cat -ve command is present in the above to print control characters so that they will be noticed, and to indicate the end of line for filenames that end in spaces.)

The ncheck command is very old, and has largely been superseded by other commands. It may not be present on all versions of UNIX, although it is present in SVR4. If you run it, you may discover that it is substantially faster than the find command, because ncheck reads the inodes directly, rather than searching through files in the filesystem. However, ncheck still needs to read some directory information to obtain pathnames, so it may not be that much faster.

Unlike find, ncheck will locate SUID files that are hidden beneath directories used as mount points. In this respect ncheck is superior to find: while another filesystem is mounted on top of such a directory, the files beneath it have no complete pathnames, so find cannot reach them.

You must be superuser to run ncheck.

5.5.5 Turning Off SUID and SGID in Mounted Filesystems

If you mount remote network filesystems on your computer, or if you allow users to mount their own floppy disks or CD-ROMs, you usually do not want programs that are SUID on these filesystems to be SUID on your computer as well. In a network environment, honoring SUID files means that if an attacker manages to take over the remote computer that houses the filesystem, he can also take over your computer, simply by creating a SUID program on the remote filesystem and running the program on your machine. Likewise, if you allow users to mount floppy disks containing SUID files on your computer, they can simply create a floppy disk with a SUID ksh on another computer, mount the floppy disk on your computer, and run the program - making themselves root.

You can turn off the SUID and SGID bits on mounted filesystems by specifying the nosuid option with the mount command. You should always specify this option when you mount a foreign filesystem unless there is an overriding reason to import SUID or SGID files from the filesystem you are mounting. Likewise, if you write a program to mount floppy disks for a user, that program should specify the nosuid option (because the user can easily take his or her floppy disk to another computer and create a SUID file).

For example, to mount the filesystem athena in the /usr/athena directory from the machine zeus with the nosuid option, type the command:

# /etc/mount -o nosuid zeus:/athena /usr/athena

Some systems also support a nodev option that causes the system to ignore device files that may be present on the mounted partition. If your system supports this option, you should use it, too. If a user creates a floppy disk containing a mode-777 kmem device file, for instance, he can subvert the system with little difficulty once he is able to mount the floppy disk. This is because UNIX treats the /dev/kmem on the floppy disk the same way that it treats the /dev/kmem on your main system disk - it is a device that maps to your system's kernel memory.
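
For example, a command to mount a user's floppy disk might combine both options (the device and mount-point names are illustrative):

# /etc/mount -o nosuid,nodev /dev/fd0 /mnt/floppy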

5.5.6 SGID and Sticky Bits on Directories

Although the SGID and sticky bits were originally intended for use only with programs, Berkeley UNIX, SunOS, Solaris and other operating systems also use these bits to change the behavior of directories, as shown in Table 5.14.

Table 5.14: Behavior of SGID and Sticky Bits with Directories

SGID bit
    The SGID bit on a directory controls the way that groups are assigned for files created in the directory. If the SGID bit is set, files created in the directory have the same group as the directory if the process creating the file also is in that group. Otherwise, if the SGID bit is not set, or if the process is not in the same group, files created inside the directory have the same group as the user's effective group ID (usually the primary group ID).

Sticky bit
    If the sticky bit is set on a directory, files inside the directory may be renamed or removed only by the owner of the file, the owner of the directory, or the superuser (even if the modes of the directory would otherwise allow such an operation); on some systems, any user who can write to a file can also delete it. This feature was added to keep an ordinary user from deleting another's files in the /tmp directory.

For example, to set the mode of the /tmp directory on a system so any user can create or delete her own files but can't delete another's files, type the command:

# chmod 1777 /tmp
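
Similarly, to create a shared directory in which new files inherit the directory's group (the directory and group names here are hypothetical):

# mkdir /usr/local/proj
# chgrp proj /usr/local/proj
# chmod 2775 /usr/local/proj

The 2775 mode is the SGID bit (2000) plus rwxrwxr-x (775).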

Many older versions of UNIX (System V prior to Release 4, for instance) do not exhibit either of these behaviors. On those systems, the SGID and sticky bits on directories are ignored by the system. However, on a few of these older systems (including SVR3), setting the SGID bit on the directory resulted in "sticky" behavior.

5.5.7 SGID Bit on Files (System V UNIX Only): Mandatory Record Locking

If the SGID bit is set on a nonexecutable file, AT&T System V UNIX implements mandatory record locking for the file. Normal UNIX record locking is discretionary; processes can modify a locked file simply by ignoring the record-lock status. On System V UNIX, the kernel blocks a process which tries to access a file (or the portion of the file) that is protected with mandatory record locking until the process that has locked the file unlocks it. Mandatory locking is enabled only if none of the execute permission bits are turned on.

Mandatory record locking shows up in an ls listing in the SGID position as a capital "S" instead of a small "s":

% ls -F data*
-rw-rwS--- 1 fred 2048 Dec 3 1994 database
-r-x--s--x 2 bin 16384 Apr 2 1993 datamaint*
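
The database file in the listing above could have been put into that state with a single chmod: 2660 is the SGID bit (2000) plus mode 660, which leaves group execute clear and so enables mandatory locking:

% chmod 2660 database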

Network Storage

Network Storage - The Basics


Are you new to network storage? If so then this series of articles is for you! Over the next few months we are going to take a look at the basic principles of network storage and answer questions like 'What is network storage?' and 'Why do we use it?' After covering the basics, subsequent articles will look at specific technologies in more detail. All of the articles in the series will have one simple aim: to educate and inform you about network storage. So, without further ado, let's get to it!

In basic terms, network storage is simply about storing data using a method by which it can be made available to clients on the network. Over the years, the storage of data has evolved through various phases. This evolution has been driven partly by the changing ways in which we use technology, and in part by the exponential increase in the volume of data we need to store. It has also been driven by new technologies, which allow us to store and manage data in a more effective manner.

In the days of mainframes, data was stored physically separate from the actual processing unit, but was still only accessible through the processing units. As PC based servers became more commonplace, storage devices went 'inside the box' or in external boxes that were connected directly to the system. Each of these approaches was valid in its time, but as our need to store increasing volumes of data and our need to make it more accessible grew, other alternatives were needed. Enter network storage.

Network storage is a generic term used to describe network based data storage, but there are many technologies within it which all go to make the magic happen. Here is a rundown of some of the basic terminology that you might happen across when reading about network storage.

Direct Attached Storage (DAS)

Direct attached storage is the term used to describe a storage device that is directly attached to a host system. The simplest example of DAS is the internal hard drive of a server computer, though storage devices housed in an external box come under this banner as well. DAS is still, by far, the most common method of storing data for computer systems. Over the years, though, new technologies have emerged which work, if you'll excuse the pun, out of the box.

Network Attached Storage (NAS)

Network Attached Storage, or NAS, is a data storage mechanism that uses special devices connected directly to the network media. These devices are assigned an IP address and can then be accessed by clients via a server that acts as a gateway to the data; in some cases the device can be accessed directly by clients without an intermediary.

The beauty of the NAS structure is that it means that in an environment with many servers running different operating systems, storage of data can be centralized, as can the security, management, and backup of the data. An increasing number of companies already make use of NAS technology, if only with devices such as CD-ROM towers (stand-alone boxes that contain multiple CD-ROM drives) that are connected directly to the network.

Some of the big advantages of NAS include expandability: need more storage space? Add another NAS device and expand the available storage. NAS also brings an extra level of fault tolerance to the network. In a DAS environment, a server going down means that the data that server holds is no longer available. With NAS, the data is still available on the network and accessible by clients. Fault-tolerant measures such as RAID (which we'll discuss later) can be used to make sure that the NAS device does not become a point of failure.

Storage Area Network (SAN)

A SAN is a network of storage devices that are connected to each other and to a server, or cluster of servers, which acts as an access point to the SAN. In some configurations a SAN is also connected to the network. SANs use special switches as a mechanism to connect the devices. These switches, which look a lot like normal Ethernet networking switches, act as the connectivity point for SANs. Making it possible for devices to communicate with each other on a separate network brings with it many advantages. Consider, for instance, the ability to back up every piece of data on your network without having to 'pollute' the standard network infrastructure with gigabytes of data. This is just one of the advantages of a SAN, and it is making SANs a popular choice with companies today; it is also a reason why SANs are forecast to become the data storage technology of choice in the coming years. According to research company IDC, SANs will account for 70% of all network storage by 2004.

Irrespective of whether the network storage mechanism is DAS, NAS or SAN, there are certain technologies that you'll find in almost every case. The technologies that we are referring to are things like SCSI and RAID. For years SCSI has been providing a high-speed, reliable method for data storage. Over the years, SCSI has evolved through many standards to the point where it is now the storage technology of choice. Related, but not reliant on SCSI, is RAID. RAID (Redundant Array of Independent Disks) is a series of standards which provide improved performance and/or fault tolerance for disk failures. Such protection is necessary as disks account for 50% of all hardware device failures on server systems. Like SCSI, RAID and the technologies used to implement it have evolved, developed and matured over the years.

In addition to these mainstays of storage technology, other technologies feature in our network storage picture. One of the most significant of these technologies is Fibre Channel (yes, that's fibre with an 're'). Fibre Channel is a technology used to interconnect storage devices allowing them to communicate at very high speeds (up to 10Gbps in future implementations). As well as being faster than more traditional storage technologies like SCSI, Fibre Channel also allows for devices to be connected over a much greater distance; in fact, Fibre Channel can be used over distances of up to six miles. This allows devices in a SAN to be placed in the most appropriate physical location.

Other developments are coming through that will change the way that we use and access network storage. One of these advances pegged to make a large contribution to the growing success of network storage in general is iSCSI. iSCSI is a technology that allows data to be transported to and from storage devices over an IP network; in effect, it serializes the data from a SCSI connection. Using iSCSI, the concept of network storage can be taken anywhere that IP can go, which, as the Internet proves, is basically anywhere. Technologies like Fibre Channel and iSCSI are a big factor in making network storage solutions affordable and practical to implement.
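
To make the idea concrete, here is roughly what attaching an iSCSI target looks like with the open-iscsi tools found on modern Linux systems; the portal address and target name below are hypothetical:

# iscsiadm -m discovery -t sendtargets -p 192.168.1.50
# iscsiadm -m node -T iqn.2001-04.com.example:disk1 -p 192.168.1.50 --login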

Over the coming months, we'll be taking a detailed look at all of the technologies that we have discussed in this introductory article. In our next article we'll start by taking a detailed look at perhaps the most significant element of today's network storage environment - SAN's. We'll also examine the devices used to create them. In addition, we'll be asking and answering the question 'How can a SAN benefit your business?' Stay tuned.

Storage Basics - Storage Area Networks


Many IT organizations today are scratching their heads debating whether the advantages of implementing a SAN solution justify the associated costs. Others are trying to get a handle on today's storage options and whether SAN is simply Network Attached Storage spelled backwards. In this article, I introduce the basic purpose and function of a SAN and examine its role in modern network environments. I also look at how SANs meet the network storage needs of today's organizations and answer the question: could a SAN be right for you?

Peel away the layers of even the most complex technologies and you are likely to find that they provide the most basic of functions. This is certainly true of storage area networks (SANs). Behind the acronyms and revolutionary headlines lies a technology designed to offer one of the oldest of network services: making data storage devices available to clients.

In very basic terms, a SAN can be anything from two servers on a network accessing a central pool of storage devices to several thousand servers accessing many millions of megabytes of storage. Conceptually, a SAN can be thought of as a separate network of storage devices physically removed from, but still connected to, the network. SANs evolved from the concept of taking storage devices, and therefore storage traffic, off the LAN and creating a separate back-end network designed specifically for data.

SANs represent the evolution of data storage technology to this point. Traditionally, on client server systems, data was stored on devices either inside or directly attached to the server. Next in the evolutionary scale came Network Attached Storage (NAS) which took the storage devices away from the server and connected them directly to the network. SANs take the principle one step further by allowing storage devices to exist on their own separate network and communicate directly with each other over very fast media. Users can gain access to these storage devices through server systems which are connected to both the LAN and the SAN.

This is in contrast to the use of a traditional LAN for providing a connection for server-storage, a strategy that limits overall network bandwidth. SANs address the bandwidth bottlenecks associated with LAN based server storage and the scalability limitations found with SCSI bus based implementations. SANs provide modular scalability, high-availability, increased fault tolerance and centralized storage management. These advantages have led to an increase in the popularity of SANs as they are quite simply better suited to address the data storage needs of today's data intensive network environments.

The advantages of SANs are numerous, but perhaps one of the best examples is that of the serverless backup (also commonly referred to as 3rd Party Copying). This system allows a disk storage device to copy data directly to a backup device across the high-speed links of the SAN without any intervention from a server. Data is kept on the SAN, which means the transfer does not pollute the LAN, and the server processing resources are still available to client systems.

SANs are most commonly implemented using a technology called Fibre channel (yes, that's fibre with an 're', not an 'er'). Fibre Channel is a set of communication standards developed by the American National Standards Institute (ANSI). These standards define a high-performance data communications technology that supports very fast data rates (over 2Gbps). Fibre channel can be used in a point-to-point configuration between two devices, in a 'ring' type model known as an arbitrated loop, and in a fabric model.

Devices on the SAN are normally connected together through a special kind of switch, called a Fibre Channel switch, which performs basically the same function as a switch on an Ethernet network, in that it acts as a connectivity point for the devices. Because Fibre channel is a switched technology, it is able to provide a dedicated path between the devices in the fabric so that they can utilize the entire bandwidth for the duration of the communication.

The storage devices are connected to the Fibre Channel switch using either multimode or single-mode fiber-optic cable: multimode for short distances (up to 2 kilometers), single mode for longer runs. In the storage devices themselves, special Fibre Channel interfaces provide the connectivity points. These interfaces can take the form of built-in adapters, which are commonly found in storage subsystems designed for SANs, or of interface cards, much like network cards, which are installed in server systems.

So, the question that remains is this. Should you be moving away from your current storage strategy and towards a SAN? The answer is not a simple one. If you have the need to centralize or streamline your data storage then a SAN may be right for you. There is, of course, one barrier between you and storage heaven, and that's money. For now SANs remain the domain of big business, and the price tags of SAN equipment are likely to remain at a level outside the reach of small or even medium-sized businesses. As the prices fall, however, SANs will find their way into organizations of all sizes, including, if you want, yours.




Network Storage - II

Network-Attached Storage

1. Introduction

2. What is a NAS Device?

3. What is a Filer?

4. Network-Attached Storage Versus Storage Area Networks

5. NAS Solutions for Today's Business Issues

6. NAS and Sun

7. Summary - NAS Filers Serve e-Time Storage Needs

1. Introduction

The torrents of information storming in and out and through today's businesses could hardly have been foreseen when the first computer systems achieved desktop status. These units came equipped with the storage capacity of a goldfish bowl, by today's standards. Building on this early direct-attached storage architecture, IT departments soon answered increasing information demands with general-purpose servers and direct-attached storage, typically attached using a SCSI high-speed interface. Now, these processing and storage initiatives are hard pressed to support and direct the monumental data requirements of ERP, MIS, and data warehousing for today's companies.

Thanks in a large part to the Internet, today's information influx does not stop. Data is created, transmitted, stored, and delivered around the clock. And both internal and external customers are becoming more dependent on rapid, reliable access to company data. Those companies that are not yet Net-operational feel the pressure to get there, fast. This scenario also leaves Internet and applications service providers as well as dot-com organizations scrambling for reliable, scalable solutions. Overall, businesses need to meet skyrocketing storage needs and they'd like to do so without an exponential increase in IT talent - professionals who are difficult to find and expensive to hire. Network-Attached Storage (NAS) may be the answer.

2. What is a NAS Device?

Network-attached storage (NAS) is a concept of shared storage on a network. It communicates using Network File System (NFS) for UNIX environments, Common Internet File System (CIFS) for Microsoft Windows environments, FTP, http, and other networking protocols. NAS brings platform independence and increased performance to a network, as if it were an attached appliance.

A NAS device is typically a dedicated, high-performance, high-speed communicating, single-purpose machine or component. NAS devices are optimized to stand alone and serve specific storage needs with their own operating systems and integrated hardware and software. Think of them as types of plug-and-play appliances, except with the purpose of serving your storage requirements. The systems are simplified to address specific needs as quickly as possible - in real time. NAS devices are well suited to serve networks that have a mix of clients, servers, and operations and may handle such tasks as Web cache and proxy, firewall, audio-video streaming, tape backup, and data storage with file serving.

This paper introduces readers to a category of NAS devices called filers. These highly optimized servers enable file and data sharing among different types of clients. It also defines NAS benefits with respect to storage area networks (SANs). Finally, the paper introduces Sun Microsystems' entry-level NAS device, the Sun StorEdge N8200 filer.

3. What is a Filer?

NAS devices known as filers focus all of their processing power solely on file service and file storage. As integrated storage devices, filers are optimized for use as dedicated file servers. They are attached directly to a network, usually to a LAN, to provide file-level access to data. Filers help you keep administrative costs down because they are easy to set up and manage, and they are platform-independent.

NAS filers can be located anywhere on a network, so you have the freedom to place them close to where their storage services are needed. One of the chief benefits of filers is that they relieve your more expensive general-purpose servers of many file management operations. General-purpose servers often get bogged down with CPU-intensive activities, and thus can't handle file management tasks as efficiently as filers. NAS filers not only improve file-serving performance but also leave your general-purpose servers with more bandwidth to handle critical business operations.

Analysts at International Data Corporation (IDC) recommend NAS to help IT managers handle storage capacity demand, which the analysts expect will increase more than 10 times by 2003. Says IDC, "Network-attached storage (NAS) is the preferred implementation for serving files for any organization currently using or planning on deploying general-purpose file servers. Users report that better performance, significantly lower operational costs, and improved client/user satisfaction typically results from installing and using specialized NAS appliance platforms." (Source: Taming the Storage Growth Beast with Network-Attached Storage (NAS), © 2000, International Data Corporation.)

4. Network-Attached Storage Versus Storage Area Networks

Some people confuse NAS with storage area networks (SANs); after all, NAS is SAN spelled backwards. The technologies also share a number of common attributes. Both provide optimal consolidation, centralized data storage, and efficient file access. Both allow you to share storage among a number of hosts, support multiple different operating systems at the same time, and separate storage from the application server. In addition, both can provide high data availability and can ensure integrity with redundant components and redundant array of independent disks (RAID).


Others may view NAS as competitive to SAN, when both can, in fact, work quite well in tandem. Their differences? NAS and SAN represent two different storage technologies and they attach to your network in very different places. NAS is a defined product that sits between your application server and your file system (see Figure 1). SAN is a defined architecture that sits between your file system and your underlying physical storage (see Figure 2). A SAN is its own network, connecting all storage and all servers. For these reasons, each lends itself to supporting the storage needs of different areas of your business.

NAS: Think Network Users

NAS is network-centric. Typically used for client storage consolidation on a LAN, NAS is a preferred storage capacity solution for enabling clients to access files quickly and directly. This eliminates the bottlenecks users often encounter when accessing files from a general-purpose server.

NAS provides security and performs all file and storage services through standard network protocols, using TCP/IP for data transfer, Ethernet and Gigabit Ethernet for media access, and CIFS, http, and NFS for remote file service. In addition, NAS can serve both UNIX and Microsoft Windows users seamlessly, sharing the same data between the different architectures. For client users, NAS is the technology of choice for providing storage with unencumbered access to files.
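
In practice, reaching a NAS device from a UNIX client is usually just an ordinary NFS mount. A minimal sketch (the filer name and paths are hypothetical; some systems use mount -F nfs rather than -t nfs):

# mount -t nfs filer1:/export/home /mnt/home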

Although NAS trades some performance for manageability and simplicity, it is by no means a lazy technology. Gigabit Ethernet allows NAS to scale to high performance and low latency, making it possible to support a myriad of clients through a single interface. Many NAS devices support multiple interfaces and can support multiple networks at the same time. As networks evolve, gain speed, and achieve latency (connection speed between nodes) that approaches locally attached latency, NAS will become a real option for applications that demand high performance.

SANs: Think Back-End/Computer Room Storage Needs

A SAN is data-centric - a network dedicated to storage of data. Unlike NAS, a SAN is separate from the traditional LAN or messaging network. Therefore, a SAN is able to avoid standard network traffic, which often inhibits performance. Fibre Channel-based SANs further enhance performance and decrease latency by combining the advantages of I/O channels with a distinct, dedicated network.

SANs employ gateways, switches, and routers to facilitate data movement between heterogeneous server and storage environments. This allows you to bring both network connectivity and the potential for semi-remote storage (up to 10 km distances are feasible) to your storage management efforts. SAN architecture is optimal for transferring storage blocks. Inside the computer room, a SAN is often the preferred choice for addressing issues of bandwidth and data accessibility as well as for handling consolidations.

Due to their fundamentally different technologies and purposes, you need not choose between NAS and SAN. Either or both can be used to address your storage needs. In fact, in the future, the lines between the two may blur a bit according to Evaluator Group, Inc. analysts. For example, down the road you may choose to back up your NAS devices with your SAN, or attach your NAS devices directly to your SAN to allow immediate, nonbottlenecked access to storage. (Source: An Overview of Network-Attached Storage, © 2000, Evaluator Group, Inc.)

5. NAS Solutions for Today's Business Issues

IDC predicts that by 2003, more than $6.5 billion will be spent annually on NAS storage solutions. (Source: Taming the Storage Growth Beast with Network-Attached Storage (NAS), © 2000, International Data Corporation.) The analyst group believes the demands of Internet service providers, application service providers, and dot-coms for reliable, cost-effective, and rackable systems will help drive the proliferation of NAS solutions.

Decreased IT Staff Costs

On the front end, businesses welcome extreme amounts of information and strive to manipulate it for use in real time. On the back end, IT professionals, with their current infrastructures, scramble to accommodate the exponentially increasing data burden. General-purpose servers, especially, require large amounts of skilled personnel time to solve storage and file access challenges.

In contrast, a NAS device requires little IT staff time and effort. Management is accomplished through a graphical user interface (GUI) in a Web browser, which enables NAS access from anywhere on the network. Since a NAS filer is preconfigured to support specific file-serving needs, administration is simplified, and this ease of use results in fewer operator errors. Also, because more capacity can be managed per administrator with NAS than is possible with general-purpose server storage activity, the total cost of ownership is lower.

Scale Fast, Without Downtime

Dot-coms and other rapidly scaling companies endeavor to make sure their IT infrastructures keep pace with their dynamic business realities. Building on the structure of your general server or servers may be required in some business areas. But burdening these servers with escalating storage needs can be ineffective and run counter to your accelerated business practices. As you add capacity for your general-purpose server, you'll face downtime. When you bring the system down to increase its storage, your business applications will be unavailable, which may slow - if not halt - productivity.

On the other hand, expanding storage with NAS is simple and nonintrusive. You can install a new filer within 15 minutes as opposed to hours or days required to install or add traditional storage. More advanced NAS devices can increase storage on-the-fly, eliminating the need for you to add another node on your network. This means your users access what they need when they need it, responding in real time to a marketplace that demands immediate action.

Relief for Your Server

A NAS filer helps by offloading tedious and bandwidth-consuming file serving tasks from your server. This allows your server to use its power to process your data with improved availability and performance.

Have you checked your general-purpose server's workload lately? If it is handling file serving activities, chances are it is handling too much. You face increased risk of latency when your general-purpose server must complete high-priority file serving tasks while handling applications, electronic mail, and a myriad of other critical business tasks.

Multi-OS Connectivity and Data-Sharing

Whether your company is busy merging or acquiring, or simply growing, you will no doubt face the demands of a heterogeneous operating environment. A NAS device can answer this challenge with its capability to serve two chief operating system camps: NFS (UNIX) and CIFS (Microsoft Windows). One of the undeniable strengths of NAS is its capacity to support these protocols and allow for cross-platform data sharing. This is an increasingly important attribute as the business usage of data-intensive application files such as digital media (audio, video, and photography) becomes more common.

Leveraging Existing Infrastructure

By adding NAS nodes to your network, you can leverage your network investment and your current network administration skills. NAS can be deployed on your network anywhere it is needed. It also can be integrated with larger management tools, like Microsoft Management Console, Tivoli, and HP Openview, allowing you to maximize your use of these products. And NAS does not require costly network operating system (NOS) licenses.

Often, IT centralization is expected to simplify responsibilities and conserve company effort, but it accomplishes neither if remote branch and satellite offices must operate without IT support. NAS can help you realize the intent of centralization by allowing you to add storage in a remote office and manage it via the Web-based GUI from anywhere on your network - including your central/home office. This means you can reap higher performance from existing infrastructure at the remote office and keep management "at home."

Transparent Backup

Another benefit of NAS is its transparent backup activities. Filer backup can be completed without affecting the performance of your general-purpose or application servers. Your CPU does not have to calculate what to back up and when. Simply direct your filer to complete backup at a specific time and it will use industry-standard procedures to complete this task.

6. NAS and Sun

The concept of attaching storage devices to a network is not new. About 20 years ago, the Remote Procedure Call (RPC) protocol enabled this breakthrough. RPC meant computers could share not only storage files and devices across the network, but also printers and other hardware, software, and resources. Sun Microsystems embraced the RPC concept in 1984, when the company developed the remote file and device-sharing application protocol, Network File System (NFS).

The Precursor: NFS

NFS gives all network users access to files that may be stored on different types of computers. In a client/server scenario, NFS enables computers connected to the network to operate as clients to access remote files. The same computers also can act as servers by allowing remote users to access their files. In other words, NFS makes files stored on a file server accessible to any computer on a network and eliminates the need to transfer files between users. The advantages of using NFS include:

  • Streamlined access. Users can work on just what they need. They can work on a piece of a file instead of the entire document.

  • Transparent remote access. Remote files appear to be local to your users, and they need not complete a file transfer before use.

  • Up-to-date data. Because file access protocols operate directly on the server's copy of the file, the data are always current.

  • Real-time access. Data can be provided to an application as soon as it arrives from the file server.
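To a client, the result is indistinguishable from local storage. On a Solaris host, for example, a filer's exported file system can be made to appear local with a single mount command (a sketch; the server name and paths here are hypothetical):

# mount -F nfs filer1:/export/projects /projects

Once mounted, /projects can be read and written with ordinary commands and applications, just as if it were a local disk.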

Sun opened NFS technology to the public, and over the years this technology has become a standard for introducing network interoperability among heterogeneous systems.

Sun StorEdge N8200 Filer

NAS has the same objective as NFS, which is to improve data access while reducing overhead and downtime. Building on these concepts, Sun is introducing the Sun StorEdge N8200 filer, the first product in the Sun StorEdge N8000 filer product family. This entry-level NAS device is preconfigured with integrated hardware and software to address your file serving needs.

The Speed Imperative

Installation of the Sun StorEdge N8200 filer can be completed in as little as 10 minutes. You simply connect the filer directly to your network and answer several questions online to make your new storage available. Sun streamlined installation time to allow you to make storage available to your users as soon as possible. Because the Sun StorEdge N8200 filer also optimizes your TCP/IP stack for low latency and uses a 10/100-BaseT Ethernet connection, you are able to provide your users with truly fast storage access.

Flexibility and Scalability

With the Sun StorEdge N8200 filer, you can increase storage capabilities with 200 GB expansion arrays up to 800 GB per filer (capacities Sun expects to double in coming months), with the comfort of hardware RAID 5. Because the RAID controller handles the parity calculations, the CPU is freed from that work and performance improves; if a disk fails or needs to be rebuilt, for example, the performance impact is minimal.

The flexibility inherent in the Sun StorEdge N8200 filer lets you gain more capacity vertically, by adding more storage to your filer, or horizontally, by adding more filers to your network, all without taking your network down. Compared to the price of adding traditional storage, the modular, extensible architecture of this filer offers affordable versatility. You pay only for what you need, when you need it.

Plus, the Sun StorEdge N8200 filer supports heterogeneous environments by running NFS for UNIX-based clients and CIFS for Microsoft Windows users. You can consolidate and serve files for both UNIX and Microsoft Windows workgroups from this filer.

System Management and Structure

Ease-of-use is the chief attribute of the Sun StorEdge N8200 filer's management design. This filer's Web-based administration tool, with its user-friendly GUI, simplifies such tasks as adding users, groups, hosts, or shares. If your site uses NIS or NIS+, then the GUI is used only to manage the hosts and shares.

This solution is built on the Sun Solaris operating environment and is complementary to server and storage hardware from Sun. The Sun StorEdge N8200 filer consists of a dual-CPU controller and hardware RAID disk storage arrays, administered through the previously described Web-based GUI. Two spare PCI slots are available for additional network cards or other resources. Additionally, Sun StorEdge N8000 filer product family software provides a simple configuration and tunes the Sun StorEdge N8200 filer for optimum NFS performance.

In the Sun StorEdge N8200 filer, Sun builds not only on its history of establishing NFS, the protocol that has become the industry standard for network interoperability, but also on its technology fundamentals. Scalability, reliability, availability, and serviceability are as inherent in this filer as they are in the full spectrum of Sun server and storage products.

7. Summary - NAS Filers Serve e-Time Storage Needs

Focusing processing power solely on file service and storage, NAS filers can serve any business or technology workgroup-from software design to CAD to service providers/dot-coms to engineering-that requires low cost, scalability, and high-performance in a file server. NAS also can work in tandem with your SAN environment, handling network file serving needs while the SAN tackles back-end storage tasks. Unobtrusive and accommodating, filers meld with your existing infrastructure and facilitate data sharing across heterogeneous operating environments.


0 Password Aging

Password Aging

While it's clearly possible to use the /etc/passwd and /etc/shadow files in Solaris and other Unix systems without making use of the password aging features, you may want to take advantage of these features to encourage your users to practice better security. With the right password aging values, you can build a good password-changing policy into your system files while limiting the risk that your users will be locked out of their accounts.

In this week's column, we look at the various fields in the shadow file that govern password aging and suggest settings that might give you the right balance between user convenience and good password security.

The /etc/shadow File

To begin our review of how password aging works on a Solaris system, let's examine the format of the /etc/shadow file. Each colon-separated record looks like this:

johndoe:PaSsWoRdxye7d:13062:30:120:10:inactive:expire:

The fields, in order, are:

username:password:lastchg:min:max:warn:inactive:expire:flag

The first field is clearly the username, and the second is the encrypted password. The remaining fields govern password aging:

  • lastchg. The date the password was last changed, expressed as the number of days since January 1, 1970.

  • min. The number of days that a password MUST be kept after it is changed; this is used to keep users from changing their passwords and then immediately changing them back to their previous values (thereby invalidating the intended security).

  • max. The maximum number of days that any password can be used before it is expired. If you want your users to strictly change their passwords every 30 days, for example, you could set both min and max to 30. Generally, however, max is set to a considerably larger value than min.

  • warn. The number of days prior to a password expiration that a user is warned on login that his/her password is about to expire. This should not be too short a period of time, since many users don't log in every day and the warning is easy to overlook among the other login messages.

  • inactive. The number of days that an account is allowed to be inactive. This value can help prevent idle accounts from being broken into.

  • expire. The absolute day (again expressed as the number of days since January 1, 1970) on which the password will expire. You might use this field if you want all of your users' passwords to expire at the end of the fiscal year or at the end of the semester.

  • flag. Unused until Solaris 10, at which point it records the number of failed login attempts.
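For example, a record like this hypothetical one uses the last of these aging fields to disable the account after 14 idle days and to expire the password outright on day 13148, which works out to roughly December 31, 2005:

contractor:Xy12abQr9sTuv:13062:7:90:7:14:13148:

The contractor's password was last changed on day 13062, must be kept at least 7 days, must be changed every 90 days with 7 days of warning, and in any case becomes unusable at the end of the calendar year.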

If the lines in your shadow file look like this:

sbob:dZlJpUNyyusab:12345::::::

The username and password are set and the date on which the password was last changed has been recorded, but no password aging is taking effect.
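You don't have to read the shadow file directly to check this. On Solaris, the passwd -s command summarizes a user's settings, printing the username, the password status (PS for an active password, LK for locked, NP for none) and, when aging is enabled, the last-changed date followed by the min, max and warn values. The output looks something like this:

# passwd -s sbob
sbob  PS

# passwd -s jdoe
jdoe  PS    10/06/05     30    180    30

The lone "PS" for sbob confirms what we saw in his shadow record: a password is set, but no aging is in effect.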

If a line looks like this, the account is locked:

dumbo:*LK*:::::::

Various other combinations of the shadow file are possible, but the min, max and warn fields will only make sense if the lastchg field is set. For example:

jdoe:w0qjde84kr%p0:13062:60:::::

User must keep a password for 60 days once he changes it, but no password changes are required.

jdoe:w0qjde84kr%p0:13062::60::::

User must change his password every 60 days, but can change it at any time (including immediately changing it back to its previous value).

Choosing Min and Max Settings

If you want to turn on password aging, the combination of minimum (must keep) and maximum (invalid after) values enforces a practical password update scheme. Suggested settings depend in part on the security stance of your particular network. However, general consensus seems to be that passwords, once changed, should be kept for a month (min=30) and that passwords should be changed every three to six months (from max=90 to max=180).
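Rather than editing /etc/shadow by hand, you can apply such a policy to an existing user with the Solaris passwd command; a sketch, using the one-month/three-month values suggested above:

# passwd -n 30 -x 90 -w 14 jdoe

This sets min to 30 days, max to 90 and warn to 14 for jdoe, and the corresponding fields in his shadow record are updated for you.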

Once a user has used a password for 30 days, he's probably not going to reset it back to its previous value. By then he should know it well enough to continue using it.

Changing a password more often than every month or so would probably make it hard for users to remember their passwords without writing them down.

The downside of min values is that such a setting prevents a user from changing his password when he believes it has been compromised, if the compromise happens within the "min" period. Whatever system you adopt should, therefore, make it painless for a user to request that his password be reset whenever he believes it may no longer be secure.

Wrap Up

We hear a lot about the tradeoff between security and convenience as it permeates so many of our decisions about how we manage our networks but, when it comes to passwords, we must be careful not to cross the line between securing logins and preventing them altogether. Locking our users too easily out of their accounts can reduce security as easily as enhance it. Using password aging with the proper settings can limit the risk that security constraints turn into unintended denials of service.

If you're starting with a group of users who have been active for a long time and not had their passwords aged, how should you go about introducing password aging?

To start, you might first take a look at the dates on which your users' passwords were last changed. To view the dates by themselves, you might use a command such as this (run as root):

# awk -F: '{print $3}' /etc/shadow | sort -n | uniq -c

This command sorts the lastchg (last time the password was changed) field numerically and prints out the number of records with each particular date value.

Of course, the dates in this command's output are going to be presented to you as a list of numbers (rather than recognizable dates). You will see something that looks more or less like this:

   7 6445
   1 11289
   2 11632
  53 11676
   5 11677
   2 11683
   1 11849
   2 12038
  23 12345
   1 12881
   1 13062

These numbers are a little hard to interpret, but the range of values and the "popular" values suggest that most users on this system have not changed their passwords in a very long time and that many of them might have last changed their passwords in response to a request to do so (since two groups of people changed their passwords on the same two days).
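If you want to see exactly which accounts are behind one of these popular values, a slight variation on the earlier command lists the matching usernames (11676 here is taken from the sample output above):

# awk -F: '$3 == 11676 {print $1}' /etc/shadow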

But let's try to pin these numbers down and get an idea of what dates we are really looking at. How do you do this? Well, if you have the GNU date command installed on your system, you can view today's date with a command such as this:

% expr `date +%s` / 86400

Alternately, you can package this date conversion command in a script such as the one shown below, call it "today", and run it whenever you want to know what the current date looks like in the days-since-the-epoch format. If you're reading this column on the day that it was first published, that value would be 13062.

#!/usr/bin/perl -w
# today: a script to print date in days-since-epoch format

$now = `/usr/local/bin/date +%s`;   # seconds since 01/01/1970 (GNU date)
$_ = $now / 86400;                  # convert seconds to days
($today) = /(\d+)/;                 # keep only the whole days
print "$today\n";

In both the command and the "today" script, we use the "date +%s" command to produce the current date/time as the number of seconds since midnight on January 1, 1970. We then divide this value by the number of seconds in a day (86,400) to convert it to the number of days since January 1, 1970. The pattern match in the script then lops off the decimal point and the digits to its right. This gives us a value for today.

To determine how long ago one of the other dates in the lastchg list above happened to be, we can use expr to calculate the number of days between today and the date the password was last changed. Let's choose the most popular value, 11676, for this:

# expr 13062 - 11676

1386

That's 1,386 days ago -- nearly four years! NOTE: The shadow records with 6445 in the lastchg field are disabled accounts and, thus, don't factor into our password aging concerns.

If the bulk of your users have the same last-set date, they have probably never changed their passwords -- or never changed them since they were last required to do so. Whenever you change a user's password or one of your users changes his own password, that field in the /etc/shadow file will be updated.

So, how do you introduce password aging in a situation such as this? If you add a max value when a user's password hasn't been reset for nearly four years, chances are that his password will already be expired and he will not be able to log in.

A better approach would be to initiate password aging by modifying the lastchg date in your shadow records and then selecting a max value that will give your users time to change their passwords before they run out of time. You should also publish notices explaining the change and focusing your users' attention on the need to change their passwords from time to time.

For example, if you make the lastchg date of a record five months in the past and then require that the user change his password every six months, this would give him a month to change his password before he is locked out. And, from that point forward, he would need to change his password every six months.

Five months in the past would roughly put the (fictitious) lastchg date at 12912 (13062 - (5 * 30)). A shadow entry such as that shown below would, therefore, force sbob to change his password within the month and would give him a month's worth of warnings before he's locked out of his account:

sbob:dZlJpUNyyusab:12912:30:180:30:::

On login, sbob would see something like this:

Your password will expire in 30 days.
Last login: Thu Oct 6 16:28:34 2005 from corp.particles.com
Sun Microsystems Inc. SunOS 5.8 Generic February 2005
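If you decide to seed the lastchg field this way for many accounts at once, a one-liner can do the editing -- though /etc/shadow is no file to experiment on, so make a copy first. Here is a sketch (the date 12912 comes from the example above; the unless clause skips locked accounts, but the edit still touches every other record, system accounts included, so review the result before putting it in place):

# cp /etc/shadow /etc/shadow.orig
# perl -i.bak -pe 's/^((?:[^:]*:){2})[^:]*/${1}12912/ unless /^[^:]*:\*LK\*:/' /etc/shadow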

If you've never used password aging before, it's probably a good idea to draw your users' attention to the fact that passwords are going to expire. The one-line warning above is easy to miss. Perhaps a notice like this in your /etc/motd file would be more effective:

>>> Passwords must be changed every 6 months <<<
>>> Look for password expiration information <<<
>>> in the system output above <<<

When a message like this is displayed on login for a month, your users are likely to notice and take action before their passwords expire.

You can also change the default settings for password aging in the /etc/default/passwd file. For example, if you want users to be required to keep a password for a month and change it every 6 months, your values might look like this:

MAXWEEKS=26
MINWEEKS=4
PASSLENGTH=6

In last week's column, we looked at the password aging fields in the /etc/shadow file and at a script that displays the current day in the days-since-Jan-01-1970 (or "Unix Time") format used to record the day that a user's password was last changed.

In today's column, we're going to look at an efficient Perl command for expressing the current day in this format and a script that you can use to 1) list all users on a system whose passwords will soon be expiring and 2) print how many days are remaining along with the calendar date on which the password will expire.

The lastchg Date

If you changed your password on the day this column was first published, the value recorded in the /etc/shadow file would be 13076 and your shadow entry would look something like this:

jdoe:jr85ys38dkrf9:13076:30:180:30:::

Since numbers such as 13076 are not the most human-friendly, we're going to look at a script for converting values such as these into calendar dates such as 11/04/2005.

Before we get into the new script, however, let's look at the Perl command for printing the current date in this Unix Time format. This Perl command first determines the current date/time in seconds since January 1, 1970 and then divides that number by the number of seconds in a day. The rest of the command then strips off the decimal point along with all of the digits following it, leaving only the 5-digit day number.

printf qq{%d\n},time/86400;

We'll use a similar command in our date conversion script. This command is more efficient than the script included in last week's column, primarily because it doesn't make a call to the system to run the date command to derive a date in the seconds-since-1970 format. Instead, it uses the Perl time function. This also obviates the need for the GNU date command (though you may still want it for other reasons) to print the date in this way.

Script for Converting Dates

The script shown below takes a Unix Time format date and changes it into a calendar date:

#!/usr/bin/perl
# caldate.pl: change Unix Time date into mm/dd/yyyy format
use strict;
use Time::Local;

my $inputDate = shift @ARGV;
if ( !defined $inputDate ) {
    print "Usage: $0 <days-since-epoch>\n";
    exit 5;
}

# date cannot exceed 24855 (01/19/2038)
if ( $inputDate > 24855 ) {
    print "Error: Date entered exceeds limits on date calculations\n";
    exit 5;
}

# calculate seconds since the epoch
$inputDate = $inputDate * 86400;
$ENV{TZ} = 'EST';

my ($seconds,$mins,$hrs,$dom,$month,$year,$wday,$yday,$isdst) =
    localtime($inputDate);
$year  = $year + 1900;    # localtime returns years since 1900
$month = $month + 1;      # and months numbered 0-11

# zero-pad single-digit values
$dom   = "0${dom}"   if ($dom < 10);
$month = "0${month}" if ($month < 10);
$hrs   = "0${hrs}"   if ($hrs < 10);
$mins  = "0${mins}"  if ($mins < 10);

print "$month/$dom/$year","\n";

Example

> ./caldate.pl 13076
10/20/2005

Notice that this script takes the date entered by the user (a date such as 13076) and multiplies it by 86400 to convert it to the number of seconds (rather than days) since the beginning of the Unix epoch. It then uses localtime to break the provided date into a set of date attributes such as $month, $year and $dom (day of month).

Some of the date fields, such as $wday (weekday) and $isdst (which tells whether daylight saving time is in effect), are not used in the date presented by the script but are included for completeness.

Script for Determining Expiration Dates

The following script looks through the shadow file for users with passwords that are expiring within the next two weeks and prints their usernames, days remaining and the dates on which the passwords will expire.

#!/usr/bin/perl -w
# daysleft.pl: list users w passwords expiring soon
use integer;              # do integer math

@shadow = `cat /etc/shadow`;
$today  = time/86400;
$soon   = 14;             # report passwords expiring within 14 days

foreach $record ( @shadow ) {
    @shadfield = (split /:/, $record);
    $username  = $shadfield[0];
    $password  = $shadfield[1];
    $lastch    = $shadfield[2];
    $max       = $shadfield[4];

    # skip locked accounts and records without aging
    next if ($password eq "*LK*");
    next if ($lastch eq "");
    next if ($max eq "");

    $rem = $lastch + $max - $today;   # days remaining
    $exp = $lastch + $max;            # expiration date (days since epoch)

    if ( $rem <= $soon ) {
        print "$username: ";
        printf qq{%d}, $rem;
        printf " days left, expiring on ";
        $dt = `./caldate.pl $exp`;
        print $dt;
    }
}

Example:

# ./daysleft.pl
jdoe: 8 days left, expiring on 10/28/2005
sbob: 10 days left, expiring on 10/30/2005
jasper: 8 days left, expiring on 10/28/2005

The End of the Epoch

You might have noticed a couple of oddities in our scripts. In the caldate.pl script, for example, we refused to work with Unix Time dates larger than 24,855. This leads us to some interesting observations about dates on Unix systems. Even if you know that Unix dates are stored as the number of seconds since midnight on January 1, 1970, you might not know that these dates can't extend beyond January 19, 2038.

To begin to understand why, we need to remember that dates are stored as four-byte signed numbers. The largest value that can be stored is, thus, 2**31 - 1, which we can calculate like so:

# expr 256 \* 256 \* 256 \* 128 - 1
2147483647

That's 24,855 days, or roughly 68 years, from January 1, 1970 -- the point at which the signed 32-bit counters holding these dates will overflow and the dates on Unix systems will presumably all wrap around -- unless, of course, we solve the problem before then. This well-known problem is similar to the Y2K problem.
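You can ask Perl to confirm the exact rollover moment:

% perl -le 'print scalar gmtime(2147483647)'
Tue Jan 19 03:14:07 2038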

You might also have noticed that we included a "use integer" statement in our daysleft.pl script. Without it, the result of our time/86400 calculation would carry a long string of digits after the decimal point, and our subsequent calculation of days remaining before password expiration would have to compensate for that partial day.
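The difference is easy to see on the command line (the fractional digits will, of course, depend on when you run it):

% perl -le 'print time/86400'
13076.7049768519
% perl -le 'use integer; print time/86400'
13076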

Thanks to Douglas Gray Stephens and John Gregory for their scripts and insights on processing Unix dates in Perl.

Sandra Henry-Stocker, ITworld.com

Before we move on to another topic, there are a few more things that we can do to improve our password management scripts. For one thing, we can modify our daysleft.pl script so that it does not need to be run from a particular directory in order to locate the secondary caldate.pl script. For another, we can generate email notifications to users with passwords that are soon to expire instead of leaving the job of notifying users to the sysadmins. Last, we can modify the format of our displayed date to make it more international. Let's take a look at how these changes can be implemented.

Directory Names in Perl

To use the equivalent of the Unix dirname command in Perl, we could use backticks and try to extract this information from the system, but using the Perl File::Basename module makes this task extremely easy. When we include the "use" statement in our script, we can determine the directory name associated with the script (whether /usr/local/bin or ".").

use File::Basename;
$dirname = dirname($0);

Once we establish the directory name ($dirname), we can then use it in our call to the caldate.pl script:

# report if less than 2 weeks remaining
if ( $rem <= 14 ) {
    printf q{%s: %d days left, expiring on %s}, $username, $rem,
        `$dirname/caldate.pl $exp`;
}

Now, if we run the script from / or some other location on the server by typing /usr/local/bin/daysleft.pl, the script will still find the caldate.pl script and everything will work the same as if we were in the /usr/local/bin directory and typed ./daysleft.pl.

Generating Email

If we want to send email notifications to users -- very helpful if they are unlikely to notice the password warnings printed when they log in, or if they seldom log in -- we can use Net::SMTP, which provides easy commands for connecting to the mail server and sending the messages.

use Net::SMTP;

The Sys::Hostname module will allow us to easily include the name of the system on which the password is expiring in the email without hard-coding this in the script.

use Sys::Hostname;

Next, we'll set up the name of the mail server to which all of the email notifications will be sent and the name of the local system:

my $ServerName = "mail.myorg.org";
my $host = hostname();

Assuming we want all the notifications to appear to be coming from the same email address, we assign an address:

my $MailFrom = "sysadmin\@$host.myorg.org";

Net::SMTP includes a number of commands for connecting to an SMTP server, sending a message and disconnecting. In the loop below, we use one command to make a new connection to the server and exit the script if the connection cannot be made.

if ( $rem <= $soon ) {
    # Connect to the server
    $smtp = Net::SMTP->new($ServerName);
    die "Couldn't connect to server" unless $smtp;

We then send the mail server the email address of the user -- one per loop:

my $MailTo = "$username\@$domain";   # $domain holds your mail domain, e.g. "myorg.org"

Next, we send the sender and recipient information:

$smtp->mail($MailFrom);
$smtp->to($MailTo);

We then construct the notification with the number of days and password expiration date for that particular user:

$dt  = `$dirname/caldate.pl $exp`;
$msg = "Password expiring for $username: $rem days left, expiring on $dt";

Last, we send the message to the mail server and close the connection:

$smtp->data();
$smtp->datasend("Subject: $host password expiring\n\n");  # blank line ends the headers
$smtp->datasend("$msg");
$smtp->dataend();
$smtp->quit();
}

International Dates

As several readers pointed out, not everyone sees the same date when they see something like 10/12/2005. For some readers, this looks like Oct 12th while for others, it's Dec 10th. One way to make dates work for everyone (everyone who speaks English anyway) is to print them in a more obvious format, such as 12 Oct 2005.

To do this in our caldate.pl script, we'll add an array including the 3-letter month abbreviations:

my @month = (q{Jan},q{Feb},q{Mar},q{Apr},q{May},q{Jun},
             q{Jul},q{Aug},q{Sep},q{Oct},q{Nov},q{Dec});

When we are ready to print our date, we can use a command like this:

printf qq{%02d %s %4d\n},$dom,$month[$month],$year;

Putting it all Together

First, the caldate.pl script:

#!/usr/bin/perl -w
# caldate.pl: change a Unix Time date into "dd Mon yyyy" format
use strict;
use Time::Local;

my $inputDate = shift @ARGV;
if ( !defined $inputDate ) {
    print "Usage: $0 <days-since-epoch>\n";
    exit 5;
}

my @month = (q{Jan},q{Feb},q{Mar},q{Apr},q{May},q{Jun},
             q{Jul},q{Aug},q{Sep},q{Oct},q{Nov},q{Dec});

# date cannot exceed 24855 (01/19/2038)
if ( $inputDate > 24855 ) {
    print "Error: Date entered exceeds limits on date calculations\n";
    exit 5;
}

# calculate seconds since the epoch
$inputDate = $inputDate * 86400;

my ($seconds,$mins,$hrs,$dom,$month,$year,$wday,$yday,$isdst) =
    localtime($inputDate);
$year += 1900;   # localtime returns years since 1900

# localtime numbers the months 0-11, which is exactly the index
# the @month array expects, so $month is used unincremented here
# (add 1 to it if you revive the mm/dd/yyyy print below)
# printf qq{%02d/%02d/%4d\n}, $month+1, $dom, $year;

printf qq{%02d %s %4d\n}, $dom, $month[$month], $year;

Next, our daysleft.pl script:

#!/usr/bin/perl -w
# daysleft.pl: mail users whose passwords are expiring soon
use integer;             # do integer math
use Net::SMTP;
use File::Basename;
use Sys::Hostname;

my $host = hostname();
$domain  = "myorg.org";

# get password aging data
@shadow = `cat /etc/shadow`;

# get today in days-since-epoch time
$today = time/86400;
$soon  = 14;             # report passwords expiring within 14 days

my $ServerName = "localhost";
my $MailFrom   = "sysadmin\@$host.$domain";
my $dirname    = dirname($0);

foreach $record ( @shadow ) {
    @shadfield = (split /:/, $record);

    # get fields from shadow record
    $username = $shadfield[0];
    $password = $shadfield[1];
    $lastch   = $shadfield[2];
    $max      = $shadfield[4];

    # skip record if account locked or aging not enabled
    next if ($password eq "*LK*");
    next if ($lastch eq "");
    next if ($max eq "");

    # calculate days remaining and expiration date
    $rem = $lastch + $max - $today;
    $exp = $lastch + $max;

    if ( $rem <= $soon ) {
        # Connect to the server
        $smtp = Net::SMTP->new($ServerName);
        die "Couldn't connect to server" unless $smtp;

        my $MailTo = "$username\@$domain";

        $smtp->mail( $MailFrom );
        $smtp->to( $MailTo );

        $dt  = `$dirname/caldate.pl $exp`;
        $msg = "Password expiring for $username: $rem days left, expiring on $dt";

        $smtp->data();
        $smtp->datasend("Subject: $host password expiring\n\n");  # blank line ends the headers
        $smtp->datasend("$msg");
        $smtp->dataend();
        $smtp->quit();
    }
}
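With the script now mailing its own notifications, the obvious final step is to schedule it. A root crontab entry such as this (the installation path is hypothetical) would check for expiring passwords every morning at 2:00:

0 2 * * * /usr/local/bin/daysleft.pl

Just keep caldate.pl in the same directory, since daysleft.pl looks for it there.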
