Managing Disk Space with LVM in Linux

Bryce Harrington and Kees Cook have come together to write this informative article titled 'Managing Disk Space with LVM', which explains the ins and outs of creating, modifying, and deleting LVM volumes in Linux.

The Linux Logical Volume Manager (LVM) is a mechanism for virtualizing disks. It can create "virtual" disk partitions out of one or more physical hard drives, allowing you to grow, shrink, or move those partitions from drive to drive as your needs change. It also allows you to create larger partitions than you could achieve with a single drive.

Traditional uses of LVM have included databases and company file servers, but even home users may want large partitions for music or video collections, or for storing online backups. LVM and RAID 1 can also be convenient ways to gain redundancy without sacrificing flexibility.

This article looks first at a basic file server, then explains some variations on that theme, including adding redundancy with RAID 1 and some things to consider when using LVM for desktop machines.

LVM Installation

An operational LVM system includes both a kernel component (the device mapper) and userspace utilities. To turn on the kernel component, set up the kernel options as follows:

Device Drivers --> Multi-device support (RAID and LVM)

[*] Multiple devices driver support (RAID and LVM)
< > RAID support
<*> Device mapper support
< > Crypt target support (NEW)

You can usually install the LVM user tools through your Linux distro's packaging system. In Gentoo, the LVM user tools are part of the lvm2 package. Note that you may see tools for LVM-1 as well (perhaps named lvm-user). It doesn't hurt to have both installed, but make sure you have the LVM-2 tools.

LVM Basics

To use LVM, you must understand several elements. First are the regular physical hard drives attached to the computer. The disk space on these devices is chopped up into partitions. Finally, a filesystem is written directly to a partition. By comparison, in LVM, Volume Groups (VGs) are split up into logical volumes (LVs), where the filesystems ultimately reside (Figure 1).

Each VG is made up of a pool of Physical Volumes (PVs). You can extend (or reduce) the size of a Volume Group by adding or removing as many PVs as you wish, provided there are enough PVs remaining to store the contents of all the allocated LVs. As long as there is available space in the VG, you can also grow and shrink the size of your LVs at will (although most filesystems don't like to shrink).
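As a rough sketch of what those operations look like on the command line (the names myvg, mylv, and /dev/sde1 here are placeholders, not part of the example that follows):

# vgextend myvg /dev/sde1           # add another PV to the VG
# lvextend -L +20G /dev/myvg/mylv   # grow an LV by 20GB, space permitting
# pvmove /dev/sde1                  # migrate extents off a PV...
# vgreduce myvg /dev/sde1           # ...then remove it from the VG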

Figure 1. An example LVM layout

Example: A Basic File Server

A simple, practical example of LVM use is a traditional file server, which provides centralized backup, storage space for media files, and shared file space for several family members' computers. Flexibility is a key requirement; who knows what storage challenges next year's technology will bring?

For example, suppose your requirements are:

400G - Large media file storage
 50G - Online backups of two laptops and three desktops (10G each)
 10G - Shared files

Ultimately, these requirements may increase a great deal over the next year or two, but exactly how much and which partition will grow the most are still unknown.

Disk Hardware

Traditionally, a file server uses SCSI disks, but today SATA disks offer an attractive combination of speed and low cost. At the time of this writing, 250 GB SATA drives are commonly available for around $100; for a terabyte, the cost is around $400.

SATA drives are not named like ATA drives (hda, hdb), but like SCSI (sda, sdb). Once the system has booted with SATA support, it has four physical devices to work with:

/dev/sda  251.0 GB
/dev/sdb 251.0 GB
/dev/sdc 251.0 GB
/dev/sdd 251.0 GB

Next, partition these for use with LVM. You can do this with fdisk by specifying the "Linux LVM" partition type 8e. The finished product looks like this:

# fdisk -l /dev/sdd

Disk /dev/sdd: 251.0 GB, 251000193024 bytes
255 heads, 63 sectors/track, 30515 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       30515   245111706   8e  Linux LVM

Notice the partition type is 8e, or "Linux LVM."

Creating a Virtual Volume

Initialize the LVM partition on each disk using the pvcreate command:

# pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

This sets up these partitions for use under LVM, allowing creation of volume groups. To examine available PVs, use the pvdisplay command. This system will use a single volume group named datavg:

# vgcreate datavg /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

Use vgdisplay to see the newly created datavg VG with the four drives stitched together. Now create the logical volumes within it:

# lvcreate --name medialv  --size 400G datavg
# lvcreate --name backuplv --size 50G  datavg
# lvcreate --name sharelv  --size 10G  datavg

Without LVM, you might allocate all available disk space to the partitions you're creating, but with LVM, it is worthwhile to be conservative, allocating only half the available space to the current requirements. As a general rule, it's easier to grow a filesystem than to shrink it, so it's a good strategy to allocate exactly what you need today, and leave the remaining space unallocated until your needs become clearer. This method also gives you the option of creating new volumes when new needs arise (such as a separate encrypted file share for sensitive data). To examine these volumes, use the lvdisplay command.
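To check how much unallocated space remains in the volume group before creating or growing volumes, something like this should do:

# vgdisplay datavg | grep -i free

The "Free PE / Size" line shows what is still available for new or growing LVs.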

Now you have several nicely named logical volumes at your disposal:

/dev/datavg/backuplv     (also /dev/mapper/datavg-backuplv)
/dev/datavg/medialv (also /dev/mapper/datavg-medialv)
/dev/datavg/sharelv (also /dev/mapper/datavg-sharelv)
 

Selecting Filesystems

Now that the devices are created, the next step is to put filesystems on them. However, there are many types of filesystems. How do you choose?

For typical desktop filesystems, you're probably familiar with ext2 and ext3. ext2 was the standard, reliable workhorse for Linux systems in years past. ext3 is an upgrade for ext2 that provides journaling, a mechanism to speed up filesystem checks after a crash. ext3's balance of performance, robustness, and recovery speed makes it a fine choice for general purpose use. Because ext2 and ext3 have been the defaults for such a long time, ext3 is also a good choice if you want great reliability. For storing backups, reliability is much more important than speed. The major downside to ext2/ext3 is that to grow (or shrink) the filesystem, you must first unmount it.
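For instance, once the backup volume created above is formatted as ext3 (as it is below), growing it offline would look roughly like this, using the /var/backup mount point set up later in this article; treat it as a sketch rather than a tested recipe:

# umount /var/backup
# lvextend -L +10G /dev/datavg/backuplv
# e2fsck -f /dev/datavg/backuplv
# resize2fs /dev/datavg/backuplv
# mount /var/backup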

However, other filesystems provide advantages in certain situations, such as very large files, very large numbers of files, or on-the-fly filesystem growth. Because LVM shines precisely in scenarios involving huge numbers of files, extremely large files, or frequent filesystem resizing, the following filesystems are well worth considering.

For large numbers of small files, ReiserFS is an excellent choice. For raw, uncached file I/O, it ranks at the top of most benchmarks, and can be as much as an order of magnitude faster than ext3. Historically, however, it has not proven as robust as ext3. It's been tested enough lately that this may no longer be a significant issue, but keep it in mind.

If you are designing a file server that will contain large files, such as video files recorded by MythTV, then delete speed could be a priority. With ext3 or ReiserFS, your deletes may take several seconds to complete as the filesystem works to mark all of the freed data blocks. If your system is recording or processing video at the same time, this delay could cause dropped frames or other glitches. JFS and XFS are better choices in this situation, although XFS has the edge due to greater reliability and better general performance.

With all these considerations in mind, format the partitions as follows:

# mkfs.ext3 /dev/datavg/backuplv
# mkfs.xfs /dev/datavg/medialv
# mkfs.reiserfs /dev/datavg/sharelv

Mounting

Finally, to mount the file systems, first add the following lines to /etc/fstab:

/dev/datavg/backuplv   /var/backup   ext3       rw,noatime   0 0
/dev/datavg/medialv    /var/media    xfs        rw,noatime   0 0
/dev/datavg/sharelv    /var/share    reiserfs   rw,noatime   0 0

and then establish and activate the mount points:

# mkdir /var/media /var/backup /var/share
# mount /var/media
# mount /var/backup
# mount /var/share

Now your basic file server is ready for service.

Adding Reliability With RAID

So far, this LVM example has been reasonably straightforward. However, it has one major flaw: if any of your drives fail, all of your data is at risk! Half a terabyte is not an insignificant amount to back up, so this is an extremely serious weakness in the design.

To compensate for this risk, build redundancy into the design using RAID 1. RAID, which stands for Redundant Array of Independent Disks, is a low-level technology for combining disks together in various ways, called RAID levels. The RAID 1 design mirrors data across two (or more) disks. In addition to doubling the reliability, RAID 1 adds performance benefits for reads because both drives have the same data, and read operations can be split between them.

Unfortunately, these benefits do not come without a critical cost: the storage size is cut in half. The good news is that half a terabyte is still enough for the present space requirements, and LVM gives the flexibility to add more or larger disks later.

With four drives, RAID 5 is another option. It restores some of the disk space but adds even more complexity. Also, it performs well with reads but poorly with writes. Because hard drives are reasonably cheap, RAID 5's benefits aren't worth the trouble for this example.

Although it would have made more sense to start with RAID, we waited until now to introduce it so we could demonstrate how to migrate from raw disks to RAID disks without unmounting any of the filesystems.

In the end, this design will combine the four drives into two RAID 1 pairs: /dev/sda + /dev/sdd and /dev/sdb + /dev/sdc. The reason for this particular arrangement is that sda and sdd are the primary and secondary drives on separate controllers; this way, if a controller were to die, you could still access the two drives on the alternate controller. When the primary/secondary pairs are used, the relative access speeds are balanced so neither RAID array is slower than the other. There may also be a performance benefit to having accesses evenly distributed across both controllers.

First, pull two of the SATA drives (sdb and sdd) out of the datavg VG:

# modprobe dm-mirror 
# pvmove /dev/sdb1 /dev/sda1
# pvmove /dev/sdd1 /dev/sdc1
# vgreduce datavg /dev/sdb1 /dev/sdd1
# pvremove /dev/sdb1 /dev/sdd1

Then, change the partition type on these two drives to fd (Linux raid autodetect):

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       30515   245111706   fd  Linux raid autodetect

Now, build the RAID 1 mirrors, telling md that the "other half" of each mirror is missing (because it's not ready to be added to the RAID yet):

# mdadm --create /dev/md0 -a -l 1 -n 2 /dev/sdd1 missing
# mdadm --create /dev/md1 -a -l 1 -n 2 /dev/sdb1 missing

Add these broken mirrors to the LVM:

# pvcreate /dev/md0 /dev/md1
# vgextend datavg /dev/md0 /dev/md1

Next, migrate off of the raw disks onto the broken mirrors:

# pvmove /dev/sda1 /dev/md0 
# pvmove /dev/sdc1 /dev/md1
# vgreduce datavg /dev/sda1 /dev/sdc1
# pvremove /dev/sda1 /dev/sdc1

Finally, change the partition types of the raw disks to fd, and get the broken mirrors on their feet with full mirroring:

# fdisk /dev/sda
# fdisk /dev/sdc
# mdadm --manage /dev/md0 --add /dev/sda1
# mdadm --manage /dev/md1 --add /dev/sdc1
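Before trusting the new mirrors, it is worth confirming that the rebuild has finished. Something like the following shows the resync progress and array state:

# cat /proc/mdstat
# mdadm --detail /dev/md0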

That's quite a few steps, but this full RAID 1 setup protects the LVM system without having to reinstall, copy or remount filesystems, or reboot.

 

Network Access of Files

A file server isn't much use if you can't get files off of it. There are many ways to serve files, but the most common and powerful is Network File System (NFS). NFS allows other *nix machines to mount the file shares for direct use. It's also pretty easy to set up on Linux.

First, make sure the file server has NFS enabled in the kernel (2.6.15 in this example):

File systems
Network File Systems

<*> NFS file system support
[*] Provide NFSv3 client support
<*> NFS server support
[*] Provide NFSv3 server support

Rebuild and reinstall the kernel and then reboot the file server. If you'd like to avoid rebooting, build NFS as a module and then load it with modprobe nfsd.

Next, start the NFS service. Your Linux distro will have an init script to do this. For instance, on Gentoo, you'll see:

# /etc/init.d/nfs start
* Starting portmap ... [ ok ]
* Mounting RPC pipefs ... [ ok ]
* Starting NFS statd ... [ ok ]
* Starting NFS daemon ... [ ok ]
* Starting NFS mountd ... [ ok ]

You can double-check that NFS is running by querying portmapper with the command rpcinfo -p | grep nfs:

   program vers proto   port  service
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs

Next, you must specify which directories the NFS service should export. Add the following to /etc/exports:

/var/backup   192.168.0.0/24(rw,sync)
/var/media    192.168.0.0/24(rw,sync)
/var/share    192.168.0.0/24(rw,sync)

This lists the directories to share, the machines (or networks) to permit to mount the files, and a set of options to control how the sharing works. The options include rw to allow read-write mounts and sync to force synchronous behavior. sync prevents data corruption if the server reboots in the middle of a file write, but sacrifices the performance advantages that async would provide.

Next, export these file shares from the NFS service:

# exportfs -av
exporting 192.168.0.0/24:/var/backup
exporting 192.168.0.0/24:/var/media
exporting 192.168.0.0/24:/var/share
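From a client machine, you can confirm the exports are visible before touching fstab (fileserv is the hostname used in the next step):

# showmount -e fileserv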

Now, mount these file shares on each machine that will use them. Assuming the file server is named fileserv, add the following lines to the client machines' /etc/fstab files:

# Device                mountpoint     fs-type   options    dump  fsckorder
fileserv:/var/backup    /var/backup    nfs       defaults   0     0
fileserv:/var/media     /var/media     nfs       defaults   0     0
fileserv:/var/share     /var/share     nfs       defaults   0     0

Finally, create the mountpoints and mount the new shares:

# mkdir /var/backup /var/media /var/share
# mount /var/backup
# mount /var/media
# mount /var/share

Now all the machines on your network have access to large, reliable, and expandable disk space!

Backup Strategies

As you rely more heavily on this new LVM-enabled disk space, you may have concerns about backing it up. RAID protects against basic disk failures, but offers no protection against fire, theft, or accidental deletion of important files.

Traditionally, tape drives are used for backups of this class. This option is still viable and has several advantages, but it can be an expensive and slow solution for a system of this size. Fortunately, there are other options using today's technology.

rsync is a powerful utility for copying files from one system to another, and it works well across the Internet. You could set up a backup system at a friend's house in a different city and arrange to periodically send backups there. This is easy to do with a cron job (the entry must be a single line in the crontab):

04 4 * * 4  rsync --delete -a /var/backup/ fileserv.myfriend.org:/backup/myself/backup > /var/log/crontab.backup.log 2>&1
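For the cron job to run unattended, rsync (which typically runs over SSH for host:path destinations like this) needs to authenticate without a password. One common approach, sketched here, is a dedicated SSH key; adjust the user, key options, and remote host to your setup:

# ssh-keygen -t rsa
# ssh-copy-id fileserv.myfriend.org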

Another approach is to attach a pair of external RAID 1 hard drives to your file server using Firewire, USB, or eSATA. Add one drive to /dev/md0 and the other to /dev/md1. Once the mirroring is complete, remove the drives and store them in a safe place offsite. Re-mirror weekly or monthly, depending on your needs.
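One way to handle that rotation with md, sketched here with a hypothetical external drive that shows up as /dev/sde, is to grow each mirror to three members temporarily, let it sync, and then detach the external member again:

# mdadm --manage /dev/md0 --add /dev/sde1
# mdadm --grow /dev/md0 --raid-devices=3
# cat /proc/mdstat                           # wait for the resync to finish
# mdadm --manage /dev/md0 --fail /dev/sde1
# mdadm --manage /dev/md0 --remove /dev/sde1
# mdadm --grow /dev/md0 --raid-devices=2

Repeat the same steps with the second external drive and /dev/md1.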

 

Growth and Reallocation

Suppose that over the next year, the storage system fills up and needs to be expanded. Initially, you can begin allocating the unallocated space. For instance, to increase the amount of space available for shared files from 10GB to 15GB, run a command such as:

# lvextend -L15G /dev/datavg/sharelv
# resize_reiserfs /dev/datavg/sharelv
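The same pattern works for the other volumes; only the filesystem-specific step changes. For the XFS media volume, for example, the filesystem can be grown while it stays mounted (the 100G figure is just an illustration):

# lvextend -L +100G /dev/datavg/medialv
# xfs_growfs /var/media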

But over time, all the unallocated disk space will be used. One solution is to replace the four 250G drives with larger 800G ones.

In the case where you use RAID 1, migration is straightforward. Use mdadm to mark one drive of each of the RAID 1 mirrors as failed, and then remove them:

# mdadm --manage /dev/md0 --fail /dev/sda1
# mdadm --manage /dev/md0 --remove /dev/sda1
# mdadm --manage /dev/md1 --fail /dev/sdc1
# mdadm --manage /dev/md1 --remove /dev/sdc1

Pull out the sda and sdc hard drives and replace them with two of the new 800G drives. Split each 800G drive into a 250G partition and a 550G partition using fdisk, and add the partitions back to md0 and md1:

# fdisk /dev/sda
# fdisk /dev/sdc
# mdadm --manage /dev/md0 --add /dev/sda1
# mdadm --manage /dev/md1 --add /dev/sdc1

Repeat the process with sdb and sdd, replacing them with the other two new drives, then create a third and fourth RAID device, md2 and md3, using the new space:

# mdadm --create /dev/md2 -a -l 1 -n 2 /dev/sda2 /dev/sdd2
# mdadm --create /dev/md3 -a -l 1 -n 2 /dev/sdb2 /dev/sdc2

Finally, add these to LVM:

# pvcreate /dev/md2 /dev/md3
# vgextend datavg /dev/md2 /dev/md3

The file server now has 1.6TB of fully redundant storage.

LVM and Desktops

So far, we've talked only about LVM and RAID for secondary disk space via a standalone file server, but what if you want to use LVM to manage the space on a regular desktop system? It can work, but there are some considerations to take into account.

First, the installation and upgrade procedures of some Linux distributions don't handle RAID or LVM, which may present complications. Many of today's distros do support them, and even provide tools to assist in creating and managing them, so check this first.

Second, having the root filesystem on LVM can complicate recovery of damaged file systems. Because boot loaders don't support LVM yet, you must also have a non-LVM /boot partition (though it can be on a RAID 1 device).

Third, you need some spare unallocated disk space for the new LVM partition. If you don't have this, use parted to shrink your existing root partition, as described in the LVM HOWTO.

For this example, assume your swap space and /boot partitions are already set up outside of LVM on their own partitions. The focus here is on moving the root filesystem onto a new logical volume carved out of /dev/hda4. Check that the partition type of hda4 is Linux LVM (type 8e).

Initialize LVM and create a new physical volume:

# vgscan
# pvcreate /dev/hda4
# vgcreate rootvg /dev/hda4

Now create a 5G logical volume and format it with an XFS filesystem:

# lvcreate --name rootlv --size 5G rootvg
# mkfs.xfs /dev/rootvg/rootlv

Copy the files from the existing root file system to the new LVM one:

# mkdir /mnt/new_root
# mount /dev/rootvg/rootlv /mnt/new_root
# cp -ax /. /mnt/new_root/

Next, modify /etc/fstab to mount / on /dev/rootvg/rootlv instead of /dev/hda3.
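Since the new root volume was formatted as XFS above, the new fstab entry would look something like this (the options shown are only an example):

/dev/rootvg/rootlv   /    xfs    defaults   0 0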

The trickiest part is to rebuild your initrd to include LVM support. This tends to be distro-specific, but look for mkinitrd or yaird. Your initrd image must have the LVM modules loaded or the root filesystem will not be available. To be safe, leave your original initrd image alone and make a new one named, for example, /boot/initrd-lvm.img.
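On a Red Hat or Fedora style system, for example, the invocation is roughly the following; check your distro's documentation, since option names and defaults differ:

# mkinitrd /boot/initrd-lvm.img $(uname -r)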

Finally, update your bootloader. Add a new section for your new root filesystem, duplicating your original boot stanza. In the new copy, change the root from /dev/hda3 to /dev/rootvg/rootlv, and change your initrd to the newly built one. If you use lilo, be sure to run lilo once you've made the changes. For example, with grub, if you have:

title=Linux
root (hd0,0)
kernel /vmlinuz root=/dev/hda3 ro single
initrd /initrd.img

add a new section such as:

title=LinuxLVM

root (hd0,0)
kernel /vmlinuz root=/dev/rootvg/rootlv ro single
initrd /initrd-lvm.img

Conclusion

LVM is only one of many enterprise technologies in the Linux kernel that have become available to regular users. LVM provides a great deal of flexibility with disk space, and combined with RAID 1, NFS, and a good backup strategy, it lets you build a bulletproof, easily managed way to store, share, and preserve any quantity of files.


Replace a bad disk in a mirrored volume group (AIX)

 

1. unmirrorvg workvg hdisk7
2. reducevg workvg hdisk7
Note: if reducevg reports errors about LV copies still residing on the disk, migrate those copies off hdisk7 first.

3. rmdev -l hdisk7 -d

Replace the disk drive and let the new drive be configured as hdisk7 again.

4. extendvg workvg hdisk7
5. mirrorvg workvg hdisk7

Note: By default in this example, mirrorvg will try to create two copies of each logical volume in workvg, placing the new mirrors on the replaced disk drive.
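To confirm the mirroring completed, something like the following should show each logical volume in workvg with twice as many PPs as LPs:

# lsvg -l workvg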


Access Control Lists (ACLs) on AIX

 
We talk a lot about security and building secure systems, so here is a brief description of how to set ACLs on an AIX system. You can also find more on setting ACLs in my previous article.

There is an interactive way to do this, but here I'll work only with files.

1. Create or identify the file on which to set ACLs:
# touch acltest2

2. Get the ACLs that currently exist on the file:
# ls -l acltest2
-rw-r--r-- 1 root system 0 Jul 18 17:15 acltest2
# aclget acltest2
*
* ACL_type AIXC
*
attributes:
base permissions
owner(root): rw-
group(system): r--
others: r--
extended permissions
disabled

3. Now collect the current ACL settings into an output file called acldefs:
# aclget -o acldefs acltest2

4. Edit the acldefs file and make whatever changes you need. For reference, the entries use the following format:
Base permissions
AIXC ACL specific base permissions are the traditional file-access modes assigned to the file
owner, file group, and other users. The access modes are read (r), write (w), and execute/search
(x).
Note: AIXC ACL type Base Permissions will be same as the file mode bits stored in the file
system object’s inode headers. That is, the information in base mode bits is same as the
value returned by file system when stat is performed on the file system object.
In an access control list, base permissions are in the following format, with the Mode parameter
expressed as rwx (with a hyphen (-) replacing each unspecified permission):
base permissions:
owner(name): Mode
group(group): Mode
others: Mode

Attributes
Three attributes can be added to an access control list:
setuid (SUID)
Set-user-ID mode bit. This attribute sets the effective and saved user IDs of the process to
the owner ID of the file on execution.
setgid (SGID)
Set-group-ID mode bit. This attribute sets the effective and saved group IDs of the process
to the group ID of the file on execution.
savetext (SVTX)
Saves the text in a text file format.
The above attributes are added to an ACL in the following format:
attributes: SUID, SGID, SVTX

Extended permissions
AIXC ACL extended permissions allow the owner of a file to more precisely define access to that
file. Extended permissions modify the base file permissions (owner, group, others) by permitting,
denying, or specifying access modes for specific individuals, groups, or user and group
combinations. Permissions are modified through the use of keywords.
The permit, deny, and specify keywords are defined as follows:
permit
Grants the user or group the specified access to the file.
deny
Restricts the user or group from using the specified access to the file.
specify
Precisely defines the file access for the user or group.
If a user is denied a particular access by either a deny or a specify keyword, no other entry can override that access denial.
The enabled keyword must be specified in the ACL for the extended permissions to take effect.
The default value is the disabled keyword.
In an AIXC ACL, extended permissions are in the following format:
extended permissions:
enabled | disabled
permit Mode UserInfo...:
deny Mode UserInfo...:
specify Mode UserInfo...:

Use a separate line for each permit, deny, or specify entry. The Mode parameter is expressed as
rwx (with a hyphen (-) replacing each unspecified permission). The UserInfo parameter is
expressed as u:UserName, or g:GroupName, or a comma-separated combination of u:UserName and
g:GroupName.
Note: If more than one user name is specified in an entry, that entry cannot be used in an access
control decision because a process has only one user ID.
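Putting that format together, an edited acldefs that keeps the base permissions but also grants read/write access to one extra user might look like this (the user name john is only an example):

*
* ACL_type   AIXC
*
attributes:
base permissions
owner(root): rw-
group(system): r--
others: r--
extended permissions
enabled
permit   rw-   u:john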

5. After making the necessary ACL permission changes, save the file; it will be used as the input in the next step.

6. Now apply the ACL to another file, using the changes you made:

# aclput -i acldefs acltest3

This puts the ACLs on the file with the values specified in the acldefs file. Now you can experiment with ACLs.

 

Note: To copy ACLs from one file to another:

# aclget filename | aclput filename1

 

That's all for ACLs for now. I will cover NFSv4 ACLs on AIX in a later article.


Access Control Lists on Solaris

All the commands here are run from a user prompt (%), not a root prompt (#); the lines beginning with # are part of the command output, not root commands.


The ACL facility allows you to define more than just the usual eight permission bits for a file or directory. You can define a list of users (based on user-id or name) and groups (again, number or name) that you want to have access to a file. For each user or group getting special access, you can define read, write, or execute access permission.

There are only two commands that you need to learn for Solaris ACLs. They are setfacl for setting a file's ACLs and getfacl for reading them. There are also a bunch of system and library calls that make the ACL facility available to programs. One confusing aspect of ACLs is that, in essence, every file already has an ACL entry. Running getfacl on a normal file reveals some ACL information:

% cd /usr/tmp
% touch foo
% ls -l foo
-rw-r--r-- 1 pbg staff 0 Jul 22 13:35 foo

% getfacl foo
# file: foo
# owner: pbg
# group: staff
user::rw-
group::r-- #effective:r--
mask:rwx
other:r--

This ACL information is merely getfacl's interpretation of the Unix permissions on the file. The user, group and other information is a straightforward display of the permission bits for those fields. The mask field is very similar to the Unix umask method. It defines the maximum permissions allowed for users (other than the owner) and groups. Even if a user or group has permissions set that exceed the mask, the mask limits their access. The #effective display shows, for each user (except the owner) and group, the effect that the mask has on the permissions. The #effective output is the one to look at to determine exactly who can access the file and exactly what they are allowed to do.

To set an ACL for a file, use the command setfacl:

% setfacl -m user:jeff:rw- foo

% ls -l foo
-rw-r--r--+ 1 pbg staff 0 Jul 22 13:52 foo

% getfacl foo

# file: foo
# owner: pbg
# group: staff
user::rw-
user:jeff:rw- #effective:r--
group::r-- #effective:r--
mask:r--
other:r--

The -m option tells setfacl that I want to modify the ACLs for the file. Use the -s option to set the entire mode, but then you must type in the user, group, and other access bits as well:

% setfacl -s user::rw-,group::r--,other:---,mask:rw-,user:jeff:rw- foo

To set general user, group, and other permissions, use the field::perms identifier. To set ACLs for individual users and groups, use the field:uid:perms or field:gid:perms identifier.
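For instance, granting read-only access to a group (the group name admins is just an example) uses the same form:

% setfacl -m group:admins:r-- foo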

But back to our previous example. Notice that the effective access for user Jeff is unchanged; he can still only read the file, not write to it. That's the result of the mask being applied to his permissions. To grant Jeff the desired access, I need to:

% setfacl -m mask:rw- foo
% getfacl foo

# file: foo
# owner: pbg
# group: staff
user::rw-
user:jeff:rw- #effective:rw-
group::r-- #effective:r--
mask:rw-
other:r--

Now Jeff has read and write permissions to the file, while all others have only read access. Of note is the slight change in behavior of the ls command. Any file with specific ACL information is shown with a + at the end of the permission field. Unfortunately, find doesn't seem to have an option to find all files with ACL lists.
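One crude workaround, sketched here, is to scan a long listing for the trailing +:

% find . -exec ls -ld {} \; | awk '$1 ~ /[+]$/'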

As well as setting an ACL on a directory, you can set a default ACL for it. This default ACL is used to set the ACL on every file created within the directory. The only way I managed to get directory ACLs to work was using the -s option with a very long parameter string:

% setfacl -s user::rwx,group::rw-,mask:r--,other:rw-,default:user::rw-,\
default:group::r-x,default:mask:rwx,default:other:r-x bar

% ls -ld bar
drwxr--rw-+ 2 pbg staff 512 Jul 22 14:11 bar

% getfacl bar

# file: bar
# owner: pbg
# group: staff
user::rwx
group::rw- #effective:r--
mask:r--
other:rw-
default:user::rw-
default:group::r-x
default:mask:rwx
default:other:r-x

Now set a default ACL, and create a file in the directory:

% setfacl -m default:user:jeff:rwx bar

% getfacl bar

# file: bar
# owner: pbg
# group: staff
user::rwx
group::rw- #effective:r--
mask:r--
other:rw-
default:user::rw-
default:user:jeff:rwx
default:group::r-x
default:mask:rwx
default:other:r-x

% touch bar/test

% getfacl bar/test

# file: bar/test
# owner: pbg
# group: staff
user::rw-
user:jeff:rwx #effective:r--
group::r-- #effective:r--
mask:r--
other:r--

There are several other aspects of ACLs, including deleting ACLs and using abbreviations and permission bit numbers (rather than symbols). This information is provided on the appropriate manual pages.

To use ACLs over an NFS mount, both the client and server must be running Solaris 2.5 or better. If the client is running 2.5 but the server is running 2.4 or lower, you'll see an error such as:

% touch foo
% getfacl foo

# file: foo
# owner: pbg
# group: staff
user::rw-
group::r-- #effective:r--
mask:rwx
other:r--

% setfacl -m user:jeff:rw- foo
foo: failed to set acl entries
setacl error: Operation not applicable

You'll get a similar error if you try to use ACLs in a swapfs-based directory (such as /tmp). Finally, there's a "non-feature" of ACLs when used with tar. tar itself works well with files that have associated ACLs. Unfortunately, the tar file is not readable under previous SunOS and Solaris operating systems.

It is also important to note that ACLs "stick" to a file during copy and rename operations. To remove the ACL from a file use setfacl -d for each entry. When the last entry is removed, the "+" disappears from the file's ls display.
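For example, removing the entry added earlier for user jeff would look something like this:

% setfacl -d user:jeff foo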




