The 10 Commands we never use.

It takes years, maybe decades, to master the commands available at the Linux shell prompt. Here are 10 you may never have heard of or used. They are in no particular order. My favorite is mkfifo.

  1. pgrep, instead of:
    # ps -ef | egrep '^root ' | awk '{print $2}'
    1
    2
    3
    4
    5
    20
    21
    38
    39
    ...

    You can do this:

    # pgrep -u root
    1
    2
    3
    4
    5
    20
    21
    38
    39
    ...
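
    pgrep can also filter further and, via its companion pkill, signal the matches instead of printing them. A quick hedged example (the process names and the PID shown are just illustrative):

    # pgrep -l -u root sshd
    2788 sshd
    # pkill -HUP -u root sshd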
  2. pstree lists the processes in a tree format. This can be VERY useful when working with WebSphere or other heavy-duty applications.
    # pstree
    init-+-acpid
    |-atd
    |-crond
    |-cups-config-dae
    |-cupsd
    |-dbus-daemon-1
    |-dhclient
    |-events/0-+-aio/0
    | |-kacpid
    | |-kauditd
    | |-kblockd/0
    | |-khelper
    | |-kmirrord
    | `-2*[pdflush]
    |-gpm
    |-hald
    |-khubd
    |-2*[kjournald]
    |-klogd
    |-kseriod
    |-ksoftirqd/0
    |-kswapd0
    |-login---bash
    |-5*[mingetty]
    |-portmap
    |-rpc.idmapd
    |-rpc.statd
    |-2*[sendmail]
    |-smartd
    |-sshd---sshd---bash---pstree
    |-syslogd
    |-udevd
    |-vsftpd
    |-xfs
    `-xinetd
  3. bc is an arbitrary-precision calculator language, which is great. I found it useful because it can perform square-root operations in shell scripts; expr does not support square roots.
    # ./sqrt
    Usage: sqrt number
    # ./sqrt 64
    8
    # ./sqrt 132112
    363
    # ./sqrt 1321121321
    36347

    Here is the script:

    # cat sqrt
    #!/bin/bash
    # Print the integer square root of the number supplied as the only argument.
    if [ $# -ne 1 ]
    then
        echo 'Usage: sqrt number'
        exit 1
    else
        echo -e "sqrt($1)\nquit\n" | bc -q -i
    fi
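
    bc also handles decimals if you set its scale variable, which controls how many digits are kept after the decimal point. For example:

    # echo "scale=10; 22/7" | bc
    3.1428571428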
  4. split, have a large file that you need to split into smaller chunks? A mysqldump, maybe? split is your command. Below, I split a 250MB file into 2-megabyte chunks, all starting with the prefix LF_.
    # ls -lh largefile
    -rw-r--r-- 1 root root 251M Feb 19 10:27 largefile
    # split -b 2m largefile LF_
    # ls -lh LF_* | head -n 5
    -rw-r--r-- 1 root root 2.0M Feb 19 10:29 LF_aa
    -rw-r--r-- 1 root root 2.0M Feb 19 10:29 LF_ab
    -rw-r--r-- 1 root root 2.0M Feb 19 10:29 LF_ac
    -rw-r--r-- 1 root root 2.0M Feb 19 10:29 LF_ad
    -rw-r--r-- 1 root root 2.0M Feb 19 10:29 LF_ae
    # ls -lh LF_* | wc -l
    126
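
    To reassemble the chunks, just concatenate them; the shell expands LF_* in the same lexical order that split used to name them. A quick check (the copy's name is arbitrary):

    # cat LF_* > largefile.copy
    # cmp largefile largefile.copy && echo identical
    identical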
  5. nl numbers lines. I had a script doing this for me for years until I found out about nl.
    # head wireless.h
    /*
    * This file define a set of standard wireless extensions
    *
    * Version : 20 17.2.06
    *
    * Authors : Jean Tourrilhes - HPL
    * Copyright (c) 1997-2006 Jean Tourrilhes, All Rights Reserved.
    */

    #ifndef _LINUX_WIRELESS_H
    # nl wireless.h | head
    1 /*
    2 * This file define a set of standard wireless extensions
    3 *
    4 * Version : 20 17.2.06
    5 *
    6 * Authors : Jean Tourrilhes - HPL
    7 * Copyright (c) 1997-2006 Jean Tourrilhes, All Rights Reserved.
    8 */

    9 #ifndef _LINUX_WIRELESS_H
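
    Note that nl skips blank lines by default, which is why the blank line above is left unnumbered. If you want every line numbered, blank or not, pass -ba:

    # nl -ba wireless.h | head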
  6. mkfifo is the coolest one. Sure you know how to create a pipeline piping the output of grep to less or maybe even perl. But do you know how to make two commands communicate through a named pipe?

    First let me create the pipe and start writing to it (note that the writer blocks until a reader opens the pipe, so background it or use a second terminal):

    mkfifo pipe; tail file > pipe &

    Then read from it:

    cat pipe
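
    A slightly more practical sketch, reusing the pipe we just made (the log file path is only an example): feed a growing log through the pipe to a reader started afterwards:

    tail -f /var/log/messages > pipe &
    grep sshd pipe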
  7. ldd, want to know which Linux thread library java is linked to?
    # ldd /usr/java/jre1.5.0_11/bin/java
    libpthread.so.0 => /lib/tls/libpthread.so.0 (0x00bd4000)
    libdl.so.2 => /lib/libdl.so.2 (0x00b87000)
    libc.so.6 => /lib/tls/libc.so.6 (0x00a5a000)
    /lib/ld-linux.so.2 (0x00a3c000)
  8. col, want to save man pages as plain text?
    # export PAGER=cat
    # man less | col -b > less.txt
  9. xmlwf, need to know if an XML document is well-formed? (A configuration file, maybe...)
    # curl -s 'http://bashcurescancer.com' > bcc.html
    # xmlwf bcc.html
    # perl -i -pe 's@<br/>@<br>@g' bcc.html
    # xmlwf bcc.html
    bcc.html:104:2: mismatched tag
  10. lsof lists open files. You can do all kinds of cool things with this, like finding which ports are open:
    # lsof | grep TCP
    portmap 2587 rpc 4u IPv4 5544 TCP *:sunrpc (LISTEN)
    rpc.statd 2606 root 6u IPv4 5585 TCP *:668 (LISTEN)
    sshd 2788 root 3u IPv6 5991 TCP *:ssh (LISTEN)
    sendmail 2843 root 4u IPv4 6160 TCP badhd:smtp (LISTEN)
    vsftpd 9337 root 3u IPv4 34949 TCP *:ftp (LISTEN)
    cupsd 16459 root 0u IPv4 41061 TCP badhd:ipp (LISTEN)
    sshd 16892 root 3u IPv6 61003 TCP badhd.mshome.net:ssh->kontiki.mshome.net:4661 (ESTABLISHED)

    Or find the number of open files a user has. Very important for running big applications like Oracle, DB2, or WebSphere:

    # lsof | grep ' root ' | awk '{print $NF}' | sort | uniq | wc -l
    179
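
    Note that lsof can also do the filtering itself, which beats grepping its full output. The -i option selects network files and -u selects a user, so the two examples above can be written (assuming a reasonably recent lsof) as:

    # lsof -iTCP -sTCP:LISTEN
    # lsof -u root | awk '{print $NF}' | sort -u | wc -l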



Bash Tips and Tricks

Save a lot of typing with these handy bash features you won't find in an old-fashioned UNIX shell.

bash, or the Bourne again shell, is the default shell in most Linux distributions. The popularity of the bash shell amongst Linux and UNIX users is no accident. It has many features to enhance user-friendliness and productivity. Unfortunately, you can't take advantage of those features unless you know they exist.

When I first started using Linux, the only bash feature I took advantage of was going back through the command history using the up arrow. I soon learned additional features by watching others and asking questions. In this article, I'd like to share some bash tricks I've learned over the years.

This article isn't meant to cover all of the features of the bash shell; that would require a book, and plenty of books are available that cover this topic, including Learning the bash Shell from O'Reilly and Associates. Instead, this article is a summary of the bash tricks I use most often and would be lost without.

Brace Expansion

My favorite bash trick definitely is brace expansion. Brace expansion takes a list of strings separated by commas and expands those strings into separate arguments for you. The list is enclosed by braces, the symbols { and }, and there should be no spaces around the commas. For example:

$ echo {one,two,red,blue}
one two red blue

Using brace expansion as illustrated in this simple example doesn't offer too much to the user. In fact, the above example requires typing two more characters than simply typing:

echo one two red blue

which produces the same result. However, brace expansion becomes quite useful when the brace-enclosed list occurs immediately before, after or inside another string:

$ echo {one,two,red,blue}fish
onefish twofish redfish bluefish
$ echo fish{one,two,red,blue}
fishone fishtwo fishred fishblue
$ echo fi{one,two,red,blue}sh
fionesh fitwosh firedsh fibluesh

Notice that there are no spaces inside the braces or between the braces and the adjoining strings. If you include spaces, it breaks things:

$ echo {one, two, red, blue }fish
{one, two, red, blue }fish
$ echo "{one,two,red,blue} fish"
{one,two,red,blue} fish

However, you can use spaces if they're enclosed in quotes outside the braces or within an item in the comma-separated list:

$ echo {"one ","two ","red ","blue "}fish
one fish two fish red fish blue fish
$ echo {one,two,red,blue}" fish"
one fish two fish red fish blue fish

You also can nest braces, but you must use some caution here too:

$ echo {{1,2,3},1,2,3}
1 2 3 1 2 3
$ echo {{1,2,3}1,2,3}
11 21 31 2 3

Now, after all these examples, you might be thinking to yourself, "Gee, those are great parlor tricks, but why should I care about brace expansion?"

Brace expansion becomes useful when you need to make a backup of a file. This is why it's my favorite shell trick. I use it almost every day when I need to make a backup of a config file before changing it. For example, if I'm making a change to my Apache configuration, I can do the following and save some typing:

$ cp /etc/httpd/conf/httpd.conf{,.bak}

Notice that there is no character between the opening brace and the first comma. It's perfectly acceptable to do this and is useful when adding characters to an existing filename or when one argument is a substring of the other. Then, if I need to see what changes I made later in the day, I use the diff command and reverse the order of the strings inside the braces:

$ diff /etc/httpd/conf/httpd.conf{.bak,}
1050a1051
> # I added this comment earlier
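
Brace expansion is also handy for creating several directories at once; newer versions of bash additionally accept ranges such as {1..3}. The directory names here are arbitrary:

$ mkdir -p project/{src,doc,tmp}
$ ls project
doc  src  tmp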

Command Substitution

Another bash trick I like to use is command substitution. To use command substitution, enclose any command that generates output to standard output inside parentheses and precede the opening parenthesis with a dollar sign, $(command). Command substitution is useful when assigning a value to a variable. This is typical in shell scripts, where a common operation is to assign the date or time to a variable. It also is handy for using the output of one command as an argument to another command. If you want to assign the date to a variable, you can do this:

$ date +%d-%b-%Y
12-Mar-2004
$ today=$(date +%d-%b-%Y)
$ echo $today
12-Mar-2004

I often use command substitution to get information about several RPM packages at once. If I want a listing of all the files from all the RPM packages that have httpd in the name, I simply execute the following:

$ rpm -ql $(rpm -qa | grep httpd)

The inner command, rpm -qa | grep httpd, lists all the packages that have httpd in the name. The outer command, rpm -ql, lists all the files in each package.

Now, those of you who have experience with the Bourne shell might point out that you could perform command substitution by surrounding a command with back quotes, also called back-ticks. Using Bourne-style command substitution, the date assignment from above becomes:

$ today2=`date +%d-%b-%Y`
$ echo $today2
12-Mar-2004

There are two important advantages to using the newer bash-style syntax for command substitution. First, it can be nested more easily. Because the opening and closing symbols are different, the inner symbols don't need to be escaped with back slashes. Second, it is easier to read, especially when nested.
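
Here is a small illustration of the nesting advantage. With the $() syntax the inner command needs no escaping, while the back-tick form requires backslashes (the output assumes your current directory is /tmp):

$ echo $(basename $(pwd))
tmp
$ echo `basename \`pwd\``
tmp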

Even on Linux, where bash is standard, you still encounter shell scripts that use the older, Bourne-style syntax. This is done to provide portability to various flavors of UNIX that do not always have bash available but do have the Bourne shell. bash is backward-compatible with the Bourne shell, so it can understand the older syntax.

Redirecting Standard Error

Have you ever looked for a file using the find command, only to learn the file you were looking for is lost in a sea of permission denied error messages that quickly fill your terminal window?

If you are the administrator of the system, you can become root and execute find again as root. Because root can read any file, you don't get that error anymore. Unfortunately, not everyone has root access on the system being used. Besides, it's bad practice to be root unless it's absolutely necessary. So what can you do?

One thing you can do is redirect your output to a file. Basic output redirection should be nothing new to anyone who has spent a reasonable amount of time using any UNIX or Linux shell, so I won't go into detail regarding the basics of output redirection. To save the useful output from the find command, you can redirect the output to a file:

$ find /  -name foo > output.txt

You still see the error messages on the screen but not the path of the file you're looking for. Instead, that is placed in the file output.txt. When the find command completes, you can cat the file output.txt to get the location(s) of the file(s) you want.

That's an acceptable solution, but there's a better way. Instead of redirecting the standard output to a file, you can redirect the error messages to a file. This can be done by placing a 2 directly in front of the redirection angle bracket. If you are not interested in the error messages, you simply can send them to /dev/null:

$ find /  -name foo 2> /dev/null

This shows you the location of file foo, if it exists, without those pesky permission denied error messages. I almost always invoke the find command in this way.

The number 2 represents the standard error output stream. Standard error is where most commands send their error messages. Normal (non-error) output is sent to standard output, which can be represented by the number 1. Because most redirected output is the standard output, output redirection works only on the standard output stream by default. This makes the following two commands equivalent:

$ find / -name foo > output.txt
$ find / -name foo 1> output.txt

Sometimes you might want to save both the error messages and the standard output to file. This often is done with cron jobs, when you want to save all the output to a log file. This also can be done by directing both output streams to the same file:

$ find / -name foo > output.txt 2> output.txt

This works after a fashion, but because the two redirections keep separate file offsets, the streams can overwrite each other. The better way is to tie the standard error stream to the standard output stream using an ampersand. Once you do this, the error messages go to wherever you redirect the standard output:


$ find / -name foo > output.txt 2>&1

One caveat about doing this is that the tying operation goes at the end of the command generating the output. This is important if piping the output to another command. This line works as expected:


$ find -name test.sh 2>&1 | tee /tmp/output2.txt

but this line doesn't:


$ find -name test.sh | tee /tmp/output2.txt 2>&1

and neither does this one:

$ find -name test.sh 2>&1 > /tmp/output.txt

The reason is that redirections are processed from left to right: 2>&1 sends standard error to wherever standard output points at that moment. The pipe in the first command is set up before the redirections are applied, so standard error follows standard output into the pipe; in the last command, standard error is tied to the terminal before standard output is moved to the file.

I started this discussion on output redirection using the find command as an example, and all the examples used the find command. This discussion isn't limited to the output of find, however. Many other commands can generate enough error messages to obscure the one or two lines of output you need.

Output redirection isn't limited to bash, either. All Bourne-style UNIX/Linux shells support output redirection using the same syntax.

Searching the Command History

One of the greatest features of the bash shell is command history, which makes it easy to recall past commands by moving up and down through your history with the arrow keys. This is fine if the command you want to repeat is within the last 10-20 commands you executed, but it becomes tedious when the command is 75-100 commands back in your history.

To speed things up, you can search interactively through your command history by pressing Ctrl-R. After doing this, your prompt changes to:

(reverse-i-search)`':

Start typing a few letters of the command you're looking for, and bash shows you the most recent command that contains the string you've typed so far. What you type is shown between the ` and ' in the prompt. In the example below, I typed in htt:

(reverse-i-search)`htt': rpm -ql $(rpm -qa | grep httpd)

This shows that the most recent command I typed containing the string htt is:

rpm -ql $(rpm -qa | grep httpd)

To execute that command again, I can press Enter. If I want to edit it, I can press the left or right arrow key. This places the command on the command line at a normal prompt, and I now can edit it as if I just typed it in. This can be a real time saver for commands with a lot of arguments that are far back in the command history.

Using for Loops from the Command Line

One last tip I'd like to offer is using loops from the command line. The command line is not the place to write complicated scripts that include multiple loops or branching. For small loops, though, it can be a great time saver. Unfortunately, I don't see many people taking advantage of this. Instead, I frequently see people use the up arrow key to go back in the command history and modify the previous command for each iteration.

If you are not familiar with creating for loops or other types of loops, many good books on shell scripting discuss this topic. A discussion on for loops in general is an article in itself.

You can write loops interactively in two ways. The first way, and the method I prefer, is to separate each line with a semicolon. A simple loop to make a backup copy of all the files in a directory would look like this:

$ for file in * ; do cp "$file" "$file.bak"; done

Another way to write loops is to press Enter after each line instead of inserting a semicolon. bash recognizes that you are creating a loop from the use of the for keyword, and it prompts you for the next line with a secondary prompt. It knows you are done when you enter the keyword done, signifying that your loop is complete:

$ for file in *
> do cp "$file" "$file.bak"
> done

And Now for Something Completely Different

When I originally conceived this article, I was going to name it "Stupid bash Tricks", and show off some unusual, esoteric bash commands I've learned. The tone of the article has changed since then, but there is one stupid bash trick I'd like to share.

About five years ago, a Linux system I was responsible for ran out of memory. Even simple commands, such as ls, failed with an insufficient memory error. The obvious solution to this problem was simply to reboot. One of the other system administrators wanted to look at a file that may have held clues to the problem, but he couldn't remember the exact name of the file. We could switch to different directories, because the cd command is part of bash, but we couldn't get a list of the files, because even ls would fail. To get around this problem, the other system administrator created a simple loop to show us the files in the directory:

$ for file in *; do echo "$file"; done

This worked when ls wouldn't, because echo is a part of the bash shell, so it already is loaded into memory. It's an interesting solution to an unusual problem. Now, can anyone suggest a way to display the contents of a file using only bash built-ins?
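
One possible answer, using only the read and echo built-ins together with input redirection (the -r keeps any backslashes in the file intact):

$ while IFS= read -r line; do echo "$line"; done < /etc/hosts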


Remote Logins - Telnet

An answer, found in Linux Gazette, to a question on remote logins and su.

Q. I am running Red Hat Linux 6.1 and am encountering some problems. I can log in as root from the console but not from anywhere else; I have to log in as webmaster on all the other machines on the network. From nowhere, including the console, can I su once logged in as webmaster. Any help would be appreciated.

Ans.:
SSH, in any of its implementations, should allow you to access your system through cryptographically secured authentication and session protocols that protect you from a variety of sniffing, spoofing, TCP-hijacking and other vulnerabilities that are common with other forms of remote shell access (such as telnet, and the infamous rsh and rlogin packages).

If you really insist on eliminating these policies from your system you can edit files under /etc/pam.d that are used to configure the options and restrictions of the programs that are compiled against the PAM (pluggable authentication modules) model and libraries. Here's an example of one of them (/etc/pam.d/login which is used by the in.telnetd service):

#
# The PAM configuration file for the Shadow `login' service
#
# NOTE: If you use a session module (such as kerberos or NIS+)
# that retains persistent credentials (like key caches, etc), you
# need to enable the `CLOSE_SESSIONS' option in /etc/login.defs
# in order for login to stay around until after logout to call
# pam_close_session() and cleanup.
#

# Outputs an issue file prior to each login prompt (Replaces the
# ISSUE_FILE option from login.defs). Uncomment for use
# auth required pam_issue.so issue=/etc/issue

# Disallows root logins except on tty's listed in /etc/securetty
# (Replaces the `CONSOLE' setting from login.defs)
auth requisite pam_securetty.so

# Disallows other than root logins when /etc/nologin exists
# (Replaces the `NOLOGINS_FILE' option from login.defs)
auth required pam_nologin.so

# This module parses /etc/environment (the standard for setting
# environ vars) and also allows you to use an extended config
# file /etc/security/pam_env.conf.
# (Replaces the `ENVIRON_FILE' setting from login.defs)
auth required pam_env.so

# Standard Un*x authentication. The "nullok" line allows passwordless
# accounts.
auth required pam_unix.so nullok

# This allows certain extra groups to be granted to a user
# based on things like time of day, tty, service, and user.
# Please uncomment and edit /etc/security/group.conf if you
# wish to use this.
# (Replaces the `CONSOLE_GROUPS' option in login.defs)
# auth optional pam_group.so

# Uncomment and edit /etc/security/time.conf if you need to set
# time restraints on logins.
# (Replaces the `PORTTIME_CHECKS_ENAB' option from login.defs
# as well as /etc/porttime)
# account requisite pam_time.so

# Uncomment and edit /etc/security/access.conf if you need to
# set access limits.
# (Replaces /etc/login.access file)
# account required pam_access.so

# Standard Un*x account and session
account required pam_unix.so
session required pam_unix.so

# Sets up user limits, please uncomment and read /etc/security/limits.conf
# to enable this functionality.
# (Replaces the use of /etc/limits in old login)
# session required pam_limits.so

# Prints the last login info upon successful login
# (Replaces the `LASTLOG_ENAB' option from login.defs)
session optional pam_lastlog.so

# Prints the motd upon successful login
# (Replaces the `MOTD_FILE' option in login.defs)
session optional pam_motd.so

# Prints the status of the user's mailbox upon successful login
# (Replaces the `MAIL_CHECK_ENAB' option from login.defs). You
# can also enable a MAIL environment variable from here, but it
# is better handled by /etc/login.defs, since userdel also uses
# it to make sure that removing a user, also removes their mail
# spool file.
session optional pam_mail.so standard noenv

# The standard Unix authentication modules, used with NIS (man nsswitch) as
# well as normal /etc/passwd and /etc/shadow entries. For the login service,
# this is only used when the password expires and must be changed, so make
# sure this one and the one in /etc/pam.d/passwd are the same. The "nullok"
# option allows users to change an empty password, else empty passwords are
# treated as locked accounts.
#
# (Add `md5' after the module name to enable MD5 passwords the same way that
# `MD5_CRYPT_ENAB' would do under login.defs).
#
# The "obscure" option replaces the old `OBSCURE_CHECKS_ENAB' option in
# login.defs. Also the "min" and "max" options enforce the length of the
# new password.

password required pam_unix.so nullok obscure min=4 max=8

# Alternate strength checking for password. Note that this
# requires the libpam-cracklib package to be installed.
# You will need to comment out the password line above and
# uncomment the next two in order to use this.
# (Replaces the `OBSCURE_CHECKS_ENAB', `CRACKLIB_DICTPATH')
#
# password required pam_cracklib.so retry=3 minlen=6 difok=3
# password required pam_unix.so use_authtok nullok md5

This is from Debian machine (mars.starshine.org) and thus has far more comments (all those lines starting with "#" hash marks) than those that Red Hat installs. It's good that Debian comments these files so verbosely, since that's practically the only source of documentation for PAM files and modules.

In this case the entry that you really care about is the one for pam_securetty.so. This module checks the file /etc/securetty, which is classically a list of those terminals on which your system will allow direct root logins.

You could comment out this line in /etc/pam.d/login to disable this check for those services which call the /bin/login command. You can look for similar lines in the various other /etc/pam.d files to see which other services are enforcing this policy.

This leads us to the question of why your version of 'su' is not working. Red Hat's version of 'su' is probably also "PAMified" (almost certainly, in fact). So there should be a /etc/pam.d/su file that controls the list of policies that your copy of 'su' is checking. You should look through that to see why 'su' isn't allowing your 'webmaster' account to become 'root'.

It seems quite likely that your version of Red Hat contains a line something like:

# Uncomment this to force users to be a member of group root
# before they can use `su'. You can also add "group=foo"
# to the end of this line if you want to use a group other
# than the default "root".
# (Replaces the `SU_WHEEL_ONLY' option from login.defs)
auth required pam_wheel.so

Classically the 'su' commands on most versions of UNIX required that a user be in the "wheel" group in order to attain 'root'. The traditional GNU implementation did not enforce this restriction (since rms found it distasteful).

On my system this line was commented out (which is presumably the Debian default policy, since I never fussed with that file on my laptop). I've uncommented it here for this example.

Note that one of the features of PAM is that it allows you to specify any group using a command line option. It defaults to "wheel" because that is an historical convention. You can also use the pam_wheel.so module on any of the PAMified services --- so you could have programs like 'ftpd' or 'xdm' enforce a policy that restricted their use to members of arbitrary groups.

Finally note that most recent versions of SSH have PAM support enabled when they are compiled for Linux systems. Thus you may find, after you install any version of SSH, that you have an /etc/pam.d/ssh file. You may have to edit that to set some of your preferred SSH policies. There is also an sshd_config file (mine's in /etc/ssh/sshd_config) that will allow you to control other ssh options.

In general, the process of using ssh works something like this:

  1. Install the sshd (daemon) package on your servers (the systems that you want to access).
  2. Install the ssh client package on your clients (the systems from which you'd like to initiate your connections).
  3. Generate Host keys on all of these systems (normally done for you by the installation).

.... you could stop at this point, and just start using the ssh and slogin commands to access your remote accounts using their passwords. However, for more effective and convenient use you'd also:

  1. Generate personal key pairs for your accounts.
  2. Copy/append the identity.pub (public) keys from each of your client accounts into the ~/.ssh/authorized_keys files on each of the servers.

This allows you to access those remote accounts without using your passwords on them. (Actually sshd can be configured to require the passwords AND/OR the identity keys, but the default is to allow access without a password if the keys work).
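
With a current OpenSSH (the identity.pub naming above dates from SSH protocol 1), those two steps look roughly like this; ssh-copy-id is a helper script shipped with modern OpenSSH, and user@server is a placeholder:

$ ssh-keygen -t rsa            # generate a personal key pair, optionally passphrase-protected
$ ssh-copy-id user@server      # append your public key to ~/.ssh/authorized_keys on the server
$ ssh user@server              # later logins can now authenticate with the key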

Another element you should be aware of is the "passphrases" and the ssh-agent. Basically it is normal to protect your private key with a passphrase. This is sort of like a password --- but it is used to decrypt or "unlock" your private key. Obviously there isn't much added convenience if you protect your private key with a passphrase so that you have to type that every time you use an ssh/slogin or scp (secure remote copy) command.

ssh-agent allows you to start a shell or other program, unlock your identity key (or keys), and have all of the ssh commands you run from any of the descendants of that shell or program automatically use any of those unlocked keys. (The advantage of this is that the agent automatically dies when you exit the shell program that you started. That automatically "locks" the identity --- sort of.)
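
In practice that looks something like the following; the -s flag forces Bourne-shell output suitable for eval:

$ eval `ssh-agent -s`          # start the agent and load its environment variables
$ ssh-add                      # unlock your identity key; prompts for the passphrase once
$ ssh user@server              # this and later ssh commands in this shell use the unlocked key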

There are a lot of other aspects to ssh. It can be used to create tunnels through which one can run all sorts of traffic. People have created PPP/TCP/IP tunnels that run through ssh tunnels to support custom VPNs (virtual private networks). When run under X, ssh automatically performs "X11 forwarding" through one of these tunnels. This is particularly handy for running X clients on remote systems beyond a NAT (IP masquerading) router or through a proxying firewall.
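
Two common examples, with host names and ports as placeholders: -L sets up a local port forward, and -X requests X11 forwarding:

$ ssh -L 8080:localhost:80 user@gateway    # local port 8080 now tunnels to port 80 on gateway
$ ssh -X user@server xterm                 # run a remote X client over the forwarded X11 channel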

In other words ssh is a very useful package quite apart from its support for cryptographic authentication and encryption.

In fairness I should point out that there are a number of alternatives to ssh. Kerberos is a complex and mature suite of protocols for performing authentication and encryption. STEL is a simple daemon/client package which functions just like telnetd/telnet, but with support for encrypted sessions. And there are SSL-enabled versions of the telnet and ftp daemons and clients.


How do I lock out a user after a set number of login attempts?

The PAM (Pluggable Authentication Module) module pam_tally keeps track of unsuccessful login attempts and disables user accounts when a preset limit is reached. This is often referred to as account lockout.

To lock out a user after 4 attempts, two entries need to be added in the /etc/pam.d/system-auth file:

auth        required        /lib/security/$ISA/pam_tally.so onerr=fail no_magic_root
account     required        /lib/security/$ISA/pam_tally.so deny=3 no_magic_root reset

The options used above are described below:

  • onerr=fail
    If something strange happens, such as being unable to open the file, this determines how the module should react.
  • no_magic_root
    This indicates that if the module is invoked by a user with uid=0, the counter is still incremented. The sysadmin should use this for daemon-launched services, like telnet/rsh/login.
  • deny=3
    The deny=3 option is used to deny access if the tally for this user exceeds 3.
  • reset
    The reset option instructs the module to reset the count to 0 on successful entry.

See below for a complete example of implementing this type of policy:

auth        required      /lib/security/$ISA/pam_env.so
auth        required      /lib/security/$ISA/pam_tally.so onerr=fail no_magic_root
auth        sufficient    /lib/security/$ISA/pam_unix.so likeauth nullok
auth        required      /lib/security/$ISA/pam_deny.so
account     required      /lib/security/$ISA/pam_unix.so
account     required      /lib/security/$ISA/pam_tally.so deny=5 no_magic_root reset
password    requisite     /lib/security/$ISA/pam_cracklib.so retry=3
password    sufficient    /lib/security/$ISA/pam_unix.so nullok use_authtok md5 shadow
password    required      /lib/security/$ISA/pam_deny.so
session     required      /lib/security/$ISA/pam_limits.so
session     required      /lib/security/$ISA/pam_unix.so

For more detailed information on the PAM system please see the documentation contained under /usr/share/doc/pam-

For information on how to unlock a user who has exceeded the deny tally, see the additional Knowledgebase articles on unlocking a user account and viewing failed logins with the faillog command.
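
As a sketch of that last step (the user name here is just an example), the faillog command from the shadow utilities can both show and reset the tally:

# faillog -u webmaster        # show the failure count for this user
# faillog -u webmaster -r     # reset the count, unlocking the account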

contributed by David Robinson


Good Morning.

Well, today is Monday, the most painful day of the week, and unfortunately it rained today. I don't know why or for what good reason this happened, but it made this the worst day.
