Save a lot of typing with these handy bash features you won't find in an old-fashioned UNIX shell.
bash, or the Bourne again shell, is the default shell in most Linux distributions. The popularity of the bash shell amongst Linux and UNIX users is no accident. It has many features to enhance user-friendliness and productivity. Unfortunately, you can't take advantage of those features unless you know they exist.
When I first started using Linux, the only bash feature I took advantage of was going back through the command history using the up arrow. I soon learned additional features by watching others and asking questions. In this article, I'd like to share some bash tricks I've learned over the years.
This article isn't meant to cover all of the features of the bash shell; that would require a book, and plenty of books are available that cover this topic, including Learning the bash Shell from O'Reilly and Associates. Instead, this article is a summary of the bash tricks I use most often and would be lost without.
Brace Expansion
My favorite bash trick definitely is brace expansion. Brace expansion takes a list of strings separated by commas and expands those strings into separate arguments for you. The list is enclosed by braces, the symbols { and }, and there should be no spaces around the commas. For example:
$ echo {one,two,red,blue}
one two red blue
Using brace expansion as illustrated in this simple example doesn't offer too much to the user. In fact, the above example requires typing two more characters than simply typing:
echo one two red blue
which produces the same result. However, brace expansion becomes quite useful when the brace-enclosed list occurs immediately before, after or inside another string:
$ echo {one,two,red,blue}fish
onefish twofish redfish bluefish
$ echo fish{one,two,red,blue}
fishone fishtwo fishred fishblue
$ echo fi{one,two,red,blue}sh
fionesh fitwosh firedsh fibluesh
Notice that there are no spaces inside the braces or between the braces and the adjoining strings. If you include spaces, it breaks things:
$ echo {one, two, red, blue }fish
{one, two, red, blue }fish
$ echo "{one,two,red,blue} fish"
{one,two,red,blue} fish
However, you can use spaces if they're enclosed in quotes outside the braces or within an item in the comma-separated list:
$ echo {"one ","two ","red ","blue "}fish
one fish two fish red fish blue fish
$ echo {one,two,red,blue}" fish"
one fish two fish red fish blue fish
You also can nest braces, but you must use some caution here too:
$ echo {{1,2,3},1,2,3}
1 2 3 1 2 3
$ echo {{1,2,3}1,2,3}
11 21 31 2 3
Now, after all these examples, you might be thinking to yourself, "Gee, those are great parlor tricks, but why should I care about brace expansion?"
Brace expansion becomes useful when you need to make a backup of a file. This is why it's my favorite shell trick. I use it almost every day when I need to make a backup of a config file before changing it. For example, if I'm making a change to my Apache configuration, I can do the following and save some typing:
$ cp /etc/httpd/conf/httpd.conf{,.bak}
Notice that there is no character between the opening brace and the first comma. It's perfectly acceptable to do this and is useful when adding characters to an existing filename or when one argument is a substring of the other. Then, if I need to see what changes I made later in the day, I use the diff command and reverse the order of the strings inside the braces:
$ diff /etc/httpd/conf/httpd.conf{.bak,}
1050a1051
> # I added this comment earlier
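Nesting becomes practical when you need to create several related directories at once. As one more illustration (the directory names here are made up for the example), a single mkdir command can build a small project tree:
$ mkdir -p project/{src/{lib,bin},doc,tmp}
This expands to mkdir -p project/src/lib project/src/bin project/doc project/tmp.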
Command Substitution
Another bash trick I like to use is command substitution. To use command substitution, enclose any command that generates output to standard output inside parentheses and precede the opening parenthesis with a dollar sign, $(command). Command substitution is useful when assigning a value to a variable. This is typical in shell scripts, where a common operation is to assign the date or time to a variable. It also is handy for using the output of one command as an argument to another command. If you want to assign the date to a variable, you can do this:
$ date +%d-%b-%Y
12-Mar-2004
$ today=$(date +%d-%b-%Y)
$ echo $today
12-Mar-2004
I often use command substitution to get information about several RPM packages at once. If I want a listing of all the files from all the RPM packages that have httpd in the name, I simply execute the following:
$ rpm -ql $(rpm -qa | grep httpd)
The inner command, rpm -qa | grep httpd, lists all the packages that have httpd in the name. The outer command, rpm -ql, lists all the files in each package.
Now, those of you who have experience with the Bourne shell might point out that you could perform command substitution by surrounding a command with back quotes, also called back-ticks. Using Bourne-style command substitution, the date assignment from above becomes:
$ today2=`date +%d-%b-%Y`
$ echo $today2
12-Mar-2004
There are two important advantages to using the newer bash-style syntax for command substitution. First, it can be nested more easily: because the opening and closing symbols are different, the inner symbols don't need to be escaped with backslashes. Second, it is easier to read, especially when nested.
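To see the difference, here is the same nested substitution written both ways. This is only a sketch; it prints the directory containing the bash binary (typically /bin on a Linux system), but any nested command would do. Note how the backtick version needs backslashes around the inner command:
$ echo $(dirname $(which bash))
/bin
$ echo `dirname \`which bash\``
/bin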
Even on Linux, where bash is standard, you still encounter shell scripts that use the older, Bourne-style syntax. This is done to provide portability to various flavors of UNIX that do not always have bash available but do have the Bourne shell. bash is backward-compatible with the Bourne shell, so it can understand the older syntax.
Redirecting Standard Error
Have you ever looked for a file using the find command, only to learn the file you were looking for is lost in a sea of permission denied error messages that quickly fill your terminal window?
If you are the administrator of the system, you can become root and execute find again as root. Because root can read any file, you don't get that error anymore. Unfortunately, not everyone has root access on the system being used. Besides, it's bad practice to be root unless it's absolutely necessary. So what can you do?
One thing you can do is redirect your output to a file. Basic output redirection should be nothing new to anyone who has spent a reasonable amount of time using any UNIX or Linux shell, so I won't go into detail regarding the basics of output redirection. To save the useful output from the find command, you can redirect the output to a file:
$ find / -name foo > output.txt
You still see the error messages on the screen but not the path of the file you're looking for. Instead, that is placed in the file output.txt. When the find command completes, you can cat the file output.txt to get the location(s) of the file(s) you want.
That's an acceptable solution, but there's a better way. Instead of redirecting the standard output to a file, you can redirect the error messages to a file. This can be done by placing a 2 directly in front of the redirection angle bracket. If you are not interested in the error messages, you simply can send them to /dev/null:
$ find / -name foo 2> /dev/null
This shows you the location of file foo, if it exists, without those pesky permission denied error messages. I almost always invoke the find command in this way.
The number 2 represents the standard error stream, which is where most commands send their error messages. Normal (non-error) output is sent to standard output, represented by the number 1. Because most of the output you want to capture goes to standard output, redirection operates on the standard output stream by default. This makes the following two commands equivalent:
$ find / -name foo > output.txt
$ find / -name foo 1> output.txt
Sometimes you might want to save both the error messages and the standard output to a file. This often is done with cron jobs, when you want to save all the output to a log file. One way is to direct both output streams to the same file:
$ find / -name foo > output.txt 2> output.txt
This appears to work, but because output.txt is opened twice, the two streams keep separate positions in the file and can overwrite each other's output. There's a better way: you can tie the standard error stream to the standard output stream using an ampersand. Once you do that, the error messages go wherever you redirect the standard output:
$ find / -name foo > output.txt 2>&1
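This pattern is exactly what you want in the crontab entries mentioned above. As a sketch (the script and log file paths are made up for illustration), a nightly job might look like this:
0 2 * * * /usr/local/bin/nightly-backup.sh > /var/log/nightly-backup.log 2>&1
Everything the script prints, errors included, ends up in the log file instead of being mailed to you by the cron daemon.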
One caveat is that the order matters: bash processes redirections from left to right, so 2>&1 must come after the redirection of standard output. And, when piping the output to another command, the tying operation goes at the end of the command generating the output, before the pipe. This line works as expected:
$ find -name test.sh 2>&1 | tee /tmp/output2.txt
but this line doesn't:
$ find -name test.sh | tee /tmp/output2.txt 2>&1
and neither does this one:
$ find -name test.sh 2>&1 > /tmp/output.txt
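To see why that last example fails, remember that bash handles redirections from left to right: 2>&1 is processed while standard output still points at the terminal, so the error messages go to the screen and only the normal output reaches the file. Reversing the order, shown here as a quick sketch, does what you probably intended:
$ find -name test.sh > /tmp/output.txt 2>&1
Standard output is redirected to the file first, and 2>&1 then sends standard error to the same place.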
I started this discussion on output redirection using the find command as an example, and all the examples used the find command. This discussion isn't limited to the output of find, however. Many other commands can generate enough error messages to obscure the one or two lines of output you need.
Output redirection isn't limited to bash, either. Any Bourne-style shell, including sh, ksh and zsh, supports output redirection using the same syntax.
Searching the Command History
One of the greatest features of the bash shell is command history, which lets you step back through past commands with the up and down arrow keys. This is fine if the command you want to repeat is within the last 10-20 commands you executed, but it becomes tedious when the command is 75-100 commands back in your history.
To speed things up, you can search interactively through your command history by pressing Ctrl-R. After doing this, your prompt changes to:
(reverse-i-search)`':
Start typing a few letters of the command you're looking for, and bash shows you the most recent command that contains the string you've typed so far. What you type is shown between the ` and ' in the prompt. In the example below, I typed in htt:
(reverse-i-search)`htt': rpm -ql $(rpm -qa | grep httpd)
This shows that the most recent command I typed containing the string htt is:
rpm -ql $(rpm -qa | grep httpd)
To execute that command again, I can press Enter. If I want to edit it, I can press the left or right arrow key. This places the command on the command line at a normal prompt, and I now can edit it as if I just typed it in. This can be a real time saver for commands with a lot of arguments that are far back in the command history.
Using for Loops from the Command Line
One last tip I'd like to offer is using loops from the command line. The command line is not the place to write complicated scripts that include multiple loops or branching. For small loops, though, it can be a great time saver. Unfortunately, I don't see many people taking advantage of this. Instead, I frequently see people use the up arrow key to go back in the command history and modify the previous command for each iteration.
If you are not familiar with creating for loops or other types of loops, many good books on shell scripting discuss this topic. A discussion on for loops in general is an article in itself.
You can write loops interactively in two ways. The first way, and the method I prefer, is to separate each line with a semicolon. A simple loop to make a backup copy of all the files in a directory would look like this:
$ for file in * ; do cp "$file" "$file.bak"; done
(Quoting $file protects any filenames that happen to contain spaces.)
Another way to write loops is to press Enter after each line instead of inserting a semicolon. bash recognizes that you are creating a loop from the use of the for keyword, and it prompts you for the next line with a secondary prompt. It knows you are done when you enter the keyword done, signifying that your loop is complete:
$ for file in *
> do cp "$file" "$file.bak"
> done
And Now for Something Completely Different
When I originally conceived this article, I was going to name it "Stupid bash Tricks", and show off some unusual, esoteric bash commands I've learned. The tone of the article has changed since then, but there is one stupid bash trick I'd like to share.
About five years ago, a Linux system I was responsible for ran out of memory. Even simple commands, such as ls, failed with an insufficient memory error. The obvious solution to this problem was simply to reboot. One of the other system administrators wanted to look at a file that may have held clues to the problem, but he couldn't remember the exact name of the file. We could switch to different directories, because the cd command is part of bash, but we couldn't get a list of the files, because even ls would fail. To get around this problem, the other system administrator created a simple loop to show us the files in the directory:
$ for file in *; do echo $file; done
This worked when ls wouldn't, because echo is part of the bash shell and already was loaded into memory. It's an interesting solution to an unusual problem. Now, can anyone suggest a way to display the contents of a file using only bash built-ins?
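One possible answer, offered only as a sketch (the filename is just an example), is to let the shell itself open the file with input redirection and loop over its lines with the read built-in, because while, read and echo all are part of bash:
$ while IFS= read -r line; do echo "$line"; done < /var/log/messages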