YOU CAN’T GET THERE FROM HERE

When tools fail you

I am reminded of a joke concerning a visitor to a large city who asked a local for directions. The local responded, “You can’t get there from here.” Sometimes that is the case when using prepackaged tools: they just don’t do exactly what you want, and there is no easy way to bend them to your will.

Recently, in one of my forensics classes at the university where I teach, there was a technical issue with one of the commercial tools we use. In an attempt to salvage the rest of my 75-minute class period, I turned to Autopsy. It is far from being a bad tool, and it has some nice features. One of the things it supports is filters: you can filter files by size, type, etc. What you cannot do, however, is combine these filters. This is just one simple example of something that is extremely easy with a database, but if you only have a prepackaged tool, “You can’t get there from here.”

Based on our live analysis of the subject system from PFE, we know that the attack most likely occurred during the month of March. We also see that the john account was used in some way during the attack. As noted earlier in this chapter, this account has administrative privileges. We can combine these facts to examine only files accessed and modified from March onwards that are owned by john or johnn (user IDs 1000 and 1001, respectively). All that is required is a few additions to the where clause in our query, which now reads:

select accessdate, accesstime, filename, permissions, username
  from files, users
  where files.userid = users.uid
    and modifydate > date('2015-03-01')
    and accessdate > date('2015-03-01')
    and (files.userid = 1000 or files.userid = 1001)
  order by accessdate desc, accesstime desc;

We could have matched on username via the users table instead (a sketch of that variant appears below), but it is a bit easier to filter on the numeric user IDs. This query ran in 0.13 seconds on my laptop and returned only 480 rows, a reduction of over 867,000 records. This allows you to eliminate the noise and home in on the relevant information such as that shown in Figure 6.14 and Figure 6.15.
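For illustration, a hypothetical username-based version of the same filter (equivalent on this system, where john and johnn map to UIDs 1000 and 1001) might read:

select accessdate, accesstime, filename, permissions, username
  from files, users
  where files.userid = users.uid
    and modifydate > date('2015-03-01')
    and accessdate > date('2015-03-01')
    and (username = 'john' or username = 'johnn')
  order by accessdate desc, accesstime desc;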

FIGURE 6.14

Evidence of a rootkit download.

FIGURE 6.15

Evidence of logging in to a bogus account. Note that the modified files for the john account suggest that the attacker initially logged in with this account, switched to the johnn account as a test, and then logged off.

CREATING A TIMELINE

As we said previously, making a proper timeline with access, modification, and creation times intertwined is not easy with a simple spreadsheet. It is quite easily done with our database, however. The shell script below (which is primarily just a SQL script) creates a new timeline table in the database that allows us to build timelines quickly and easily.

#!/bin/bash
#
# create-timeline.sh
#
# Simple shell script to create a timeline in the database.
#
# Developed for PentesterAcademy by
# Dr. Phil Polstra (@ppolstra)

usage () {
  echo "usage: $0 <database>"
  echo "Simple script to create a timeline in the database"
  exit 1
}

if [ $# -lt 1 ] ; then
  usage
fi

cat << EOF | mysql $1 -u root -p
create table timeline (
  Operation char(1),
  Date date not null,
  Time time not null,
  recno bigint not null
);
insert into timeline (Operation, Date, Time, recno)
  select "A", accessdate, accesstime, recno from files;
insert into timeline (Operation, Date, Time, recno)
  select "M", modifydate, modifytime, recno from files;
insert into timeline (Operation, Date, Time, recno)
  select "C", createdate, createtime, recno from files;
EOF

There is one technique in this script that requires explaining, as it has not been used thus far in this book. The relevant line is cat << EOF | mysql $1 -u root -p. This construct will cat (type out) everything from the following line until the string after << (which is ‘EOF’ in our case) is encountered. All of these lines are then piped to mysql, which is run against the passed-in database ($1) as user root, who must supply a password.
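As a minimal sketch of the here document construct on its own (no database required), consider the following, which prints 2:

# everything between << EOF and the terminator line EOF
# becomes standard input for the command after the pipe
cat << EOF | wc -l
first line
second line
EOF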

Looking at the SQL in this script, we see that a table is created containing a one-character operation code, a date, a time, and a record number. After the table is created, three insert statements are executed to insert access, modification, and creation timestamps into the table. Note that recno in the timeline table is the primary key from the files table. Now that we have a table with all three timestamps, a timeline can be quickly and easily created. This script ran in under two seconds on my laptop.

For convenience I have created a shell script that accepts a database and a starting date and then builds a timeline. This script also uses the technique that was new in the last script. Note that you can change the format string for the str_to_date function in this script if you prefer something other than the standard US date format.

#!/bin/bash
#
# print-timeline.sh
#
# Simple shell script to print a timeline.
#
# Developed for PentesterAcademy by
# Dr. Phil Polstra (@ppolstra)

usage () {
  echo "usage: $0 <database> <starting date>"
  echo "Simple script to get timeline from the database"
  exit 1
}

if [ $# -lt 2 ] ; then
  usage
fi

cat << EOF | mysql $1 -u root -p
select Operation, timeline.date, timeline.time,
    filename, permissions, userid, groupid
  from files, timeline
  where timeline.date >= str_to_date("$2", "%m/%d/%Y")
    and files.recno = timeline.recno
  order by timeline.date desc, timeline.time desc;
EOF

At this point I should probably remind you that the timestamps in our timeline could have been altered by a sophisticated attacker. We will learn how to detect these alterations later in this book. Even an attacker that knows to alter these timestamps might miss a few files here and there that will give you insight into what has transpired.

The script above was run with a starting date of March 1, 2015. Recall from our live analysis that some commands such as netstat and lsof failed, which led us to believe the system might be infected with a rootkit. The highlighted section in Figure 6.16 shows the Xing Yi Quan rootkit was downloaded into the john user’s Downloads directory at 23:00:08 on 2015-03-05. As can be observed in the highlighted portion of Figure 6.17, the compressed archive that was downloaded was extracted at 23:01:10 on the same day.

FIGURE 6.16

Evidence showing the download of a rootkit.

FIGURE 6.17

Evidence of a rootkit compressed archive being uncompressed.

It appears that the attacker logged off and did not return until March 9. At that time he or she seems to have read the rootkit README file using more and then built the rootkit. Evidence to support this can be found in Figure 6.18. It is unclear why the attacker waited several days before building and installing the rootkit. Looking at the README file on the target system suggests an inexperienced attacker. There were 266 matches for the search string “xingyi” in the timeline file. The rootkit appears to have been run repeatedly. This could have been due to a system crash, reboot, or attacker inexperience.

FIGURE 6.18

Evidence showing a rootkit being built and installed.

We have really just scratched the surface of what we can do with a couple of database tables full of metadata. You can make up queries to your heart’s content. We will now move on to other common things you might wish to examine while your image is mounted.

EXAMINING BASH HISTORIES

During our live response we used a script to extract users’ bash command histories. Here we will do something similar except that we will use the filesystem image. We will also optionally import the results directly into a database. The script to do all this follows.

#!/bin/bash
#
# get-histories.sh
#
# Simple script to get all user bash history files and
# optionally store them in a database.
# by Dr. Phil Polstra (@ppolstra) as developed for
# PentesterAcademy.com.

usage () {
  echo "usage: $0 <image mount point> [database name]"
  echo "Simple script to get user histories and \
optionally store them in the database"
  exit 1
}

if [ $# -lt 1 ] ; then
  usage
fi

# find only files named .bash_history
# print the filename and then each line for all files found
olddir=$(pwd)
cd $1
find home -type f -regextype posix-extended \
  -regex "home/[a-zA-Z.]+(/.bash_history)" \
  -exec awk '{ print "{};" $0 }' {} \; \
  | tee /tmp/histories.csv
# repeat for the admin user
find root -type f -regextype posix-extended \
  -regex "root(/.bash_history)" \
  -exec awk '{ print "{};" $0 }' {} \; \
  | tee -a /tmp/histories.csv
cd $olddir

if [ $# -gt 1 ] ; then
  chown mysql:mysql /tmp/histories.csv
  cat << EOF | mysql $2 -u root -p
create table if not exists histories (
  historyFilename varchar(2048) not null,
  historyCommand varchar(2048) not null,
  recno bigint not null auto_increment,
  primary key(recno)
);
load data infile "/tmp/histories.csv" into table histories
fields terminated by ';' enclosed by '"'
lines terminated by '\n';
EOF
fi

Back in Chapter 3, our live response script simply displayed a banner, typed out the history file contents, and displayed a footer. That format will not work if we wish to import the results into a spreadsheet and/or database. To get output that is more easily imported, we use awk.

Some readers may be unfamiliar with awk. It was created at Bell Labs in the 1970s by Alfred Aho, Peter Weinberger, and Brian Kernighan. Its name comes from the first letters of the authors’ surnames. Awk is a text processing language. The most common use of awk in scripts is the printing of positional fields in a line of text.

Simple awk usage is best learned by examples. For example, the command echo "one two three" | awk '{ print $1 $3 }' will print “onethree”. By default, fields are separated by whitespace in awk. The three -exec clauses for the find command in the script presented in Chapter 3 have been replaced with the single clause -exec awk '{ print "{};" $0 }' {} \;. The $0 in this awk command refers to an entire line. This prints the filename followed by a semicolon and then each line from the file.
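As a quick sketch of the same idea outside of find (the history file path here is only an assumed example), awk’s built-in FILENAME variable produces similarly prefixed output for a single file:

# print each line of the file prefixed with its name and a semicolon
awk '{ print FILENAME ";" $0 }' /home/john/.bash_history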

The database code is new if we compare this script to the similar one in Chapter 3, but it is straightforward and uses techniques previously discussed. This script also changes the owner and group of the output histories.csv file to mysql, which avoids any complications when loading the file into the database. Partial results from running this script against our PFE subject system are shown in Figure 6.19.

FIGURE 6.19

Extracting bash command histories from the image file.

Once the histories are loaded into the database, they are easily displayed using select * from histories order by recno;. This gives all user histories. Realize that each account’s history is presented in order for that user, but there is no way to tell when any of these commands was executed. The proper query to display the bash history for a single user is select historyCommand from histories where historyFilename like '%<username>%' order by recno;.

The results of running the query select historyCommand from histories where historyFilename like '%.johnn%' order by recno; are shown in Figure 6.20. From this history we can see that the bogus johnn user ran w to see who else was logged in and what command they last executed, typed out the password file, and switched to two user accounts that should not have login privileges.

FIGURE 6.20

Bash command history for a bogus account created by an attacker. Note that the commands being run are also suspicious.

Several interesting commands from the john account’s bash history are shown in Figure 6.21. It can be seen that this user created the johnn account, copied /bin/true to /bin/false, created passwords for whoopsie and lightdm, copied /bin/bash to /bin/false, edited the group file, moved the johnn user’s home directory from /home/johnn to /home/.johnn (which made the directory hidden), edited the password file, displayed the man page for sed, used sed to modify the password file, and installed a rootkit. Copying /bin/bash to /bin/false was likely done to allow system accounts to log in. This might also be one source of the constant “System problem detected” popup messages.

FIGURE 6.21

Evidence of multiple actions by an attacker using the john account.

EXAMINING SYSTEM LOGS

We might want to have a look at various system log files as part of our investigation. These files are located under /var/log. As we discussed previously, some of these logs are in subdirectories and others in the main /var/log directory. With a few exceptions these are text logs. Some have archives of the form .n, where n is an integer, and older archives may be compressed with gzip. This leads to log files such as syslog, syslog.1, syslog.2.gz, syslog.3.gz, etc. being created.
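For instance, a listing such as the following (the mount point shown is hypothetical) makes the rotation scheme easy to see:

# list the current syslog and its rotated archives
cd /media/part0/var/log && ls syslog*
# typical output: syslog  syslog.1  syslog.2.gz  syslog.3.gz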

A script very similar to one from Chapter 3 allows us to capture log files for our analysis. As with the script from the earlier chapter, we will only capture the current log. If it appears that archived logs might be relevant to the investigation they can always be obtained from the image later. Our script follows.

#!/bin/bash
#
# get-logfiles.sh
#
# Simple script to get all logs and optionally
# store them in a database.
# Warning: This script might take a long time to run!
# by Dr. Phil Polstra (@ppolstra) as developed for
# PentesterAcademy.com.

usage () {
  echo "usage: $0 <image mount point> [database name]"
  echo "Simple script to get log files and"
  echo "optionally store them to a database."
  exit 1
}

if [ $# -lt 1 ] ; then
  usage
fi

# remove old file if it exists
if [ -f /tmp/logfiles.csv ] ; then
  rm /tmp/logfiles.csv
fi

# find only files; exclude files with numbers as they are old logs
# print the filename and then each line for all files found
olddir=$(pwd)
cd $1/var
find log -type f -regextype posix-extended \
  -regex 'log/[a-zA-Z.]+(/[a-zA-Z.]+)*' \
  -exec awk '{ print "{};" $0 }' {} \; \
  | tee -a /tmp/logfiles.csv
cd $olddir

if [ $# -gt 1 ] ; then
  chown mysql:mysql /tmp/logfiles.csv
  clear
  echo "Let's put that in the database"
  cat << EOF | mysql $2 -u root -p
create table if not exists logs (
  logFilename varchar(2048) not null,
  logentry varchar(2048) not null,
  recno bigint not null auto_increment,
  primary key(recno)
);
load data infile "/tmp/logfiles.csv" into table logs
fields terminated by ';' enclosed by '"'
lines terminated by '\n';
EOF
fi

There are no techniques used in this script that have not been discussed earlier in this book. Running it against the PFE subject system yields 74,832 entries in our database from 32 log files. Some of these results are shown in Figure 6.22.

FIGURE 6.22

Partial results of importing log files into the database.

Recall that these logs fall into three basic categories: some have absolutely no time information, others give seconds since boot, and still others give proper dates and times. Because of this it is normally not possible to build a unified timeline of log entries. The general syntax for a query of a single log file is select logentry from logs where logFilename like '%<logname>%' order by recno;, e.g., select logentry from logs where logFilename like '%auth%' order by recno;. Partial results from this query are shown in Figure 6.23. Notice that the creation of the bogus johnn user and modifications to the lightdm and whoopsie accounts are clearly shown in this screenshot.

FIGURE 6.23

Evidence of the attacker’s actions from log files.

If you are uncertain what logs have been imported, the query select distinct logfilename from logs; will list all of the log files captured. If you are not sure what kind of information is in a particular log, run a query. One of the nice things about this method is that it is so quick and easy to look at any of the logs without having to navigate a maze of directories.

Several of these logs, such as apt/history.log, apt/term.log, and dpkg.log, provide information on what has been installed via standard methods. It is quite possible that even a savvy attacker might not cover their tracks in all of the relevant log files. It is certainly worth a few minutes of your time to browse through a sampling of these logs.

EXAMINING LOGINS AND LOGIN ATTEMPTS

As discussed in the previous section, most of the system logs are text files. Two exceptions to this norm are the btmp and wtmp binary files which store failed logins and login session information, respectively. Earlier in this book, when we were talking about live response, we introduced the last and lastb commands which display information from wtmp and btmp, respectively.

Like all good Linux utilities, these two commands support a number of command line options. The command last -Faiwx will produce a full listing (-F), display the remote host in the last column (-a) as an IP address (-i), use the wide format (-w), and include extra information (-x), such as when a user changed the run level. Running this command will provide information contained within the current wtmp file only. What if you want to view older information, perhaps because the current file is only a couple of days old? For this and other reasons, last allows you to specify a file using the -f option.
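As a sketch (the mount point /media/part0 is only an assumed example), the current and archived files inside a mounted image can be queried directly:

# logins recorded in the image's current wtmp
last -Faiwx -f /media/part0/var/log/wtmp
# logins from the most recent archive
last -Faiwx -f /media/part0/var/log/wtmp.1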

The results of running last against the current and most recent archive wtmp are shown in Figure 6.24. This is a good example of why you should look at the archived wtmp (and btmp) files as well. The current wtmp file contains only three days of information, but the archive file has an additional month of data.

FIGURE 6.24

Running the last command on the current and most recent archive wtmp files.

Not surprisingly, we can create a script that will import the logins and failed login attempts into our database. Because these files tend to be smaller than some other logs and they can contain valuable information, the script presented here loads not only the current files but also any archives. A few new techniques can be found in the script that follows.

#!/bin/bash
#
# get-logins.sh
#
# Simple script to get all successful and unsuccessful
# login attempts and optionally store them in a database.
# by Dr. Phil Polstra (@ppolstra) as developed for
# PentesterAcademy.com.

usage () {
  echo "usage: $0 <image mount point> [database name]"
  echo "Simple script to get logs of successful "
  echo "and unsuccessful logins."
  echo "Results may be optionally stored in a database"
  exit 1
}

if [[ $# -lt 1 ]] ; then
  usage
fi

# use the last and lastb commands to display information
# use awk to create ; separated fields
# use sed to strip white space
echo "who-what;terminal-event;start;stop;elapsedTime;ip" \
  | tee /tmp/logins.csv
for logfile in $1/var/log/wtmp*
do
  last -aiFwx -f $logfile | \
    awk '{print substr($0, 1, 8) ";" substr($0, 10, 13) ";" \
      substr($0, 23, 24) ";" substr($0, 50, 24) ";" substr($0, 75, 12) \
      ";" substr($0, 88, 15)}' \
    | sed 's/[[:space:]]*;/;/g' | sed 's/[[:space:]]+\n/\n/' \
    | tee -a /tmp/logins.csv
done

echo "who-what;terminal-event;start;stop;elapsedTime;ip" \
  | tee /tmp/login-fails.csv
for logfile in $1/var/log/btmp*
do
  lastb -aiFwx -f $logfile | \
    awk '{print substr($0, 1, 8) ";" substr($0, 10, 13) ";" \
      substr($0, 23, 24) ";" substr($0, 50, 24) ";" substr($0, 75, 12) \
      ";" substr($0, 88, 15)}' \
    | sed 's/[[:space:]]*;/;/g' | sed 's/[[:space:]]+\n/\n/' \
    | tee -a /tmp/login-fails.csv
done

if [ $# -gt 1 ] ; then
  chown mysql:mysql /tmp/logins.csv
  chown mysql:mysql /tmp/login-fails.csv
  cat << EOF | mysql $2 -u root -p
create table logins (
  who_what varchar(8),
  terminal_event varchar(13),
  start datetime,
  stop datetime,
  elapsed varchar(12),
  ip varchar(15),
  recno bigint not null auto_increment,
  primary key(recno)
);
load data infile "/tmp/logins.csv" into table logins
fields terminated by ';' enclosed by '"'
lines terminated by '\n' ignore 1 rows
(who_what, terminal_event, @start, @stop, elapsed, ip)
set start=str_to_date(@start, "%a %b %e %H:%i:%s %Y"),
    stop=str_to_date(@stop, "%a %b %e %H:%i:%s %Y");
create table login_fails (
  who_what varchar(8),
  terminal_event varchar(13),
  start datetime,
  stop datetime,
  elapsed varchar(12),
  ip varchar(15),
  recno bigint not null auto_increment,
  primary key(recno)
);
load data infile "/tmp/login-fails.csv" into table login_fails
fields terminated by ';' enclosed by '"'
lines terminated by '\n' ignore 1 rows
(who_what, terminal_event, @start, @stop, elapsed, ip)
set start=str_to_date(@start, "%a %b %e %H:%i:%s %Y"),
    stop=str_to_date(@stop, "%a %b %e %H:%i:%s %Y");
EOF
fi

This script starts out in the usual way and is quite simple right up until the line for logfile in $1/var/log/wtmp*. This is our first new item. The bash shell supports a number of variations of a for loop. Readers familiar with C and similar programming languages have seen for loops that are typically used to iterate over a list where the number of iterations is known beforehand and an integer is incremented (or decremented) with each step in the loop. Bash supports those types of loops and also allows a loop to be created that iterates over files that match a pattern.
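A minimal sketch of the two loop styles follows:

# C-style counting loop
for ((i = 0; i < 3; i++)) ; do
  echo "iteration $i"
done

# glob-based loop over matching files (wtmp, wtmp.1, ...)
for logfile in /var/log/wtmp* ; do
  echo "processing $logfile"
done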

The pattern in our for loop will match the login log file (wtmp) and any archives of the same. The do on the following line begins the code block for the loop and the matching done terminates it. The last command is straightforward, but the same cannot be said of the series of pipes that follow. As usual, it is easier to understand the code if you break this long command down into its subparts.

We have seen awk, including the use of positional parameters such as $0 and $1, in previous scripts. The substr function is new, however. The format for substr is substr(string, start, length). For example, substr("Hello there", 1, 4) would return “Hell”. Notice that indexes are 1-based, not 0-based as in many other languages and programs. Once you understand how substr works, it isn’t difficult to see that this somewhat long awk command is printing six fields of output from last separated by semicolons. In order, these fields are to whom or what this entry refers, the terminal or event for this entry, the start time, the stop time, the elapsed time, and the IP address.
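A one-line sketch of substr in action:

# substr(string, start, length) uses 1-based indexing
echo "Hello there" | awk '{ print substr($0, 1, 4) }'
# prints "Hell"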

There is still a small problem with the formatted output from last. Namely, there is likely a bunch of whitespace in each entry before the semicolons. This is where sed, the stream editor, comes in. One of the most popular commands in sed is the substitution command, which has a general format of s/<pattern>/<replacement>/<options>. While “/” is the traditional separator used, the user may use a different character (“#” is a common choice) if desired. The translation of sed 's/[[:space:]]*;/;/g' is: search for zero or more whitespace characters followed by a semicolon, substitute just a semicolon, and do this globally (the g option), which in this context means do not stop with the first match on each line. The second sed command, sed 's/[[:space:]]+\n/\n/', removes whitespace from the end of each line (the IP field). The code for processing btmp (failed logins) parallels the wtmp code.
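A quick sketch of the first substitution (the input string is just a fabricated example of last’s padded output):

echo "john    ;pts/0      ;" | sed 's/[[:space:]]*;/;/g'
# prints "john;pts/0;"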

The database code is similar to what we have used before. Once again, the only small complication is formatting the date and time information output by last and lastb into a MySQL datetime object. Some of the output from running this script against the PFE subject system is shown in Figure 6.25. Note that last and lastb generate an empty line and a message stating when the log file was created. This results in bogus entries in your database. My philosophy is that it is better to ignore these entries than to add considerable complication to the script to prevent their creation.
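As a small illustration of the datetime conversion mentioned above (the timestamp is merely an example in last’s output format), the following can be run in the mysql client:

select str_to_date('Mon Mar 9 21:33:55 2015', '%a %b %e %H:%i:%s %Y');
-- returns 2015-03-09 21:33:55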

FIGURE 6.25

Output from running logins and failed login attempts script. Note that there are a couple of empty entries and erroneous lines that follow.

The query select * from logins order by start; will list login sessions and select * from login_fails order by start; will display failed login attempts. Some of the results from these queries are shown in Figure 6.26. In the figure it can be seen that the attacker failed to log in remotely from IP address 192.168.56.1 as lightdm on 2015-03-09 at 21:33:55. Around that same time the john, johnn, and lightdm accounts had successful logins from the same IP address. The attacker appears to have been testing some newly created accounts.
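As a sketch, either table can also be narrowed to the suspect address seen above:

select who_what, start, stop, ip
  from logins
  where ip = '192.168.56.1'
  order by start;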

FIGURE 6.26

Login sessions and failed login attempts.

OPTIONAL – GETTING ALL THE LOGS

Earlier in this chapter we discussed importing the current log files into MySQL. We ignored the archived logs to save space and also because they may be uninteresting. For those who wish to grab everything, I offer the following script.

#!/bin/bash
#
# get-logfiles-ext.sh
#
# Simple script to get all logs and optionally
# store them in a database.
# Warning: This script might take a long time to run!
# by Dr. Phil Polstra (@ppolstra) as developed for
# PentesterAcademy.com.
#
# This is an extended version of get-logfiles.sh.
# It will attempt to load current logs and archived logs.
# This could take a long time and require lots of storage.

usage () {
  echo "usage: $0 <image mount point> [database name]"
  echo "Simple script to get log files and"
  echo "optionally store them to a database."
  exit 1
}

if [ $# -lt 1 ] ; then
  usage
fi

# remove old file if it exists
if [ -f /tmp/logfiles.csv ] ; then
  rm /tmp/logfiles.csv
fi

olddir=$(pwd)
cd $1/var
for logfile in $(find log -type f -name '*')
do
  if echo $logfile | egrep -q ".gz$" ; then
    # archived logs are gzipped; zcat decompresses to stdout
    zcat $logfile | awk "{ print \"$logfile;\" \$0 }" \
      | tee -a /tmp/logfiles.csv
  else
    awk "{ print \"$logfile;\" \$0 }" $logfile \
      | tee -a /tmp/logfiles.csv
  fi
done
cd "$olddir"

if [ $# -gt 1 ] ; then
  chown mysql:mysql /tmp/logfiles.csv
  clear
  echo "Let's put that in the database"
  cat << EOF | mysql $2 -u root -p
create table if not exists logs (
  logFilename varchar(2048) not null,
  logentry varchar(2048) not null,
  recno bigint not null auto_increment,
  primary key(recno)
);
load data infile "/tmp/logfiles.csv" into table logs
fields terminated by ';' enclosed by '"'
lines terminated by '\n';
EOF
fi

If you decide to go this route, you will want to modify your queries slightly. In particular, you will want to add “order by logFilename desc, recno” to your select statements in order to present things in chronological order. For example, to query all logs you would use select * from logs order by logFilename desc, recno;. To examine a particular log file, use select logFilename, logentry from logs where logFilename like '%<logname>%' order by logFilename desc, recno;, e.g., select logFilename, logentry from logs where logFilename like '%syslog%' order by logFilename desc, recno;.

SUMMARY

In this chapter we have learned to extract information from a mounted subject filesystem or filesystems. Many techniques were presented for analyzing this data in LibreOffice and/or a database such as MySQL. In the next chapter we will dig into Linux extended filesystems which will allow us, among other things, to detect data that has been altered by an attacker.
