Web History

A place for general discussion of sleuthkit.org projects or other open source forensics software.

Moderator: carrier

Web History

Postby jul » Mon Jan 01, 2018 5:45 pm

Hello, I'm kind of new to forensics and I'm learning the tools now. I ran the same image in Autopsy and EnCase and noticed that Autopsy reports 5000+ browser history entries while EnCase reports 500+. I also checked both result sets, and Autopsy lists websites I never visited (I know because I made the image myself). Why does that happen? Are these false positives, and where do they come from?
jul
 
Posts: 2
Joined: Wed Jul 19, 2017 6:30 pm

Re: Web History

Postby Hoyt » Thu Mar 01, 2018 1:34 pm

I've been trying to think of how to keep this answer short and I can't come up with a way. Einstein said if you can't explain it simply, you don't understand it well enough. He may be right in my case.

The best image to use for tool comparison is called a baseline image. A true baseline image is one for which you know, for absolute certain, what artifacts it contains, where those artifacts are, how they got there, etc. Oftentimes the baseline image won't be a complete disk image, but rather a smaller subset intended to test for specific things. For example, a baseline image for browser history might only contain the disk/filesystem structures and artifacts pertinent to the subject, along with enough additional data that the tool being tested can run without error. Brian has some test images here you can use for that sort of thing.

Whatever comes across your desk from the field in a real-world situation, however, is beyond your control, and we often do see disparity between different tools. We'd hope the disparity for something as common as Web history would fall into a much narrower margin than 5000 artifacts vs. 500 artifacts. Regardless, the factors that go into what constitutes a given artifact do differ somewhat from tool to tool. Likewise, the programming methods used to obtain an artifact also vary from tool to tool. For example, one tool might have been written to distinguish deleted history files carved by the tool itself from those that existed prior to carving, counting only the files that didn't result from carving. One adds to the file count under Web history after carving while the other doesn't.
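
To make that concrete, here's a tiny hypothetical sketch in Python (the record names and flags are made up, not taken from any tool) of two counting policies applied to the same result set: one total includes files the tool carved itself, the other doesn't.

    # Hypothetical illustration only: two counting policies over the same results.
    # Each record notes whether the file came from the tool's own carving pass.
    results = [
        {"name": "places.sqlite", "carved": False},
        {"name": "History", "carved": False},
        {"name": "carved_000123.sqlite", "carved": True},
    ]

    count_everything = len(results)                                  # tool A's policy
    count_preexisting = sum(1 for r in results if not r["carved"])   # tool B's policy

    print(count_everything, count_preexisting)  # 3 vs. 2 from identical evidence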

Sometimes it comes down to error. One principle of the UNIX philosophy is to fail big and make lots of noise when you do. Not all software is designed this way. Some fail silently. Some aren't written to test for all failure conditions. In those cases, a function or module running and quitting is considered successful regardless of outcome. Such a tool might not warn you when it encounters permissions errors, for example. It might just quit the function and move on to the next without you being any the wiser. It might not warn you if a function glitches and doesn't complete. Worse still is that there might not even be a log entry related to the condition at all. The only way to tell is to run the tool again, preferably as part of a second test case using the same image so that the results from the first test don't step on or mix with the second test. Even then, if the problem condition is persistent and not a one-off glitch, this won't prove to be very helpful.
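
As a rough sketch of that difference (again, not taken from any particular tool), compare a routine that swallows a read error with one that logs it and re-raises:

    import logging

    def count_lines_quietly(path):
        # Fails silently: a permissions problem looks exactly like "nothing found".
        try:
            with open(path, "rb") as f:
                return len(f.read().splitlines())
        except OSError:
            return 0

    def count_lines_loudly(path):
        # Fails loudly: the error is logged and re-raised so it can't hide.
        try:
            with open(path, "rb") as f:
                return len(f.read().splitlines())
        except OSError:
            logging.exception("Could not read %s", path)
            raise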

Another issue has to do with what the tool was written to discover. For a Windows exam, we can likely expect that any tool designed for the purpose will recover Internet Explorer artifacts. Depending on when the tool was released, it might not account for Edge artifacts, which are located in different places from IE's. Further, a given tool may search for and give an account of IE/Edge, Firefox, and Chrome artifacts, but not Opera, the older Safari-for-Windows builds, or other less popular browsers. A tool might count a SQLite database as a single artifact in addition to its entry items, while another might not. This aspect really boils down to what the tool was written to do. Free and Open Source Software (FOSS) tools lend themselves to inspection by default as to how and what they're doing behind the scenes. Proprietary tools leave you with (a) only their documentation, (b) training material if you've attended classes on them, or (c) tech support if the first year hasn't expired or you've coughed up the money to pay for continued support. If you have trouble reading or understanding source code, you may still not get the answer you seek from any of them.
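
The SQLite point is easy to demonstrate yourself. Chrome's History file, for instance, is a SQLite database with a urls table (one row per distinct URL) and a visits table (one row per recorded visit). A short sketch like the one below, assuming you've exported that file out of the image, shows how two equally defensible definitions of "Web history" give different totals from the same file:

    import sqlite3

    def chrome_history_counts(history_db):
        # Two defensible totals from one file: distinct URLs vs. individual visits.
        con = sqlite3.connect(history_db)
        try:
            url_rows = con.execute("SELECT COUNT(*) FROM urls").fetchone()[0]
            visit_rows = con.execute("SELECT COUNT(*) FROM visits").fetchone()[0]
        finally:
            con.close()
        return url_rows, visit_rows

    urls, visits = chrome_history_counts("History")  # path to the exported file
    print(urls, "URLs,", visits, "visits")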

Lastly, it may come down to how the tool approaches an artifact. For example, one tool may search for raw files on disk while another parses the registry and still another does both. At least one question to ask is whether the tool adds the results from both of those together in the count. Or, more importantly, what exactly constitutes "Web history" for my tool?

The bottom line for me is to generally ignore counts like that. I only use them as indicators for processing, and I think that's what most software authors have in mind. If I see a category with 5000 items, I might estimate how long a follow-up process might take to weed through those vs. another category with a larger or smaller count. I might use those counts in my narrative report to show my thought process as I worked the case from general theories to the more specific. If I do need specific counts for charging purposes, such as the number of files meeting Project Vic's CAT 1, then I conduct specific examinations to arrive at those counts. If I need to know exactly how many items are listed in TypedURLs in the registry and my tool doesn't produce that result by default, I conduct a deliberate exam to discover that answer. Lastly, the best way I've found to compare the output from different tools is to browse through those results and see what sorts of artifacts a tool lists for a given category. This doesn't require any programming knowledge and should provide lots of insight at a glance as to what the tools "think" about that artifact type.
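
For the TypedURLs example, that deliberate exam can be as simple as enumerating the values under the key yourself. The sketch below uses Python's standard winreg module against a live system's HKCU hive; on an image you'd point an offline hive parser at the user's NTUSER.DAT instead, but the key path is the same:

    import winreg

    def typed_urls():
        # Enumerate every value under the current user's TypedURLs key.
        path = r"Software\Microsoft\Internet Explorer\TypedURLs"
        entries = []
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, path) as key:
            value_count = winreg.QueryInfoKey(key)[1]  # number of values under the key
            for i in range(value_count):
                name, url, _type = winreg.EnumValue(key, i)
                entries.append((name, url))
        return entries

    print(len(typed_urls()), "TypedURLs entries")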

Hopefully you'll decide to post more information about your comparison. I haven't used EnCase in over a decade, but I'm curious to know what you discover the cause of the discrepancy to be.

Hoyt
Hoyt Harness, CFCE
Hoyt
 
Posts: 74
Joined: Thu Dec 11, 2014 4:02 am
Location: Little Rock, AR

