One of the many reasons I enjoy doing interviews for the blog is that I get to introduce readers to people they might not otherwise know about. There are so many people in the information security and digital forensics field who are both amazing and relatively unknown. Mike “Jake” Jacobson is one of those people. I’ve known Jake from back in his RCFL days and I really enjoy talking with him about the issues of the day. He’s a tremendously sharp fellow and his current employer is lucky to have him. Since his current employer is the United States government, we’re all fortunate to have him and his team chasing bad guys on our behalf. I hope this interview illustrates why I think so highly of Jake and the work that his team does.
Professional Biography of Mike “Jake” Jacobson
I started cleaning ambulances when I was 14. By 18 I was an EMT, and a Paramedic by 19. At age 20 I enlisted in the Marine Corps as a military policeman, where I spent six years on active duty, eventually specializing in accident investigation and reconstruction. Upon separation, I was hired by the Overland Park Police Department as a patrol officer. In 2000, I helped form the department's first full-time high-tech crime unit while we were actively surveilling a serial killer. Those were interesting times. In 2003, I was transferred to the Heart of America Regional Computer Forensic Laboratory. I received Windows and UNIX CART certifications and also served as the operations manager for four years. In 2008, I was promoted to Sergeant, so back to patrol I went. In 2010, I decided it was time to look for new opportunities. Through a friend at the RCFL, I was introduced to Eric, who inspired me to go big.
I've learned many lessons through the years: you can't rinse a soapy ambulance in sub-freezing weather; the military is good for you; and being poor in hardware, software, and professional training can be inspiring if you have a passion for what you do.
Today, I work for the federal government, where I'm very focused on managing workload as efficiently as possible without impacting quality. I believe we owe it to the taxpayers – and I'm one of them – to produce actionable results cost-effectively. My experience making do with so little continues to influence how I view my job responsibilities.
AFoD: So what led you into digital forensics? Was it primarily your serial killer case or had you been thinking about it previously?
JJ: Thankfully, I’d been preparing for that day for five years. In 1995, my police department felt my computer knowledge was sufficient to send me to a basic computer forensics course taught by SEARCH. In the course of one week, I realized how much I didn’t know. I had so much to learn and I couldn’t wait to begin. It would be almost five years before my next formal training in computer forensics. In hindsight, that was probably for the best.
For the next five years I devoured any information I could find on technology and computer forensics. I learned how the Internet worked from an O’Reilly book on DNS, email headers from a book on stopping spam, and HTML 4 from a Sams book. I discovered a fantastic tutorial on how IP addressing works and read the entire “PC Maintenance and Repair” book. That was NOT exciting. I learned a lot about SCSI, which was really quite valuable back then, and some useful hacking techniques. I sold our District Attorney on sponsoring a local HTCIA chapter, which I promptly joined. I also took computer, programming, database, and networking courses at the local junior college. You name it, I enrolled in it.
My eclectic education may have lacked a lot of specific digital forensic training, but I certainly gained a wide range of knowledge. When the big one hit, I was ready.
AFoD: How did working in the RCFL differ from working in your home agency's digital forensic unit?
JJ: The difference between an operation consisting of two people and an RCFL is significant. My department’s unit was well supported, but Jim Castaldo and I still needed to be resourceful to accomplish the mission. We were a hybrid unit tasked with the full investigation of computer crimes, audio/video forensics, and Internet crimes. We needed to be scrappy to get what we needed, especially forensic-specific training.
Conversely, the RCFL program is huge, consisting of 16 RCFLs throughout the United States. It’s designed to support federal, state, and local agencies and has a large, centralized support unit to handle administration, operations, and training.
The Heart of America RCFL had 16 examiners on board by the end of 2008. If my memory serves, we exceeded 600 intakes that year. The RCFL is like hitting the forensic examiner’s lottery. The wealth of software and equipment is ridiculous and the training opportunities just get better every year, not to mention the great people with whom you work. Unfortunately, to manage the intake, an RCFL has to operate much like an assembly line. Support personnel handle the network, hardware, and software, distancing the examiner from some very valuable experience. On the plus side, examiners receive a heavy caseload with a wide variety of cases.
Small units just can’t compete with the RCFL’s embarrassment of riches. A well-financed operation isn’t inherently “better.” Each RCFL is slightly different, and their performance and output quality vary. Building a lab from the ground up is an invaluable experience, but so is learning to manage and work within a high-speed, assembly-line environment and finding a way to maintain quality. In the end, it doesn’t matter whether you’re large or small, law enforcement or civil, rich or poor. If you seek the truth, maintain your integrity, and strive to provide a quality work product in a timely manner, that’s all that matters.
AFoD: Can you explain what your present job is? Do you get the chance to do any digital forensics work or are you primarily in a management role now?
JJ: I’m the director of a digital forensics laboratory for an agency within a cabinet-level department of the federal government in support of law enforcement. By the way, the opinions and ideas I express in this interview are my own. I manage digital forensic examiners, strive for peak efficiency while maintaining quality, manage a budget, study trends (i.e., gaze into the crystal ball), track statutory and regulatory changes, and try my best to keep pace with emerging technologies. I apologize for the resume-speak, but that seems to capture the core of what I do in as few words as possible.
Right now, management duties have limited my forensic work, but perhaps that will change as time passes. I think it’s important to maintain some level of skill to better understand whether my performance expectations remain realistic and my budget planning appropriate.
AFoD: You've been spending a considerable amount of time during the past year researching data mining and digital forensics. Can you talk about what you've been doing and what you've learned?
JJ: Lately, I’ve been looking at ways to apply the data mining process to best serve my Investigation Division’s needs as well as how it might positively impact digital forensic investigations.
The data mining bug bit me last July shortly after I brought in an employee from a data analysis division. I was curious how her data analyst skills would fare against a web server database one of my examiners had extracted. Within hours, she’d discovered significant financial anomalies in the database. Wait. What? How did she do that so fast? What software did she use? (Hyperion Intelligence) Where did she learn about data analysis? How does this provide added value to my agency? And how does any of this advance digital forensics? I had a lot of questions, few easy answers, and the excitement of knowing I might be onto something important.
Data mining is old news. It’s been around for some time, but only recently has it become so accessible. First, we need to define what data mining is and what it is not. There are a number of definitions, but they’re all quite similar: data mining is the use of automation and/or machine intelligence to extract useful, often previously unknown information from data sources, primarily databases. Data mining is an algorithmic process, whereas data analysis is a human process. Too often data analysis is incorrectly referred to as “data mining.” Digital forensic examiners are not data mining – an examiner is neither a computer nor an algorithm – they’re conducting an analysis of data using advanced tools. As an example, FTK 3.x has at least one data mining algorithm, known as Explicit Image Detection, but it serves primarily as a tool for data analysis.
I’m going to reference AccessData’s (AD) FTK 3 simply because I know little about EnCase v7 and because FTK has a database backend, which fits the subject matter quite well. I’m not affiliated with AD and receive nothing of real or imaginary value from them unless I pay full price for it. For those of you who wish to advocate other tools (and you know who you are), please feel free to email Eric.
A data mining operation requires data sources and a central repository/database. Mining data is an automated process, so your data must be organized and standardized. Next, you’ll need to conduct research to determine which data attributes your model will interact with. You’ll then need to run your algorithm against a set of data and compare its results to a known test set. Although one might be tempted to call a search “data mining,” a data mining algorithm is far more complex.
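To make that workflow concrete, here’s a minimal Python sketch of the train-and-test loop just described, using scikit-learn. The records, attributes, and labels are toy assumptions of mine, not data from FTK or any real case.

```python
# A minimal sketch of the workflow above, assuming scikit-learn is available.
# The records, attributes (amount, hour of day), and labels are toy values
# invented for this demo -- they come from no real case or tool.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Organized, standardized records pulled from the source database (toy data).
X = [[120.00, 14], [9800.00, 3], [45.50, 10], [7200.00, 2],
     [15.25, 16], [8900.00, 4], [60.00, 11], [9999.99, 1]]
y = [0, 1, 0, 1, 0, 1, 0, 1]  # 1 = analyst-flagged anomaly

# Hold back a test set so the model is scored on data it has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)

# Compare the algorithm's output against the known test set.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```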
I think this example will help: during the pre-processing phase, FTK will search for, recover, and dump all graphics under one tab. FTK’s Explicit Image Detection algorithm data mines graphics for flesh tone attributes. The algorithm calculates gradient values and scores its probability based upon statistical analysis. Data mining is a complex tool capable of prediction and analysis, yet it can’t differentiate an adult from a child. This model’s value is its ability to narrow the examiner’s focus, thereby improving efficiency. If you’re worried it might miss something, just remember: the algorithm doesn’t eliminate, it only scores. An analyst can quickly scan the segregated, lower-scored graphics and easily identify the outliers from the noise. That’s its strength.
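AccessData’s actual algorithm is proprietary, so the Python sketch below is only a toy illustration of the scoring idea: compute the fraction of pixels that fall inside a crude flesh-tone range, then rank images by that score so nothing is eliminated, only prioritized. The RGB thresholds are a rule-of-thumb assumption, not FTK’s.

```python
# NOT AccessData's algorithm (which is proprietary); a toy illustration of
# the idea: score images by the fraction of pixels in a crude flesh-tone
# range so an examiner can sort by score. Nothing is eliminated, only ranked.
from PIL import Image

def flesh_tone_score(path):
    img = Image.open(path).convert("RGB").resize((64, 64))
    def is_flesh(r, g, b):
        # Rule-of-thumb RGB heuristic for skin tones (an assumption).
        return r > 95 and g > 40 and b > 20 and r > g and r > b and abs(r - g) > 15
    pixels = list(img.getdata())
    return sum(is_flesh(*p) for p in pixels) / len(pixels)  # 0.0 .. 1.0

# Rank recovered graphics for review, highest score first:
# ranked = sorted(paths, key=flesh_tone_score, reverse=True)
```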
As another example, i2’s Workstation social network data mining model is impressive. Those of you who’ve worked a pen trap investigation have used some type of visualization software, often Analyst’s Notebook. The social network algorithm determines an organization’s hierarchy based on call frequency and call direction – an incredibly valuable tool. This is something an analyst may miss due to the typically large data sets involved in many pen register investigations. Again, this doesn’t mean the algorithm’s qualitative evaluation is 100% correct; however, the quantitative results are correct and immediately available for further analysis. Once again, data mining helps narrow the focus, potentially eliminating countless man-hours spent evaluating the results.
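i2’s model is also proprietary, but the underlying idea can be sketched in a few lines of Python: tally call frequency and direction from (caller, callee) pairs and compute a crude rank score. The inbound-to-outbound ratio below is my own toy heuristic, not i2’s algorithm.

```python
# A toy version of the concept, not i2's algorithm: infer a likely hierarchy
# from call frequency and direction. Someone who receives many calls and
# places few tends to float to the top. The ratio is my own assumption.
from collections import Counter

# (caller, callee) pairs from hypothetical pen register data.
calls = [("A", "B"), ("C", "B"), ("D", "B"),
         ("B", "E"), ("A", "C"), ("D", "B")]

inbound = Counter(callee for _, callee in calls)
outbound = Counter(caller for caller, _ in calls)

subjects = set(inbound) | set(outbound)
rank = {s: inbound[s] / (outbound[s] + 1) for s in subjects}

for subject, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(subject, round(score, 2))  # a quantitative lead, not proof of rank
```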
An example of the importance of data visualization is this article on Edward Tufte, published by the Washington Monthly. Also, take a look at these graphs of 311 calls in New York City. As you can see, its value and impact are immediate. I also suggest searching for: “determining the author of anonymous email through data mining” and “data mining text using unsupervised discovery”, to get a far more technical grasp on data mining.
Another fantastic example of data mining is Microsoft’s PhotoDNA. They developed an algorithm that identifies child pornography graphics, even if they’re altered in some manner. After testing it against a known set and tuning it, they let it loose with incredible results. Here’s a link to a quick video that shows how it all works.
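PhotoDNA’s internals aren’t public, but the concept of a robust hash can be illustrated with a much simpler “average hash” in Python: normalize the image, threshold each pixel against the mean, and compare hashes by Hamming distance so minor alterations still match. This demonstrates the general technique, not Microsoft’s method.

```python
# PhotoDNA's internals aren't public; this "average hash" only demonstrates
# the general robust-hash concept: small alterations (resizing, minor edits)
# yield hashes within a small Hamming distance of the original.
from PIL import Image

def average_hash(path, size=8):
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    # One bit per pixel: 1 if brighter than the mean, else 0.
    return sum(1 << i for i, p in enumerate(pixels) if p > avg)

def hamming(h1, h2):
    return bin(h1 ^ h2).count("1")  # small distance = likely the same image

# Hypothetical usage against a known-hash list:
# if hamming(average_hash("suspect.jpg"), known_hash) <= 5:
#     flag_for_review("suspect.jpg")
```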
Potential target areas for data mining are unallocated space and timelines. Google is a fantastic example of free-text mining. Imagine the statistical evaluations necessary to identify and de-rank content farms from legitimate, quality sites (thank you, Google). If Google can differentiate between a content farm and a legitimate site, or an original-content site and an article-spinning site, I think it’s possible to develop a problem definition for unallocated space.
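As a thought experiment, here’s one possible problem definition sketched in Python: carve printable strings out of raw blocks of an image and score each block against terms of interest, surfacing the highest-scoring offsets for review first. The keywords, block size, and the file name "unallocated.dd" are all hypothetical.

```python
# A thought-experiment "problem definition" for unallocated space, not an
# existing tool: carve printable strings from raw blocks and score each
# block against terms of interest. Keywords, block size, and the file name
# "unallocated.dd" are all hypothetical.
import re

TERMS = (b"invoice", b"account", b"password")
PRINTABLE = re.compile(rb"[ -~]{6,}")  # runs of 6+ printable ASCII bytes

def score_blocks(image_path, block_size=4096):
    with open(image_path, "rb") as f:
        offset = 0
        while block := f.read(block_size):
            hits = sum(s.lower().count(t)
                       for s in PRINTABLE.findall(block) for t in TERMS)
            if hits:
                yield offset, hits
            offset += block_size

# Review the 20 highest-scoring block offsets first:
# top = sorted(score_blocks("unallocated.dd"), key=lambda x: -x[1])[:20]
```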
I’d be remiss if I didn’t point out that data analysts are a critical part of the data mining equation. Data mining results are the product of extensive research, but the results must be evaluated and the algorithm periodically tuned to increase value and accuracy.
I hope I've provided an adequate overview of data mining's potential in digital forensics and eDiscovery. The key is to get past the idea of a basic search and the linear progression examiners take when tackling a data set. Of course, we mere humans need to approach large data sets with a plan or we'll just get lost in the data, though deviation from the plan is inevitable and often necessary. Data mining simply reduces the noise and presents visual clues that will increase our ability to process or eliminate data more effectively and efficiently.
I have a lot more questions than answers on how we might apply data mining principles to digital forensics. There's so much more to data mining than we can discuss here. It's a fascinating field of research and of great value to your agency, whether in digital forensics or other areas.
Here are some books I've read or I'm reading. I tend to read and listen to a variety of source material to further my education. Remember, I'm a manager:
I'm a fan of Malcolm Gladwell's analytical thinking:
The Tipping Point – If you work narcotics (investigating it, that is), take a look at chapter 2 on connectors.
Outliers – A fascinating look at the power of analytical thinking; apropos to our subject.
Head First Data Analysis – Some interesting search theory that might improve your searching skills. Overall, this is a good reminder that data mining is of little value without a properly trained, highly competent analyst.
Data Mining Explained: A Manager's Guide to Customer-Centric Business Intelligence – A great overview of data mining and its potential.
AFoD: I think that’s one of the most approachable answers that I’ve seen in regards to explaining data mining. Let’s drill down into the practical application issues. Does implementation of data mining tools and methods require that organizations hire people who have a specific background in areas such as data mining and databases? Can it be done with existing staff such as traditional incident responders and digital forensic examiners?
JJ: Good question, Eric. I think we all take a lot of pride in our hard-fought knowledge and tend to believe there’s nothing we can’t figure out or accomplish given enough time. Writing an effective data mining algorithm may be that line in the sand many of us can’t reasonably expect to cross. Data science seems populated by people with PhDs and engineering-level math skills. Although most of us could develop a model, collect data for algorithm development and testing, manage a database, and format the data, most of us will have to wait for FTK and EnCase to provide additional data mining functionality, or hire a contractor.
We have an important role to play in advancing data mining and data visualization tools by engaging our preferred vendors in conversation about these capabilities. Once examiners become comfortable with data mining concepts, I think they’ll look at their datasets and forensic environment in a new way. We can also learn a lot from data analysts, who may be especially adept at complex search techniques.
AFoD: There is a tremendous amount of technical change occurring in consumer level computing such as increasingly inexpensive and sophisticated mobile devices and associated cloud computing services. How do you see all of this change impacting the digital forensics field?
JJ: We’re witnessing a revolutionary shift in how we think about and use digital technology. The Internet is nearly as ubiquitous, inexpensive, and accessible as electricity. Computer processing power and traditional digital storage are now commodities, and flash storage is the new star of the show. Combine all of this with highly portable, intuitive devices and suddenly the computer becomes a toaster.
Apple’s upcoming iOS 5, iCloud, and current crop of devices, for example, put form, functionality, and user experience over features and raw speed. The ability of normal (non-geek) users to create, consume, and access data across multiple devices through seamless and transparent use of the cloud may well be a significant shift in behavior. We’ll have to wait and see how it all plays out. An SSD with native encryption will be a challenge, not to mention SSDs’ other tendencies once powered up on a different device. As people become more concerned and better educated about security, they’ll become more comfortable with whole disk encryption. I’d venture to say there will be more live acquisitions and a big focus on cloud data in our future. After all, let’s not forget Amazon and Google.
From an enterprise standpoint, Microsoft still has a lock on things. On one hand, you’d think much should remain familiar to us for quite some time, but that could change based upon new product lines. If it improves the bottom line, change will happen quickly. Don’t forget the rapid emergence of “the cloud” as an alternative to some in-house functionality, which has already changed how some of us do business. Another example is Google’s ChromeBook. I wonder if it will find its place in some portion of the enterprise as well as in some homes and schools. How will we respond to these changes and how will they affect what we do?
It seems like everything changed overnight. Of course, that’s not the case. Microsoft’s PhotoDNA is a remarkable data mining tool that will make it much more difficult for child pornography to proliferate unchecked, yet it won’t eliminate the bad actors. Instead, investigators will have to adjust as the bad guys adapt. Likewise, consumer and enterprise advances by Apple, Google, Microsoft and WebOS will almost certainly require us to adapt and respond differently than we have in the past.
We’d best not ignore these changes and assume our jobs won’t change in some way. Don’t be lulled by the lag between what we see at intake and what’s actually occurring in real time. Each of these advancements will have an impact on us, many sooner rather than later. I regretted being unable to attend the recent SANS Summit to hear others’ opinions on these changes. Obviously, I don’t believe we’re going to be rendered obsolete quite yet; however, we’d better have our eyes and ears wide open, be prepared to try new processes and procedures, and be willing to transition to new ways of operating as budgetary realities and technological advances dictate. We need to be flexible and prepared to diversify.
I hope I’ve imparted some useful information and sparked some interest in data mining. If you have such an operation within your organization, I encourage you to seek them out. I think you’ll be surprised at many of the similarities. We can learn from data analysts, and they’re often just as fascinated by what we do. Even though there are many changes on the horizon, we’ll still have to deal with big datasets for quite some time. I think we can learn a lot from data mining principles, visualization tools, and data analysis techniques.