At the bottom of Justin’s blog post he wrote this:
For bonus points you can also push those JSON files into Elasticsearch (or modify onionrunner.py to do so on the fly) and analyze the results using Kibana!
Always being up for a challenge, I’ve done just that. The onionrunner.py script outputs each scan result as a JSON file, which gives you two options for loading the data into Elasticsearch: you can either load your results after you’ve run a scan, or you can load them into Elasticsearch as a scan runs. This might sound scary but it’s not, so let’s tackle each option separately.
On the fly:
To send the results to Elasticsearch (as well as to a file), you just need to make some small changes to the onionrunner.py script (10 lines of code). Firstly, we need to import a couple of extra Python libraries. At the top of onionrunner.py, add the following lines of code:
from elasticsearch import Elasticsearch
import datetime
If you haven’t used the elasticsearch Python library before you will need to install it, which you can easily do with one of the following (use the sudo version if you need a system-wide install):
pip install elasticsearch
sudo pip install elasticsearch
The elasticsearch library is essentially what we are going to use to load the results into Elasticsearch; as the results are already nicely formatted JSON, Elasticsearch happily accepts the data without any formatting changes. The datetime library is used to create the correct timestamp format for Elasticsearch so we can track when the data was imported (important if you are running multiple scans).
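As a quick sanity check, here is what that timestamp format produces (a minimal sketch using only the standard library; the format string is the same one the send_to_elastic function uses below):

```python
import datetime

# Build the ISO-8601 style UTC timestamp that Elasticsearch
# will happily recognise as a date field.
stamp = datetime.datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%S.%fZ')
print(stamp)  # something like 2016-08-01T12:34:56.789012Z
```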
The second (and equally easy) part is to create a Python function to send the results where they need to go. Create this function towards the bottom of the script (I added it after the add_new_onions function):
def send_to_elastic(data):
    try:
        es = Elasticsearch()
        data['timestamp'] = datetime.datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%S.%fZ')
        es.index(index='osint', doc_type='onion', body=data)
    except:
        pass
You will need to make some slight tweaks to the code to match your Elasticsearch environment. The first line of the function body is where you define the Elasticsearch instance to connect to; the default is localhost:9200. If your server is called ‘bob’ then you need to change the line of code to:
es = Elasticsearch('bob')
The next line of code adds the timestamp we mentioned earlier. You don’t need to change this, but if you use Kibana to visualise the data you will need to pick this field when you configure the index. The next step is to change the index and doc_type values to meet your requirements. For example, if you wanted to store the results in your ‘research’ index with a doc_type of ‘darkweb’, you would simply change the code to this:
es.index(index='research', doc_type='darkweb', body=data)
The final step is to add the line of code that sends the data once it’s been collected. To do this, find the process_results function, and then locate this segment of code:
# look for additional .onion domains to add to our scan list
scan_result = ur"%s" % json_response.decode("utf8")
scan_result = json.loads(scan_result)
Directly after it, add the send_to_elastic call so the segment looks like this:
# look for additional .onion domains to add to our scan list
scan_result = ur"%s" % json_response.decode("utf8")
scan_result = json.loads(scan_result)
send_to_elastic(scan_result)
And that’s it, job done. Now when you run the script the results will get saved to a JSON file and also added into Elasticsearch (fingers crossed).
From results files:
Adding existing scan results into Elasticsearch is just as easy; in fact it’s easier, as I’ve written the code for you. Head over to https://github.com/catalyst256/MyJunk/blob/master/loadresults.py and you will find a pre-made script ready for you to use. Well, actually you will need to change the Elasticsearch parameters at the top of the script, but otherwise it’s good to go.
To run the script, all you need to do is pass the results folder as an argument. For example, if you use the default output location for the onionrunner.py script, you would just run the following:
./loadresults.py onionscan_results/ (NOTE: you need to have the trailing ‘/’ in there)
This will then load each file and send it to Elasticsearch. Now how simple was that?
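If you’d rather roll your own, the rough shape of such a loader looks like this (a hedged sketch, not the actual loadresults.py; the iter_results helper and the ‘osint’/‘onion’ names are my own assumptions, matching the on-the-fly example above):

```python
#!/usr/bin/env python
import glob
import json
import os
import sys

def iter_results(folder):
    """Yield one parsed scan result per .json file in the results folder."""
    for path in sorted(glob.glob(os.path.join(folder, '*.json'))):
        with open(path) as handle:
            yield json.load(handle)

def load_results(folder, index='osint', doc_type='onion'):
    # Imported here so the parsing half works even without the package installed.
    from elasticsearch import Elasticsearch  # pip install elasticsearch
    es = Elasticsearch()  # default is localhost:9200
    for doc in iter_results(folder):
        es.index(index=index, doc_type=doc_type, body=doc)

if __name__ == '__main__' and len(sys.argv) > 1:
    load_results(sys.argv[1])
```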
Maltego all the things:
I’ve been messing around with ElasticSearch a lot lately for various things and one thing I really like about it is how easy it is to plug Maltego (via transforms) into it and get some lovely visualisations out. So with that in mind I spent a bit of time writing some transforms for the data from OnionRunner.
The awesome thing (well, one of them) about Elasticsearch is its free-text searching: essentially, type some words in and it will return any record that matches, or you can be more precise and specify certain key: value searches to run. If we look at the data collected by OnionScan/OnionRunner we can start to work out what we want to search for. For example, if you want to find all servers that run Apache you can search in one of two ways.
1. Apache
2. serverVersion: Apache
Both will give you the results you need; the second option will only return matches on that specific “key”, whereas the first one will find any reference to Apache in any field.
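In query DSL terms the two searches look like this (a sketch using standard Elasticsearch query constructs; the ‘osint’ index name is the one from earlier):

```python
# Option 1: free-text search across every field in the document.
free_text = {'query': {'query_string': {'query': 'Apache'}}}

# Option 2: match only against the serverVersion field.
field_match = {'query': {'match': {'serverVersion': 'Apache'}}}

# With a client connected to your instance you would then run, for example:
# es.search(index='osint', body=free_text)
```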
Using this search method it’s easy to create some Maltego transforms. The flow of the ones I (quickly) created work like this.
1. Create a phrase with your search query
2. Transform returns hiddenServer addresses
3. Search for open ports
4. Search for SSH keys
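The search half of those transforms can be sketched like this (my own illustration, not the actual transform code; the hidden_services_for helper and the ‘osint’ index are assumptions):

```python
def hidden_services_for(es, phrase, index='osint'):
    """Return the hiddenService address of every record matching the phrase."""
    body = {'query': {'query_string': {'query': phrase}}}
    hits = es.search(index=index, body=body)['hits']['hits']
    return [hit['_source'].get('hiddenService') for hit in hits]
```

A local transform would then wrap each returned address in a Maltego entity; the Elasticsearch side is no more complicated than the function above.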
These are just some quick examples, and here are some screenshots.
The Maltego transforms aren’t quite ready for release as I need to tweak them and make them production ready but once they are I will release them. They are all local transforms so you will need to install them into Maltego yourself (I will provide instructions) but it’s a painless process.
Massive thanks to Justin for creating OnionRunner, and if you want to learn Python I’ve heard great things about the Python courses he provides.
PS: Sorry about the rubbish code blocks, WordPress hosted doesn’t like Python… :(