Lurking for JS files with jslurk
This is my experimental methodology for lurking for interesting JS oddities and tidbits. JS files contain application code, API endpoints and sometimes sensitive information like API keys and tokens. It would be impractical to sift through JS files manually, so I am working on an automated approach.
Tools used#
Install jslurk#
jslurk depends on Elixir, which is available from package managers, but I recommend installing the latest version using asdf:
sudo apt-get install elixir # easy method
# or using asdf for the latest version
asdf plugin add erlang
asdf plugin add elixir
asdf install erlang 26.2.1
asdf install elixir 1.18.0-otp-26
# test it works
iex
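# non-interactive alternative: check the installed versions
elixir --version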
# fetch dependencies and build (run from inside a clone of the jslurk repo)
cd ./jslurk
mix deps.get
mix escript.build
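mix escript.build drops an executable named jslurk in the project root; the commands below assume you run them from there. A quick check:
ls -l ./jslurk  # the escript built in the previous step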
The approach#
Get some domains in scope via your bug bounty platform of choice and put the domains in a newline-delimited text file. For wildcard domains, use bbot to get subdomains.
bbot -t target.com -p subdomain-enum
When the scan is finished you can find a “subdomains.txt” file in ~/.bbot/scans/<scan name>/
Create one big domains.txt file with all the domains.
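A minimal way to do that, assuming all your bbot scans live under ~/.bbot/scans/:
# merge every scan's subdomains and de-duplicate into one target list
cat ~/.bbot/scans/*/subdomains.txt | sort -u > ./domains.txt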
Crawling for JS files#
Now let’s pipe the domains through httpx to make sure they’re active, matching on status code 200 and using awk to strip the status-code column before passing the URLs to katana. katana will then crawl the domains looking for JS files, and tee saves the JS file URLs to a text file.
cat ./domains.txt | httpx -sc -mc 200 | awk '{print $1}' | katana -em js -jc -d 5 -c 50 -silent | tee ./active_js_hosts.txt
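katana can emit the same URL more than once, so it’s worth de-duplicating before the next step:
# de-duplicate in place (sort -o can safely write back to its input file)
sort -u ./active_js_hosts.txt -o ./active_js_hosts.txt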
We could also pipe the output of katana directly into jslurk (there’s a one-liner at the end), or we can do this…
Resolving JS files#
I had an issue where the .js URLs returned from katana were being redirected or came back as 404. You may need to add more filtering with httpx to remove these false positives:
cat ./active_js_hosts.txt | httpx -follow-redirects -follow-host-redirects -filter-string "<html" | tee ./resolved_js_hosts.txt
If you hit this, use ./resolved_js_hosts.txt in place of ./active_js_hosts.txt in the steps below.
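An alternative filter, assuming the servers set an honest JavaScript Content-Type, is to print the content type with -ct and keep only matching lines:
# keep only URLs whose response is labelled as JavaScript (assumes correct headers)
cat ./active_js_hosts.txt | httpx -silent -follow-redirects -mc 200 -ct | grep -i javascript | awk '{print $1}' > ./resolved_js_hosts.txt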
Run through jslurk#
Using the download flag will save the JavaScript into a folder. To see findings appear in real time, I like this approach.
cat ./active_js_hosts.txt | ./jslurk --download ./downloaded_js | tee ./lurk_output.txt
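A quick sanity check that the download actually produced files:
# count the JS files saved by --download
find ./downloaded_js -type f | wc -l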
Run through jslurk and save to .json#
To create a .json report, specify an output file with the -o flag.
cat ./active_js_hosts.txt | ./jslurk --download ./downloaded_js -o ./report.json
Then check out the report.json for interesting findings.
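The exact keys depend on jslurk’s output format, but jq is handy for poking around the report:
# pretty-print the report and list its top-level keys (key names depend on jslurk)
jq 'keys' ./report.json
jq . ./report.json | less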
JSLurk#
JSLurk is my experimental JS scanning tool which looks for:
- DOM sinks, HTML templates and DOM manipulation
- Exposed URLs, tokens and secrets
- API endpoints
- JSON objects
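jslurk’s patterns won’t catch everything, so a rough manual grep over the downloaded files is a useful complement (the patterns below are illustrative and will be noisy):
# crude secret hunt across the downloaded JS; tune the regex to taste
grep -rhoiE 'api[_-]?key|secret|token' ./downloaded_js | sort | uniq -c | sort -rn | head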
Automation one-liner#
You may get blocked or banned when you attempt to crawl or scan, which can be annoying.
It’s probably better to download and crawl at the same time.
You can pipe the output of katana directly into jslurk like this…
cat ./domains.txt | katana -em js -jc -d 5 -c 50 -silent | ./jslurk --download ./downloaded_js > ./streaming_output.txt
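If the blocking mentioned above bites, katana’s rate limit and a lower concurrency make the crawl gentler (tune the numbers to the target):
# same one-liner, throttled: 10 concurrent requests, capped at 20 req/s
cat ./domains.txt | katana -em js -jc -d 5 -c 10 -rl 20 -silent | ./jslurk --download ./downloaded_js > ./streaming_output.txt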
Instead of waiting for everything to finish with the “-o” flag, I’m redirecting the output of jslurk to a file so I can “tail -f” it while the scan is running.
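In a second terminal:
# watch findings as they stream in
tail -f ./streaming_output.txt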
Thanks#
See you next time.