CURRYFINGER - SNI & Host header spoofing utility

Unix philosophy your way to finding the real host behind the CDN.

Travis

CURRYFINGER measures a vanilla request for a particular URL against requests directed to specific IP addresses with forced TLS SNI and HTTP Host headers. The tool computes a string edit distance between the responses, and emits matches according to a rough similarity metric threshold.

There are many guides that explain the process of finding servers that may actually host a CDN-fronted domain, and they all boil down to the same two steps: find candidate IP addresses through certificate and OSINT search engines like Censys and Shodan, then verify each candidate by requesting the site directly from that IP.


“But Travis,” you say, “we already have a tool for this, why do we need yet another one?”

Many guides point to an open source tool, christophetd/CloudFlair, that roughly automates this process.

Unfortunately, CloudFlair is a little slow, and it fails to identify true positives in many cases. Concretely, downloading CloudFlare’s IP lists on every run compounds already-slow Python warm-up times - and, maybe more importantly, the approach will not work on non-CloudFlare CDNs.

Why not just commit to an existing project? One: Python has its uses, but writing highly performant multi-threaded scanners is not one of them. Two: there is value in separating the concerns of identifying candidates and verifying them - it lets us feed the verifier with other, more egregious methods of finding candidate origin servers than commercial OSINT platforms.

CURRYFINGER demonstrates the kind of effective PoC you can pump out in a few hours using Golang. It has been battle tested against thousands of domains, across hundreds of thousands of requests, and run on dozens of servers. I’ll share that information in another post, but let’s just take a look at one example:

Head 2 Head & Demo

Here, we put CloudFlair up against CURRYFINGER in an attempt to identify the real server behind a popular “chat” website.

Left Pane - CloudFlair

We launch CloudFlair with -o chatbate.txt, kicking off its process of finding targets and carrying out similarity analysis.


We find targets by querying the Shodan REST API: curl "$SHO&query=ssl%3A\"\"" | jq ".matches|.[].ip_str" | tr -d "\"\t " | tee

Then we invoke CURRYFINGER on the results to find which IPs seem like the real origin servers behind the CDN: ./CURRYFINGER -file -show=false -url 2>/dev/null | tee res.txt

Then we drop the CloudFlare IP addresses from the results: grep ^match res.txt | grep -v 104.16 | cut -d " " -f 2

We finally examine the full response manually by forcing curl to resolve the domain to a specific IP: curl -vik --resolve "$DOMAIN:443:$IP" "https://$DOMAIN/"


So What

tl;dr: CloudFlair is still running after CURRYFINGER completes and we’ve verified the results. By the time CloudFlair finishes, it has failed to identify the correct server, even though Censys found the IP and CloudFlair checked it.

Operator Notes

  -file string
    	read ips from specified -file instead of stdin.
  -mbits int
    	Match in the first -mbits. (default 500)
  -perc int
    	Match at -perc[entage] similarity (default 50)
  -show
    	Show sample responses.
  -threads int
    	Number of -threads to use. (default 200)
  -timeout duration
    	Timeout the check. (default 30s)
  -ua string
    	Specify User Agent, otherwise we'll generate one.
  -url string
    	-url to check. (default "")

The CURRYFINGER help text.

-file string

You can specify IP addresses to test via stdin, or you can throw a filename here.

-mbits int

This is the number of bytes we’ll consider out of the replies from servers. 500 Bytes is a good default. You can bump this up if you get too many false positives.

-perc int

We divide the total examined bytes by the Levenshtein edit distance, and call that a ‘percentage’. Fun fact: the edit distance can exceed the length of the original sample. It works well enough as a measurement, and empirical results over 15,000 hits roughly show the 25th percentile at -perc 74. Our default of 50 is good.

-show bool

Setting -show=true will emit both measurement samples to stderr, which is fine for debugging, but you’ll want to set this to -show=false.

-threads int

How many simultaneous threads will be used to perform requests. I’ve used up to fifty-thousand concurrent threads over thousands of IPs. It works just fine.

-timeout duration

This timeout applies to the total connection to a target server. The default timeout is extremely conservative, values down to -timeout 1s are just fine. If you’re saturating your pipe with -threads 500000 then you’re going to want to increase timeout, or decrease threads. YMMV.

-ua string

We usually generate a random User Agent string for requests, but you can specify one here. I wouldn’t.

-url string

The https://-prefixed URL we’re going to grab for our tests.

Getting IP Addresses

If you have a free Shodan account, you have an API Key:

curl "$SHO&query=ssl%3A\"$DOMAIN\"" | jq ".matches|.[].ip_str" | tr -d "\"\t " | tee

Grab some IPs

You can also grab CIDR ranges for popular cloud hosting providers, and masscan -p443 them. I’ll explore this option in another article.


CURRYFINGER does full connects, and doesn’t know what your ulimits are. So, juice those up before a run: ulimit -n 60000. Yep.

VHOST check: lots of domains, just a few IPs

With a pile of IP addresses in a file and a pile of domains in targetDOMAINS.txt, you can quickly test for the presence of every domain on every IP using GNU parallel.

parallel -j 20 ./CURRYFINGER -url https://{} -threads 200 -show=false -timeout 3s -file :::: targetDOMAINS.txt 2>/dev/null | grep ^match | tee results.txt

Vanilla application of GNU Parallel

All together now: match subdomains

Pull subdomains for a target domain before running CURRYFINGER, and now you’re cooking with concentrated freedom. Of course, use whatever tools you want: amass, subbrute, Censys, Shodan, masscan, whatever.


Set up env vars.

Here’s what that looks like:

python -e ssl,ask,bing,google,yahoo,netcraft,dnsdumpster,virustotal,threatcrowd,passivedns -d $DOMAIN -o

##Fix newlines...
cat | tr -d "\r"  >> targetDOMAINS.txt
echo $DOMAIN>>targetDOMAINS.txt

Grab subdomains

curl "$SHO&query=ssl%3A\"$DOMAIN\"" | jq ".matches|.[].ip_str" | tr -d "\"\t " | tee

ulimit -n 60000

parallel -j 400 ./CURRYFINGER -url https://{} -threads 200 -show=false -timeout 3s -mbits 5000 -file :::: targetDOMAINS.txt 2>/dev/null | grep ^match | tee results.txt

Let it rip with 400 parallel instances of CURRYFINGER and match against more bytes.

Grab your own copy