Bug bounty tools from enumeration to reporting


Hello ethical hacker and welcome to the world of hacking and bug bounty hunting. Today, you will learn the bug bounty tools I use when I hunt for vulnerabilities, from reconnaissance, to subdomain enumeration, to finding your first security vulnerabilities. Every craftsman has a toolbox, and a bounty hunter is no different. However, it’s easy to get lost in the growing number of bug bounty tools the community publishes every day. That’s why one of the goals of this article is to provide you with the minimal set of tools that yields the maximum return.

Bug bounty tools for general reconnaissance

When you hunt for bugs, the first thing you will do is recon. This step is critical because if you don’t do it well, you will have a hard time down the road. But if you focus on it too much, you will waste your time. So you must keep a good balance, since you are trading your time when you hunt for bugs.

The goal of recon is to gather as much data as possible about the company you target.

Unlike a red team assignment, you won’t phish employees, since targeting them is out of scope. That’s why I like to focus on finding subdomains, IP ranges, URLs, API keys, etc. To do that, I use the following tools.

Amass as a bug bounty tool for general reconnaissance

OWASP Amass is a Swiss Army knife for recon. It performs open-source intelligence and active reconnaissance using various techniques. You can use it to map the external assets of your targets, assess your attack surface and craft your plan of attack. It’s a well-maintained project and you can install it in many ways. I prefer to run it on Docker. It also generates detailed graphs and interfaces with other tools such as Maltego, a famous open-source intelligence software.

Amass has helped many bug bounty hunters find new assets and report vulnerabilities. This tweet is proof of my claims.

Amass is one of the most useful bug bounty tools

GitHub: A search engine and a great bug bounty tool

You can use GitHub to collect a lot of data about your target. Most of the time, you will find sensitive information leaks, from API keys to passwords. This is possible because employees accidentally push code without proper verification. Unfortunately for the company, these commits occasionally contain hard-coded credentials which allow you to access deep services.
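To give you an idea, here are a few example GitHub search queries of the kind hunters run. The organization and domain names are placeholders, and these use the classic filename/extension qualifiers, so adapt them to your target and to GitHub's current search syntax:

```text
"target.com" password
org:target filename:.env
org:target filename:id_rsa
"api.target.com" api_key extension:json
```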

Some bug bounty hunters specialize in this area and find highly impactful bugs. Although many tools have been developed to enumerate repositories and find sensitive data, they don’t cover the whole search space. That’s why those hunters invest considerable time conducting manual research. One of the main wizards in this area is th3g3ntl3man, who gave an awesome talk about GitHub recon in the Bugcrowd University videos. I could have shared a screenshot of some queries, but I’m afraid it would disclose sensitive data.

Shodan is your bug bounty tool for public devices enumeration

While GitHub is the search engine for code repositories, Shodan specializes in internet-connected devices. In other words, if there is a public IP exposing a service on a certain port, it is available for Shodan to index. You’d be surprised by the number of exposed services there are online. From IP cameras with default credentials to industrial control systems, Shodan allows you to access all of them. There is a great DEF CON talk by Dan which is both scary and amusing at the same time. I recommend you watch it to see how dangerous exposing services to the public can be.

Shodan supports many operators as well. As a bug bounty hunter, you can use them to build your dorks and answer key questions about your target from a network perspective. You can get an idea of the top ports, the IP ranges, the ASNs, the country locations, etc. There is also an API which you can use to automate your recon process.
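As an illustration, here are a few queries built from common Shodan filters; the values are placeholders, so swap in your target’s hostnames, networks and ASNs:

```text
hostname:target.com port:22
asn:AS36647
net:93.184.216.0/24 port:3389
http.title:"admin" country:DE
```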

Shodan's Explore feature: It's scary to find such devices publicly available

Wayback machine

What goes online stays online, as long as it gets indexed. That’s because of projects such as the Wayback Machine, which indexes and stores copies of web pages, books, audio, videos, images, etc. The project exists to provide knowledge for everyone. This is useful from a reconnaissance perspective because you can dig into previous copies of a target looking for information disclosures, old URLs, removed files, etc.

However, the process of going through tons of indexed content is tedious. Luckily, there are many tools out there which automate the process. I use waybackurls and gau, which give somewhat the same results.

Google hacking database

I’m sure all of you jump to Google when you first want to learn more about the target you want to test. However, do you make use of Google dorks? These are queries that use Google search operators to return precise results, such as only the PDF files of a certain website, or the administration panels of a certain technology across a range of subdomains, or whatever else you might need. The only thing that limits you is your imagination.
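For instance, the examples above would translate into dorks like these, where the domain is a placeholder for your target:

```text
site:domain.com filetype:pdf
site:*.domain.com intitle:"admin login"
site:domain.com inurl:login
```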

Even if you lack imagination, people have been sharing their Google dorks for ages. You can find them in the Google Hacking Database (GHDB) for inspiration. For example, if you find that the target uses a certain technology, you can look it up in that database to see previous dorks which might be helpful in your recon process.

You should add the GHDB to your bug bounty tools

These are some resources which can serve as bug bounty tools when you perform recon. However, there are certainly many other resources and one article simply cannot include them all.

Bug bounty tools for subdomain enumeration

So far, we have seen how you can perform general reconnaissance. But the hacking process involves enumeration in all stages. And one of the first stages is subdomain enumeration, which aims at finding as many subdomains as possible. The community has developed many bug bounty tools to assist you during this exercise. 

Assetfinder

I’ve already mentioned assetfinder in my bug bounty methodology. It uses multiple sources like certificate transparency, Facebook, Virustotal, etc. It works out of the box, but if you want more results, you can configure the API keys for the services which need one.

Provided that you have installed and configured Go, the command is simple, you just have to pipe your target to the tool.

echo domain.com | assetfinder --subs-only

Below is a screenshot demonstrating part of the output of assetfinder against tesla.com

assetfinder output for subdomain enumeration

OWASP Amass

We’ve talked about OWASP Amass at the beginning of this article as a general bug bounty tool for reconnaissance. Well, you can use it for subdomain enumeration as well. It supports passive and active enumeration, performs DNS resolution and can also brute-force subdomains based on the wordlist of your choice. The user guide is detailed and gives example commands that you can run. The simplest and quickest subdomain enumeration command would be:

amass enum -d domain.com -passive

Google dorks

You can use Google dorks to find subdomains as well. To do that, you can use the site operator. An example would be site:domain.com. Once you get the results, you can eliminate known subdomains one by one using negative search. For example, suppose you found sub.domain.com; you can eliminate that result using site:domain.com -site:sub.domain.com. Repeat this process until you no longer get any results from Google.
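The repetitive part is rebuilding the query after each discovery. As a minimal sketch, you can generate the next dork from the subdomains found so far; found.txt is a hypothetical file with one discovered subdomain per line:

```shell
# Build the next Google dork by excluding every subdomain already found
domain="domain.com"
# found.txt contains one discovered subdomain per line, e.g. sub.domain.com
exclusions=$(sed 's/^/-site:/' found.txt | tr '\n' ' ')
echo "site:$domain $exclusions"
```

Paste the printed query into Google, add any new subdomains to found.txt, and repeat until no results remain.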

As you may have noticed, the process is tedious and takes some time. Luckily, there are tools such as theHarvester and sublist3r which you can use for such queries. However, bear in mind that they can get rate limited, which might return only a subset of the existing subdomains.

Waybackurls and gau

We have seen how digging into indexed content is important during the general reconnaissance phase. Well, it is also equally important when it comes to subdomain enumeration. I find it useful to run waybackurls and gau to grab potential subdomains which might go under the radar of amass. It’s always useful to combine multiple tools to get the most exhaustive results. 

The commands are simple and easy. For either waybackurls or gau, you simply pipe your target domain to them. I like to use unfurl as well to extract the domain part from the result.

echo domain.com | waybackurls | unfurl domains
echo domain.com | gau | unfurl domains

This is part of the output of waybackurls against tesla.com

waybackurls and unfurl bug bounty tools can work together when you perform subdomain enumeration

Altdns

When it comes to enumeration, you can boost your results using brute force. To do that, I usually combine keywords related to my target. Using Altdns, I quickly generate the kinds of permutations that companies usually use. For example, suppose the company’s main domain is XYZ. Well, the wordlist would contain subdomains like staging-XYZ, XYZ-dev and the like.
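To make the idea concrete, here is a minimal sketch of the kind of permutations such a wordlist contains; altdns generates far more combinations, and the domain and keywords below are just examples:

```shell
# Generate a few common permutations for each keyword (illustrative only)
domain="xyz.com"
for word in staging dev admin; do
  echo "$word.$domain"       # e.g. staging.xyz.com
  echo "$word-app.$domain"   # e.g. staging-app.xyz.com
done
```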

The command is straightforward, you run the tool while providing the domains file and the words you want to use for permutations.

altdns -i domain.txt -o output.txt -w words.txt

Massdns

After generating a list of potential subdomains, I use massdns to resolve the resulting list for valid and existing subdomains. A word of warning though, this process can yield false positives, depending on the quality of the DNS resolvers you are using. You can find more about this problem in this GitHub issue.

Rather than using the resolvers.txt file provided by massdns, you can get a list available on public-dns.info. Then, the command is simple, just use the massdns command with the list of resolvers and the altdns wordlist you have generated before:

massdns -r resolvers_file -t A altdns_wordlist -w results.txt

Bug bounty tools for port scanning

When you have a list of subdomains from the subdomain enumeration phase, you can start looking for running services. The technical word for that is port scanning. There are many tools which can assist you during this phase. 

Nmap

When you have a small list of subdomains, let’s say below 50, you can use Nmap to perform port scanning. It allows you to not only enumerate the running services but also fingerprint the server you are targeting. Besides, Nmap has a set of scripts which you can use to scan those services. For example, you can perform directory bruteforcing on HTTP services, banner grabbing on SSH services, etc. The following command takes a list of subdomains as input and probes all port numbers (from 1 to 65535) while scanning the resulting services with their respective Nmap scripts. Finally, it saves the results into a file named scan.

nmap -p- -sC -oN scan -iL subdomains.txt

Masscan

Once you start working with big lists of subdomains, Nmap will take forever. That’s why I prefer to run masscan instead. It’s blazingly fast, but you need to have enough network bandwidth. It’s capable of scanning huge IP ranges. From the Readme file in the Github repository:

[…] the program is really designed with the entire Internet in mind.

From masscan’s documentation

However, it only accepts IP addresses, no subdomains. Therefore, you have to resolve the IP addresses before running masscan. The following bash one-liner can do just that:

cat subdomains.txt | xargs -n1 host | grep "has address" | cut -d" " -f4 | sort -u > ips.txt

Then, you can run masscan. The following command takes the ips.txt file as input, it probes all port numbers, it uses a rate of 10k packets per second and it outputs the results into the file scan.txt

masscan -iL ips.txt -p0-65535 --rate=10000 -oL scan.txt

Shodan

Port scanning is a loud action from a network perspective. It triggers Intrusion Detection Systems very easily. If you want to avoid detection, you can leverage Shodan to see what ports are open and even gather information about the services that are running. That’s because Shodan continuously performs port scanning for you. You can simply type the IP or range of IP addresses you want, and it will give you the results. I recommend you read about Shodan’s operators, which are a must. For example, the following screenshot shows the top services running across AS36647, an ASN owned by Yahoo.

Bug bounty tools for Directory Bruteforcing

Directory bruteforcing allows you to discover hidden directories which are referenced neither in JavaScript files nor in Internet archives such as the Wayback Machine. There are many tools which perform such a task.   

Ffuf

Currently, I’m using ffuf to perform directory bruteforcing. It is fast, reliable and capable of more than just looking for directories. The Readme file explains all the capabilities, but let’s focus on directories for now. 

In its simplest form, ffuf takes a wordlist and sends HTTP requests to your target application. The following command illustrates that:

ffuf -w wordlist.txt -u https://url.of.the.application/FUZZ

The term FUZZ is a special placeholder that ffuf uses to insert the elements of your wordlist.

Burp Suite Intruder

When I am analyzing a feature using BurpSuite, I find it practical to discover some endpoints without having to leave Burp. For that, I use the Intruder. The Community Edition offers only one thread, which is not useful in my opinion. However, the Pro version allows you to use as many threads as your machine can handle. The Intruder documentation from the authors of BurpSuite gives you all you need to start using this awesome tool.

Other directory bruteforcing tools

There are so many other tools that perform directory bruteforcing. Many bug bounty hunters use Gobuster, dirsearch, wfuzz or similar ones. You can experiment with all of them and choose the one that suits your goals and taste.

Bug bounty tools for Web application testing

Ok, now that you have a list of web applications, it’s time to focus on one of them and hunt for those bugs! However, without the proper tools, you won’t find any. Here are the main tools I use, and so should you.

Burp Suite

This is the de facto standard when it comes to pentesting a web application. It is a suite of tools which assists you during your hacking. For example, it allows you to see all the HTTP requests and WebSocket messages thanks to the Proxy tool. You can play with them in the Repeater tool to find security vulnerabilities. If you want to brute-force a parameter, a header or anything else in a request, the Intruder is your friend. BurpSuite supports extensions as well. You can code your own, as I did with GWTab, or download many of them from the BApp Store using the Extender tool.

You can start using the Community Edition, which is free. If you want to benefit from the Scanner tool and some extensions, you can buy the Pro version and get a yearly license. You can earn a three-month license if you have a positive signal and 500 reputation points on HackerOne.

Learn how to download it, install it and configure it with this video I made just for you.

Zed Attack Proxy

ZAP is the free and open-source alternative to BurpSuite. It’s an OWASP flagship project that can do almost everything Burp does, plus some more. For example, it offers the cool Heads Up Display (HUD) which allows you to use ZAP without leaving your web browser. It also comes in many packages, including Docker, which makes it convenient for automated testing. Unlike BurpSuite Community Edition, ZAP allows you to run active scans.

ZAP supports extensions as well, which you can download and update from the Marketplace included in its user interface. You can install it and configure it with this video. Besides, there is a great video on how to use ZAP, including how to use the HUD.


When you are doing bug bounty hunting or penetration testing, you will definitely use some of the tools I have just listed. If you are not familiar with them, take some time to learn how to use them and you will thank me later. However, while using the proper tools can play a key role in finding great bugs, it’s worth mentioning that they will never be a substitute for your brain. They exist to assist you, not replace you. That’s why it’s important to invest in your knowledge.

These are the tools I like to use, from reconnaissance and subdomain enumeration to web application testing. I hope you found this content helpful. Don’t forget to like, subscribe and share this content because it helps me continue sharing such content.

As usual, stay curious, keep learning, and go find some bugs!

Hacking a Google Web Toolkit application


Hello ethical hackers and bug bounty hunters! I’ve recently conducted a successful penetration test against a web application built with Google Web Toolkit, and I want to share with you the process I followed and the bugs I found. Hopefully, this episode will inspire you to try harder during your own bug bounty hunting and penetration testing journey.

I will briefly explain what Google Web Toolkit is and what research has already been done around it. Then, I will explain why and how I built a Burp extension to help me during the penetration testing process. Finally, I will share with you some vulnerabilities I found, especially a cool one which required further effort. So stay with me as we smash this web application into pieces!

A brief introduction of Google Web Toolkit

Throughout this episode, I will use Google Web Toolkit and GWT interchangeably. It is pronounced GWiT according to the official website.

What is Google Web Toolkit?

Throughout my career, I’ve encountered GWT applications two times only. It’s a relatively old technology, but it’s still used by some companies. According to the official GWT website, Google Web Toolkit is

[…] a development toolkit for building and optimizing complex browser-based applications. Its goal is to enable productive development of high-performance web applications without the developer having to be an expert in browser quirks, XMLHttpRequest, and JavaScript.

From the official GWT website

In other words, GWT allows developers to write web applications in Java, without having to worry about client-side technologies. In fact, it cross-compiles Java code into JavaScript ready to be used across browsers.

What do Google Web Toolkit requests look like?

It’s easy to tell when you are in front of a GWT application. Typically, you will mostly see POST requests in your web proxy, with a series of strings separated by pipes. It seems intimidating at first, but once you understand how the POST data is structured, it’s fairly easy to spot what it does with a bit of practice. The following is the kind of data you will encounter in a typical GWT web application.

GWT body example
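Before parsing it properly, you can already make such a body readable by splitting it on the pipe character. The payload below is a simplified, made-up GWT-RPC style body for illustration, not one captured from a real application:

```shell
# A simplified, made-up GWT-RPC style body (field values are hypothetical)
body='7|0|6|https://app.example.com/|ABCD1234|com.example.OfferService|deleteOffer|java.lang.String|offer-42|1|2|3|4|1|5|6|'
# Split on pipes and number the fields to spot the string table quickly
printf '%s' "$body" | tr '|' '\n' | nl -ba
```

The numbered output makes it easy to see which fields are string values, such as the method name and the user-supplied identifier.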

Understanding the Google Web Toolkit body

I’ve built my knowledge upon this awesome article which explains the previous work that has been done, the structure of the GWT body, and how you can enumerate the endpoints of such a technology. Although it doesn’t completely apply to recent versions, I still recommend you take some time to read it. However, if you don’t want to manually analyze the requests, it’s possible to parse the GWT requests and pinpoint exactly where the user input is located thanks to a parser available on GitHub. As the following screenshot shows, the parser takes the GWT request body and returns the user input marked with the same highlight that BurpSuite uses in the Intruder tool.

GWT parser script highlighting user input similar to Burp Intruder

Even with this, it’s impractical to manually copy the request body from BurpSuite and run the parser for each and every request. I thought it would be great if BurpSuite automatically highlighted the user input whenever it encountered a GWT request.

Writing my own Burp Extension for Google Web Toolkit

I have always wanted to write a BurpSuite extension, and this was the best opportunity for me to do so. In fact, I didn’t find any publicly available extension that would successfully parse this kind of request. For example, GWT Insertion Points is an extension which doesn’t seem to work, at least for me; it hasn’t been updated for 3 years. Moreover, ZAProxy supports scanning GWT requests, but it doesn’t support them during manual security testing.

The birth of GWTab

With the penetration testing schedule I had, I planned for one day to write the extension. Therefore, I had to keep it simple. The goal was to show a new tab in BurpSuite containing the user input for every GWT request. That way, I can significantly increase my efficiency by focusing only on the marked strings without having to manually run the parsing command. Hence, GWTab was born.

The process of writing GWTab

Writing GWTab involved three main actions:

  • Show a new tab in Burp: I used the custom editor tab template provided by BurpSuite, which gave me a quick start and let me focus on only the GWT feature I wanted to develop. 
  • Parse the GWT body: I used the parser I mentioned earlier. As I said, it can highlight the user input with the Burp Intruder’s marker, which is useful if I want to perform some automated fuzzing later, or even active scanning based on the highlighted input.
  • Extensive reading: I had to read parts of the Burp Extender API in order to properly understand the signature of the functions, the interfaces to use and what to implement.

After a lot of trial and error, I finally got it working! I made it available for everyone on GitHub. The following screenshot shows the new GWT tab containing the user input that I can focus on.

GWT body shown directly in a Burp tab thanks to the GWTab extension

Limitations of GWTab

Some requests containing long values make the GWT parser crash. Therefore, GWTab will sometimes show the message “Parser Failed” whenever that happens. Unfortunately, I couldn’t invest more time to fix this issue on the parser.

Now that I could spot user input in most GWT requests on the fly, I was ready to start hunting for those juicy bugs!

Low hanging fruits

I found many low hanging vulnerabilities during this assessment because developers simply didn’t bother implementing any sort of proper access control.

Security through obscurity is a flaw by design

Because the GWT body seems complex, developers assume hackers won’t be able to understand and exploit it. I guess they ignore the very definition of a hacker. If you are a developer reading this, just know that curiosity and challenge are key drivers for a hacker. Besides, be aware that security through obscurity is a fundamentally false protection. It has only pushed hackers to dig even deeper. 

This application was no different. In fact, Broken Access Control and IDOR vulnerabilities were everywhere.

IDOR everywhere

Because of the false assumption I mentioned earlier, almost all endpoints were vulnerable to IDOR. To make things worse, most requests used increasing numerical identifiers. Besides, it was easy to spot such IDs without even using GWTab, since there was only one identifier per request. All I needed was a trained eye, which came naturally with practice. These vulnerable endpoints allowed me to access, edit and even delete resources of other accounts.

However, I want to share details about one bug which required more effort to fully exploit. I chose this one because I want to demonstrate why impact is critical and what techniques you can use to increase it.

Beyond trivial IDOR vulnerabilities

This application is a service exchange platform which allows its clients to offer and consume services. Therefore, if an attacker can delete arbitrary offers, it means that the whole purpose of the application is compromised. Guess what! I found just how to achieve that!

Vulnerability detection

Detecting this vulnerability was easy. In fact, I followed the same approach I mentioned in the video tutorial about Broken Access Control. In a nutshell, I used two separate accounts. Using the first account, I created an offer and triggered the request to delete it. Before deleting it though, I captured the request using BurpSuite and sent it to the Repeater, then dropped the request to preserve the offer. From there, I took the JSON Web Token of the attacking user and inserted it into the vulnerable request. When I sent it to the server, the victim’s offer got deleted.
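As a side note, decoding the JWT payload locally helps confirm which account a token belongs to before swapping it into a request. The token below is a made-up, unsigned example, not one from the application:

```shell
# Made-up unsigned JWT: header {"alg":"none"}, payload {"sub":"attacker"}
jwt='eyJhbGciOiJub25lIn0.eyJzdWIiOiJhdHRhY2tlciJ9.'
# Extract the middle segment (the payload) and base64url-decode it
payload=$(printf '%s' "$jwt" | cut -d. -f2)
printf '%s' "$payload" | tr '_-' '/+' | base64 -d
echo
```

Note that real JWT segments may need one or two trailing = padding characters appended before base64 -d accepts them.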

Impact analysis

Looking at the POST data revealed a huge payload containing multiple objects, IDs and string values. As a bug bounty hunter, you would quickly report this bug, right? Well, the impact is still not clear. In fact, I had no idea how an attacker could realistically build such POST data. If you have read the episode about writing a good report, you know that impact plays a huge role in the bug bounty game. To prove the impact, I had to dig deeper into the application.

Exploiting the vulnerability

I first assumed that the server might delete the offer whose ID is present in the request. Therefore, I tried injecting the victim’s offer ID in all the potential inputs present in the POST data. I had to do it by hand since the GWTab extension failed at parsing the POST data. However, after many tries, it became obvious that this was not the right approach because nothing was deleted.

I didn’t want to give up so quickly. I knew that the application allowed users to search for offers of other users. What if I could grab the entire offer object from the results? Unfortunately, this idea failed since both objects didn’t fully match.

It was clear that I had to satisfy two requirements to successfully exploit this vulnerability.

  • First, I needed a request which uses the same offer object structure.
  • Second, this dream request should be accessible to the attacker.

Based on these two requirements, I started looking through the application features for all the actions a user can perform on offers published by other users. After some time, I found that the user can like and unlike an offer. Lucky for me, the unlike operation uses the exact same offer object as the one used in the offer deletion request! I couldn’t believe my eyes, I was really lucky!

From there, the attack scenario became clear:

  • An attacker browses the offers list, which is public.
  • He/she likes the victim’s offer, then unlikes it.
  • He/she captures the offer object and injects it into the vulnerable request.
  • The victim’s offer gets deleted from the database.

Writing the report

Now that the impact is clear, you can finally and safely report the bug without worrying about rejection. Besides, you might even reduce the probability of getting a duplicate, since your vulnerability requires more effort to exploit, and not all bug bounty hunters are willing to take the extra steps. Moreover, even if the team accepted a report with a less convincing impact, the reward for a clearly demonstrated impact will certainly be much higher.


In the offensive security industry, whether you are a full-time penetration tester or a seasoned bug bounty hunter, curiosity and challenge are the fuel which will push your limits. In my case, I always wanted to write a Burp extension to solve a problem, and this application presented the right opportunity for me to challenge myself. Besides, I always seek ways to achieve the highest impact not only to get higher bounties but to give a better return on investment to my clients as well.

Later I found that the developers were already aware of this issue. However, because of the complexity of the POST data, they assumed that nobody would figure out how to successfully exploit the vulnerability. Thanks to this full exploit, they’ve learnt that they should never rely on obscurity…the hard way!

I hope you found this content useful. If you did, then support me by commenting, sharing and subscribing. Until next time, stay curious, keep learning and go find some bugs.