CrawlBox – Easy Way to Brute-force Web Directory

Directory brute-forcing is used to find hidden and often forgotten directories on a site that an attacker may try to compromise. Various automated tools and scripts take a custom wordlist, request each candidate path, and report the status of every directory they probe. CrawlBox is a Python-based, command-line tool designed to brute-force directories and files on web servers; in other words, it is a web path scanner for a web application or target domain. CrawlBox also allows hackers to use their own custom brute-force wordlists rather than the default one. The CrawlBox tool offers good performance, speed, accuracy, and relevant output.
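
Conceptually, a directory brute-forcer just joins each word from a wordlist onto the target URL and records the HTTP status code of the response. The snippet below is a minimal stand-alone sketch of that idea, not CrawlBox's actual code; it assumes the third-party requests library is installed and uses a tiny hardcoded wordlist purely for illustration.

import requests
from urllib.parse import urljoin

# Hypothetical target and a tiny illustrative wordlist;
# a real scan would load thousands of entries from a file.
base_url = "https://geeksforgeeks.org/"
wordlist = ["admin", "images", "uploads", "backup"]

for word in wordlist:
    url = urljoin(base_url, word)
    try:
        # A HEAD request is enough to learn the status code
        # without downloading the response body.
        response = requests.head(url, allow_redirects=False, timeout=5)
        print(response.status_code, url)
    except requests.RequestException as exc:
        print("error", url, exc)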

Note: Make sure you have Python installed on your system, as this is a Python-based tool.

Click to check the Installation process: Python Installation Steps on Linux

Features of CrawlBox Tool

  1. CrawlBox tool supports delays between requests.
  2. CrawlBox tool allows users to use custom wordlists.
  3. CrawlBox tool is open source and free to use.
  4. CrawlBox tool is written in the Python language.
  5. CrawlBox tool is easy and simple to use.
  6. CrawlBox tool supports choosing the HTTP method used for requests.
  7. CrawlBox tool supports using a proxy to connect to the target URL (a conceptual sketch of the method and proxy options follows this list).
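
To illustrate the last two features, the hedged sketch below shows how a scanner might send its probes with a chosen HTTP method and route them through a proxy using the requests library. The proxy address, method, and path are placeholders for illustration, not CrawlBox defaults.

import requests

# Placeholder values for illustration only.
target = "https://geeksforgeeks.org/admin"
proxies = {
    "http": "http://127.0.0.1:8080",   # e.g. a local intercepting proxy such as Burp or ZAP
    "https": "http://127.0.0.1:8080",
}

# requests.request() takes the HTTP method as a string, which is how a
# scanner could support GET, HEAD, POST, and so on at the user's choice.
response = requests.request("HEAD", target, proxies=proxies,
                            allow_redirects=False, timeout=5)
print(response.status_code, target)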

Installation of CrawlBox Tool on Kali Linux OS

Step 1: Check whether the Python environment is set up or not by using the following command.

python3

Step 2: Open up your Kali Linux terminal and move to Desktop using the following command.

cd Desktop

Step 3: You are on the Desktop now, so create a new directory called CrawlBox using the following command. In this directory, we will complete the installation of the CrawlBox tool.

mkdir CrawlBox 

Step 4: Now switch to the CrawlBox directory using the following command.

cd CrawlBox 

Step 5: Now you have to install the tool by cloning it from GitHub using the following command.

sudo git clone https://github.com/abaykan/crawlbox.git

Step 6: The tool has been downloaded successfully into the CrawlBox directory. Now list the contents of the directory by using the below command.

ls

Step 7: You can observe that a new crawlbox directory was created while cloning the tool. Now move into that directory using the below command:

cd crawlbox

Step 8: Once again, list the contents of the tool's directory using the below command.

ls

Step 9: Download the packages required to run the tool using the following command.

pip3 install -r requirements.txt

Step 10: Now we are done with the installation. Use the below command to view the help index of the tool, which gives a better understanding of its options.

python3 crawlbox.py -h

Working with CrawlBox Tool on Kali Linux OS

Example 1: Simple Crawl

python3 crawlbox.py -u https://geeksforgeeks.org

1. In this example, we will perform a simple crawl, i.e., simple brute-forcing of directories with the default wordlist. We have specified our target using the -u flag (https://geeksforgeeks.org).

2. Once the scan starts, CrawlBox detects directories and prints them along with their response status.

3. You can then check and visit the discovered links; paths returning 200 or 301 status codes can hold interesting information (a minimal sketch of this status-code check follows this list).
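
To make the 200/301 point concrete, the sketch below probes a few hypothetical paths and flags only the ones whose status code suggests they exist (200) or redirect permanently (301). It is a generic illustration written against the requests library, not CrawlBox output.

import requests
from urllib.parse import urljoin

base_url = "https://geeksforgeeks.org/"
candidates = ["about", "careers", "doesnotexist123"]  # hypothetical paths
interesting = {200, 301}  # found, or permanently redirected

for path in candidates:
    url = urljoin(base_url, path)
    status = requests.head(url, allow_redirects=False, timeout=5).status_code
    marker = "[+]" if status in interesting else "[-]"
    print(marker, status, url)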

Example 2: Using Custom Wordlists

python3 crawlbox.py -u https://geeksforgeeks.org -w /usr/share/wordlists/dirb/common.txt

1. In this example, we will use a custom wordlist for brute-forcing, since CrawlBox supports custom wordlists. Here we are brute-forcing directories from the /usr/share/wordlists/dirb/common.txt file.

2. After running the above command, the crawl starts against the target.

3. The results of the scan are then displayed along with the status codes and response information (a sketch of how such a wordlist can be read and used follows this list).
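
The sketch below shows, in plain Python, how a custom wordlist such as /usr/share/wordlists/dirb/common.txt could be loaded and fed into a probing loop. This is an assumption about how wordlist-driven scanners work in general, not CrawlBox's internals; the slice limiting the loop to 50 entries is only there to keep the illustration short.

import requests
from urllib.parse import urljoin

base_url = "https://geeksforgeeks.org/"
wordlist_path = "/usr/share/wordlists/dirb/common.txt"  # dirb's stock wordlist on Kali

with open(wordlist_path, encoding="utf-8", errors="ignore") as handle:
    # Skip blank lines and comment lines in the wordlist.
    words = [line.strip() for line in handle
             if line.strip() and not line.startswith("#")]

for word in words[:50]:  # limited here purely for illustration
    url = urljoin(base_url, word)
    status = requests.head(url, allow_redirects=False, timeout=5).status_code
    print(status, url)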

Example 3: Adding Delay between requests

python3 crawlbox.py -u https://geeksforgeeks.org -d 3

1. In this example, we specify the time delay between two consecutive requests. The -d flag is used with a value of 3, which adds a 3-second delay between requests (a generic sketch of how such a delay works is shown below).
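
A delay between requests usually amounts to a sleep inside the probing loop, which keeps the scan from hammering the target. The sketch below assumes a 3-second pause to mirror the -d 3 option above; the paths and behaviour are illustrative, not taken from CrawlBox.

import time
import requests
from urllib.parse import urljoin

base_url = "https://geeksforgeeks.org/"
wordlist = ["admin", "login", "uploads"]  # illustrative paths
delay_seconds = 3  # mirrors the "-d 3" option in the command above

for word in wordlist:
    url = urljoin(base_url, word)
    status = requests.head(url, allow_redirects=False, timeout=5).status_code
    print(status, url)
    time.sleep(delay_seconds)  # pause before sending the next request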

Example 4: Printing Tool’s Version Number

python3 crawlbox.py --version

1. In this example, we are printing the version of the CrawlBox tool. The --version flag is used.


