A set of decent tools is essential for being efficient at anything. Most tools are limited and have a well-defined role and usage, but when it comes to software tools, the numbers are large and the boundaries between usage domains keep blurring. A newcomer can easily get confused by all the available options and end up with tools that do not fully serve the purpose. A guide that indexes tools by their intended usage, with recommendations for varying expertise levels, can be very helpful.
When it comes to web application penetration testing, or web application hacking, the essential requirements are small despite the large number of available tools. There are also tools so well known for their usability, and so popular and easy to use, that almost anyone can use them without seeking expert opinion. This article lists the types of tools required for a web application pentest, along with popular tools of each type.
1. Intercepting Proxy:
An intercepting proxy sits between your web browser/client and the web server. "Client" here also covers applications other than browsers that communicate over the web. The client sends its requests to the intercepting proxy, which then forwards them to the web server. With an intercepting proxy, you can view and modify requests and responses while they are in transit. You can also decide whether to forward a packet or to drop it. Some well-known intercepting proxies are:
- Paros proxy
- Burp proxy of BurpSuite
- OWASP ZAP Proxy
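The view/modify/forward cycle described above can be sketched in a few lines. This is an illustrative model, not code from any of the tools listed; `intercept`, `swap_agent`, and the stand-in `forward` callback are hypothetical names.

```python
# Minimal sketch of what an intercepting proxy does to a request in
# transit: parse the raw HTTP request, let the tester modify it, then
# decide whether to forward or drop it.

def intercept(raw_request: bytes, modify, forward):
    """Apply a tester-supplied modification, then forward (or drop)."""
    head, _, body = raw_request.partition(b"\r\n\r\n")
    lines = head.decode("iso-8859-1").split("\r\n")
    request_line = lines[0]
    headers = dict(line.split(": ", 1) for line in lines[1:])
    request_line, headers, body = modify(request_line, headers, body)
    if request_line is None:          # tester chose to drop the packet
        return None
    rebuilt = "\r\n".join(
        [request_line] + [f"{k}: {v}" for k, v in headers.items()]
    ).encode("iso-8859-1") + b"\r\n\r\n" + body
    return forward(rebuilt)

# Example modification: swap the User-Agent header before forwarding.
def swap_agent(request_line, headers, body):
    headers["User-Agent"] = "PentestBrowser/1.0"
    return request_line, headers, body

raw = (b"GET /login HTTP/1.1\r\nHost: example.com\r\n"
       b"User-Agent: Firefox\r\n\r\n")
sent = intercept(raw, swap_agent, forward=lambda req: req)
```

A real proxy does the same parse-edit-rebuild dance on live sockets, and applies it to responses as well as requests.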
2. Fuzzer:
A fuzzer is a tool used to test brute-forcible entry points. A fuzzer takes a segment of a request as input and runs a set of values (called payloads) against it, substituting each payload one by one into the selected segment, sending the request, and displaying the response. The payload list can be a file, a set of manually entered values, a single value, or a brute-force mechanism with pre-defined parameters.
Fuzzers are used for:
- Brute-forcing weak logins
- Finding/confirming blind XSS
- Finding/confirming blind SQL injection
- Finding/confirming IDOR vulnerabilities
Some popular fuzzers are:
- OWASP ZAP fuzzer
- Burp Intruder of BurpSuite
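The substitute-send-observe loop is simple enough to sketch. This is a toy model, not any real tool's internals: the `FUZZ` marker, the `fuzz` function, and the fake `send` callback are all illustrative.

```python
# Minimal sketch of a fuzzer loop: substitute each payload into a
# marked position of a request template and record the responses.
# send() is a stand-in for real network code.

REQUEST_TEMPLATE = "GET /item?id=FUZZ HTTP/1.1\r\nHost: example.com"

def fuzz(template, payloads, send, marker="FUZZ"):
    results = {}
    for payload in payloads:
        request = template.replace(marker, payload)
        results[payload] = send(request)
    return results

# Fake server: only id=7 exists, and a stray quote triggers an error.
def send(request):
    if "id=7 " in request:
        return 200
    if "'" in request:
        return 500          # error page hinting at SQL injection
    return 404

responses = fuzz(REQUEST_TEMPLATE, ["1", "7", "'"], send)
```

Anomalies in the recorded responses (status code, length, timing) are what point the tester at a valid login, a blind injection, or an IDOR.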
3. Web Spider/Crawler:
A web spider or crawler explores a web application through hyperlinks and other content in responses to discover new endpoints. Some of these endpoints may be meant to stay hidden and carry almost no security measures. Discovering a large number of endpoints increases the attack surface; it also helps find leftover, less secure, and old endpoints that are easy to exploit. There are manual techniques for finding endpoints, such as viewing the sitemap or sitemap.xml file, viewing the robots.txt file, and using search engine dorks. Automated spiders find connected endpoints (endpoints referenced from endpoints already discovered) and combine those results with automated runs of the manual techniques. Popular spidering tools are:
- Burp Spider of BurpSuite
- Web Spider of OWASP ZAP
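The follow-links-breadth-first idea can be sketched as below. The `PAGES` dictionary stands in for a real site; a real spider would fetch each URL over HTTP and also parse forms, scripts, and sitemaps, not just anchor tags.

```python
# Minimal sketch of a spider: extract hrefs from each fetched page and
# breadth-first visit every endpoint not seen before.
from collections import deque
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href"]

PAGES = {
    "/": '<a href="/login">Login</a><a href="/old/admin">admin</a>',
    "/login": '<a href="/">Home</a>',
    "/old/admin": "",   # forgotten endpoint, found only by crawling
}

def spider(start, fetch):
    seen, queue = {start}, deque([start])
    while queue:
        parser = LinkExtractor()
        parser.feed(fetch(queue.popleft()))
        for link in parser.links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return seen

endpoints = spider("/", fetch=lambda url: PAGES.get(url, ""))
```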
4. Request Repeater:
A request repeater lets you probe how the response changes when you change certain parameters of a request. The task is the same as a fuzzer's, but with a repeater the changes are made manually. The repeater is used for:
- Payload construction for XSS and injection attacks.
- Checking for presence of CSRF.
- Checking for server-side validation of user-supplied data.
- Understanding how the server processes a slightly malformed input or an anomaly in the expected input pattern.
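The workflow is: capture one request, hand-edit a field, resend, and compare responses. A toy illustration of the server-side-validation check from the list above, with a hypothetical `send` stand-in for the network:

```python
# Minimal sketch of the repeater workflow: resend a captured request
# with one manually edited parameter and compare the responses.

def resend(request, send):
    return send(request)

# Fake endpoint that trusts the client-supplied "role" field.
def send(request):
    return "admin panel" if "role=admin" in request else "user page"

captured = "POST /profile HTTP/1.1\r\n\r\nname=bob&role=user"
baseline = resend(captured, send)
tampered = resend(captured.replace("role=user", "role=admin"), send)
# Differing responses reveal missing server-side validation of "role".
```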
5. Entropy analyzer:
This tool analyzes the entropy, or randomness, of tokens generated by a web server's code. Such tokens are generally used for authentication in sensitive operations; session cookies and anti-CSRF tokens are examples. Ideally, these tokens must be generated in a fully random manner, so that the probability of each possible character appearing at a given position is uniformly distributed. This should hold both bitwise and character-wise. An entropy analyzer tests this hypothesis: initially, the tokens are assumed to be random, and then they are tested for certain statistical characteristics. A significance level is defined as the probability below which the hypothesis that the tokens are random is rejected. This tool can be used to find weak tokens and enumerate their construction.
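One simple character-wise measurement behind such analysis is per-position Shannon entropy over a sample of tokens. This sketch is a simplification (real analyzers such as Burp Sequencer run a battery of statistical tests, not just this one), and the token sample is made up:

```python
# Per-position entropy over a token sample: for each character position,
# compute Shannon entropy of the characters seen there. An entropy of
# 0.0 means the position is constant, i.e. fully predictable.
import math
from collections import Counter

def position_entropy(tokens):
    entropies = []
    for position in range(len(tokens[0])):
        counts = Counter(t[position] for t in tokens)
        total = len(tokens)
        entropies.append(-sum(
            (c / total) * math.log2(c / total) for c in counts.values()
        ))
    return entropies

# A weak scheme: the first two characters are a constant prefix and
# only the last two vary -- exactly what this analysis would flag.
tokens = ["AB11", "AB47", "AB93", "AB28"]
ent = position_entropy(tokens)
```

With this sample, `ent[0]` and `ent[1]` come out as 0.0 (constant prefix) while the later positions are positive, pinpointing which parts of the token carry no randomness.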