paramspider

Mining URLs from dark corners of Web Archives for bug hunting/fuzzing/further probing

📖 About · 🏗️ Installation · ⛏️ Usage · 🚀 Examples · 🤝 Contributing


About

paramspider allows you to fetch URLs related to any domain, or a list of domains, from Wayback archives. It filters out "boring" URLs (static assets and the like), allowing you to focus on the ones that matter most.
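To make the filtering idea concrete, here is a minimal sketch of how "boring" URLs might be separated from parameterised ones. This is an illustration only, not paramspider's actual implementation; the extension list is an assumed subset of common static-asset types.

```python
# Illustrative sketch only: paramspider's real filtering logic may differ.
# BORING_EXTENSIONS is an assumed list of static-asset types, not the tool's own.
from urllib.parse import urlparse

BORING_EXTENSIONS = {".png", ".jpg", ".gif", ".css", ".js", ".svg", ".woff", ".ico"}

def has_params(url: str) -> bool:
    """True if the URL carries a query string worth probing."""
    return bool(urlparse(url).query)

def is_boring(url: str) -> bool:
    """True if the URL path points at a static asset."""
    path = urlparse(url).path.lower()
    return any(path.endswith(ext) for ext in BORING_EXTENSIONS)

def interesting(urls):
    """Keep parameterised URLs that are not static assets."""
    return [u for u in urls if has_params(u) and not is_boring(u)]
```

Run against a batch of archived URLs, this keeps only entries like `page.php?id=1` and drops images, stylesheets, and parameterless paths.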

Installation

To install paramspider, clone the repository and change into it:

git clone https://github.com/nkbeast/paramspider
cd paramspider

Usage

To use paramspider, follow these steps:

python3 paramspider.py -d example.com

Examples

Here are a few examples of how to use paramspider:

  • Discover URLs for a single domain:

    python3 paramspider.py -d example.com
  • Discover URLs for multiple domains from a file:

    python3 paramspider.py -l domains.txt
  • Stream URLs on the terminal:

    python3 paramspider.py -d example.com -s
  • Set up web request proxy:

    python3 paramspider.py -d example.com --proxy '127.0.0.1:7890'
  • Add a placeholder for URL parameter values (default: "FUZZ"):

     python3 paramspider.py -d example.com -p '"><h1>reflection</h1>'
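The placeholder exists so downstream tooling can swap in its own payloads per request. A minimal sketch of that substitution step (the URL and payload below are made-up examples, not real paramspider output):

```python
# Hypothetical paramspider output line using the default "FUZZ" placeholder.
line = "https://example.com/page.php?id=FUZZ&cat=FUZZ"

# Swap every placeholder for a test payload before sending the request.
payload = '"><h1>reflection</h1>'
fuzzed = line.replace("FUZZ", payload)
print(fuzzed)
```

The same idea works with any fuzzer that supports a marker token: keep the default "FUZZ" if your scanner expects it, or set `-p` to inject a fixed payload directly.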
