paramspider

Mining URLs from dark corners of Web Archives for bug hunting/fuzzing/further probing

📖 About · 🏗️ Installation · ⛏️ Usage · 🚀 Examples · 🤝 Contributing


About

paramspider lets you fetch URLs related to a domain, or a list of domains, from Wayback Machine archives. It filters out "boring" URLs, allowing you to focus on the ones that matter most.

WARNING: this is a modification of the original paramspider by devanshbatham. This fork prints all results, without squeezing duplicates and without the banner header, so the CLI output can be piped directly into other scripts.

Installation

To install paramspider, follow these steps:

git clone https://github.com/ivansmc00/ParamSpider
cd paramspider
pip install .

Usage

To use paramspider, follow these steps:

paramspider -d example.com
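Because this fork emits one plain URL per line with no banner, the output is easy to post-process. A minimal sketch of filtering the results for parameters worth probing further (the sample URLs and the `params_of_interest` helper are illustrative, not part of paramspider itself; the URL format assumes the default "FUZZ" placeholder):

```python
from urllib.parse import urlparse, parse_qs

# Sample lines in the format this fork emits: one URL per line,
# with parameter values replaced by the FUZZ placeholder.
sample_output = [
    "https://example.com/search?q=FUZZ",
    "https://example.com/item?id=FUZZ&ref=FUZZ",
    "https://example.com/login?next=FUZZ",
]

def params_of_interest(urls, wanted):
    """Keep only URLs that carry at least one parameter name from `wanted`."""
    hits = []
    for url in urls:
        names = set(parse_qs(urlparse(url).query, keep_blank_values=True))
        if names & wanted:
            hits.append(url)
    return hits

# Open-redirect candidates, for example:
print(params_of_interest(sample_output, {"next", "redirect", "url"}))
# → ['https://example.com/login?next=FUZZ']
```

In practice you would read the lines from paramspider's output (streamed with `-s` or saved to a file) instead of a hardcoded list.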

Examples

Here are a few examples of how to use paramspider:

  • Discover URLs for a single domain:

    paramspider -d example.com
  • Discover URLs for multiple domains from a file:

    paramspider -l domains.txt
  • Stream URLs to the terminal:

    paramspider -d example.com -s
  • Route web requests through a proxy:

    paramspider -d example.com --proxy '127.0.0.1:7890'
  • Set a custom placeholder for URL parameter values (default: "FUZZ"):

    paramspider -d example.com -p '"><h1>reflection</h1>'
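The placeholder written in place of each parameter value can later be swapped for any payload by a downstream script. A minimal sketch, assuming the default "FUZZ" placeholder (the `inject` helper and the example payload are illustrative, not part of paramspider):

```python
# Substitute the placeholder with a test payload before sending
# the URLs to a fuzzer or scanner.
PLACEHOLDER = "FUZZ"  # paramspider's default placeholder

def inject(url: str, payload: str) -> str:
    """Replace every occurrence of the placeholder with the payload."""
    return url.replace(PLACEHOLDER, payload)

url = "https://example.com/item?id=FUZZ&ref=FUZZ"
print(inject(url, "test123"))
# → https://example.com/item?id=test123&ref=test123
```

If you set a reflection payload directly with `-p`, as in the example above, this substitution step is unnecessary.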

Contributing

Contributions are welcome! If you'd like to contribute to paramspider, please follow these steps:

  1. Fork the repository.
  2. Create a new branch.
  3. Make your changes and commit them.
  4. Submit a pull request.

