Photon can extract the following data while crawling:
URLs (in-scope & out-of-scope)
URLs with parameters (example.com/gallery.php?id=2)
Intel (emails, social media accounts, Amazon buckets, etc.)
Files (pdf, png, xml, etc.)
Secret keys (auth/API keys & hashes)
JavaScript files & Endpoints present in them
Strings matching custom regex pattern
Subdomains & DNS related data
The extracted information is saved in an organized manner and can also be exported as JSON.
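The kind of extraction described above boils down to matching patterns against page content. Here is a minimal sketch of the idea in Python; the regexes are illustrative assumptions, not Photon's actual patterns.

```python
import re

# Illustrative patterns only -- Photon's real extraction rules are more thorough.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
# URLs that carry query-string parameters, e.g. example.com/gallery.php?id=2
PARAM_URL_RE = re.compile(r"https?://[^\s\"'<>]+\?[^\s\"'<>]+")

def extract_intel(page: str) -> dict:
    """Pull emails and parameterized URLs out of raw page text."""
    return {
        "emails": EMAIL_RE.findall(page),
        "param_urls": PARAM_URL_RE.findall(page),
    }
```

A custom regex (the "strings matching custom regex pattern" feature) slots into the same loop: compile the user's pattern and collect its matches per page.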
Flexible
Control the timeout and delay, add seeds, exclude URLs matching a regex pattern, and more.
The extensive range of options provided by Photon lets you crawl the web exactly the way you want.
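As an example of one of these options, excluding URLs by regex amounts to filtering the crawl queue. A minimal sketch, assuming a simple exclusion filter (this is not Photon's actual implementation):

```python
import re

def filter_urls(urls, exclude_pattern=None):
    """Drop URLs matching an exclusion regex before they are crawled,
    mirroring the idea behind an exclude-by-regex option."""
    if not exclude_pattern:
        return list(urls)
    rx = re.compile(exclude_pattern)
    return [u for u in urls if not rx.search(u)]
```

Applied to a queue, `filter_urls(queue, r"/logout")` would keep the crawler from following session-killing links, a common use of this kind of option.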
Genius
Photon's smart thread management & refined logic give you top-notch performance.
Still, crawling can be resource intensive, but Photon has some tricks up its sleeve. With the --wayback option, you can fetch URLs archived by archive.org and use them as crawl seeds.
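Archived URLs like these are exposed by the Wayback Machine's CDX API. The sketch below builds such a query; the parameter choices are assumptions about a reasonable seed-fetching setup, not Photon's exact request.

```python
from urllib.parse import urlencode

def wayback_cdx_query(domain: str, limit: int = 100) -> str:
    """Build a Wayback Machine CDX API URL listing archived URLs for a
    domain, suitable as crawl seeds (assumed parameter set)."""
    params = urlencode({
        "url": domain + "/*",      # all archived pages under the domain
        "output": "json",
        "fl": "original",          # only the original URL field
        "collapse": "urlkey",      # de-duplicate repeated captures
        "limit": limit,
    })
    return "https://web.archive.org/cdx/search/cdx?" + params
```

Fetching the returned JSON and feeding each original URL into the crawl queue gives the crawler a head start without touching the target.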
In Ninja Mode, enabled with --ninja, 4 online services are used to make requests to the target on your behalf.
So you effectively have 4 clients making requests to the same server simultaneously, which gives you a speed boost if you have a slow connection, minimizes the risk of connection resets, and spaces out requests from any single client.
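The mechanism is essentially round-robin rotation across relay clients. A minimal sketch of that idea, with hypothetical relay endpoints (the actual services Photon uses are not named here):

```python
from itertools import cycle

# Hypothetical relay endpoints for illustration only.
RELAYS = [
    "https://relay-a.example/fetch?url=",
    "https://relay-b.example/fetch?url=",
    "https://relay-c.example/fetch?url=",
    "https://relay-d.example/fetch?url=",
]

def assign_relays(urls):
    """Distribute target URLs across the relays round-robin, so consecutive
    requests to the target come from different clients."""
    pool = cycle(RELAYS)
    return [next(pool) + url for url in urls]
```

Because each relay only sees every fourth request, the per-client request rate at the target drops, which is where the connection-reset and rate-limit benefits come from.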
To view results, you can either head over to the local docker volume, which you can find by running docker inspect photon, or mount the target loot folder.
Photon is under heavy development: updates fixing bugs, optimizing performance & adding new features are rolled out regularly.
If you would like to see the features and issues currently being worked on, you can do that on the Development project board.
Updates can be checked for & installed with the --update option. Photon has seamless update capabilities, which means you can update Photon without losing any of your saved data.