## Thoughts
* ~~strip hashes and everything following (as they're in-page anchors)~~
* ~~strip args~~
* ~~use `pop()` on the set instead of `.remove()`~~
* ~~return `False` once the set is empty~~
* ~~`WebPage.parse_urls()` needs to check discovered URLs against the base url with `startswith()`~~ (see the URL-handling sketch below)
* ~~ignore any links which aren't to pages~~
* ~~better url checking to get bare domain~~ #wontfix
* ~~remove trailing slash from any discovered url~~
* ~~investigate lxml parser~~ (see the fetch sketch below)
* ~~remove base url from initial urls with and without trailing slash~~
* ~~investigate using [tldextract](https://github.com/john-kurkowski/tldextract) to match urls~~ #wontfix
* ~~implement parsing of [robots.txt](http://docs.w3cub.com/python~3.6/library/urllib.robotparser/)~~ (sketch below)
* ~~investigate [gzip encoding](https://stackoverflow.com/questions/36383227/avoid-downloading-images-using-beautifulsoup-and-urllib-request)~~ (see the fetch sketch below)
* implement some kind of progress display (sketch below)
* async (aiohttp sketch below)
* better exception handling (covered in the same async sketch)
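
## Sketches

Rough sketches of some of the items above, for reference only: the function and variable names (`normalise`, `crawl`, `to_visit`, etc.) are made up here and are not the scraper's actual API.

First, the URL handling and frontier-set notes: strip in-page anchors and query args, drop trailing slashes, only follow links that start with the base url, `pop()` from the set, and return `False` once it is empty.

```python
from urllib.parse import urldefrag, urlparse, urlunparse


def normalise(url):
    """Strip the fragment, query args and trailing slash from a URL."""
    url, _fragment = urldefrag(url)   # drop in-page anchors (#...)
    parts = urlparse(url)
    path = parts.path.rstrip("/")     # remove any trailing slash
    return urlunparse((parts.scheme, parts.netloc, path, "", "", ""))


def crawl(base_url, to_visit):
    """Pop URLs off the frontier set; return False once it is empty."""
    seen = set()
    while to_visit:
        url = normalise(to_visit.pop())    # pop() instead of .remove()
        if not url.startswith(base_url):   # only follow same-site links
            continue
        if url in seen:
            continue
        seen.add(url)
        # fetch the page here and add newly discovered URLs to to_visit
    return False                           # the set is empty
```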
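
The fetch side, covering the gzip encoding and lxml notes and skipping anything that isn't a page. It assumes `urllib.request` plus BeautifulSoup, as in the linked Stack Overflow question; swap in whatever the scraper actually uses.

```python
import gzip
import urllib.request

from bs4 import BeautifulSoup


def fetch(url):
    """Fetch a page with gzip encoding and parse it with the lxml parser."""
    request = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(request) as response:
        # Skip anything that isn't an HTML page (images, PDFs, ...)
        # without downloading the body.
        if "text/html" not in response.headers.get("Content-Type", ""):
            return None
        body = response.read()
        if response.headers.get("Content-Encoding") == "gzip":
            body = gzip.decompress(body)
    return BeautifulSoup(body, "lxml")  # needs the lxml package installed
```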
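
robots.txt parsing with the stdlib `urllib.robotparser` module linked above. In a real crawl the parser would be built once per site rather than once per URL.

```python
from urllib.parse import urljoin
from urllib.robotparser import RobotFileParser


def allowed(base_url, url, user_agent="*"):
    """Return True if the site's robots.txt permits fetching the URL."""
    parser = RobotFileParser(urljoin(base_url, "/robots.txt"))
    parser.read()  # downloads and parses robots.txt
    return parser.can_fetch(user_agent, url)
```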
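
One possible progress display, just a single status line rewritten in place; a library like tqdm would also do.

```python
import sys


def show_progress(crawled, queued):
    """Overwrite one terminal line with the current crawl counts."""
    sys.stdout.write(f"\rcrawled {crawled} pages, {queued} queued")
    sys.stdout.flush()
```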
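
For the async and exception-handling items, a sketch using aiohttp (one option, the notes don't name a library): pages are fetched concurrently and network errors are caught per URL instead of killing the whole crawl.

```python
import asyncio

import aiohttp


async def fetch(session, url):
    """Fetch one page, returning None if the request fails."""
    try:
        async with session.get(url) as response:
            response.raise_for_status()
            return await response.text()
    except aiohttp.ClientError as exc:
        print(f"failed to fetch {url}: {exc}")
        return None


async def fetch_all(urls):
    """Fetch a batch of URLs concurrently."""
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch(session, url) for url in urls))


# pages = asyncio.run(fetch_all(to_visit))
```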