## Thoughts

* ~~strip hashes and everything following (as they're in-page anchors)~~
* strip query args from urls (see the normalization sketch below)
* ~~use `pop()` on the set instead of `.remove()`~~
* ~~return `False` once the set is empty~~
* ~~`WebPage.parse_urls()` needs to compare discovered urls against the base url with `startswith`~~
* ~~ignore any links which aren't to pages~~
* better url checking to get the bare domain
* remove the trailing slash from any discovered url
* investigate the lxml parser
* ~~remove the base url from the initial urls both with and without a trailing slash~~
* investigate using [tldextract](https://github.com/john-kurkowski/tldextract) to match urls (sketch below)
* ~~implement parsing of [robots.txt](http://docs.w3cub.com/python~3.6/library/urllib.robotparser/)~~
* investigate [gzip encoding](https://stackoverflow.com/questions/36383227/avoid-downloading-images-using-beautifulsoup-and-urllib-request) (sketch below)

Scratch notes — content types to accept as pages, plus attribute errors hit along the way:

```
text/html; charset=utf-8
application/xhtml+xml
'WebPage' object has no attribute 'source'
'WebPage' object has no attribute 'discovered_hrefs'
```
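As a concrete take on the fragment/query/trailing-slash items above, a minimal normalizer might look like this (the `normalize_url` name is my own, not from the crawler's code):

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_url(url: str) -> str:
    """Drop the #fragment and ?query parts, then any trailing slash,
    so equivalent urls compare equal in the crawl set."""
    parts = urlsplit(url)
    # Rebuild the url with empty query and fragment components.
    bare = urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
    return bare.rstrip("/")
```

For example, `normalize_url("https://example.com/docs/?v=2#intro")` returns `"https://example.com/docs"`.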
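For the "bare domain" and tldextract items, comparing registered domains handles multi-part suffixes like `.co.uk` that naive string splitting gets wrong. A sketch, assuming tldextract is installed:

```python
import tldextract  # third-party: pip install tldextract

def same_site(url_a: str, url_b: str) -> bool:
    """True when both urls share a registered domain, so
    www.example.co.uk and blog.example.co.uk count as one site."""
    return (tldextract.extract(url_a).registered_domain
            == tldextract.extract(url_b).registered_domain)
```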
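The robots.txt item maps directly onto the stdlib's `urllib.robotparser`; a minimal gate before fetching might be:

```python
from urllib.robotparser import RobotFileParser

def allowed(base_url: str, url: str, agent: str = "*") -> bool:
    """Ask the site's robots.txt whether this url may be fetched."""
    rp = RobotFileParser(base_url.rstrip("/") + "/robots.txt")
    rp.read()  # download and parse robots.txt
    return rp.can_fetch(agent, url)
```

In a real crawler the parsed `RobotFileParser` would be cached per site rather than re-fetched for every url.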
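On the gzip-encoding and "links that aren't pages" items: `urllib.request` doesn't decompress responses itself, so a fetch that both asks for gzip and refuses non-page content types (the two MIME strings noted above) could look like the following sketch. Because `urlopen` only reads the body when `.read()` is called, rejecting on Content-Type first avoids downloading images at all:

```python
import gzip
from urllib.request import Request, urlopen

PAGE_TYPES = ("text/html", "application/xhtml+xml")

def fetch_page(url: str):
    """Fetch a url, asking for gzip; return html bytes, or None for non-pages."""
    req = Request(url, headers={"Accept-Encoding": "gzip"})
    with urlopen(req) as resp:
        # Reject images, PDFs, etc. before reading the body.
        if not resp.headers.get("Content-Type", "").startswith(PAGE_TYPES):
            return None
        body = resp.read()
        if resp.headers.get("Content-Encoding") == "gzip":
            body = gzip.decompress(body)
    return body
```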
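Tying the struck-through set items together (`pop()` instead of `.remove()`, stopping once the set is empty, filtering with `startswith` against the base url), a toy crawl loop using the helpers sketched above and BeautifulSoup with the lxml parser — a sketch of the idea, not the actual `WebPage` implementation:

```python
from urllib.parse import urljoin
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4 lxml

def crawl(base_url: str) -> set:
    base_url = normalize_url(base_url)  # from the sketch above
    frontier, seen = {base_url}, set()
    while frontier:              # the loop ends once the set is empty
        url = frontier.pop()     # pop() instead of a lookup plus .remove()
        seen.add(url)
        html = fetch_page(url)   # from the sketch above; None for non-pages
        if html is None:
            continue
        soup = BeautifulSoup(html, "lxml")
        for a in soup.find_all("a", href=True):
            href = normalize_url(urljoin(url, a["href"]))
            # Only follow links under the base url.
            if href.startswith(base_url) and href not in seen:
                frontier.add(href)
    return seen
```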