## Thoughts

* ~~strip hashes and everything following (as they're in-page anchors)~~ (url-cleanup sketch below)
* ~~strip args~~
* ~~use `pop()` on the set instead of `.remove()`~~
* ~~return false once the set is empty~~
* ~~`WebPage.parse_urls()` needs to check urls against the base url with `startswith`~~
* ~~ignore any links which aren't to pages~~
* ~~better url checking to get bare domain~~ #wontfix
* ~~remove trailing slash from any discovered url~~
* ~~investigate lxml parser~~
* ~~remove base url from initial urls with and without trailing slash~~
* ~~investigate using [tldextract](https://github.com/john-kurkowski/tldextract) to match urls~~ #wontfix
* ~~implement parsing of [robots.txt](http://docs.w3cub.com/python~3.6/library/urllib.robotparser/)~~ (sketch below)
* ~~investigate [gzip encoding](https://stackoverflow.com/questions/36383227/avoid-downloading-images-using-beautifulsoup-and-urllib-request)~~ (sketch below)
* ~~implement some kind of progress display~~
* async
* better exception handling
* randomise output filename
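
A possible shape for the url-cleanup items above (strip hashes and args, drop the trailing slash), using `urllib.parse`; `clean_url` is a made-up helper name, not necessarily what the crawler uses:

```python
from urllib.parse import urlparse, urlunparse

def clean_url(url):
    # drop the query string and the fragment (in-page anchor),
    # then strip any trailing slash
    parts = urlparse(url)._replace(query="", fragment="")
    return urlunparse(parts).rstrip("/")

# clean_url("https://example.com/page/?q=1#top")
# -> "https://example.com/page"
```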
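
For the robots.txt item, the stdlib's `urllib.robotparser` (linked above) covers it; a minimal sketch with a placeholder domain:

```python
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()

# check a candidate url before fetching it
if robots.can_fetch("*", "https://example.com/some/page"):
    ...  # safe to crawl
```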
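
And for gzip encoding: ask for it via an `Accept-Encoding` header and decompress only if the server actually sent gzip. A sketch with `urllib.request` (the url is a placeholder):

```python
import gzip
from urllib.request import Request, urlopen

url = "https://example.com/"
request = Request(url, headers={"Accept-Encoding": "gzip"})
response = urlopen(request)
body = response.read()

# servers are free to ignore the header, so check before decompressing
if response.info().get("Content-Encoding") == "gzip":
    body = gzip.decompress(body)
```
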
### Async bits
in `__main__`:
```python
import asyncio

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(main())
finally:
    loop.close()
```

* initialises loop and runs it to completion
* needs to handle errors (try/except/finally); one possible shape below

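What that error handling could look like (the `KeyboardInterrupt` case is an assumption about what's wanted here, not decided yet):

```python
loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(main())
except KeyboardInterrupt:
    # let Ctrl-C stop the crawl without a traceback
    print("interrupted")
finally:
    loop.close()
```
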
```python
async def run(args=None):
    tasks = []

    # schedule one fetch per url; gather() needs awaitables,
    # not bare url strings
    for url in pool:
        tasks.append(asyncio.ensure_future(get_source(url)))

    # gather completed tasks
    await asyncio.gather(*tasks)
```
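
By default `asyncio.gather()` raises the first exception out of any task; passing `return_exceptions=True` collects failures instead, which would feed the "better exception handling" item above. Inside `run()` that might look like:

```python
urls = list(pool)
tasks = [asyncio.ensure_future(get_source(url)) for url in urls]
results = await asyncio.gather(*tasks, return_exceptions=True)

# results come back in task order, so they line up with urls
for url, result in zip(urls, results):
    if isinstance(result, Exception):
        print(f"failed to fetch {url}: {result}")
```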

Getting the contents of the page needs to be async too. Plain `urlopen()` blocks and isn't awaitable, so it has to be pushed off to an executor (or replaced with an async client):

```python
from urllib.request import urlopen

async def get_source(url):
    # urlopen() blocks, so it can't be awaited directly;
    # hand the call off to the default executor instead
    loop = asyncio.get_event_loop()
    response = await loop.run_in_executor(None, urlopen, url)
    # read() blocks too, so it goes through the executor as well
    return await loop.run_in_executor(None, response.read)
```
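
The executor wrapper keeps everything stdlib, but a natively async HTTP client is the more common route; a sketch assuming [aiohttp](https://github.com/aio-libs/aiohttp) is installed:

```python
import aiohttp

async def get_source(session, url):
    # aiohttp makes the request without blocking the event loop
    async with session.get(url) as response:
        return await response.text()

async def main():
    # one session shared across all fetches
    async with aiohttp.ClientSession() as session:
        html = await get_source(session, "https://example.com/")
```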