scrapyd
=======
[Scrapy][1] is an open source and collaborative framework for extracting the
data you need from websites, in a fast, simple, yet extensible way.
[Scrapyd][2] is a service for running Scrapy spiders. It allows you to deploy
your Scrapy projects and control their spiders using an HTTP JSON API.
[Scrapyd-client][3] is a client for Scrapyd. It provides the `scrapyd-deploy`
utility, which allows you to deploy your project to a Scrapyd server.
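
`scrapyd-deploy` is driven by a `[deploy]` section in your project's
`scrapy.cfg`. A minimal sketch, assuming a Scrapyd server on localhost and a
hypothetical project named `myproject` (the section is appended to the
`scrapy.cfg` that `scrapy startproject` generated):

```
$ cat >> scrapy.cfg << _EOF_
[deploy]
url = http://localhost:6800/
project = myproject
_EOF_
$ scrapyd-deploy
```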
This image is based on `debian:jessie` without any unnecessary packages installed.
Only the latest versions of these 3 Python packages are installed, pulled
straight from GitHub (a quick version check is shown after the list):
- `scrapy`: git+https://github.com/scrapy/scrapy.git
- `scrapyd`: git+https://github.com/scrapy/scrapyd.git
- `scrapyd-client`: git+https://github.com/scrapy/scrapyd-client.git
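
To check which versions a given pull of the image actually contains, a one-off
container will do (this assumes the image keeps the default entrypoint, so
arbitrary commands can be passed to `docker run`):

```
$ docker run --rm vimagick/scrapyd scrapy version -v
$ docker run --rm vimagick/scrapyd pip freeze | grep -i scrapy
```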
Use this image as a base for your own project.
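
A minimal downstream `Dockerfile` could look like the sketch below; the extra
`pymongo` dependency and the custom `scrapyd.conf` are hypothetical
placeholders (`/etc/scrapyd/scrapyd.conf` is one of the locations Scrapyd
reads its configuration from):

```
$ cat > Dockerfile << _EOF_
FROM vimagick/scrapyd
RUN pip install pymongo
COPY scrapyd.conf /etc/scrapyd/scrapyd.conf
_EOF_
$ docker build -t my-scrapyd .
```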
## Run it as a background daemon for scrapyd
```
$ docker run -d --restart always --name scrapyd -p 6800:6800 vimagick/scrapyd
$ firefox http://localhost:6800
```
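
Once the daemon is up, the HTTP JSON API mentioned above is served on the same
port. Two illustrative calls, with `myproject` and `stackoverflow` standing in
for whatever you have actually deployed:

```
$ curl http://localhost:6800/listprojects.json
$ curl http://localhost:6800/schedule.json -d project=myproject -d spider=stackoverflow
```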
## Run it as an interactive shell for scrapy
```
$ cat > stackoverflow_spider.py << _EOF_
import scrapy


class StackOverflowSpider(scrapy.Spider):
    name = 'stackoverflow'
    start_urls = ['http://stackoverflow.com/questions?sort=votes']

    def parse(self, response):
        # follow the link of each question on the listing page
        for href in response.css('.question-summary h3 a::attr(href)'):
            full_url = response.urljoin(href.extract())
            yield scrapy.Request(full_url, callback=self.parse_question)

    def parse_question(self, response):
        # extract the interesting fields from a single question page
        yield {
            'title': response.css('h1 a::text').extract()[0],
            'votes': response.css('.question .vote-count-post::text').extract()[0],
            'body': response.css('.question .post-text').extract()[0],
            'tags': response.css('.question .post-tag::text').extract(),
            'link': response.url,
        }
_EOF_
$ docker run -it --rm -v `pwd`:/code -w /code vimagick/scrapyd bash
>>> scrapy runspider stackoverflow_spider.py -o top-stackoverflow-questions.json
>>> cat top-stackoverflow-questions.json
>>> exit
```
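
The same crawl can also run non-interactively as a one-off container, reusing
the commands above in a single line:

```
$ docker run --rm -v `pwd`:/code -w /code vimagick/scrapyd scrapy runspider stackoverflow_spider.py -o top-stackoverflow-questions.json
```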
[1]: https://github.com/scrapy/scrapy
[2]: https://github.com/scrapy/scrapyd
[3]: https://github.com/scrapy/scrapyd-client