scrapyd
=======
![](http://dockeri.co/image/vimagick/scrapyd)

[scrapy][1] is an open source and collaborative framework for extracting the
data you need from websites in a fast, simple, yet extensible way.

[scrapyd][2] is a service for running Scrapy spiders. It allows you to deploy
your Scrapy projects and control their spiders using an HTTP JSON API.

[scrapyd-client][3] is a client for scrapyd. It provides the scrapyd-deploy
utility, which allows you to deploy your project to a Scrapyd server.

[scrapy-splash][4] provides Scrapy+JavaScript integration using Splash.

[scrapyrt][5] allows you to easily add an HTTP API to your existing Scrapy project.

[pillow][6] is the Python Imaging Library fork, installed to support the ImagesPipeline.
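
As a concrete illustration of why `pillow` is bundled: Scrapy's built-in `ImagesPipeline` needs it to decode and thumbnail downloaded images. A minimal settings sketch for your own project (the `IMAGES_STORE` path here is an assumption, any writable path works):

```python
# settings.py (fragment) -- enable Scrapy's built-in ImagesPipeline,
# which requires pillow at runtime.
ITEM_PIPELINES = {
    'scrapy.pipelines.images.ImagesPipeline': 1,
}

# Assumed storage location; point this anywhere the container can write.
IMAGES_STORE = '/var/lib/scrapyd/images'
```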
This image is based on `debian:buster`, with the latest versions of six Python packages installed:
- `scrapy`: git+https://github.com/scrapy/scrapy.git
- `scrapyd`: git+https://github.com/scrapy/scrapyd.git
- `scrapyd-client`: git+https://github.com/scrapy/scrapyd-client.git
- `scrapy-splash`: git+https://github.com/scrapinghub/scrapy-splash.git
- `scrapyrt`: git+https://github.com/scrapinghub/scrapyrt.git
- `pillow`: git+https://github.com/python-pillow/Pillow.git

Please use this image as the base for your own projects.

:warning: Scrapy has dropped support for Python 2.7, which reached end-of-life on 2020-01-01.
## docker-compose.yml
```yaml
scrapyd:
  image: vimagick/scrapyd:py3
  ports:
    - "6800:6800"
  volumes:
    - ./data:/var/lib/scrapyd
    - /usr/local/lib/python3.7/dist-packages
  restart: unless-stopped

scrapy:
  image: vimagick/scrapyd:py3
  command: bash
  volumes:
    - .:/code
  working_dir: /code
  restart: unless-stopped

scrapyrt:
  image: vimagick/scrapyd:py3
  command: scrapyrt -i 0.0.0.0 -p 9080
  ports:
    - "9080:9080"
  volumes:
    - .:/code
  working_dir: /code
  restart: unless-stopped
```
## Run it as a background daemon for scrapyd

```bash
$ docker-compose up -d scrapyd
$ docker-compose logs -f scrapyd
$ docker cp scrapyd_scrapyd_1:/var/lib/scrapyd/items .
$ tree items
items
└── myproject
    └── myspider
        └── ad6153ee5b0711e68bc70242ac110005.jl
```
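
The `.jl` feed copied out above is in JSON Lines format: one JSON object per line. A minimal stdlib sketch for reading such a file back (the path is whatever feed file you copied out):

```python
import json

def read_items(path):
    """Yield one dict per line of a JSON Lines (.jl) feed file."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                yield json.loads(line)
```

Usage: `items = list(read_items('items/myproject/myspider/<jobid>.jl'))`.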
```bash
$ mkvirtualenv -p python3 webbot
$ pip install scrapy scrapyd-client
$ scrapy startproject myproject
$ cd myproject
$ setvirtualenvproject
$ scrapy genspider myspider mydomain.com
$ scrapy edit myspider
$ scrapy list
$ vi scrapy.cfg
$ scrapyd-client deploy
$ curl http://localhost:6800/schedule.json -d project=myproject -d spider=myspider
$ firefox http://localhost:6800
```
File: scrapy.cfg
```ini
[settings]
default = myproject.settings
[deploy]
url = http://localhost:6800/
project = myproject
```
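
Since `scrapy.cfg` is plain INI, you can also read the deploy target programmatically; a small sketch using only the stdlib (section and key names match the file above):

```python
import configparser

def deploy_target(cfg_path='scrapy.cfg'):
    """Return (scrapyd_url, project) from a scrapy.cfg [deploy] section."""
    cfg = configparser.ConfigParser()
    cfg.read(cfg_path)
    return cfg['deploy']['url'], cfg['deploy']['project']
```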
## Run it as an interactive shell for scrapy

```bash
$ cat > stackoverflow_spider.py << _EOF_
import scrapy

class StackOverflowSpider(scrapy.Spider):
    name = 'stackoverflow'
    start_urls = ['http://stackoverflow.com/questions?sort=votes']

    def parse(self, response):
        for href in response.css('.question-summary h3 a::attr(href)'):
            full_url = response.urljoin(href.extract())
            yield scrapy.Request(full_url, callback=self.parse_question)

    def parse_question(self, response):
        yield {
            'title': response.css('h1 a::text').extract()[0],
            'votes': response.css('.question .vote-count-post::text').extract()[0],
            'body': response.css('.question .post-text').extract()[0],
            'tags': response.css('.question .post-tag::text').extract(),
            'link': response.url,
        }
_EOF_
$ docker-compose run --rm scrapy
>>> scrapy runspider stackoverflow_spider.py -o top-stackoverflow-questions.json
>>> cat top-stackoverflow-questions.json
>>> exit
```
## Run it as a realtime crawler for scrapyrt
```bash
$ git clone https://github.com/scrapy/quotesbot.git .
$ docker-compose up -d scrapyrt
$ curl -s 'http://localhost:9080/crawl.json?spider_name=toscrape-css&callback=parse&url=http://quotes.toscrape.com/&max_requests=5' | jq -c '.items[]'
```
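
The same scrapyrt request can be built from Python; a sketch that only constructs the `crawl.json` URL (parameters mirror the curl call above, and sending it is left to any HTTP client):

```python
from urllib.parse import urlencode

def crawl_url(base, spider_name, url, callback='parse', max_requests=5):
    """Build a scrapyrt /crawl.json GET URL from its query parameters."""
    query = urlencode({
        'spider_name': spider_name,
        'url': url,
        'callback': callback,
        'max_requests': max_requests,
    })
    return f'{base}/crawl.json?{query}'
```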
[1]: https://github.com/scrapy/scrapy
[2]: https://github.com/scrapy/scrapyd
[3]: https://github.com/scrapy/scrapyd-client
[4]: https://github.com/scrapinghub/scrapy-splash
[5]: https://github.com/scrapinghub/scrapyrt
[6]: https://github.com/python-pillow/Pillow