scrapyd
=======
> :warning: THIS PROJECT WAS MOVED TO: https://github.com/EasyPi/docker-scrapyd

![](http://dockeri.co/image/vimagick/scrapyd)

[scrapy][1] is an open source and collaborative framework for extracting the
data you need from websites, in a fast, simple, yet extensible way.

[scrapyd][2] is a service for running Scrapy spiders. It allows you to deploy
your Scrapy projects and control their spiders using an HTTP JSON API.

[scrapyd-client][3] is a client for scrapyd. It provides the scrapyd-deploy
utility which allows you to deploy your project to a Scrapyd server.

[scrapy-splash][4] provides Scrapy+JavaScript integration using Splash.
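
For example, here is a minimal sketch of a spider that renders a JavaScript page through Splash (the demo URL and spider name are assumptions, and the usual scrapy-splash middleware settings from its README are assumed to be configured, including `SPLASH_URL`):

```python
import scrapy
from scrapy_splash import SplashRequest

class JSQuotesSpider(scrapy.Spider):
    name = 'js-quotes'

    def start_requests(self):
        # Render the page in Splash and wait briefly for scripts to finish.
        yield SplashRequest('http://quotes.toscrape.com/js/',
                            self.parse, args={'wait': 0.5})

    def parse(self, response):
        # The response body is the rendered DOM, so plain CSS selectors work.
        for text in response.css('div.quote span.text::text').getall():
            yield {'text': text}
```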

[scrapyrt][5] allows you to easily add an HTTP API to your existing Scrapy project.

[Spidermon][6] is a framework to build monitors for Scrapy spiders.
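
As a sketch of what a monitor looks like (following Spidermon's getting-started pattern; the threshold and class names here are assumptions, and the Spidermon extension must also be enabled in settings.py):

```python
from spidermon import Monitor, MonitorSuite, monitors

@monitors.name('Item count')
class ItemCountMonitor(Monitor):

    @monitors.name('Minimum number of items extracted')
    def test_minimum_number_of_items(self):
        # Fail the suite if the finished job scraped fewer than 10 items.
        item_count = getattr(self.data.stats, 'item_scraped_count', 0)
        self.assertTrue(item_count >= 10, msg='Extracted fewer than 10 items')

# Wire the monitor into a suite, e.g. via SPIDERMON_SPIDER_CLOSE_MONITORS.
class SpiderCloseMonitorSuite(MonitorSuite):
    monitors = [ItemCountMonitor]
```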
2017-03-06 04:00:44 +02:00
2021-11-30 08:46:34 +02:00
[pillow][7] is the Python Imaging Library to support the ImagesPipeline.
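
To actually use the ImagesPipeline, something like the following goes into your project's settings.py (a sketch; the store path is an assumption). Items that carry an `image_urls` field are then downloaded automatically:

```python
# settings.py
ITEM_PIPELINES = {
    'scrapy.pipelines.images.ImagesPipeline': 1,
}
# Downloaded images are written below this directory.
IMAGES_STORE = '/var/lib/scrapyd/images'
```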

This image is based on `debian:buster`, with the latest versions of these 7 Python packages installed:

- `scrapy`: git+https://github.com/scrapy/scrapy.git
- `scrapyd`: git+https://github.com/scrapy/scrapyd.git
- `scrapyd-client`: git+https://github.com/scrapy/scrapyd-client.git
- `scrapy-splash`: git+https://github.com/scrapinghub/scrapy-splash.git
- `scrapyrt`: git+https://github.com/scrapinghub/scrapyrt.git
- `spidermon`: git+https://github.com/scrapinghub/spidermon.git
- `pillow`: git+https://github.com/python-pillow/Pillow.git

Please use this as the base image for your own project.

:warning: Scrapy has dropped support for Python 2.7, which reached end-of-life on 2020-01-01.

## docker-compose.yml

```yaml
version: "3.8"

services:

  scrapyd:
    image: vimagick/scrapyd:py3
    ports:
      - "6800:6800"
    volumes:
      - ./data:/var/lib/scrapyd
      - /usr/local/lib/python3.9/dist-packages
    restart: unless-stopped

  scrapy:
    image: vimagick/scrapyd:py3
    command: bash
    volumes:
      - .:/code
    working_dir: /code
    restart: unless-stopped

  scrapyrt:
    image: vimagick/scrapyd:py3
    command: scrapyrt -i 0.0.0.0 -p 9080
    ports:
      - "9080:9080"
    volumes:
      - .:/code
    working_dir: /code
    restart: unless-stopped
```

## Run it as a background daemon for scrapyd

```bash
$ docker-compose up -d scrapyd
$ docker-compose logs -f scrapyd
$ docker cp scrapyd_scrapyd_1:/var/lib/scrapyd/items .
$ tree items
└── myproject
└── myspider
└── ad6153ee5b0711e68bc70242ac110005.jl
```

```bash
$ mkvirtualenv -p python3 webbot
$ pip install scrapy scrapyd-client
$ scrapy startproject myproject
$ cd myproject
$ setvirtualenvproject
$ scrapy genspider myspider mydomain.com
$ scrapy edit myspider
$ scrapy list
$ vi scrapy.cfg
$ scrapyd-client deploy
$ curl http://localhost:6800/schedule.json -d project=myproject -d spider=myspider
$ firefox http://localhost:6800
```

File: scrapy.cfg

```ini
[settings]
default = myproject.settings

[deploy]
url = http://localhost:6800/
project = myproject
```
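
The `curl` call above talks to scrapyd's HTTP JSON API directly. That API is easy to script as well; below is a minimal sketch using the `requests` library from the same virtualenv (`pip install requests`). The endpoint names are scrapyd's own; the project and spider names are the ones from this walkthrough:

```python
import requests

SCRAPYD = 'http://localhost:6800'

# Schedule a run; scrapyd answers with a job id on success.
r = requests.post(f'{SCRAPYD}/schedule.json',
                  data={'project': 'myproject', 'spider': 'myspider'})
print(r.json())  # e.g. {'status': 'ok', 'jobid': '...'}

# Poll the pending/running/finished jobs of the project.
r = requests.get(f'{SCRAPYD}/listjobs.json', params={'project': 'myproject'})
print(r.json())
```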

## Run it as an interactive shell for scrapy

```bash
$ cat > stackoverflow_spider.py << _EOF_
import scrapy

class StackOverflowSpider(scrapy.Spider):
    name = 'stackoverflow'
    start_urls = ['http://stackoverflow.com/questions?sort=votes']

    def parse(self, response):
        for href in response.css('.question-summary h3 a::attr(href)'):
            full_url = response.urljoin(href.extract())
            yield scrapy.Request(full_url, callback=self.parse_question)

    def parse_question(self, response):
        yield {
            'title': response.css('h1 a::text').extract()[0],
            'votes': response.css('.question div[itemprop="upvoteCount"]::text').extract()[0],
            'body': response.css('.question .postcell').extract()[0],
            'tags': response.css('.question .post-tag::text').extract(),
            'link': response.url,
        }
_EOF_
$ docker-compose run --rm scrapy
>>> scrapy runspider stackoverflow_spider.py -o top-stackoverflow-questions.jl
>>> cat top-stackoverflow-questions.jl
>>> exit
```

## Run it as a realtime crawler for scrapyrt

```bash
$ git clone https://github.com/scrapy/quotesbot.git .
$ docker-compose up -d scrapyrt
$ curl -s 'http://localhost:9080/crawl.json?spider_name=toscrape-css&callback=parse&url=http://quotes.toscrape.com/&max_requests=5' | jq -c '.items[]'
```
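
The same endpoint can be driven from Python. Here is a small sketch using `requests`, assuming the scrapyrt service from the docker-compose.yml above is running with the quotesbot project mounted at /code:

```python
import requests

# Ask scrapyrt to run the quotesbot spider against the target page
# and return the scraped items in the JSON response.
r = requests.get('http://localhost:9080/crawl.json', params={
    'spider_name': 'toscrape-css',
    'url': 'http://quotes.toscrape.com/',
    'max_requests': 5,
})
for item in r.json().get('items', []):
    print(item)
```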

[1]: https://github.com/scrapy/scrapy
[2]: https://github.com/scrapy/scrapyd
[3]: https://github.com/scrapy/scrapyd-client
[4]: https://github.com/scrapinghub/scrapy-splash
[5]: https://github.com/scrapinghub/scrapyrt
[6]: https://github.com/scrapinghub/spidermon
[7]: https://github.com/python-pillow/Pillow