scrapyd
Scrapy is an open source and collaborative framework for extracting the data you need from websites, in a fast, simple, yet extensible way.
Scrapyd is a service for running Scrapy spiders. It allows you to deploy your Scrapy projects and control their spiders using an HTTP JSON API.
Scrapyd-client is a client for scrapyd. It provides the scrapyd-deploy utility which allows you to deploy your project to a Scrapyd server.
ScrapyJS (now developed as scrapy-splash) provides Scrapy+JavaScript integration using Splash.
Pillow is the actively maintained fork of the Python Imaging Library; it is installed to support Scrapy's ImagesPipeline.
This image is based on debian:jessie, and the latest development versions of 5 Python packages are installed:

scrapy: git+https://github.com/scrapy/scrapy.git
scrapyd: git+https://github.com/scrapy/scrapyd.git
scrapyd-client: git+https://github.com/scrapy/scrapyd-client.git
scrapyjs: git+https://github.com/scrapinghub/scrapy-splash.git
pillow: git+https://github.com/python-pillow/Pillow.git
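To confirm which versions actually ended up in the image, you can list them from a throwaway container (a quick sanity check, assuming the image leaves the entrypoint unset so an arbitrary command can be run):

$ docker run --rm vimagick/scrapyd pip freeze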
Please use this as the base image for your own project.
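For example, a downstream project might extend the image like this (a minimal sketch; the /code layout and the myproject tag are illustrative assumptions, not part of this repo):

$ cat > Dockerfile << _EOF_
FROM vimagick/scrapyd
COPY . /code
WORKDIR /code
_EOF_
$ docker build -t myproject .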
docker-compose.yml
scrapyd:
  image: vimagick/scrapyd
  ports:
    - "6800:6800"
  restart: always

scrapy:
  image: vimagick/scrapyd
  command: bash
  volumes:
    - .:/code
  working_dir: /code
  restart: always
Run it as a background daemon for scrapyd
$ docker-compose up -d scrapyd
$ docker-compose logs -f scrapyd
$ docker cp scrapyd_scrapyd_1:/var/lib/scrapyd/items .
$ tree items
items
└── myproject
    └── myspider
        └── ad6153ee5b0711e68bc70242ac110005.jl
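Each .jl file is a JSON-lines feed with one scraped item per line, so it can be inspected with standard tools (the hash-named file will differ for every job; python -m json.tool is only used here to pretty-print):

$ wc -l items/myproject/myspider/*.jl
$ head -1 items/myproject/myspider/*.jl | python -m json.tool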
$ mkvirtualenv webbot
$ pip install scrapy scrapyd-client
$ scrapy startproject myproject
$ cd myproject
$ setvirtualenvproject
$ scrapy genspider myspider mydomain.com
$ scrapy edit myspider
$ scrapy list
$ vi scrapy.cfg
$ scrapyd-deploy
$ curl http://localhost:6800/schedule.json -d project=myproject -d spider=myspider
$ firefox http://localhost:6800
File: scrapy.cfg
[settings]
default = myproject.settings
[deploy]
url = http://localhost:6800/
project = myproject
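Beyond schedule.json, the same HTTP JSON API exposes endpoints to inspect and control what is deployed and running; <jobid> below is a placeholder for the job id returned by schedule.json:

$ curl http://localhost:6800/listprojects.json
$ curl "http://localhost:6800/listspiders.json?project=myproject"
$ curl "http://localhost:6800/listjobs.json?project=myproject"
$ curl http://localhost:6800/cancel.json -d project=myproject -d job=<jobid>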
Run it as an interactive shell for scrapy
$ cat > stackoverflow_spider.py << _EOF_
import scrapy

class StackOverflowSpider(scrapy.Spider):
    name = 'stackoverflow'
    start_urls = ['http://stackoverflow.com/questions?sort=votes']

    def parse(self, response):
        # Follow each question link on the listing page
        for href in response.css('.question-summary h3 a::attr(href)'):
            full_url = response.urljoin(href.extract())
            yield scrapy.Request(full_url, callback=self.parse_question)

    def parse_question(self, response):
        # Emit one item per question page
        yield {
            'title': response.css('h1 a::text').extract()[0],
            'votes': response.css('.question .vote-count-post::text').extract()[0],
            'body': response.css('.question .post-text').extract()[0],
            'tags': response.css('.question .post-tag::text').extract(),
            'link': response.url,
        }
_EOF_
$ docker-compose run --rm scrapy
# scrapy runspider stackoverflow_spider.py -o top-stackoverflow-questions.json
# cat top-stackoverflow-questions.json
# exit