
scrapyd

scrapy is an open source and collaborative framework for extracting the data you need from websites, in a fast, simple, yet extensible way.

scrapyd is a service for running Scrapy spiders. It allows you to deploy your Scrapy projects and control their spiders using an HTTP JSON API.
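Once the service is up (it listens on port 6800, as mapped in the docker-compose.yml below), the API can be poked with curl; daemonstatus.json and listprojects.json are standard scrapyd endpoints:

$ curl http://localhost:6800/daemonstatus.json
$ curl http://localhost:6800/listprojects.json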

scrapyd-client is a client for scrapyd. It provides the scrapyd-deploy utility which allows you to deploy your project to a Scrapyd server.

scrapy-splash provides Scrapy+JavaScript integration using Splash.
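A minimal sketch of wiring scrapy-splash into a project, assuming a Splash instance is reachable at http://splash:8050 (for example a separate splash container); see the scrapy-splash docs for the complete middleware list:

# settings.py (excerpt)
SPLASH_URL = 'http://splash:8050'
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'

# in a spider: let Splash render the page before parsing
from scrapy_splash import SplashRequest
yield SplashRequest(url, self.parse, args={'wait': 0.5})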

scrapyrt allows you to easily add an HTTP API to your existing Scrapy project.
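For example, with scrapyrt started inside a project directory (it listens on port 9080 by default; publishing that port is up to you), an existing spider can be triggered per request over HTTP. Spider name and URL below are placeholders:

$ scrapyrt
$ curl 'http://localhost:9080/crawl.json?spider_name=myspider&url=http://mydomain.com/page.html'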

pillow is the Python Imaging Library, installed to support the ImagesPipeline.
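To use the ImagesPipeline, enable it in your project's settings.py and point IMAGES_STORE at a writable directory; the path below is only an example that ends up inside the volume mounted in docker-compose.yml:

# settings.py (excerpt)
ITEM_PIPELINES = {
    'scrapy.pipelines.images.ImagesPipeline': 1,
}
IMAGES_STORE = '/var/lib/scrapyd/images'  # example path, any writable directory works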

This image is based on debian:jessie, with the latest versions of the 6 Python packages above installed.

Please use this as the base image for your own project.
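A minimal derived image might look like this (project layout and requirements.txt are placeholders):

File: Dockerfile

FROM vimagick/scrapyd
COPY . /code
WORKDIR /code
RUN pip install -r requirements.txt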

docker-compose.yml

scrapyd:
  image: vimagick/scrapyd
  ports:
    - "6800:6800"
  volumes:
    - ./data:/var/lib/scrapyd
    - /usr/local/lib/python2.7/dist-packages
  restart: always

scrapy:
  image: vimagick/scrapyd
  command: bash
  volumes:
    - .:/code
  working_dir: /code
  restart: always
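Note: since scrapyd 1.2.0, bind_address defaults to 127.0.0.1, which would make the service unreachable from outside the container, so the image ships a scrapyd.conf that binds to all interfaces. The relevant part looks roughly like this:

File: scrapyd.conf (excerpt)

[scrapyd]
bind_address = 0.0.0.0
http_port    = 6800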

Run it as a background daemon for scrapyd

$ docker-compose up -d scrapyd
$ docker-compose logs -f scrapyd
$ docker cp scrapyd_scrapyd_1:/var/lib/scrapyd/items .
$ tree items
└── myproject
    └── myspider
        └── ad6153ee5b0711e68bc70242ac110005.jl

$ mkvirtualenv webbot
$ pip install scrapy scrapyd-client

$ scrapy startproject myproject
$ cd myproject
$ setvirtualenvproject

$ scrapy genspider myspider mydomain.com
$ scrapy edit myspider
$ scrapy list

$ vi scrapy.cfg
$ scrapyd-client deploy
$ curl http://localhost:6800/schedule.json -d project=myproject -d spider=myspider
$ firefox http://localhost:6800

File: scrapy.cfg

[settings]
default = myproject.settings

[deploy]
url = http://localhost:6800/
project = myproject
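schedule.json returns a job id; the same JSON API can be used to poll or cancel the run (project and spider names are the ones from above, replace JOBID with the id returned by schedule.json):

$ curl 'http://localhost:6800/listjobs.json?project=myproject'
$ curl http://localhost:6800/cancel.json -d project=myproject -d job=JOBID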

Run it as an interactive shell for scrapy

$ cat > stackoverflow_spider.py << _EOF_
import scrapy

class StackOverflowSpider(scrapy.Spider):
    name = 'stackoverflow'
    start_urls = ['http://stackoverflow.com/questions?sort=votes']

    def parse(self, response):
        for href in response.css('.question-summary h3 a::attr(href)'):
            full_url = response.urljoin(href.extract())
            yield scrapy.Request(full_url, callback=self.parse_question)

    def parse_question(self, response):
        yield {
            'title': response.css('h1 a::text').extract()[0],
            'votes': response.css('.question .vote-count-post::text').extract()[0],
            'body': response.css('.question .post-text').extract()[0],
            'tags': response.css('.question .post-tag::text').extract(),
            'link': response.url,
        }
_EOF_

$ docker-compose run --rm scrapy
>>> scrapy runspider stackoverflow_spider.py -o top-stackoverflow-questions.json
>>> cat top-stackoverflow-questions.json
>>> exit