scrapyd
scrapy is an open source and collaborative framework for extracting the data you need from websites, in a fast, simple, yet extensible way.
scrapyd is a service for running Scrapy spiders. It allows you to deploy your Scrapy projects and control their spiders using an HTTP JSON API (see the sketch after this list).
scrapyd-client is a client for scrapyd. It provides the scrapyd-deploy utility which allows you to deploy your project to a Scrapyd server.
scrapy-splash provides Scrapy+JavaScript integration using Splash.
scrapyrt allows you to easily add HTTP API to your existing Scrapy project.
pillow is the Python Imaging Library to support the ImagesPipeline.
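scrapyd's JSON API can be driven from any HTTP client. As a minimal sketch, assuming a scrapyd instance reachable on localhost:6800 (as in the compose file below) and the third-party requests package, two of its read-only endpoints look like this:

import requests

SCRAPYD = 'http://localhost:6800'  # assumed address, matches the compose file below

# daemonstatus.json reports how many jobs are pending, running and finished
status = requests.get(SCRAPYD + '/daemonstatus.json').json()
print(status)

# listprojects.json lists the projects deployed to this scrapyd instance
projects = requests.get(SCRAPYD + '/listprojects.json').json()
print(projects['projects'])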
This image is based on debian:buster. The latest versions of 6 Python packages are installed straight from git:
scrapy: git+https://github.com/scrapy/scrapy.git
scrapyd: git+https://github.com/scrapy/scrapyd.git
scrapyd-client: git+https://github.com/scrapy/scrapyd-client.git
scrapy-splash: git+https://github.com/scrapinghub/scrapy-splash.git
scrapyrt: git+https://github.com/scrapinghub/scrapyrt.git
pillow: git+https://github.com/python-pillow/Pillow.git
Please use this as the base image for your own project.
⚠️ Scrapy has dropped support for Python 2.7, which reached end-of-life on 2020-01-01.
docker-compose.yml
scrapyd:
  image: vimagick/scrapyd:py3
  ports:
    - "6800:6800"
  volumes:
    - ./data:/var/lib/scrapyd
    - /usr/local/lib/python3.7/dist-packages
  restart: unless-stopped

scrapy:
  image: vimagick/scrapyd:py3
  command: bash
  volumes:
    - .:/code
  working_dir: /code
  restart: unless-stopped

scrapyrt:
  image: vimagick/scrapyd:py3
  command: scrapyrt -i 0.0.0.0 -p 9080
  ports:
    - "9080:9080"
  volumes:
    - .:/code
  working_dir: /code
  restart: unless-stopped
Run it as a background daemon for scrapyd
$ docker-compose up -d scrapyd
$ docker-compose logs -f scrapyd
$ docker cp scrapyd_scrapyd_1:/var/lib/scrapyd/items .
$ tree items
items
└── myproject
    └── myspider
        └── ad6153ee5b0711e68bc70242ac110005.jl
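The exported .jl file is in JSON Lines format: one JSON object per line. A minimal sketch for loading it in Python (the path is the one copied out above; the job-id filename will differ on your machine):

import json

# each line of a .jl file is one scraped item, serialized as JSON
with open('items/myproject/myspider/ad6153ee5b0711e68bc70242ac110005.jl') as f:
    items = [json.loads(line) for line in f]
print(len(items), 'items')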
$ mkvirtualenv -p python3 webbot
$ pip install scrapy scrapyd-client
$ scrapy startproject myproject
$ cd myproject
$ setvirtualenvproject
$ scrapy genspider myspider mydomain.com
$ scrapy edit myspider
$ scrapy list
$ vi scrapy.cfg
$ scrapyd-client deploy
$ curl http://localhost:6800/schedule.json -d project=myproject -d spider=myspider
$ firefox http://localhost:6800
File: scrapy.cfg
[settings]
default = myproject.settings
[deploy]
url = http://localhost:6800/
project = myproject
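The schedule.json call shown above with curl can also be scripted. A minimal sketch with the requests package, assuming the project was deployed to localhost:6800 as configured in scrapy.cfg:

import requests

SCRAPYD = 'http://localhost:6800'  # matches the [deploy] url in scrapy.cfg

# schedule.json starts the spider and returns a job id
job = requests.post(SCRAPYD + '/schedule.json',
                    data={'project': 'myproject', 'spider': 'myspider'}).json()
print(job['jobid'])

# listjobs.json shows the job under pending, running or finished
jobs = requests.get(SCRAPYD + '/listjobs.json',
                    params={'project': 'myproject'}).json()
print(jobs['running'], jobs['finished'])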
Run it as an interactive shell for scrapy
$ cat > stackoverflow_spider.py << _EOF_
import scrapy
class StackOverflowSpider(scrapy.Spider):
    name = 'stackoverflow'
    start_urls = ['http://stackoverflow.com/questions?sort=votes']

    def parse(self, response):
        for href in response.css('.question-summary h3 a::attr(href)'):
            full_url = response.urljoin(href.extract())
            yield scrapy.Request(full_url, callback=self.parse_question)

    def parse_question(self, response):
        yield {
            'title': response.css('h1 a::text').extract()[0],
            'votes': response.css('.question div[itemprop="upvoteCount"]::text').extract()[0],
            'body': response.css('.question .postcell').extract()[0],
            'tags': response.css('.question .post-tag::text').extract(),
            'link': response.url,
        }
_EOF_
$ docker-compose run --rm scrapy
>>> scrapy runspider stackoverflow_spider.py -o top-stackoverflow-questions.jl
>>> cat top-stackoverflow-questions.jl
>>> exit
Run it as a realtime crawler for scrapyrt
$ git clone https://github.com/scrapy/quotesbot.git .
$ docker-compose up -d scrapyrt
$ curl -s 'http://localhost:9080/crawl.json?spider_name=toscrape-css&callback=parse&url=http://quotes.toscrape.com/&max_requests=5' | jq -c '.items[]'
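The same crawl.json request can be issued from Python. A minimal sketch with the requests package, using the same parameters as the curl call above:

import requests

# scrapyrt's crawl.json endpoint; spider_name and url are required,
# callback and max_requests are optional
resp = requests.get('http://localhost:9080/crawl.json', params={
    'spider_name': 'toscrape-css',
    'callback': 'parse',
    'url': 'http://quotes.toscrape.com/',
    'max_requests': 5,
}).json()

for item in resp['items']:
    print(item)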