The stockInfo.py contains:
import scrapy
import re
import pkgutil

class QuotesSpider(scrapy.Spider):
    name = "stockInfo"
    # Read the URL list bundled inside the "tutorial" package.
    data = pkgutil.get_data("tutorial", "resources/urls.txt")
    data = data.decode()
    start_urls = data.split("\r\n")

    def parse(self, response):
        # The six-digit stock code embedded in the URL.
        company = re.findall("[0-9]{6}", response.url)[0]
        filename = '%s_info.html' % company
        # Save the raw page to a local file named after the stock code.
        with open(filename, 'wb') as f:
            f.write(response.body)
I run the stockInfo spider from a command window:
d:
cd tutorial
scrapy crawl stockInfo
Now all the web pages listed in resources/urls.txt are downloaded into the directory d:/tutorial.
Then I deploy the stockInfo spider to Scrapinghub and run it there.
How do the following lines of code behave when the spider runs on Scrapinghub?
with open(filename, 'wb') as f:
    f.write(response.body)
How can I save the scraped data on Scrapinghub, and download it from Scrapinghub after the job finishes?
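As far as I understand, files created with open() on Scrapy Cloud are written inside the job's container and are not kept after the job ends, so the usual pattern seems to be to yield items and let the platform store them with the job. A minimal sketch of what I think the parse method should look like (my own guess, not yet tested on Scrapy Cloud):

    def parse(self, response):
        company = re.findall("[0-9]{6}", response.url)[0]
        # Yield the page as an item; Scrapy Cloud stores yielded items
        # with the job instead of writing to the container's filesystem.
        yield {"company": company, "html": response.text}

The stored items could then be fetched locally with the python-scrapinghub client once the job is done ("YOUR_API_KEY" and the job ID "123456/1/1" below are placeholders):

from scrapinghub import ScrapinghubClient

client = ScrapinghubClient("YOUR_API_KEY")
job = client.get_job("123456/1/1")
for item in job.items.iter():
    # Recreate the local files from the stored items.
    filename = "%s_info.html" % item["company"]
    with open(filename, "w", encoding="utf-8") as f:
        f.write(item["html"])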
First, install the scrapinghub library:
pip install scrapinghub[msgpack]
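To confirm the install worked locally before deploying, a quick import check (nothing project-specific here):

import scrapinghub
from scrapinghub import ScrapinghubClient

# If both imports succeed, the library is importable on the local machine.
print(ScrapinghubClient)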
I rewrite the spider as Thiago Curvelo suggests, and deploy it to my Scrapinghub:
Deploy log location: C:\Users\dreams\AppData\Local\Temp\shub_deploy_yzstvtj8.log
Error: Deploy failed: b'{"status": "error", "message": "Internal error"}'
    _get_apisettings, commands_module='sh_scrapy.commands')
  File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 148, in _run_usercode
    _run(args, settings)
  File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 103, in _run
    _run_scrapy(args, settings)
  File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 111, in _run_scrapy
    execute(settings=settings)
  File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 148, in execute
    cmd.crawler_process = CrawlerProcess(settings)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 243, in __init__
    super(CrawlerProcess, self).__init__(settings)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 134, in __init__
    self.spider_loader = _get_spider_loader(settings)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 330, in _get_spider_loader
    return loader_cls.from_settings(settings.frozencopy())
  File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 61, in from_settings
    return cls(settings)
  File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 25, in __init__
    self._load_all_spiders()
  File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 47, in _load_all_spiders
    for module in walk_modules(name):
  File "/usr/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 71, in walk_modules
    submod = import_module(fullpath)
  File "/usr/local/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/app/__main__.egg/mySpider/spiders/stockInfo.py", line 4, in <module>
ImportError: cannot import name ScrapinghubClient
{"message": "shub-image-info exit code: 1", "details": null, "error": "image_info_error"}
{"status": "error", "message": "Internal error"}
The requirements.txt contains only one line:
scrapinghub[msgpack]
The scrapinghub.yml contains:
project: 123456
requirements:
  file: requirements.tx
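For comparison, the layout I believe shub expects looks like this (the stack name is my assumption; note that the requirements file is conventionally named requirements.txt):

project: 123456
stacks:
  default: scrapy:1.5-py3
requirements:
  file: requirements.txt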
Now I deploy it:
D:\mySpider>shub deploy 123456
Packing version 1.0
Deploying to Scrapy Cloud project "123456"
Deploy log last 30 lines:
Deploy log location: C:\Users\dreams\AppData\Local\Temp\shub_deploy_4u7kb9ml.log
Error: Deploy failed: b'{"status": "error", "message": "Internal error"}'
File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 148, in _run_usercode
_run(args, settings)
File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 103, in _run
_run_scrapy(args, settings)
File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 111, in _run_scrapy
execute(settings=settings)
File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 148, in execute
cmd.crawler_process = CrawlerProcess(settings)
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 243, in __init__
super(CrawlerProcess, self).__init__(settings)
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 134, in __init__
self.spider_loader = _get_spider_loader(settings)
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 330, in _get_spider_loader
return loader_cls.from_settings(settings.frozencopy())
File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 61, in from_settings
return cls(settings)
File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 25, in __init__
self._load_all_spiders()
File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 47, in _load_all_spiders
for module in walk_modules(name):
File "/usr/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 71, in walk_modules
submod = import_module(fullpath)
File "/usr/local/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/tmp/unpacked-eggs/__main__.egg/mySpider/spiders/stockInfo.py", line 5, in <module>
from scrapinghub import ScrapinghubClient
ImportError: cannot import name ScrapinghubClient
{"message": "shub-image-info exit code: 1", "details": null, "error": "image_info_error"}
{"status": "error", "message": "Internal error"}
1. The problem remains the same:
ImportError: cannot import name ScrapinghubClient
2. My local PC only has Python 3.7 installed (on Windows 7), so why does the error message mention Python 2.7?
File "/usr/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 71, in walk_modules
Is this error message generated on Scrapinghub (the remote side) and merely sent back to my local machine for display?