Teacher, I've been poking at this for a whole day and I really can't take it anymore.
This is pipelines.py:
from scrapy import Request

class InmagePipeline(object):
    def get_media_requests(self, item, info):
        yield Request(item['inmage_urls'], meta={'name': item['inmage_name']})

    def file_path(self, request, response=None, info=None):
        name = request.meta['name'].strip()
        return name
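From the examples in the Scrapy docs, I think a custom image pipeline is supposed to subclass the built-in ImagesPipeline rather than object, roughly like the sketch below. This is not the code I'm currently running (my custom pipeline is commented out in ITEM_PIPELINES further down), and the loop over the URL list plus the '.jpg' suffix are my own guesses:

from scrapy import Request
from scrapy.pipelines.images import ImagesPipeline

class InmagePipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # 'inmage_urls' is a list, so each URL gets its own Request
        for url in item['inmage_urls']:
            yield Request(url, meta={'name': item['inmage_name']})

    def file_path(self, request, response=None, info=None):
        # save the image under the cleaned page title; the extension is assumed
        name = request.meta['name'].strip()
        return name + '.jpg'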
This is zol.py (the spider):
import scrapy

class ZolSpider(scrapy.Spider):
    name = 'zol'
    allowed_domains = ['zol.com.cn']
    start_urls = ['http://desk.zol.com.cn/bizhi/8335_103428_2.html']

    def parse(self, response):
        inmage_url = response.xpath('//img[@id="bigImg"]/@src').extract_first()
        inmage_name = response.xpath('string(//h3)').extract_first()
        yield {
            'inmage_urls': [inmage_url],
            'inmage_name': inmage_name
        }
        next_url = response.xpath('//a[@id="pageNext"]/@href').extract_first()
        yield scrapy.Request(response.urljob(next_url), callback=self.parse)
        # yield scrapy.Request(next_url, callback=self.parse, dont_filter=True)
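The last yield is the line that blows up in the error log at the bottom. From the Scrapy docs I believe the method on Response is urljoin, not urljob, so I think the intended code is roughly this sketch (the if-guard for a missing next-page link is my own addition, not what I have now):

        next_url = response.xpath('//a[@id="pageNext"]/@href').extract_first()
        if next_url:
            # Response.urljoin resolves the relative href against the current page URL
            yield scrapy.Request(response.urljoin(next_url), callback=self.parse)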
Here is settings.py. I've changed it many times; earlier it was reporting `2019-12-04 21:54:49 [twisted] CRITICAL: Unhandled error in Deferred:`, so I was told to change it. The current settings:
SPIDER_MODULES = ['inmage.spiders']
NEWSPIDER_MODULE = 'inmage.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
ITEM_PIPELINES = {
    'scrapy.pipelines.images.ImagesPipeline': 300
    # 'scrapy.contrib.pipeline.images.ImagesPipeline': 300
    # 'inmage/pipelines.InmagePipeline': 300
}
IMAGES_STORE = 'e:/pic'
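One more thing I'm unsure about: I read that the built-in ImagesPipeline looks for an image_urls field on the item by default, while my item uses inmage_urls, so maybe I also need something like the following in settings (IMAGES_URLS_FIELD and IMAGES_RESULT_FIELD are standard Scrapy settings, but I don't know whether this is part of my problem):

# Assumption: point the built-in ImagesPipeline at my non-default field names
IMAGES_URLS_FIELD = 'inmage_urls'
IMAGES_RESULT_FIELD = 'images'
# If the custom pipeline were enabled instead, the ITEM_PIPELINES key would
# need to be a dotted path, e.g. 'inmage.pipelines.InmagePipeline': 300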
Below is the error output:
D:\pythonDownloads\python.exe E:/demo1/test12/pdemo/inmage/inmage/start.py
2019-12-04 21:57:11 [scrapy.utils.log] INFO: Scrapy 1.8.0 started (bot: inmage)
2019-12-04 21:57:11 [scrapy.utils.log] INFO: Versions: lxml 4.4.2.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.10.0, Python 3.7.4 (tags/v3.7.4:e09359112e, Jul 8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1d 10 Sep 2019), cryptography 2.8, Platform Windows-10-10.0.18362-SP0
2019-12-04 21:57:11 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'inmage', 'NEWSPIDER_MODULE': 'inmage.spiders', 'SPIDER_MODULES': ['inmage.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'}
2019-12-04 21:57:11 [scrapy.extensions.telnet] INFO: Telnet Password: f81e894471729d48
2019-12-04 21:57:11 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2019-12-04 21:57:11 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-12-04 21:57:11 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-12-04 21:57:11 [scrapy.middleware] INFO: Enabled item pipelines:
['scrapy.pipelines.images.ImagesPipeline']
2019-12-04 21:57:11 [scrapy.core.engine] INFO: Spider opened
2019-12-04 21:57:11 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-12-04 21:57:11 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-12-04 21:57:12 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://desk.zol.com.cn/bizhi/8335_103428_2.html> (referer: None)
2019-12-04 21:57:12 [scrapy.core.scraper] DEBUG: Scraped from <200 http://desk.zol.com.cn/bizhi/8335_103428_2.html>
{'inmage_urls': ['https://desk-fd.zol-img.com.cn/t_s960x600c5/g5/M00/0E/08/ChMkJ13g3omIZoikAASSbgH1-L4AAvfdQMk3h0ABJKG291.jpg'], 'inmage_name': '\r\n\t\t漩涡鸣人图片-火影忍者鸣人图片\r\n\t\t(1/10)\r\n\t', 'images': []}
2019-12-04 21:57:12 [scrapy.core.scraper] ERROR: Spider error processing <GET http://desk.zol.com.cn/bizhi/8335_103428_2.html> (referer: None)
Traceback (most recent call last):
File "D:\pythonDownloads\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
yield next(it)
File "D:\pythonDownloads\lib\site-packages\scrapy\core\spidermw.py", line 84, in evaluate_iterable
for r in iterable:
File "D:\pythonDownloads\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
for x in result:
File "D:\pythonDownloads\lib\site-packages\scrapy\core\spidermw.py", line 84, in evaluate_iterable
for r in iterable:
File "D:\pythonDownloads\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr>
return (_set_referer(r) for r in result or ())
File "D:\pythonDownloads\lib\site-packages\scrapy\core\spidermw.py", line 84, in evaluate_iterable
for r in iterable:
File "D:\pythonDownloads\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
return (r for r in result or () if _filter(r))
File "D:\pythonDownloads\lib\site-packages\scrapy\core\spidermw.py", line 84, in evaluate_iterable
for r in iterable:
File "D:\pythonDownloads\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
return (r for r in result or () if _filter(r))
File "E:\demo1\test12\pdemo\inmage\inmage\spiders\zol.py", line 18, in parse
yield scrapy.Request(response.urljob(next_url),callback=self.parse)
AttributeError: 'HtmlResponse' object has no attribute 'urljob'
2019-12-04 21:57:12 [scrapy.core.engine] INFO: Closing spider (finished)
2019-12-04 21:57:12 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 319,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 9959,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'elapsed_time_seconds': 1.65597,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2019, 12, 4, 13, 57, 12, 979906),
'item_scraped_count': 1,
'log_count/DEBUG': 2,
'log_count/ERROR': 1,
'log_count/INFO': 10,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'spider_exceptions/AttributeError': 1,
'start_time': datetime.datetime(2019, 12, 4, 13, 57, 11, 323936)}
2019-12-04 21:57:12 [scrapy.core.engine] INFO: Spider closed (finished)
Process finished with exit code 0