Members can ask questions here, and the 百战程序员 (Baizhan Programmer) instructors answer every one.
Questions and answers that are helpful to everyone are marked as "Recommended".
After finishing a lesson, browsing the questions other students have asked will help you learn more thoroughly.
So far, students have asked a total of 134,510 questions.

Teacher, this is the output I get; it reports NoneType: None. What should I do?

2024-05-25 16:57:39 [scrapy.utils.log] INFO: Scrapy 2.11.2 started (bot: ImagePipeline)
2024-05-25 16:57:39 [scrapy.utils.log] INFO: Versions: lxml 5.2.2.0, libxml2 2.11.7, cssselect 1.2.0, parsel 1.9.1, w3lib 2.1.2, Twisted 24.3.0, Python 3.11.3 (tags/v3.11.3:f3909b8, Apr  4 2023, 23:49:59) [MSC v.1934 64 bit (AMD64)], pyOpenSSL 24.1.0 (OpenSSL 3.2.1 30 Jan 2024), cryptography 42.0.7, Platform Windows-10-10.0.19045-SP0
2024-05-25 16:57:39 [scrapy.addons] INFO: Enabled addons:
[]
2024-05-25 16:57:39 [asyncio] DEBUG: Using selector: SelectSelector
2024-05-25 16:57:39 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2024-05-25 16:57:39 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2024-05-25 16:57:39 [scrapy.extensions.telnet] INFO: Telnet Password: 68881f9c415504d8
2024-05-25 16:57:39 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2024-05-25 16:57:39 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'ImagePipeline',
 'FEED_EXPORT_ENCODING': 'utf-8',
 'NEWSPIDER_MODULE': 'ImagePipeline.spiders',
 'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
 'SPIDER_MODULES': ['ImagePipeline.spiders'],
 'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor',
 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
               '(KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36 '
               'Edg/125.0.0.0'}
2024-05-25 16:57:39 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2024-05-25 16:57:39 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2024-05-25 16:57:39 [scrapy.middleware] INFO: Enabled item pipelines:
['scrapy.pipelines.images.ImagesPipeline']
2024-05-25 16:57:39 [scrapy.core.engine] INFO: Spider opened
2024-05-25 16:57:40 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2024-05-25 16:57:40 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2024-05-25 16:57:40 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://desk.zol.com.cn/bizhi/9812_118227_2.html> (referer: None)
2024-05-25 16:57:40 [scrapy.downloadermiddlewares.offsite] DEBUG: Filtered offsite request to 'desk-fd.zol-img.com.cn': <GET https://desk-fd.zol-img.com.cn/t_s960x600c5/g6/M00/0F/09/ChMkKWF4tn6IeIGwAD5r3l1JSDcAAU-5QFAFu4APmv2924.jpg>
2024-05-25 16:57:40 [scrapy.pipelines.media] ERROR: [Failure instance: Traceback: <class 'scrapy.pipelines.files.FileException'>: 
D:\pythonProject\pythonProject01\.venv\Lib\site-packages\twisted\internet\defer.py:536:addCallbacks
D:\pythonProject\pythonProject01\.venv\Lib\site-packages\twisted\internet\defer.py:1078:_runCallbacks
D:\pythonProject\pythonProject01\.venv\Lib\site-packages\scrapy\pipelines\media.py:197:_check_media_to_download
D:\pythonProject\pythonProject01\.venv\Lib\site-packages\twisted\internet\defer.py:536:addCallbacks
--- <exception caught here> ---
D:\pythonProject\pythonProject01\.venv\Lib\site-packages\twisted\internet\defer.py:1078:_runCallbacks
D:\pythonProject\pythonProject01\.venv\Lib\site-packages\scrapy\pipelines\files.py:459:media_failed
]
NoneType: None
2024-05-25 16:57:40 [scrapy.core.scraper] DEBUG: Scraped from <200 https://desk.zol.com.cn/bizhi/9812_118227_2.html>
{'image_urls': ['https://desk-fd.zol-img.com.cn/t_s960x600c5/g6/M00/0F/09/ChMkKWF4tn6IeIGwAD5r3l1JSDcAAU-5QFAFu4APmv2924.jpg'], 'images': []}
2024-05-25 16:57:40 [scrapy.core.engine] INFO: Closing spider (finished)
2024-05-25 16:57:40 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 1,
 'downloader/exception_type_count/scrapy.exceptions.IgnoreRequest': 1,
 'downloader/request_bytes': 330,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 9679,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'elapsed_time_seconds': 0.433368,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2024, 5, 25, 8, 57, 40, 435966, tzinfo=datetime.timezone.utc),
 'httpcompression/response_bytes': 33411,
 'httpcompression/response_count': 1,
 'item_scraped_count': 1,
 'log_count/DEBUG': 6,
 'log_count/ERROR': 1,
 'log_count/INFO': 10,
 'offsite/domains': 1,
 'offsite/filtered': 1,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2024, 5, 25, 8, 57, 40, 2598, tzinfo=datetime.timezone.utc)}
2024-05-25 16:57:40 [scrapy.core.engine] INFO: Spider closed (finished)
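
Judging from the log above, the page itself is crawled fine (the item is scraped with the correct image_urls), but the request for the actual image file is dropped by the downloader-side OffsiteMiddleware: "Filtered offsite request to 'desk-fd.zol-img.com.cn'". Because that request never runs, ImagesPipeline hits media_failed with a FileException and the item ends up with 'images': []; the trailing "NoneType: None" is just that failure being logged without a traceback, not a separate error. Since Scrapy 2.11 this middleware also checks pipeline requests against allowed_domains, so the image CDN host has to be allowed as well. Below is a minimal sketch of that fix; the class name, spider name, and XPath selector are placeholders, not the original course code:

import scrapy

class ZolWallpaperSpider(scrapy.Spider):   # placeholder class/spider name
    name = "zol_wallpaper"
    # Allow the image CDN host too; otherwise the downloader-side
    # OffsiteMiddleware shown in the log filters the image request.
    allowed_domains = ["desk.zol.com.cn", "desk-fd.zol-img.com.cn"]
    start_urls = ["https://desk.zol.com.cn/bizhi/9812_118227_2.html"]

    def parse(self, response):
        # ImagesPipeline downloads whatever URLs end up in the image_urls field.
        yield {
            "image_urls": response.xpath("//img/@src").getall(),  # placeholder selector
        }

If you do not need to restrict link-following at all, removing allowed_domains from the spider has the same effect.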




Hello teacher, I wrote the code following the course code document, but running it produces the error below. I don't know what's causing it. Could you please help explain?

from fake_useragent import UserAgent
import requests
from lxml import etree

# Send the request
class Downloader():
    def do_download(self,url):
        print(url)
        headers = {'User-Agent' : UserAgent().chrome}
        resp = requests.get(url, headers=headers)
        if resp.status_code == 200:
            resp.encoding = 'utf-8'
            return resp.text

# Parse the data
class Parser():
    def do_parse(self,html):
        e = etree.HTML(html)
        # Extract the content to be scraped
        contents = [div.xpath('string(.)').strip() for div in e.xpath('//div[@class="content"]')]
        urls = ['https://www.qiushibaike.com{}'.format(url) for url in e.xpath('//ul[@class="pagination"]/li/a/@href')]
        return contents,urls

# Save the data
class DataOutPut():
    def do_save(self,datas):
        with open('duanzi2.txt','a',encoding='utf-8') as f:
            for data in datas:
                f.write(data + '\n')

# URL manager
class URLManager():
    def __init__(self):
        self.new_url = set()
        self.old_url = set()

    # Add a single URL
    def add_new_url(self,url):
        if url is not None and url != '' and url not in self.old_url:
            self.new_url.add(url)
    # Add multiple URLs
    def add_new_urls(self,urls):
        for url in urls:
            self.add_new_url(url)
    # Get one URL
    def get_new_url(self):
        url = self.new_url.pop()
        self.old_url.add(url)
        return url
    # Get how many URLs are left to crawl
    def get_new_url_size(self):
        return len(self.new_url)

    # Check whether any URLs remain to crawl
    def have_new_url(self):
        return self.get_new_url_size() > 0

# Scheduler
class Scheduler:
    def __init__(self):
        self.downloader = Downloader()
        self.parser = Parser()
        self.data_out_put = DataOutPut()
        self.url_manger = URLManager()

    def start(self,url):
        self.url_manger.add_new_urls(url)
        while self.url_manger.have_new_url():
            url = self.url_manger.get_new_url()
            html = self.downloader.do_download(url)
            datas,urls = self.parser.do_parse(html)
            self.data_out_put.do_save(datas)
            self.url_manger.add_new_urls(urls)

# Main entry point
if __name__ == '__main__':
    scheduler = Scheduler()
    url = 'https://www.qiushibaike.com/text/'
    scheduler.start(url)

(error screenshot attached: image.png)
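
The screenshot is not legible in this export, but one likely cause, judging from the code alone, is in Scheduler.start: it passes the single start URL string to add_new_urls(), which iterates over its argument, so every character ('h', 't', 't', 'p', ...) gets queued as a separate "URL", and requests.get() then fails on the first popped character (typically with requests.exceptions.MissingSchema). A sketch of the probable fix, assuming that is indeed the error you are seeing; only Scheduler changes, the other classes stay as written:

# Scheduler (corrected sketch): queue the start URL as a single URL,
# and skip pages the Downloader could not fetch.
class Scheduler:
    def __init__(self):
        self.downloader = Downloader()
        self.parser = Parser()
        self.data_out_put = DataOutPut()
        self.url_manger = URLManager()

    def start(self, url):
        # add_new_urls() iterates its argument; a plain string would be
        # split into characters, so use add_new_url() for the start URL.
        self.url_manger.add_new_url(url)
        while self.url_manger.have_new_url():
            url = self.url_manger.get_new_url()
            html = self.downloader.do_download(url)
            # do_download() returns None for non-200 responses,
            # which etree.HTML() cannot parse, so skip those pages.
            if html is None:
                continue
            datas, urls = self.parser.do_parse(html)
            self.data_out_put.do_save(datas)
            self.url_manger.add_new_urls(urls)

Note also that www.qiushibaike.com appears to have been taken offline, so even corrected code may no longer get a 200 response from that site.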

