
from fake_useragent import UserAgent
import requests
from lxml import etree

# downloader: sends the request
class Downloader():
    def do_download(self, url):
        print(url)
        # UserAgent must be instantiated before reading .chrome
        headers = {"User-Agent": UserAgent().chrome}
        resp = requests.get(url, headers=headers)
        # status code 200 means the request succeeded
        if resp.status_code == 200:
            resp.encoding = 'utf-8'
            return resp.text

# parser: extracts data from the page
class Parser():
    def do_parse(self, html):
        e = etree.HTML(html)
        # string(.) converts each node to plain text; strip() removes surrounding whitespace
        contents = [div.xpath('string(.)').strip() for div in e.xpath('//div[@class="content"]')]
        urls = ['https://www.qiushibaike.com{}'.format(url) for url in e.xpath('//ul[@class="pagination"]/li/a/@href')]
        return contents, urls

# output: saves the data
class DataOutPut():
    def do_save(self, datas):
        with open('duanzii.txt', 'a', encoding='utf-8') as f:
            for data in datas:
                f.write(data + '\n')

# URL manager
class URLManger():
    # initialization
    def __init__(self):
        # separate sets for new and used URLs make the state easy to follow
        self.new_url = set()
        self.old_url = set()
    # add a single URL
    def add_new_url(self, url):
        # the URL must not be None, must not be empty, and must not have been crawled before
        if url is not None and url != '' and url not in self.old_url:
            self.new_url.add(url)
    # add multiple URLs
    def add_new_urls(self, urls):
        for url in urls:
            # reuse the single-URL check above
            self.add_new_url(url)

    # fetch one URL
    def get_new_url(self):
        # take one URL out and delete it; pop removes it from the set
        url = self.add_new_url.pop()
        # keep the used URL in old_url so it is not crawled twice
        self.old_url.add(url)
        return url
    # how many URLs are still waiting to be crawled
    def get_new_url_siaze(self):
        return len(self.new_url)
    # whether there are still URLs to crawl
    def have_new_url(self):
        return self.get_new_url_siaze() > 0

# scheduler: coordinates the four classes above
class Scheduler:
    def __init__(self):
        self.downloader = Downloader()
        self.paser = Parser()
        self.data_out_put = DataOutPut()
        self.url_manager = URLManger()
    def start(self, url):
        self.url_manager.add_new_url(url)
        while self.url_manager.have_new_url():
            # take one URL out of the manager
            url = self.url_manager.get_new_url()
            # download the page
            html = self.downloader.do_download(url)
            # parse: datas is the current page's content, urls are the next pages to crawl
            datas, urls = self.paser.do_parse(html)
            # save the data from the previous step
            self.data_out_put.do_save(datas)
            # feed the newly found URLs back into the manager
            self.url_manager.add_new_urls(urls)


if __name__=='__main__':
    scheduler = Scheduler()
    url = 'https://www.qiushibaike.com/text/'
    scheduler.start(url)


This is the complete code; pop still doesn't work.
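The pop error most likely comes from get_new_url: it calls pop() on the method self.add_new_url instead of on the self.new_url set, so Python raises an AttributeError because a method has no pop attribute. A minimal corrected sketch of just that method, assuming the rest of URLManger stays as posted:

    # fetch one URL (corrected sketch)
    def get_new_url(self):
        # pop() must be called on the new_url set, not on the add_new_url method
        url = self.new_url.pop()
        # keep the used URL so it is not crawled twice
        self.old_url.add(url)
        return url

With that change the scheduler loop can keep pulling URLs until the set is empty.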

Python Full Series / Stage 15: Python Crawler Development / Anti-Anti-Crawling - Post 961

D:\python文件\爬虫\demo\first_scrapy>scrapy crawl baidu
2020-05-26 18:03:37 [scrapy.utils.log] INFO: Scrapy 2.1.0 started (bot: first_scrapy)
2020-05-26 18:03:37 [scrapy.utils.log] INFO: Versions: lxml 4.5.1.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 3.7.6 (tags/v3.7.6:43364a7ae0, Dec 19 2019, 00:42:30) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1g  21 Apr 2020), cryptography 2.9.2, Platform Windows-10-10.0.17134-SP0
2020-05-26 18:03:37 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2020-05-26 18:03:37 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'first_scrapy',
 'NEWSPIDER_MODULE': 'first_scrapy.spiders',
 'ROBOTSTXT_OBEY': True,
 'SPIDER_MODULES': ['first_scrapy.spiders']}
2020-05-26 18:03:37 [scrapy.extensions.telnet] INFO: Telnet Password: 2dc378fbfcafc19b
2020-05-26 18:03:37 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2020-05-26 18:03:37 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-05-26 18:03:37 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-05-26 18:03:37 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-05-26 18:03:37 [scrapy.core.engine] INFO: Spider opened
2020-05-26 18:03:37 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-05-26 18:03:37 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-05-26 18:03:37 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://baidu.com/robots.txt> (referer: None)
2020-05-26 18:03:37 [scrapy.downloadermiddlewares.robotstxt] DEBUG: Forbidden by robots.txt: <GET http://baidu.com/>
2020-05-26 18:03:38 [scrapy.core.engine] INFO: Closing spider (finished)
2020-05-26 18:03:38 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 1,
 'downloader/exception_type_count/scrapy.exceptions.IgnoreRequest': 1,
 'downloader/request_bytes': 219,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 2680,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'elapsed_time_seconds': 0.291164,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 5, 26, 10, 3, 38, 88538),
 'log_count/DEBUG': 2,
 'log_count/INFO': 10,
 'response_received_count': 1,
 'robotstxt/forbidden': 1,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/200': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2020, 5, 26, 10, 3, 37, 797374)}
2020-05-26 18:03:38 [scrapy.core.engine] INFO: Spider closed (finished)


I also followed the video and crawled Baidu, but no HTML was returned.
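The log itself points at the cause: the only page request, <GET http://baidu.com/>, is dropped by RobotsTxtMiddleware ("Forbidden by robots.txt"), so the spider's parse method never receives a response and nothing is printed. A likely fix, assuming the standard first_scrapy project layout from the video, is to turn off robots.txt checking in settings.py:

# first_scrapy/settings.py
# stop Scrapy from obeying baidu.com's robots.txt so the page itself is downloaded
ROBOTSTXT_OBEY = False

With the setting changed, the request for the page is no longer ignored and parse() receives the response.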

Python Full Series / Stage 15: Python Crawler Development / Mobile Crawler Development - Post 962

from selenium import webdriver
from time import sleep


# create the browser
chrome = webdriver.Chrome()
# request the url
chrome.get(
    'https://yys.cbg.163.com/cgi/mweb/pl/role?view_loc=equip_list')
js = 'chrome.document.documentElement.scrollTop=10000'
chrome.execute_script(js)
sleep(5)

# locate the fields to extract
qf = chrome.find_elements_by_class_name('icon-text')
jg = chrome.find_elements_by_class_name('price')
sc = chrome.find_elements_by_class_name('collect')
ssr = chrome.find_elements_by_class_name('base')
for qfs, jgs, scs, ssrs in zip(qf, jg, sc, ssr):
    print(qfs.text, jgs.text, scs.text, ssrs.text)

Traceback (most recent call last):
  File "E:/untitled1/爬虫/数据提取/", line 11, in <module>
    chrome.execute_script(js)
  File "E:\untitled1\venv\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 636, in execute_script
    'args': converted_args})['value']
  File "E:\untitled1\venv\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
    self.error_handler.check_response(response)
  File "E:\untitled1\venv\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.JavascriptException: Message: javascript error: Cannot read property 'documentElement' of undefined
  (Session info: chrome=81.0.4044.138)

Process finished with exit code 1


As soon as I add the scroll-bar code I keep getting this error; if I delete the scroll-bar code everything works fine.
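The JavaScript string is the likely culprit: execute_script runs the string inside the browser, where the Python variable name chrome does not exist, so chrome.document evaluates to undefined and reading documentElement fails. A minimal sketch of just the scrolling part, assuming the rest of the script stays as posted:

# the script runs in the page's JS context, so use the JS globals document/window, not the Python variable name
js = 'document.documentElement.scrollTop=10000'
chrome.execute_script(js)
# an equivalent alternative: chrome.execute_script('window.scrollTo(0, document.body.scrollHeight)')
sleep(5)

Everything else (chrome.get and the find_elements_by_class_name calls) can stay as it is.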

Python Full Series / Stage 15: Python Crawler Development / Anti-Anti-Crawling - Post 965
