Members can post questions here, and the 百战程序员 instructors answer every one.
Q&As that are helpful to everyone are marked as "Recommended".
After finishing a lesson, browsing the questions other students have asked will help you learn more comprehensively.
So far, students have asked a total of 132,647 questions.

I suddenly woke up in the middle of the night and, with nothing better to do, wrote a little crawler that uses Selenium to automate a login. The code is as follows:

"""
   用 selenium 实现对中国大学mocc的登录
   程序运行报错:正在处理中
"""
from selenium import webdriver
from time import sleep
fox = webdriver.Firefox()
url = 'https://www.icourse163.org/member/login.htm?returnUrl=aHR0cHM6Ly93d3cuaWNvdXJzZTE2My5vcmcvaW5kZXguaHRt#/webLoginIndex'
fox.get(url)
sleep(3)

# Click the login button to bring up the login dialog
fox.find_element_by_css_selector('#auto-id-1628452551743').click()
sleep(1)
# Switch to "other login methods"
fox.find_element_by_css_selector('#login-cnt > div > div > div > div.ux-login-set-scan-code_ft > span').click()
# Locate the account and password boxes and type in the credentials
fox.find_element_by_css_selector('#auto-id-1628452775609').send_keys('*************')
sleep(1)
fox.find_element_by_css_selector('#auto-id-1628452775612').send_keys('*************')
sleep(1)
fox.find_element_by_css_selector('#dologin').click()
sleep(1)

print(fox.current_url)
print(fox.page_source)

sleep(5)
fox.quit()

The error message is as follows:

     

C:\Users\Administrator\AppData\Local\Programs\Python\Python39\python.exe D:/pythonProject2/实战python网络爬虫/selenium的使用/selenium_03.py
Traceback (most recent call last):
  File "D:\pythonProject2\实战python网络爬虫\selenium的使用\selenium_03.py", line 13, in <module>
    fox.find_element_by_css_selector('#auto-id-1628452551743').click()
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 598, in find_element_by_css_selector
    return self.find_element(by=By.CSS_SELECTOR, value=css_selector)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 976, in find_element
    return self.execute(Command.FIND_ELEMENT, {
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
    self.error_handler.check_response(response)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: #auto-id-1628452551743

Process finished with exit code 1

It cannot locate the element. Teacher, how should I handle this?
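
The `#auto-id-1628452551743`-style selectors are the likely culprit: icourse163.org generates those `auto-id-*` values dynamically, so they change on every page load, and a selector copied from an earlier session simply does not exist in the next one, which is exactly what the `NoSuchElementException` says. A minimal sketch of a more robust pattern, using explicit waits and a stable attribute instead of an auto-generated id (the `div.unlogin` selector below is a hypothetical placeholder, not verified against the live page):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

fox = webdriver.Firefox()
fox.get('https://www.icourse163.org/member/login.htm#/webLoginIndex')

# Poll for up to 10 seconds instead of relying on fixed sleep() calls.
wait = WebDriverWait(fox, 10)

# Locate elements by attributes that survive reloads (class names, text,
# name attributes), never by an auto-generated id. 'div.unlogin' is an
# illustrative assumption; inspect the page for a hook that stays stable.
login = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'div.unlogin')))
login.click()

Also worth checking: NetEase login widgets are often rendered inside an iframe, and if the account and password fields live in one, they must be reached with `fox.switch_to.frame(...)` first, or every lookup in the main document will fail the same way.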


Python Full Series / Stage 16: Python Crawler Development / Anti-Anti-Crawler - Post #886

from fake_useragent import UserAgent
import ssl
import requests
from lxml import etree

from time import sleep

def get_html(url):
    '''
    :param url: the URL to crawl
    :return: the HTML of the page, or None
    '''
    headers = {"User_Agent": UserAgent().random}
    resp = requests.get(url, headers=headers)

    # status_code holds the HTTP status code of the response
    if resp.status_code == 200:
        resp.encoding = 'utf-8'
        return resp.text
    else:
        return None

def parse_list(html):
    '''
    :param html: the HTML of a page containing the book list
    :return: a list of book detail-page URLs
    '''
    e = etree.HTML(html)
    # Work around CA certificate verification (left disabled)
    # ssl._create_default_https_context = ssl._create_unverified_context
    list_url = ['https://www.qidian.com{}'.format(url) for url in e.xpath('//div[@class="book-img-box"]/a/@href')]
    return list_url

def parse_index(html):
    '''
    :param html: the HTML of a book's detail page
    :return: the extracted book name
    '''
    e = etree.HTML(html)
    name = e.xpath('//div/h1/span/a/text()')
    return name

def main():
    num = int(input("How many pages to fetch: "))
    for page in range(num):
        url = 'https://www.qidian.com/all?&page={}'.format(page + 1)
        list_html = get_html(url)
        list_url = parse_list(list_html)
        for url in list_url:
            info_html = get_html(url)
            move = parse_index(info_html)
            print(move)

if __name__ == '__main__':
    main()



Teacher, could you take a look at this for me? Why does it end up returning an empty list? I stepped through it in the debugger and this is the line that misbehaves; it returns nothing, even though the XPath looks fine when I test it with a browser plugin:

name = e.xpath('//div/h1/span/a/text()')
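
One detail worth checking before blaming the XPath: in `get_html` the header key is spelled `User_Agent` with an underscore, so the site never receives a real `User-Agent` header and the request goes out identified as python-requests, which many sites answer with a different or stripped-down page. A hedged guess at the fix, not a confirmed diagnosis:

from fake_useragent import UserAgent
import requests

def get_html(url):
    # "User-Agent" with a hyphen; with an underscore the random UA is sent
    # under the wrong header name and requests' default UA is used instead.
    headers = {"User-Agent": UserAgent().random}
    resp = requests.get(url, headers=headers)
    if resp.status_code == 200:
        resp.encoding = 'utf-8'
        return resp.text
    return None

If the list is still empty after that, print the raw resp.text of one detail page and check whether the h1 > span > a structure is actually present in the served HTML; anything injected by JavaScript is invisible to requests.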


Python Full Series / Stage 16: Python Crawler Development / Anti-Anti-Crawler - Post #889

2024-05-22 15:42:18 [scrapy.utils.log] INFO: Scrapy 2.6.1 started (bot: scrapy07)
2024-05-22 15:42:18 [scrapy.utils.log] INFO: Versions: lxml 5.2.2.0, libxml2 2.11.7, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 22.4.0, Python 3.11.3 (tags/v3.11.3:f3909b8, Apr  4 2023, 23:49:59) [MSC v.1934 64 bit (AMD64)], pyOpenSSL 22.0.0 (OpenSSL 1.1.1n  15 Mar 2022), cryptography 36.0.2, Platform Windows-10-10.0.22621-SP0
2024-05-22 15:42:18 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'scrapy07',
 'NEWSPIDER_MODULE': 'scrapy07.spiders',
 'SPIDER_MODULES': ['scrapy07.spiders']}
2024-05-22 15:42:18 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2024-05-22 15:42:18 [scrapy.extensions.telnet] INFO: Telnet Password: c658f43dc33b4451
2024-05-22 15:42:18 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
Unhandled error in Deferred:
2024-05-22 15:42:18 [twisted] CRITICAL: Unhandled error in Deferred:

Traceback (most recent call last):
  File "D:\python_env\spider2_env\Lib\site-packages\scrapy\crawler.py", line 206, in crawl
    return self._crawl(crawler, *args, **kwargs)
  File "D:\python_env\spider2_env\Lib\site-packages\scrapy\crawler.py", line 210, in _crawl
    d = crawler.crawl(*args, **kwargs)
  File "D:\python_env\spider2_env\Lib\site-packages\twisted\internet\defer.py", line 1905, in unwindGenerator
    return _cancellableInlineCallbacks(gen)
  File "D:\python_env\spider2_env\Lib\site-packages\twisted\internet\defer.py", line 1815, in _cancellableInlineCallbacks
    _inlineCallbacks(None, gen, status)
--- <exception caught here> ---
  File "D:\python_env\spider2_env\Lib\site-packages\twisted\internet\defer.py", line 1660, in _inlineCallbacks
    result = current_context.run(gen.send, result)
  File "D:\python_env\spider2_env\Lib\site-packages\scrapy\crawler.py", line 101, in crawl
    self.spider = self._create_spider(*args, **kwargs)
  File "D:\python_env\spider2_env\Lib\site-packages\scrapy\crawler.py", line 113, in _create_spider
    return self.spidercls.from_crawler(self, *args, **kwargs)
  File "D:\code\python\spider_code\scrapy07\scrapy07\spiders\selenium.py", line 29, in from_crawler
    s = Service(executable_path='scrapy07/chromedriver.exe')
builtins.TypeError: Service.__init__() got an unexpected keyword argument 'executable_path'

--- Logging error ---
Traceback (most recent call last):
  File "E:\Program Files\Python\Python311\Lib\logging\__init__.py", line 1110, in emit
    msg = self.format(record)
          ^^^^^^^^^^^^^^^^^^^
  File "E:\Program Files\Python\Python311\Lib\logging\__init__.py", line 953, in format
    return fmt.format(record)
           ^^^^^^^^^^^^^^^^^^
  File "E:\Program Files\Python\Python311\Lib\logging\__init__.py", line 695, in format
    record.exc_text = self.formatException(record.exc_info)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\Program Files\Python\Python311\Lib\logging\__init__.py", line 645, in formatException
    traceback.print_exception(ei[0], ei[1], tb, None, sio)
  File "E:\Program Files\Python\Python311\Lib\traceback.py", line 124, in print_exception
    te = TracebackException(type(value), value, tb, limit=limit, compact=True)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\Program Files\Python\Python311\Lib\traceback.py", line 690, in __init__
    self.stack = StackSummary._extract_from_extended_frame_gen(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\Program Files\Python\Python311\Lib\traceback.py", line 416, in _extract_from_extended_frame_gen
    for f, (lineno, end_lineno, colno, end_colno) in frame_gen:
  File "E:\Program Files\Python\Python311\Lib\traceback.py", line 353, in _walk_tb_with_full_positions
    positions = _get_code_position(tb.tb_frame.f_code, tb.tb_lasti)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\Program Files\Python\Python311\Lib\traceback.py", line 366, in _get_code_position
    positions_gen = code.co_positions()
                    ^^^^^^^^^^^^^^^^^
AttributeError: '_Code' object has no attribute 'co_positions'
Call stack:
  File "D:\python_env\spider2_env\Lib\site-packages\twisted\internet\defer.py", line 344, in __del__
    log.failure(format, self.failResult, debugInfo=debugInfo)
  File "D:\python_env\spider2_env\Lib\site-packages\twisted\logger\_logger.py", line 190, in failure
    self.emit(level, format, log_failure=failure, **kwargs)
  File "D:\python_env\spider2_env\Lib\site-packages\twisted\logger\_logger.py", line 142, in emit
    self.observer(event)
  File "D:\python_env\spider2_env\Lib\site-packages\twisted\logger\_observer.py", line 81, in __call__
    observer(event)
  File "D:\python_env\spider2_env\Lib\site-packages\twisted\logger\_legacy.py", line 90, in __call__
    self.legacyObserver(event)
  File "D:\python_env\spider2_env\Lib\site-packages\twisted\python\log.py", line 579, in emit
    _publishNew(self._newObserver, eventDict, textFromEventDict)
  File "D:\python_env\spider2_env\Lib\site-packages\twisted\logger\_legacy.py", line 147, in publishToNewObserver
    observer(eventDict)
  File "D:\python_env\spider2_env\Lib\site-packages\twisted\logger\_stdlib.py", line 112, in __call__
    self.logger.log(stdlibLevel, StringifiableFromEvent(event), exc_info=excInfo)
Message: <twisted.logger._stdlib.StringifiableFromEvent object at 0x0000026D24D85550>
Arguments: ()


Teacher, how do I fix this error?
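
This `TypeError` usually points at the import rather than the argument: in Selenium 4 the browser-specific service classes (for example `selenium.webdriver.chrome.service.Service`) accept `executable_path`, while the generic base class `selenium.webdriver.common.service.Service` takes `executable` instead and rejects the keyword with exactly this message. A minimal sketch of the likely fix, assuming the spider drives Chrome with the chromedriver path from the traceback:

# In scrapy07/spiders/selenium.py: import the Chrome-specific Service,
# not selenium.webdriver.common.service.Service, whose __init__ has no
# executable_path parameter.
from selenium import webdriver
from selenium.webdriver.chrome.service import Service

service = Service(executable_path='scrapy07/chromedriver.exe')
driver = webdriver.Chrome(service=service)

With Selenium 4.6+ the path can usually be dropped entirely (plain webdriver.Chrome() with no arguments) and Selenium Manager will resolve a matching driver on its own.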

Python Full Series / Stage 16: Python Crawler Development / Using the Scrapy Framework - Post #896
