
[Screenshot: 屏幕截图 2021-03-20 074704.png]

2021-03-20T07:43:45.126+0800 [initandlisten] MongoDB starting : pid=6844 port=27017 dbpath=d:\mongodb_64\db 64-bit host=LAPTOP-MSSFAU8A
2021-03-20T07:43:45.130+0800 [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
2021-03-20T07:43:45.132+0800 [initandlisten] db version v2.6.5
2021-03-20T07:43:45.132+0800 [initandlisten] git version: e99d4fcb4279c0279796f237aa92fe3b64560bf6
2021-03-20T07:43:45.132+0800 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
2021-03-20T07:43:45.132+0800 [initandlisten] allocator: system
2021-03-20T07:43:45.132+0800 [initandlisten] options: { storage: { dbPath: "d:\mongodb_64\db" } }
2021-03-20T07:43:45.132+0800 [initandlisten] exception in initAndListen: 10296
*********************************************************************
 ERROR: dbpath (d:\mongodb_64\db) does not exist.
 Create this directory or give existing directory in --dbpath.
 See http://dochub.mongodb.org/core/startingandstoppingmongo
*********************************************************************
, terminating
2021-03-20T07:43:45.133+0800 [initandlisten] dbexit:
2021-03-20T07:43:45.133+0800 [initandlisten] shutdown: going to close listening sockets...
2021-03-20T07:43:45.133+0800 [initandlisten] shutdown: going to flush diaglog...
2021-03-20T07:43:45.133+0800 [initandlisten] shutdown: going to close sockets...
2021-03-20T07:43:45.133+0800 [initandlisten] shutdown: waiting for fs preallocator...
2021-03-20T07:43:45.133+0800 [initandlisten] shutdown: lock for final commit...
2021-03-20T07:43:45.133+0800 [initandlisten] shutdown: final commit...
2021-03-20T07:43:45.134+0800 [initandlisten] shutdown: closing all files...
2021-03-20T07:43:45.135+0800 [initandlisten] closeAllFiles() finished
2021-03-20T07:43:45.135+0800 [initandlisten] dbexit: really exiting now

Teacher, why does the connection fail here? The command I typed is exactly the same as the one you entered.
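The log itself names the cause: `ERROR: dbpath (d:\mongodb_64\db) does not exist.` — mongod does not create the data directory for you. A minimal fix, assuming the same path as in the log, is to create the directory before starting the server:

```shell
:: Run in a Windows command prompt; the path matches the one in the log above.
mkdir d:\mongodb_64\db

:: Then start mongod again, pointing --dbpath at the directory just created.
mongod --dbpath d:\mongodb_64\db
```

Once mongod stays up (no "dbexit" lines), the client connection should succeed.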

[Screenshot: 屏幕截图 2021-03-20 074835.png]

Python Full Series / Stage 15: Python Crawler Development / Crawler Data Storage · Post #788

Code:

# spider file: bizhizu.py
import scrapy


class BizhizuSpider(scrapy.Spider):
    name = 'bizhizu'
    allowed_domains = ['bizhizu.cn']
    # start_urls = ['https://www.bizhizu.cn/pic/7097.html']
    start_urls = ['https://www.bizhizu.cn/pic/7097-0.html']

    def parse(self, response):
        image_url = response.xpath('//div[@class="pic"]/a[@id="photoimg"]/img/@src').extract_first()
        print(image_url)
        image_name = response.xpath('string(//div[@class="txt"]/h1)').extract_first()
        print(image_name)
        yield {
            "image_url": image_url,
            "image_name": image_name
        }
        next_url = response.xpath('//div[@class="photo_next"]//a/@href').extract_first()
        if next_url:  # stop at the last page instead of yielding Request(None)
            yield scrapy.Request(response.urljoin(next_url), callback=self.parse)


# pipeline file: pipelines.py (keep this in its own module, not inside the spider file)
from scrapy.pipelines.images import ImagesPipeline
from scrapy import Request


class PicturePipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # pass the title along via meta so file_path can use it
        yield Request(item["image_url"], meta={"name": item["image_name"]})

    def file_path(self, request, response=None, info=None, *, item=None):
        name = request.meta["name"].strip()
        name = name.replace("/", "_")  # "/" is illegal in file names
        return name + '.jpg'

Result:

[Screenshot: 屏幕截图 2021-03-19 184131.png]

Teacher, why is it that when I crawl the 淘女郎 images, every saved image ends up with the same file name, even though each image_url is different? Please help me find where the program goes wrong.
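If every detail page returns the same `<h1>` text, the `file_path` above produces one identical name and each download overwrites the previous one. One way out is to make the name depend on the image URL as well; this is a sketch, and the helper name `unique_file_path` is my own, not part of Scrapy:

```python
import hashlib


def unique_file_path(name: str, image_url: str) -> str:
    """Build a file name that stays unique even when page titles repeat."""
    safe = name.strip().replace("/", "_")  # "/" is illegal in file names
    # md5 of the URL: stable across runs, distinct for distinct URLs
    digest = hashlib.md5(image_url.encode("utf-8")).hexdigest()[:8]
    return f"{safe}_{digest}.jpg"
```

Inside the pipeline this would be used as `return unique_file_path(request.meta["name"], request.url)` in `file_path`.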

Python Full Series / Stage 15: Python Crawler Development / Scrapy Framework Advanced · Post #790

from fake_useragent import UserAgent
import requests
from lxml import etree
from time import sleep

def get_html(url):
    # fetch the given URL and return its HTML, or None on a non-200 response
    headers = {"User-Agent": UserAgent().chrome}
    resp = requests.get(url, headers=headers)
    sleep(2)
    if resp.status_code == 200:
        return resp.text
    else:
        return None

def parse_list(html):
    # given the HTML of a movie-list page, return the detail-page URLs
    e = etree.HTML(html)
    # the extracted hrefs already start with "/", so join without an extra slash
    list_url = ['https://maoyan.com{}'.format(url) for url in e.xpath('//div[@class="movie-item-hover"]/a/@href')]
    return list_url

def parse_index(html):  # was "pares_index" — typo fixed
    # given the HTML of a movie detail page, return the extracted movie info
    e = etree.HTML(html)
    name = e.xpath('//h1[@class="name"]/text()')
    movie_type = e.xpath('//li[@class="ellipsis"]/a[1]/text()')  # "type" shadows the builtin
    actors = e.xpath('//li[@class="celebrity actor"]/div[@class="info"]/a/text()')
    actors = format_data(actors)
    return {"name": name, "type": movie_type, "actors": actors}

# deduplicate actor names
def format_data(actors):
    actor_set = set()
    for actor in actors:
        actor_set.add(actor.strip())
    return actor_set

def main():
    num = int(input("How many pages to fetch: "))
    for page in range(num):
        url = 'https://maoyan.com/films?showType=3&offset={}'.format(page * 30)
        list_html = get_html(url)
        if list_html is None:  # skip pages that failed to download
            continue
        for url in parse_list(list_html):
            info_html = get_html(url)
            if info_html is None:
                continue
            movie = parse_index(info_html)
            print(movie)

if __name__ == '__main__':
    main()
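`get_html` above returns None whenever the status code is not 200, and a single failed page then yields nothing for that URL. A small retry wrapper can smooth over transient failures; this is a sketch, `fetch_with_retry` is my own name, and the fetcher is passed in as a callable so `get_html` can be used unchanged:

```python
import time


def fetch_with_retry(fetch, url, retries=3, delay=1.0):
    """Call fetch(url) up to `retries` times; return the first non-None result."""
    for attempt in range(retries):
        result = fetch(url)
        if result is not None:
            return result
        time.sleep(delay)  # brief pause before trying again
    return None  # every attempt failed
```

In `main()` this would be used as `list_html = fetch_with_retry(get_html, url)`.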

[Screenshot: image.png (the error message)]

Teacher, running this produced the error shown above, so I tried modifying the code as follows.

[Screenshot: image.png (the modified code)]

But after running it the output looks like the image below. Please help.

[Screenshot: image.png (the output)]

Python Full Series / Stage 15: Python Crawler Development / Anti-Anti-Crawling · Post #791

My question:

When crawling the next page with XPath, nothing comes back. I believe the XPath itself is correct, but I can't find the mistake. Teacher, please point it out.

douluo.py

import scrapy


class DouluoSpider(scrapy.Spider):
    name = 'douluo'
    allowed_domains = ['baidu.com']
    start_urls = ['https://image.baidu.com/search/detail?ct=503316480&z=0&ipn=d&word=%E6%96%97%E7%BD%97%E5%A4%A7%E9%99%86&step_word=&hs=0&pn=0&spn=0&di=83380&pi=0&rn=1&tn=baiduimagedetail&is=0%2C0&istype=0&ie=utf-8&oe=utf-8&in=&cl=2&lm=-1&st=undefined&cs=1017836848%2C1501428868&os=3786179136%2C2901592361&simid=3481113337%2C309418197&adpicid=0&lpn=0&ln=1606&fr=&fmq=1615969790890_R&fm=&ic=undefined&s=undefined&hd=undefined&latest=undefined&copyright=undefined&se=&sme=&tab=0&width=undefined&height=undefined&face=undefined&ist=&jit=&cg=&bdtype=0&oriquery=&objurl=https%3A%2F%2Fgimg2.baidu.com%2Fimage_search%2Fsrc%3Dhttp%3A%2F%2Fimage.uc.cn%2Fs%2Fwemedia%2Fs%2Fupload%2F2019%2Fcf7fb507a5b57be658415dc028a11f9c.jpg%26refer%3Dhttp%3A%2F%2Fimage.uc.cn%26app%3D2002%26size%3Df9999%2C10000%26q%3Da80%26n%3D0%26g%3D0n%26fmt%3Djpeg%3Fsec%3D1618564439%26t%3D96bcfc5d23d8645386a36aeedb907c06&fromurl=ippr_z2C%24qAzdH3FAzdH3Fv5g_z%26e3Br6v7sp76j_z%26e3BvgAzdH3FwAzdH3Fgjof-k8ud88aw9k8wmumlv8l9cubjbb0mvvb1_z%26e3Bip4s%3Fpyrj%3D%25El%25la%25AA%25Ec%25AC%25AC%25Ec%25b8%25An%26t1%3Dk8ud88aw9k8wmumlv8l9cubjbb0mvvb1%26f%3D8%26prs%3Dv5gr6v7sp76j&gsm=1&rpstart=0&rpnum=0&islist=&querylist=&force=undefined']

    def parse(self, response):
        image_url = response.xpath('//div[@class="img-wrapper"]/img/@src').extract_first()
        yield {
            'image_urls': [image_url]
        }

        # Extract the next-page link. The original expression selected the whole
        # <span> element, so extract_first() returned its serialized HTML rather
        # than a URL; the expression has to drill down to an href attribute.
        # (If the page builds this element with JavaScript, it will not be in
        # response.text at all, no matter how correct the XPath looks.)
        next_url = response.xpath('//span[@class="img-switch-btn"]//@href').extract_first()
        if next_url:
            # pass the method itself: self.parse() would call it immediately and
            # hand Scrapy a generator instead of a callback
            yield scrapy.Request(response.urljoin(next_url), callback=self.parse)

[Screenshot: image.png]

Python Full Series / Stage 15: Python Crawler Development / Scrapy Framework Advanced · Post #795
