鱼C论坛 (FishC Forum)


[Solved] Help with a Scrapy image crawler

Posted on 2020-3-8 16:02:16 | Bounty: 5 鱼币
import scrapy
from get_pic.items import GetPicItem

class GetpicSpider(scrapy.Spider):
    name = 'getpic'
    allowed_domains = ['jandan.net']
    start_urls = ['http://jandan.net/ooxx']
    custom_settings = {'DEFAULT_REQUEST_HEADERS':{'User-Agent':'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 UBrowser/6.1.2107.204 Safari/537.3'}}  
    
    def parse(self, response):
        # Every on-page photo carries a referrerpolicy attribute; collect their src values.
        imgsites = response.xpath('//img[@referrerpolicy]/@src').extract()
        for each in imgsites:
            newurl = ''.join(['https:', each])   # src is protocol-relative ("//host/...")
            name = each.split('/')[-1]           # last path segment as the file name
            item = GetPicItem()
            item['img_url'] = newurl
            item['name'] = name
            yield item
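The URL handling in parse() can be exercised on its own; a minimal sketch (the sample src value is hypothetical):

```python
# jandan.net serves protocol-relative <img src="//host/path..."> values,
# so the spider prefixes "https:" and uses the last path segment as the
# file name. Standalone version of that logic:
def build_image_entry(src):
    new_url = ''.join(['https:', src])
    name = src.split('/')[-1]
    return new_url, name

url, name = build_image_entry('//wx1.sinaimg.cn/mw600/example.jpg')
print(url)   # https://wx1.sinaimg.cn/mw600/example.jpg
print(name)  # example.jpg
```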

settings.py (relevant part):

import os  # needed for the IMAGES_STORE path computation below

ITEM_PIPELINES = {
    'get_pic.pipelines.GetPicPipeline': 300,
    'scrapy.pipelines.images.ImagesPipeline': 1,
}

IMAGES_URLS_FIELD = "pic_url"
project_dir = os.path.abspath(os.path.dirname(__file__))
IMAGES_STORE = os.path.join(project_dir, 'images')
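The last two lines resolve an images directory next to settings.py; the same computation can be tried standalone (the settings.py location below is hypothetical). Note that Scrapy's file store creates this directory when the pipeline starts, which would explain why an empty images folder appears even when nothing is downloaded.

```python
import os

# Same computation as in the settings snippet above, with a stand-in for
# __file__ since no settings.py is on disk here (hypothetical path).
settings_file = "/project/get_pic/get_pic/settings.py"
project_dir = os.path.abspath(os.path.dirname(settings_file))
IMAGES_STORE = os.path.join(project_dir, "images")
print(IMAGES_STORE)  # on POSIX: /project/get_pic/get_pic/images
```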

After running it, only the images folder gets created. I can't tell where it went wrong. Any help appreciated.



2020-03-08 15:32:25 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: get_pic)
2020-03-08 15:32:25 [scrapy.utils.log] INFO: Versions: lxml 4.4.1.0, libxml2 2.9.9, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.10.0, Python 3.7.4 (default, Aug  9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1d  10 Sep 2019), cryptography 2.8, Platform Windows-10-10.0.18362-SP0
2020-03-08 15:32:25 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'get_pic', 'NEWSPIDER_MODULE': 'get_pic.spiders', 'SPIDER_MODULES': ['get_pic.spiders']}
2020-03-08 15:32:25 [scrapy.extensions.telnet] INFO: Telnet Password: 70fe62bdacfa7562
2020-03-08 15:32:25 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2020-03-08 15:32:26 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-03-08 15:32:26 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-03-08 15:32:26 [scrapy.middleware] INFO: Enabled item pipelines:
['scrapy.pipelines.images.ImagesPipeline', 'get_pic.pipelines.GetPicPipeline']
2020-03-08 15:32:26 [scrapy.core.engine] INFO: Spider opened
2020-03-08 15:32:26 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-03-08 15:32:26 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-03-08 15:32:26 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://jandan.net/ooxx> (referer: None)
2020-03-08 15:32:27 [scrapy.core.scraper] DEBUG: Scraped from <200 http://jandan.net/ooxx>
{'img_url': 'https://wx1.sinaimg.cn/mw600/0076BSS5ly1gcmjnagmj8j30dw0jmaby.jpg'}
2020-03-08 15:32:27 [scrapy.core.scraper] DEBUG: Scraped from <200 http://jandan.net/ooxx>
{'img_url': 'https://wx2.sinaimg.cn/mw600/0076BSS5ly1gcmjmdojv0j318y0u0qc6.jpg'}
2020-03-08 15:32:27 [scrapy.core.scraper] DEBUG: Scraped from <200 http://jandan.net/ooxx>
{'img_url': 'https://wx1.sinaimg.cn/mw600/0076BSS5ly1gcmjis6dlqj31900u0qv8.jpg'}
2020-03-08 15:32:27 [scrapy.core.scraper] DEBUG: Scraped from <200 http://jandan.net/ooxx>
{'img_url': 'https://wx1.sinaimg.cn/thumb180/0076BSS5ly1gcmjhq2s13g30dw07tx6p.gif'}
2020-03-08 15:32:27 [scrapy.core.scraper] DEBUG: Scraped from <200 http://jandan.net/ooxx>
{'img_url': 'https://wx2.sinaimg.cn/mw600/0076BSS5ly1gcmjdh2r4qj30nk0go0vn.jpg'}
2020-03-08 15:32:27 [scrapy.core.scraper] DEBUG: Scraped from <200 http://jandan.net/ooxx>
{'img_url': 'https://wx3.sinaimg.cn/mw600/0076BSS5ly1gcmjbwegwgj31900u0dpm.jpg'}
2020-03-08 15:32:27 [scrapy.core.scraper] DEBUG: Scraped from <200 http://jandan.net/ooxx>
{'img_url': 'https://wx3.sinaimg.cn/mw600/0076BSS5ly1gcmj8ikftpj30u0190tk2.jpg'}
2020-03-08 15:32:27 [scrapy.core.scraper] DEBUG: Scraped from <200 http://jandan.net/ooxx>
{'img_url': 'https://wx2.sinaimg.cn/mw600/0076BSS5ly1gcmj6f3oc9j30ir0nfwgi.jpg'}
2020-03-08 15:32:27 [scrapy.core.scraper] DEBUG: Scraped from <200 http://jandan.net/ooxx>
{'img_url': 'https://wx4.sinaimg.cn/mw600/0076BSS5ly1gcmj2dl78jj30u01954qp.jpg'}
2020-03-08 15:32:27 [scrapy.core.scraper] DEBUG: Scraped from <200 http://jandan.net/ooxx>
{'img_url': 'https://wx2.sinaimg.cn/mw600/0076BSS5ly1gcmj0wfrksj30u011igry.jpg'}
2020-03-08 15:32:27 [scrapy.core.scraper] DEBUG: Scraped from <200 http://jandan.net/ooxx>
{'img_url': 'https://wx3.sinaimg.cn/mw600/0076BSS5ly1gcmix7pqstj30u0192gqz.jpg'}
2020-03-08 15:32:27 [scrapy.core.scraper] DEBUG: Scraped from <200 http://jandan.net/ooxx>
{'img_url': 'https://wx1.sinaimg.cn/thumb180/0076BSS5ly1gcmiv2o52dg30dw05vb29.gif'}
2020-03-08 15:32:27 [scrapy.core.scraper] DEBUG: Scraped from <200 http://jandan.net/ooxx>
{'img_url': 'https://wx1.sinaimg.cn/mw600/0076BSS5ly1gcmiryxzwfj30u0140b29.jpg'}
2020-03-08 15:32:27 [scrapy.core.scraper] DEBUG: Scraped from <200 http://jandan.net/ooxx>
{'img_url': 'https://wx1.sinaimg.cn/mw600/0076BSS5ly1gcmipjsi4dj30rs15otcc.jpg'}
2020-03-08 15:32:27 [scrapy.core.scraper] DEBUG: Scraped from <200 http://jandan.net/ooxx>
{'img_url': 'https://wx4.sinaimg.cn/mw600/0076BSS5ly1gcmin956mlj31930u07f3.jpg'}
2020-03-08 15:32:27 [scrapy.core.scraper] DEBUG: Scraped from <200 http://jandan.net/ooxx>
{'img_url': 'https://wx3.sinaimg.cn/mw600/0076BSS5ly1gcmik1gbgoj30m90xd40d.jpg'}
2020-03-08 15:32:27 [scrapy.core.scraper] DEBUG: Scraped from <200 http://jandan.net/ooxx>
{'img_url': 'https://wx1.sinaimg.cn/mw600/0076BSS5ly1gcmie7cbqhj30u0196nmf.jpg'}
2020-03-08 15:32:27 [scrapy.core.scraper] DEBUG: Scraped from <200 http://jandan.net/ooxx>
{'img_url': 'https://wx4.sinaimg.cn/mw600/0076BSS5ly1gcmic43tglj318z0u0dme.jpg'}
2020-03-08 15:32:27 [scrapy.core.engine] INFO: Closing spider (finished)
2020-03-08 15:32:27 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 216,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 13059,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2020, 3, 8, 7, 32, 27, 63400),
'item_scraped_count': 18,
'log_count/DEBUG': 19,
'log_count/INFO': 9,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2020, 3, 8, 7, 32, 26, 640486)}
2020-03-08 15:32:27 [scrapy.core.engine] INFO: Spider closed (finished)
Posted on 2020-3-8 16:02:17 | Best answer

If you use ImagesPipeline, you first need to add three settings in settings.py:
IMAGES_STORE = 'path'  (directory where downloaded images are stored)

IMAGES_URLS_FIELD = 'name of the item field that holds the image URLs' (the field must be defined in your Item class)  # e.g. IMAGES_URLS_FIELD = "cimage_urls"

IMAGES_RESULT_FIELD = 'name of the item field that receives the download results' (likewise defined in the Item class)  # e.g. IMAGES_RESULT_FIELD = "cimages"

Then define the two corresponding fields in your items module:
cimage_urls = Field()  # matches the field name above
cimages = Field()

Then enable ImagesPipeline in ITEM_PIPELINES in settings:
ITEM_PIPELINES = {"scrapy.pipelines.images.ImagesPipeline": 100}

When I read through the Scrapy source, I vaguely remember the result field being used later for things like the storage info, so my guess is that this is where the problem lies.
Also, if some of the content on jandan.net is loaded dynamically, getting an empty result back for it would be expected.
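The field-name point can be illustrated without Scrapy. ImagesPipeline reads the download URLs from the item field named by IMAGES_URLS_FIELD, falling back to an empty list, so the asker's settings value "pic_url" never matches the spider's "img_url" field and nothing is queued for download. A plain-dict sketch of that lookup (the helper name is mine):

```python
# Sketch of how ImagesPipeline picks URLs off an item: it looks up the
# field named by IMAGES_URLS_FIELD and falls back to an empty list, so a
# name mismatch silently downloads nothing. The field should also hold a
# *list* of URLs, not a single string.
IMAGES_URLS_FIELD = "pic_url"   # value from the asker's settings.py

def urls_to_download(item):
    # Mirrors the lookup in get_media_requests(); item is a plain dict here.
    return item.get(IMAGES_URLS_FIELD, [])

scraped = {"img_url": "https://wx1.sinaimg.cn/mw600/example.jpg",
           "name": "example.jpg"}
fixed = {"pic_url": ["https://wx1.sinaimg.cn/mw600/example.jpg"]}

print(urls_to_download(scraped))  # [] -- field name mismatch, nothing queued
print(urls_to_download(fixed))    # ['https://wx1.sinaimg.cn/mw600/example.jpg']
```

So beyond the three settings listed above, the setting's value and the Item field name have to agree, and that field should carry a list of URLs.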

Posted on 2020-3-8 17:52:36
In your settings you are missing an IMAGES_RESULT_FIELD setting.

OP | Posted on 2020-3-9 11:04:20
JAY饭 posted on 2020-3-8 17:52:
In your settings you are missing an IMAGES_RESULT_FIELD setting.

Could you spell that out in more detail?

OP | Posted on 2020-3-9 13:22:22
JAY饭 posted on 2020-3-9 11:50:
If you use ImagesPipeline, you first need to add three settings in settings.py:
IMAGES_STORE = 'path'  (directory where downloaded images are stored)

Thanks!
