Help: what is causing this error when crawling the mzitu image site with Scrapy?
Last edited by sheenblue on 2021-1-30 11:45. What causes an error like this?
Spider
import scrapy
from ..items import MeizituItem

class MeizituSpider(scrapy.Spider):
    name = "meizitu"
    allowed_domains = ['mzitu.com']
    start_urls = ['https://www.mzitu.com/204999/2']

    def parse(self, response):
        srcs = response.xpath('//div[@class="main-image"]/p/a/img/@src').getall()
        for src in srcs:
            item = MeizituItem()
            item['src'] = src  # right-hand side was missing in the post; the loop variable is the only plausible value
            yield item
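For reference, the intended flow of parse() can be sketched without Scrapy installed. MeizituItem is not shown in the post, so a plain dict stands in for it here, and a hand-written list of URLs stands in for the XPath result; the assignment to item['src'] is assumed to take the loop variable src:

```python
# Stand-in sketch of the loop inside parse(): one item per extracted @src URL.
# A plain dict replaces MeizituItem so the sketch runs without Scrapy.

def build_items(srcs):
    """Yield one item per image URL, mirroring the spider's parse() loop."""
    for src in srcs:
        item = {}
        item['src'] = src
        yield item

# Illustrative URLs of the kind the XPath would return (hypothetical values):
urls = ['https://i.meizitu.net/2020/01/01a01.jpg',
        'https://i.meizitu.net/2020/01/01a02.jpg']
items = list(build_items(urls))
```

If MeizituItem declares a single src field, the real spider yields exactly this shape, one item per image URL on the page.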
2021-01-30 11:42:04 INFO: Scrapy 2.4.1 started (bot: meizitu)
2021-01-30 11:42:04 INFO: Versions: lxml 4.4.1.0, libxml2 2.9.9, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 20.3.0, Python 3.7.4 (default, Aug 9 2019, 18:34:13), pyOpenSSL 19.0.0 (OpenSSL 1.1.1d 10 Sep 2019), cryptography 2.7, Platform Windows-10-10.0.17763-SP0
2021-01-30 11:42:04 DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2021-01-30 11:42:04 INFO: Overridden settings:
{'BOT_NAME': 'meizitu',
'NEWSPIDER_MODULE': 'meizitu.spiders',
'SPIDER_MODULES': ['meizitu.spiders'],
'USER_AGENT': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 '
'(KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36'}
2021-01-30 11:42:04 INFO: Telnet Password: 922f95694e73542b
2021-01-30 11:42:04 INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2021-01-30 11:42:04 INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2021-01-30 11:42:04 INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2021-01-30 11:42:04 INFO: Enabled item pipelines:
['scrapy.pipelines.images.ImagesPipeline']
2021-01-30 11:42:04 INFO: Spider opened
2021-01-30 11:42:05 INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2021-01-30 11:42:05 INFO: Telnet console listening on 127.0.0.1:6023
2021-01-30 11:42:26 DEBUG: Retrying <GET https://www.mzitu.com/204999/2> (failed 1 times): TCP connection timed out: 10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
2021-01-30 11:42:47 DEBUG: Retrying <GET https://www.mzitu.com/204999/2> (failed 2 times): TCP connection timed out: 10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
2021-01-30 11:43:05 INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2021-01-30 11:43:08 ERROR: Gave up retrying <GET https://www.mzitu.com/204999/2> (failed 3 times): TCP connection timed out: 10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
2021-01-30 11:43:08 ERROR: Error downloading <GET https://www.mzitu.com/204999/2>
Traceback (most recent call last):
File "C:\anaconda3\lib\site-packages\scrapy\core\downloader\middleware.py", line 45, in process_request
return (yield download_func(request=request, spider=spider))
twisted.internet.error.TCPTimedOutError: TCP connection timed out: 10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
2021-01-30 11:43:08 INFO: Closing spider (finished)
2021-01-30 11:43:08 INFO: Dumping Scrapy stats:
{'downloader/exception_count': 3,
'downloader/exception_type_count/twisted.internet.error.TCPTimedOutError': 3,
'downloader/request_bytes': 900,
'downloader/request_count': 3,
'downloader/request_method_count/GET': 3,
'elapsed_time_seconds': 63.266308,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2021, 1, 30, 3, 43, 8, 265391),
'log_count/DEBUG': 2,
'log_count/ERROR': 2,
'log_count/INFO': 11,
'retry/count': 2,
'retry/max_reached': 1,
'retry/reason_count/twisted.internet.error.TCPTimedOutError': 2,
'scheduler/dequeued': 3,
'scheduler/dequeued/memory': 3,
'scheduler/enqueued': 3,
'scheduler/enqueued/memory': 3,
'start_time': datetime.datetime(2021, 1, 30, 3, 42, 4, 999083)}
2021-01-30 11:43:08 INFO: Spider closed (finished)

The request timed out. This usually happens when a proxy IP has gone dead. If you are using free proxy IPs, this is entirely normal; most free proxies simply don't work.
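To confirm that the 10060 timeout is network-level (the site unreachable from your machine, or a dead proxy) rather than a problem in the spider, a plain TCP connect test is enough. A minimal standard-library sketch; the host is simply the target from the log:

```python
import socket

def tcp_reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, DNS failures, and timeouts
        return False
```

Try tcp_reachable('www.mzitu.com') from the same machine: if it returns False with no proxy configured, the site itself is unreachable (down, or blocking you) and no Scrapy setting will fix it; if it returns True, the problem is on the proxy side.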