哈岁NB posted on 2023-3-1 14:22:40

scrapy

Hi all, why does the browser that Selenium launches flash open and close immediately as soon as this runs?
Spider file:
import scrapy
from selenium import webdriver

class XinwenSpider(scrapy.Spider):
    name = 'xinwen'
    #allowed_domains = ['www.xxx.com']
    start_urls = ['https://news.163.com/']
    # Create the browser object once, as a class attribute shared by the spider
    bro = webdriver.Chrome('D:/技能/chromedriver.exe')
    model_urls = []
    def parse(self, response):
        # Section indices (assumed values; adjust to the sections you want)
        model_index = [1, 2, 4, 5]
        li_list = response.xpath('//*[@id="index2016_wrap"]/div/div/div/div/div/ul/li')
        for index in model_index:
            model_url = li_list[index].xpath('./a/@href').extract_first()
            self.model_urls.append(model_url)
            print(self.model_urls)
            # Note: yielding inside the outer loop re-requests the accumulated
            # URLs on every pass; Scrapy's dupe filter drops the repeats (see
            # the "Filtered duplicate request" line in the log below)
            for url in self.model_urls:
                yield scrapy.Request(url=url, callback=self.detail_parse)
    # Parse a news headline link from each section page
    def detail_parse(self, response):
        page = response.xpath('/html/body/div/div/div/div/div/div/ul/li/div/div/div/div/h3/a/@href').extract_first()
        print(page)

    # Called when the spider closes; quit the shared browser
    def closed(self, spider):
        self.bro.quit()

Middleware file:
# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

# Scrapy's class for constructing HTML response objects
from scrapy.http import HtmlResponse
from scrapy import signals
from time import sleep

# useful for handling different item types with a single interface
from itemadapter import is_item, ItemAdapter


class WanyiDownloaderMiddleware:

    def process_request(self, request, spider):
        return None

    def process_response(self, request, response, spider):
        # How to pick out the target URLs: locate the requests for the four
        # sections by matching request.url against the section URLs the
        # spider collected in parse()
        model_urls = spider.model_urls
        if request.url in model_urls:
            bro = spider.bro  # reuse the browser object created on the spider class
            bro.get(request.url)
            sleep(1)
            page_text = bro.page_source
            # This request belongs to one of the target sections, so swap in a
            # response built from the Selenium-rendered page; body is the data
            # that replaces the original response content
            response = HtmlResponse(url=request.url, request=request,
                                    encoding='utf-8', body=page_text)
            return response
        else:
            return response

    def process_exception(self, request, exception, spider):
        pass

Selenium works fine on its own without Scrapy; as soon as I combine it with Scrapy, the browser flashes and exits, and no requests go through it.
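
For comparison, a standalone script along these lines works normally for me (a minimal sketch, assuming the same chromedriver path and the Selenium 3-style positional path argument used in the spider above):

from selenium import webdriver
from time import sleep

bro = webdriver.Chrome('D:/技能/chromedriver.exe')  # same driver path as in the spider
bro.get('https://news.163.com/')
sleep(3)            # the browser stays open and visible here
print(bro.title)    # confirm the page actually loaded
bro.quit()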

isdkz posted on 2023-3-1 14:37:21

Something probably went wrong somewhere; take a look at the error output.

哈岁NB posted on 2023-3-1 15:23:50

isdkz posted on 2023-3-1 14:37
Something probably went wrong somewhere; take a look at the error output.

DevTools listening on ws://127.0.0.1:1286/devtools/browser/f22e9ec7-e022-4014-9d24-354ea870b413
2023-03-01 15:23:15 INFO: Scrapy 2.7.1 started (bot: wanyi)
2023-03-01 15:23:15 INFO: Versions: lxml 4.9.2.0, libxml2 2.9.12, cssselect 1.2.0, parsel 1.7.0, w3lib 2.1.1, Twisted 20.3.0, Python 3.8.1 (tags/v3.8.1:1b293b6, Dec 18 2019, 23:11:46) , pyOpenSSL 22.1.0 (OpenSSL 3.0.7 1 Nov 2022), cryptography 38.0.4, Platform Windows-10-10.0.19041-SP0
2023-03-01 15:23:15 INFO: Overridden settings:
{'BOT_NAME': 'wanyi',
'NEWSPIDER_MODULE': 'wanyi.spiders',
'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
'SPIDER_MODULES': ['wanyi.spiders'],
'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor',
'USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) '
               'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.109 '
               'Safari/537.36'}
2023-03-01 15:23:15 DEBUG: Using selector: SelectSelector
2023-03-01 15:23:15 DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2023-03-01 15:23:15 DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2023-03-01 15:23:15 INFO: Telnet Password: 3363e4532231a8d7
2023-03-01 15:23:16 INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
USB: usb_device_handle_win.cc:1046 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F)
Bluetooth: bluetooth_adapter_winrt.cc:1205 Getting Radio failed. Chrome will be unable to change the power state by itself.
Bluetooth: bluetooth_adapter_winrt.cc:1283 OnPoweredRadioAdded(), Number of Powered Radios: 1
Bluetooth: bluetooth_adapter_winrt.cc:1298 OnPoweredRadiosEnumerated(), Number of Powered Radios: 1
2023-03-01 15:23:18 INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2023-03-01 15:23:18 INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2023-03-01 15:23:18 INFO: Enabled item pipelines:
[]
2023-03-01 15:23:18 INFO: Spider opened
2023-03-01 15:23:18 INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2023-03-01 15:23:18 INFO: Telnet console listening on 127.0.0.1:6023
2023-03-01 15:23:19 DEBUG: Crawled (200) <GET https://news.163.com/> (referer: None)
['https://news.163.com/world/']
['https://news.163.com/world/', 'https://data.163.com/special/datablog/']
2023-03-01 15:23:19 DEBUG: Filtered duplicate request: <GET https://news.163.com/world/> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
['https://news.163.com/world/', 'https://data.163.com/special/datablog/', 'https://news.163.com/air/']
['https://news.163.com/world/', 'https://data.163.com/special/datablog/', 'https://news.163.com/air/', 'https://news.163.com/college']
2023-03-01 15:23:19 DEBUG: Crawled (200) <GET https://news.163.com/world/> (referer: https://news.163.com/)
2023-03-01 15:23:19 DEBUG: Crawled (200) <GET https://news.163.com/college> (referer: https://news.163.com/)
2023-03-01 15:23:19 DEBUG: Crawled (200) <GET https://news.163.com/air/> (referer: https://news.163.com/)
2023-03-01 15:23:19 DEBUG: Crawled (200) <GET https://data.163.com/special/datablog/> (referer: https://news.163.com/)
None
None
None
None
2023-03-01 15:23:19 INFO: Closing spider (finished)
2023-03-01 15:23:19 DEBUG: DELETE http://localhost:1282/session/9842f6fffdfdd79191970c39703d24ea {}
2023-03-01 15:23:19 DEBUG: http://localhost:1282 "DELETE /session/9842f6fffdfdd79191970c39703d24ea HTTP/1.1" 200 14
'scheduler/enqueued': 5,
'scheduler/enqueued/memory': 5,
'start_time': datetime.datetime(2023, 3, 1, 7, 23, 18, 813688)}
2023-03-01 15:23:21 INFO: Spider closed (finished)

哈岁NB posted on 2023-3-1 15:26:40

isdkz posted on 2023-3-1 14:37
Something probably went wrong somewhere; take a look at the error output.

The above is the output.

wiselin posted on 2023-3-1 15:27:41

Could it be that the path to your driver contains Chinese characters?

isdkz posted on 2023-3-1 15:28:43

哈岁NB posted on 2023-3-1 15:26
The above is the output.

There's no error there. The "crash" you're seeing is probably just the run finishing: your spider's closed() calls self.bro.quit() when the crawl ends, which closes the browser.

哈岁NB posted on 2023-3-1 15:32:46

wiselin posted on 2023-3-1 15:27
Could it be that the path to your driver contains Chinese characters?

No; with the same path it runs fine outside Scrapy.

哈岁NB posted on 2023-3-1 15:33:34

isdkz posted on 2023-3-1 15:28
There's no error there; the "crash" you're seeing is probably just the run finishing.

Then why didn't it make any requests through the browser? It just prints None.

哈岁NB posted on 2023-3-1 15:36:42

isdkz posted on 2023-3-1 15:28
There's no error there; the "crash" you're seeing is probably just the run finishing.

It just shows this and then exits.

哈岁NB posted on 2023-3-1 15:50:16

isdkz posted on 2023-3-1 15:28
There's no error there; the "crash" you're seeing is probably just the run finishing.

Found the cause: I hadn't enabled the middleware. Thanks a lot!
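
Concretely, the missing piece was the DOWNLOADER_MIDDLEWARES entry in settings.py. A minimal sketch, assuming the class sits in the project's default middlewares.py (project name wanyi, per BOT_NAME in the log):

# settings.py: register the custom downloader middleware so that
# process_response actually runs (module path assumes the default
# wanyi/middlewares.py layout)
DOWNLOADER_MIDDLEWARES = {
    'wanyi.middlewares.WanyiDownloaderMiddleware': 543,
}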

哈岁NB posted on 2023-3-1 15:50:41

wiselin posted on 2023-3-1 15:27
Could it be that the path to your driver contains Chinese characters?

Found the cause: I hadn't enabled the middleware. Thanks!