Scrapy spider raises a "Spider error processing" error
Last edited by zhouleiqiang on 2020-9-3 12:56. Hi everyone, today I got to the Scrapy framework part of the course — the exercise that scrapes site titles, site links, and site descriptions. When I run it, I get a Spider error processing error. At first I could debug everything successfully in the command-line window, but after I moved the code into IDLE this error appeared:
2020-09-02 18:19:35 DEBUG: Crawled (200) <GET https://curlie.org/Computers/Programming/Languages/Python/Resources/> (referer: None)
2020-09-02 18:19:35 ERROR: Spider error processing <GET https://curlie.org/Computers/Programming/Languages/Python/Resources/> (referer: None)
The code is as follows:
import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        'https://curlie.org/Computers/Programming/Languages/Python/Books/',
        'https://curlie.org/Computers/Programming/Languages/Python/Resources/'
    ]

    def parse(self, response):
        sel = scrapy.slection.Selector(response)
        sites = sel.xpath('//div[@class="title-and-desc"]')
        for site in sites:
            # site title
            title = site.xpath('div[@class="site-title"]/a/text()').extract()
            # href is the site URL
            link = site.xpath('div[@class="site-title"]/a/@href').extract()
            # text is the site description
            text = site.xpath('div[@class="site-descr"]/text()').extract()
            print(title, link, desc)
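For what it's worth, the XPath expressions above can be sanity-checked without Scrapy at all. Below is a minimal sketch using the standard library's limited XPath support, against a hypothetical HTML fragment that mimics the curlie.org listing structure (the fragment and its values are made up for illustration):

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment mimicking the curlie.org listing structure.
html = """
<root>
  <div class="title-and-desc">
    <div class="site-title"><a href="https://example.org/">Example Site</a></div>
    <div class="site-descr">A short description.</div>
  </div>
</root>
"""

root = ET.fromstring(html)
# Same selection idea as the spider: find each listing block,
# then pull the title text, the href, and the description text.
for site in root.findall('.//div[@class="title-and-desc"]'):
    title = site.find('div[@class="site-title"]/a').text
    link = site.find('div[@class="site-title"]/a').get('href')
    text = site.find('div[@class="site-descr"]').text
    print(title, link, text)
```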
Help, please! Has my post still not passed review? 1. Your allowed_domains only allows dmoz.org; allowed_domains filters out, by the domains you set, every URL after the first one, and the second URL in your start_urls is not in the allowed domains, so it was filtered out and never requested.
2. To extract data with XPath, just use response.xpath() directly, for example: content_list = response.xpath('//div[@class="title-and-desc"]'). YunGuo posted on 2020-9-4 14:47
1. Your allowed_domains only allows dmoz.org; allowed_domains filters out, by the domains you set, every u ...
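The domain filtering described in point 1 can be sketched roughly like this. Note this is a simplified stand-in for Scrapy's offsite filtering, not the real middleware, and `is_allowed` is a hypothetical helper:

```python
from urllib.parse import urlparse

def is_allowed(url, allowed_domains):
    """Return True if the URL's host is one of the allowed domains or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in allowed_domains)

url = "https://curlie.org/Computers/Programming/Languages/Python/Books/"
print(is_allowed(url, ["dmoz.org"]))    # False: curlie.org is not under dmoz.org, so the request is dropped
print(is_allowed(url, ["curlie.org"]))  # True: the request goes through
```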
Bro, it still doesn't work. I changed it the way you said, but I'm getting the same error.
import scrapy
from tutorial.items import CurlieItem

class CurlieSpider(scrapy.Spider):
    name = "curlie"
    allowed_domains = ["curlie.org"]
    start_urls = [
        "https://curlie.org/Computers/Programming/Languages/Python/Books/",
        "https://curlie.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self):
        # sel = scrapy.slection.Selector(response)
        sites = response.xpath('//div[@class="title-and-desc"]')
        items = []
        for site in sites:
            item = DmozItem()
            # site title
            item['title'] = site.xpath('div[@class="site-title"]/a/text()').extract()
            # href is the site URL
            item['link'] = site.xpath('div[@class="site-title"]/a/@href').extract()
            # text is the site description
            item['text'] = site.xpath('div[@class="site-descr"]/text()').extract()
            print(title, link, desc)
            items.append(item)
        return items
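For reference, a likely immediate cause of the "Spider error processing" message in this version: Scrapy invokes the callback as parse(response), but this parse only accepts self, so the call raises a TypeError, which Scrapy catches and logs against the request. A minimal sketch of that failure mode (Spiderish is a hypothetical stand-in, not a real Scrapy class):

```python
class Spiderish:
    def parse(self):          # bug: missing the response parameter
        return []

spider = Spiderish()
try:
    # Scrapy calls the callback with the response as an argument.
    spider.parse("a fake response object")
except TypeError as e:
    # Scrapy would log this as: ERROR: Spider error processing <GET ...>
    print("Spider error processing:", e)
```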
YunGuo posted on 2020-9-4 14:47
1. Your allowed_domains only allows dmoz.org; allowed_domains filters out, by the domains you set, every u ...
Bro, it still doesn't work. I changed it the way you said, but I'm still getting the ERROR: Spider error processing error. zhouleiqiang posted on 2020-9-5 11:10
Bro, it still doesn't work. I changed it the way you said, but I'm still getting the ERROR: Spider error processing error.
(Same code as in the post above.) zhouleiqiang posted on 2020-9-5 11:12
import scrapy
from tutorial.items import CurlieItem
Your parse function doesn't take response as a parameter. Copy my version and run it:
import scrapy
from tutorial.items import CurlieItem

class CurlieSpider(scrapy.Spider):
    name = 'curlie'
    allowed_domains = ['curlie.org']
    start_urls = [
        'https://curlie.org/Computers/Programming/Languages/Python/Books/',
        'https://curlie.org/Computers/Programming/Languages/Python/Resources/']

    def parse(self, response):
        content_list = response.xpath('//*[@id="site-list-content"]/div/div/div')
        for content in content_list:
            item = CurlieItem()
            item['title'] = content.xpath('div/a/text()').extract_first()
            item['link'] = content.xpath('div/a/@href').extract_first()
            # extract_first('') avoids calling .strip() on None when the node is missing
            item['text'] = content.xpath('div/text()').extract_first('').strip()
            yield item
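For this to run, tutorial/items.py must define the item class with matching fields. A plausible sketch — the exact field names in the tutorial project are an assumption, chosen to match the keys used in the spider above:

```python
# tutorial/items.py (hypothetical sketch)
import scrapy

class CurlieItem(scrapy.Item):
    title = scrapy.Field()  # site title
    link = scrapy.Field()   # site URL
    text = scrapy.Field()   # site description
```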