Any experts passing by, please give me a hand; this should be very simple for you.
For some reason, when I try to use scrapy to crawl the links under FishC's homework tags, nothing comes back. Here is the program:
import scrapy

class FirstSpider(scrapy.Spider):
    name = 'First'
    allowed_domains = ['fishc.com.cn']
    start_url = [
        'https://fishc.com.cn/forum.php?mod=forumdisplay&fid=243&filter=typeid&typeid=398'
        'https://fishc.com.cn/forum.php?mod=forumdisplay&fid=243&filter=typeid&typeid=403'
    ]

    def parse(self, response):
        sel = scrapy.selector.Selector(response)  # initialize a Selector from the response
        sites = sel.xpath('//ul[@id="thread_types"]/li[@class="xw1 a"]')
        for site in sites:
            link = site.xpath('a/@href').extract()
            print(link)
Then I ran it from cmd and got the following output:
C:\Users\Administrator\Desktop\python3.8\scrapy\aa>scrapy crawl First
2020-02-20 15:55:17 [scrapy.utils.log] INFO: Scrapy 1.8.0 started (bot: aa)
2020-02-20 15:55:17 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.10.0, Python 3.8.1 (tags/v3.8.1:1b293b6, Dec 18 2019, 23:11:46) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1d 10 Sep 2019), cryptography 2.8, Platform Windows-10-10.0.18362-SP0
2020-02-20 15:55:17 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'aa', 'NEWSPIDER_MODULE': 'aa.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['aa.spiders']}
2020-02-20 15:55:17 [scrapy.extensions.telnet] INFO: Telnet Password: 1861eef3ccb12504
2020-02-20 15:55:17 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2020-02-20 15:55:18 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-02-20 15:55:18 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-02-20 15:55:18 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-02-20 15:55:18 [scrapy.core.engine] INFO: Spider opened
2020-02-20 15:55:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-02-20 15:55:18 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-02-20 15:55:18 [scrapy.core.engine] INFO: Closing spider (finished)
2020-02-20 15:55:18 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'elapsed_time_seconds': 0.008003,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2020, 2, 20, 7, 55, 18, 61650),
'log_count/INFO': 10,
'start_time': datetime.datetime(2020, 2, 20, 7, 55, 18, 53647)}
2020-02-20 15:55:18 [scrapy.core.engine] INFO: Spider closed (finished)
It didn't crawl anything at all. Please help me figure out why. Thanks!
The page in question looks like this: [screenshot of the forum's tag list, not preserved here]
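For reference, a minimal sketch of the same spider with the details worth double-checking, stated as assumptions rather than a confirmed fix: Scrapy reads the attribute start_urls (plural, not start_url), the two URL string literals need a separating comma (otherwise Python concatenates adjacent literals into one invalid URL), and the log above shows ROBOTSTXT_OBEY is True, so the site's robots.txt may also block forum.php. The XPath is carried over from the question unverified.

import scrapy

class FirstSpider(scrapy.Spider):
    name = 'First'
    allowed_domains = ['fishc.com.cn']
    # Assumption: the intended attribute is start_urls (plural); note the
    # comma between the two URLs, which the original snippet is missing.
    start_urls = [
        'https://fishc.com.cn/forum.php?mod=forumdisplay&fid=243&filter=typeid&typeid=398',
        'https://fishc.com.cn/forum.php?mod=forumdisplay&fid=243&filter=typeid&typeid=403',
    ]

    def parse(self, response):
        # response.xpath() can be used directly; building a Selector by hand is unnecessary.
        # The XPath below is kept as in the question and may need adjusting to the real markup.
        for site in response.xpath('//ul[@id="thread_types"]/li[@class="xw1 a"]'):
            link = site.xpath('a/@href').extract()
            print(link)

If robots.txt turns out to be the blocker, setting ROBOTSTXT_OBEY = False in settings.py (or in the spider's custom_settings) is the usual workaround, though whether that is the cause here is only a guess from the log.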