鱼C论坛

Views: 1453 | Replies: 4

Python crawler: the spider from the final lesson prints nothing, please advise!

Posted on 2019-3-11 20:13:08

Last edited by GmnsyKJ on 2019-3-11 20:29

import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ['dmoztools.net']
    start_urls = ['http://www.dmoztools.net/Computers/Programming/Languages/Python/Books/',
                  'http://www.dmoztools.net/Computers/Programming/Languages/Python/Resources/']

    def parse(self, response):
        sel = scrapy.selector.Selector(response)
        sites = sel.xpath('//div[@id="site-list-content"]/div[@class="site-item"]/div[@class="title-and-desc"]')
        for site in sites:
            title = site.xpath('a/div[@class="site-title"]/text()').extract()
            link = site.xpath('a/@href').extract()
            desc = site.xpath('div[@class="site-descr "]/text()').extract()
            print(title, link, desc)
小甲鱼's latest course -> https://ilovefishc.com

OP | Posted on 2019-3-11 20:15:30
Last edited by GmnsyKJ on 2019-3-11 20:22

The screenshot would not upload, so text will have to do. Here is the console output from the run:
2019-03-11 20:11:00 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: ScrapyTest)
2019-03-11 20:11:00 [scrapy.utils.log] INFO: Versions: lxml 4.3.2.0, libxml2 2.9.9, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 18.9.0, Python 3.7.2 (default, Feb 12 2019, 08:15:36) - [Clang 10.0.0 (clang-1000.11.45.5)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1b  26 Feb 2019), cryptography 2.6.1, Platform Darwin-18.2.0-x86_64-i386-64bit
2019-03-11 20:11:00 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'ScrapyTest', 'NEWSPIDER_MODULE': 'ScrapyTest.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['ScrapyTest.spiders']}
2019-03-11 20:11:00 [scrapy.extensions.telnet] INFO: Telnet Password: 504b44f2043cd9df
2019-03-11 20:11:00 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats']
2019-03-11 20:11:00 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-03-11 20:11:00 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-03-11 20:11:00 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2019-03-11 20:11:00 [scrapy.core.engine] INFO: Spider opened
2019-03-11 20:11:00 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-03-11 20:11:00 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-03-11 20:11:01 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.dmoztools.net/robots.txt> (referer: None)
2019-03-11 20:11:02 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.dmoztools.net/Computers/Programming/Languages/Python/Books/> (referer: None)
2019-03-11 20:11:02 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.dmoztools.net/Computers/Programming/Languages/Python/Resources/> (referer: None)
2019-03-11 20:11:02 [scrapy.core.engine] INFO: Closing spider (finished)
2019-03-11 20:11:02 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 752,
'downloader/request_count': 3,
'downloader/request_method_count/GET': 3,
'downloader/response_bytes': 14599,
'downloader/response_count': 3,
'downloader/response_status_count/200': 3,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2019, 3, 11, 12, 11, 2, 267374),
'log_count/DEBUG': 3,
'log_count/INFO': 9,
'memusage/max': 49577984,
'memusage/startup': 49577984,
'response_received_count': 3,
'robotstxt/request_count': 1,
'robotstxt/response_count': 1,
'robotstxt/response_status_count/200': 1,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'start_time': datetime.datetime(2019, 3, 11, 12, 11, 0, 606128)}
2019-03-11 20:11:02 [scrapy.core.engine] INFO: Spider closed (finished)


OP | Posted on 2019-3-11 20:38:05
I tried Chrome's built-in Copy XPath feature; in the shell it works, as shown here:
response.xpath('//*[@id="site-list-content"]/div[1]/div[3]/a/div/text()').extract
<bound method SelectorList.getall of [<Selector xpath='//*[@id="site-list-content"]/div[1]/div[3]/a/div/text()' data='Data Structures and Algorithms with Obje'>]>
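A side note on this shell session: `.extract` is missing its parentheses, so the shell prints the bound method (`<bound method SelectorList.getall of [...]>`) rather than the extracted strings; `.extract()` returns the actual list. That is plain Python behavior and can be reproduced without Scrapy:

```python
# Hypothetical stand-in for a SelectorList: any Python object behaves the
# same way when the parentheses on a method call are forgotten.
data = ['Data Structures and Algorithms with Obje']

print(data.copy)    # prints the bound method object, not the data
print(data.copy())  # prints the list itself
```

Here the selector did match, so the missing parentheses only change how the result displays; they are not what makes the other selectors come back empty.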

OP | Posted on 2019-3-11 20:41:25
But as soon as I swap the numeric indexes for class or id predicates, it stops matching:
response.xpath('//*[@id="site-list-content"]/div[@id="site-item"]/div[id="title-and-desc"]/a/div/text()').extract
<bound method SelectorList.getall of []>


response.xpath('//*[@id="site-list-content"]/div[@class="site-item"]/div[class="title-and-desc"]/a/div/text()').extract
<bound method SelectorList.getall of []>


response.xpath('//*[@class="site-list-content"]/div[@class="site-item"]/div[class="title-and-desc"]/a/div/text()').extract
<bound method SelectorList.getall of []>
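Two separate problems show up in the selectors above: an XPath attribute test needs the `@` prefix, so `div[class="title-and-desc"]` means "a div with a child element named class", not "a div whose class attribute is title-and-desc"; and those divs carry no `id` attributes, so the `@id` variants cannot match either. A minimal sketch with the standard library (the markup is a hypothetical fragment, not the real page):

```python
import xml.etree.ElementTree as ET

snippet = ('<div id="site-list-content">'
           '<div class="title-and-desc">x</div>'
           '</div>')
root = ET.fromstring(snippet)

# Without @, "class" is read as a child element name, so the predicate
# silently matches nothing -- the same empty result the shell returned:
print(root.findall(".//div[class='title-and-desc']"))   # []
print(root.findall(".//div[@class='title-and-desc']"))  # one match
```

In full XPath 1.0 (what Scrapy/lxml evaluate), `//div[contains(@class, "site-item")]` is also a common way to sidestep exact-match surprises such as stray whitespace in the class attribute.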

OP | Posted on 2019-3-11 21:35:41
Solved. It was a missing space.
The class is div[@class="site-item "], with a space before the closing quote, and I had left it out. Sorry for the trouble, everyone.
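For anyone finding this later: with the space restored, the selector chain from the first post lines up with the markup, and the only change needed in the spider is `@class="site-item "`. The fix can be checked without re-running the crawl; here against a hypothetical reconstruction of the dmoztools.net listing markup, using the standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment mirroring the listing markup, including the
# trailing spaces in "site-item " and "site-descr " that caused the bug:
page = ('<div id="site-list-content">'
        '<div class="site-item ">'
        '<div class="title-and-desc">'
        '<a href="http://example.com/book">'
        '<div class="site-title">Example Book</div></a>'
        '<div class="site-descr ">A sample description.</div>'
        '</div></div></div>')
root = ET.fromstring(page)

# Same structure as the spider's XPath, with the space restored:
sites = root.findall(".//div[@class='site-item ']/div[@class='title-and-desc']")
for site in sites:
    title = site.find("a/div[@class='site-title']").text
    link = site.find('a').get('href')
    desc = site.find("div[@class='site-descr ']").text
    print(title, link, desc)
```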
