
A beginner's Scrapy question: iterating over a list and crawling hyperlinks

Posted on 2018-3-28 17:10:02

Last edited by wongyusing on 2018-3-28 22:05.

Questions:
1. How do I scrape a hyperlink like the one in the screenshot below? The "百度网盘" (Baidu Netdisk) text in the image is a link, but I can't get the actual URL out of it. How should I handle this?

[Screenshot 1: the "百度网盘" download link in question]

2. An iteration problem. The code:
# -*- coding: utf-8 -*-
import scrapy
from scrapy.http import Request


class GogoSpider(scrapy.Spider):
    name = 'gogo'
    allowed_domains = ['wbs360.com']
    start_urls = ['http://www.wbs360.com/books/']
    # http://www.wbs360.com/books/?p-1.html

    def find_all_url(self):  # the site's "book library" page lists every novel's URL
        url_0 = 'www.wbs360.com/books/?p-'
        url_1 = '.html'
        # Observation shows the detail-page URL suffixes run from 465 to 4435, i.e. 3970 slots,
        # but the site only has 3956 novels; the other 14 are dead links or anti-crawler traps.
        # So I don't build detail URLs directly and instead collect them from the list pages.
        for num in range(1, 133):  # testing shows the site has 132 pages; range() excludes the end, hence 133
            url_2 = url_0 + str(num) + url_1  # build the list-page URL
            print(url_2)
            yield Request(url_2, callback=self.parse)
        # The problem is here: the Requests yielded by find_all_url never reach parse.
        # parse only ever receives the URLs from start_urls.

    def parse(self, response):
        # print(response.text)
        sel = scrapy.selector.Selector(response)
        sites = sel.xpath('//*[@id="book"]/div/div[3]/ul[2]/li')
        for site in sites:
            detail = site.xpath('a/@href').extract()  # the novel's detail-page link
            print(detail)
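For context on the comment above: Scrapy never calls a method named find_all_url on its own. The engine's only entry point is start_requests(), whose default implementation simply wraps start_urls, plus whatever callbacks the yielded Requests name, so a generator that nothing schedules is dead code — the shell log below confirms this, with downloader/request_count of exactly 1. Note also that Request wants a full URL: 'www.wbs360.com/...' without the http:// scheme raises ValueError: Missing scheme in request url. A minimal sketch of one way to rewire the spider under those two fixes (parse_detail is a hypothetical placeholder, not from the original post):

# Minimal sketch, not the original author's code: override start_requests()
# so the engine itself schedules every list page.
import scrapy


class GogoSpider(scrapy.Spider):
    name = 'gogo'
    allowed_domains = ['wbs360.com']

    def start_requests(self):
        # Scrapy calls start_requests() once when the spider opens;
        # everything yielded here is fed to the scheduler.
        for num in range(1, 133):  # 132 list pages, per the observation above
            url = 'http://www.wbs360.com/books/?p-{}.html'.format(num)  # scheme is required
            yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        for site in response.xpath('//*[@id="book"]/div/div[3]/ul[2]/li'):
            detail = site.xpath('a/@href').extract_first()  # detail-page link
            if detail:
                # response.follow resolves relative hrefs against the page URL
                yield response.follow(detail, callback=self.parse_detail)

    def parse_detail(self, response):
        # hypothetical placeholder: parse the book detail page here
        pass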


The items file:
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class Book0Item(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    sort = scrapy.Field()    # category
    title = scrapy.Field()   # novel title
    author = scrapy.Field()  # author name
    link = scrapy.Field()    # download link
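A minimal sketch of how this item might get populated in a detail-page callback, assuming the standard Scrapy project layout (items module at book_0.items) and that this is a method on the spider above; every XPath expression here is an assumption about the page layout, not taken from the real site:

from book_0.items import Book0Item  # standard project layout assumed

def parse_detail(self, response):  # hypothetical method on GogoSpider
    item = Book0Item()
    # All selectors below are placeholders -- adjust them to the real markup.
    item['sort'] = response.xpath('//div[@class="sort"]/text()').extract_first()
    item['title'] = response.xpath('//h1/text()').extract_first()
    item['author'] = response.xpath('//div[@class="author"]/text()').extract_first()
    item['link'] = response.xpath('//a[contains(text(), "百度网盘")]/@href').extract_first()
    yield item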



Shell output:
2018-03-28 16:20:30 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: book_0)
2018-03-28 16:20:30 [scrapy.utils.log] INFO: Versions: lxml 4.2.1.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.4.0, w3lib 1.19.0, Twisted 17.9.0, Python 3.5.2 (default, Nov 23 2017, 16:37:01) - [GCC 5.4.0 20160609], pyOpenSSL 17.5.0 (OpenSSL 1.1.0h  27 Mar 2018), cryptography 2.2.2, Platform Linux-4.10.0-28-generic-x86_64-with-Ubuntu-16.04-xenial
2018-03-28 16:20:30 [scrapy.crawler] INFO: Overridden settings: {'TELNETCONSOLE_ENABLED': False, 'BOT_NAME': 'book_0', 'NEWSPIDER_MODULE': 'book_0.spiders', 'SPIDER_MODULES': ['book_0.spiders']}
2018-03-28 16:20:30 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.memusage.MemoryUsage']
2018-03-28 16:20:30 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-03-28 16:20:30 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-03-28 16:20:30 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-03-28 16:20:30 [scrapy.core.engine] INFO: Spider opened
2018-03-28 16:20:30 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-03-28 16:20:31 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.wbs360.com/books/> (referer: None)
['http://www.wbs360.com/book/4435.html']
['http://www.wbs360.com/book/4434.html']
['http://www.wbs360.com/book/4433.html']
['http://www.wbs360.com/book/4432.html']
['http://www.wbs360.com/book/4431.html']
['http://www.wbs360.com/book/4430.html']
['http://www.wbs360.com/book/4429.html']
['http://www.wbs360.com/book/4428.html']
['http://www.wbs360.com/book/4427.html']
['http://www.wbs360.com/book/4426.html']
['http://www.wbs360.com/book/4425.html']
['http://www.wbs360.com/book/4424.html']
['http://www.wbs360.com/book/4423.html']
['http://www.wbs360.com/book/4422.html']
['http://www.wbs360.com/book/4421.html']
['http://www.wbs360.com/book/4420.html']
['http://www.wbs360.com/book/4419.html']
['http://www.wbs360.com/book/4418.html']
['http://www.wbs360.com/book/4417.html']
['http://www.wbs360.com/book/4416.html']
['http://www.wbs360.com/book/4415.html']
['http://www.wbs360.com/book/4414.html']
['http://www.wbs360.com/book/4413.html']
['http://www.wbs360.com/book/4412.html']
['http://www.wbs360.com/book/4411.html']
['http://www.wbs360.com/book/4410.html']
['http://www.wbs360.com/book/4409.html']
['http://www.wbs360.com/book/4408.html']
['http://www.wbs360.com/book/4407.html']
['http://www.wbs360.com/book/4406.html']
2018-03-28 16:20:31 [scrapy.core.engine] INFO: Closing spider (finished)
2018-03-28 16:20:31 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 219,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 4724,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 3, 28, 8, 20, 31, 157313),
 'log_count/DEBUG': 1,
 'log_count/INFO': 7,
 'memusage/max': 740130816,
 'memusage/startup': 740130816,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2018, 3, 28, 8, 20, 30, 664665)}
2018-03-28 16:20:31 [scrapy.core.engine] INFO: Spider closed (finished)



Project files: https://pan.baidu.com/s/1eIN6z_umvADds0O0-dSSig (password: pjh6)

Reply by gopythoner, posted 2018-3-28 17:49:20:
Isn't the link just the Baidu Cloud one? It's right there in the <a> tag just before the spot you marked.
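If that's the case, a one-line sketch of pulling it out — the selector is only a guess, since the markup is visible only in the screenshot:

# Guess based on the reply above: if the visible '百度网盘' text sits inside
# an <a> element, its href attribute carries the target URL.
link = response.xpath('//a[contains(text(), "百度网盘")]/@href').extract_first()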

Reply by the original poster, posted 2018-3-28 18:06:18:
gopythoner wrote on 2018-3-28 17:49:
Isn't the link just the Baidu Cloud one? It's right there in the <a> tag just before the spot you marked.

When I open that, it turns out to be an invalid link.
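One possible cause, offered only as a guess: if the href in the page is relative or missing its scheme, it looks broken when opened directly; resolving it against the page URL first may yield the real address:

# Guess: resolve a relative or scheme-less href against the current page's URL
full_url = response.urljoin(link)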
