FishC Forum

Views: 129 | Replies: 8

[Solved] Crawler code cannot access Baidu Baike

Posted 2022-8-4 12:08:02

Has Baidu Baike also set up anti-crawling measures? This code cannot access Baidu Baike. How should the code be changed, and how should the headers parameter be written?
>>> import urllib.request
>>> from bs4 import BeautifulSoup
>>> url = "http://baike.baidu.com/view/284853.htm"
>>> response = urllib.request.urlopen(url)
>>> html = response.read()
>>> soup = BeautifulSoup(html, 'html.parser')
>>> import re
>>> for each in soup.find_all(href = re.complie('view'))
SyntaxError: expected ':'
>>> for each in soup.find_all(href = re.complie('view')):
        print(each.text, "->", ''.join(['http://baike,baidu.com', \
                                each['href']]))

Traceback (most recent call last):
  File "<pyshell#11>", line 1, in <module>
    for each in soup.find_all(href = re.complie('view')):
AttributeError: module 're' has no attribute 'complie'. Did you mean: 'compile'?
>>> for each in soup.find_all(href = re.compile('view')):
        print(each.text, "->", ''.join(['http://baike,baidu.com', \
                                each['href']]))

>>> for each in soup.find_all(href = re.compile('view')):
        print(each.text, "->", ''.join(['http://baike,baidu.com', \
                                each['href']]))
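Incidentally, the transcript builds absolute links by hand with ''.join (and has a comma typo in 'http://baike,baidu.com'); the standard library's urljoin is a safer way to do this. A minimal sketch, separate from the crawler itself:

```python
from urllib.parse import urljoin

base = "https://baike.baidu.com"
# urljoin resolves a relative href against the base URL,
# avoiding hand-built string concatenation (and its typos).
link = urljoin(base, "/item/%E7%BD%91%E7%BB%9C%E7%88%AC%E8%99%AB")
print(link)  # -> https://baike.baidu.com/item/%E7%BD%91%E7%BB%9C%E7%88%AC%E8%99%AB
```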
Best Answer
2022-8-4 15:47:45
tommyyu posted on 2022-8-4 15:26:
That URL is for the Baidu Baike crawler; the code follows 小甲鱼's book... (it matches all the links in that article)
Did I mistype the code ...

The code is outdated. Bear in mind that crawlers are time-sensitive: as a website changes, a crawler may stop working.
Baidu Baike URLs stopped using view long ago; they use item now.
Open http://baike.baidu.com/view/284853.htm yourself and you will see the page automatically redirect to https://baike.baidu.com/item/%E7%BD%91%E7%BB%9C%E7%88%AC%E8%99%AB
So the code should be changed to:
import urllib.request
from bs4 import BeautifulSoup
import re

url = "http://baike.baidu.com/view/284853.htm"
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36",
    "Cookie": "BIDUPSID=A92A6802F7D86AED4D4E206949C7BDAD; PSTM=1650383456; BAIDUID=A92A6802F7D86AED6342FE5ECB6326A0:FG=1; BD_UPN=12314353; BDUSS=lKbG5Van54QlllVVlRfkZtSHB5UVFyZDJRQTFYVkZiRDBvMVk4Znc4OUpwWTlpRVFBQUFBJCQAAAAAAAAAAAEAAAChTKa30-7W5tL40NAzNDUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEkYaGJJGGhiUU; BDUSS_BFESS=lKbG5Van54QlllVVlRfkZtSHB5UVFyZDJRQTFYVkZiRDBvMVk4Znc4OUpwWTlpRVFBQUFBJCQAAAAAAAAAAAEAAAChTKa30-7W5tL40NAzNDUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEkYaGJJGGhiUU; ispeed_lsm=2; ab_sr=1.0.1_NGU2NGZmNjBmMWMwODlmNmNmMzNkYjRiYTA5ODM2N2M2M2QyZGIyNTg4MmMxNGI3ZmFmM2NlYzI1ZDI3NTQzNzQzZjQ5ODE4YjY2MGNhZTBhMTYwYzUwYzliNDZkM2I5NTMxYjE0NDM0ODUwZjAyOTE3YTM0MWU1Y2NiMDM1MmFjMWIxM2FjZWZiN2U1MDMwYmYxNTc3YWUwMTdjMzI0OTdkMzgwNTRmNjU3NDEyMWQ2OTc2YzQ0NjlhZjllYTI3; BA_HECTOR=2085012k0k252k20ag03951d1hcr8fj16; ZFY=l1Dlk45JxTCwOPrlwIUjq8tRX3jy5nZT9F9jDP2NNnA:C; BAIDUID_BFESS=BFC9AF9D2D83193E805DC3D919F0D1B6:FG=1; BD_HOME=1; delPer=0; BD_CK_SAM=1; PSINO=6; H_PS_PSSID=36553_36460_36725_36455_36452_36691_36167_36694_36697_36816_36652_36773_36746_36760_36769_36765_26350; BDRCVFR[feWj1Vr5u3D]=mk3SLVN4HKm; BDORZ=B490B5EBF6F3CD402E515D22BCDA1598; H_PS_645EC=233f6REQC1PnLDurVcWHYoD2TfKhF1re3d6AcrXwDgE25OQ55x1kmFuxvDE",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9"
}
request = urllib.request.Request(url, headers=headers)
response = urllib.request.urlopen(request)
html = response.read()
soup = BeautifulSoup(html, 'html.parser')
for each in soup.find_all(href=re.compile('item')):
    print(each.text, "->", ''.join(['http://baike.baidu.com', each['href']]))
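As a side note, the %E7%BD%91... path in the redirect target above is just percent-encoded UTF-8, and the standard library can decode it. A quick check, not part of the fix:

```python
from urllib.parse import unquote

# Decode the percent-encoded path segment of the redirect target.
path = "%E7%BD%91%E7%BB%9C%E7%88%AC%E8%99%AB"
print(unquote(path))  # -> 网络爬虫 ("web crawler")
```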
[Screenshot: 屏幕截图 2022-08-04 115600.jpg]
Want to know what 小甲鱼 has been up to lately? Visit -> ilovefishc.com
Posted 2022-8-4 12:16:55
Baidu Baike refreshes the page dynamically; a plain urlopen cannot get the content you see in the browser.
Posted 2022-8-4 12:38:45
import urllib.request
from bs4 import BeautifulSoup
import re

url = "http://baike.baidu.com/view/284853.htm"
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36",
    "Cookie": "BIDUPSID=A92A6802F7D86AED4D4E206949C7BDAD; PSTM=1650383456; BAIDUID=A92A6802F7D86AED6342FE5ECB6326A0:FG=1; BD_UPN=12314353; BDUSS=lKbG5Van54QlllVVlRfkZtSHB5UVFyZDJRQTFYVkZiRDBvMVk4Znc4OUpwWTlpRVFBQUFBJCQAAAAAAAAAAAEAAAChTKa30-7W5tL40NAzNDUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEkYaGJJGGhiUU; BDUSS_BFESS=lKbG5Van54QlllVVlRfkZtSHB5UVFyZDJRQTFYVkZiRDBvMVk4Znc4OUpwWTlpRVFBQUFBJCQAAAAAAAAAAAEAAAChTKa30-7W5tL40NAzNDUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEkYaGJJGGhiUU; ispeed_lsm=2; ab_sr=1.0.1_NGU2NGZmNjBmMWMwODlmNmNmMzNkYjRiYTA5ODM2N2M2M2QyZGIyNTg4MmMxNGI3ZmFmM2NlYzI1ZDI3NTQzNzQzZjQ5ODE4YjY2MGNhZTBhMTYwYzUwYzliNDZkM2I5NTMxYjE0NDM0ODUwZjAyOTE3YTM0MWU1Y2NiMDM1MmFjMWIxM2FjZWZiN2U1MDMwYmYxNTc3YWUwMTdjMzI0OTdkMzgwNTRmNjU3NDEyMWQ2OTc2YzQ0NjlhZjllYTI3; BA_HECTOR=2085012k0k252k20ag03951d1hcr8fj16; ZFY=l1Dlk45JxTCwOPrlwIUjq8tRX3jy5nZT9F9jDP2NNnA:C; BAIDUID_BFESS=BFC9AF9D2D83193E805DC3D919F0D1B6:FG=1; BD_HOME=1; delPer=0; BD_CK_SAM=1; PSINO=6; H_PS_PSSID=36553_36460_36725_36455_36452_36691_36167_36694_36697_36816_36652_36773_36746_36760_36769_36765_26350; BDRCVFR[feWj1Vr5u3D]=mk3SLVN4HKm; BDORZ=B490B5EBF6F3CD402E515D22BCDA1598; H_PS_645EC=233f6REQC1PnLDurVcWHYoD2TfKhF1re3d6AcrXwDgE25OQ55x1kmFuxvDE",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9"
}
request = urllib.request.Request(url, headers=headers)
response = urllib.request.urlopen(request)
html = response.read()
soup = BeautifulSoup(html, 'html.parser')
for each in soup.find_all(href=re.compile('view')):
    print(each.text, "->", ''.join(['http://baike.baidu.com', each['href']]))
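For what it's worth, the filtering step here, soup.find_all(href=re.compile('view')), keeps only tags whose href attribute matches the pattern. That behavior can be mimicked with the standard library alone; a hypothetical stand-alone sketch (no network, no bs4):

```python
import re
from html.parser import HTMLParser

class HrefCollector(HTMLParser):
    """Collect (text, href) pairs for <a> tags whose href matches a pattern."""
    def __init__(self, pattern):
        super().__init__()
        self.pattern = re.compile(pattern)
        self.links = []
        self._current = None  # href of a matching <a> we are currently inside

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if self.pattern.search(href):
                self._current = href

    def handle_data(self, data):
        if self._current is not None:
            self.links.append((data, self._current))
            self._current = None

    def handle_endtag(self, tag):
        if tag == "a":
            self._current = None

html = '<a href="/item/Python">Python</a> <a href="/other/x">x</a>'
c = HrefCollector("item")
c.feed(html)
print(c.links)  # [('Python', '/item/Python')]
```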
OP | Posted 2022-8-4 12:54:56

Then what determines the key-value pairs in this headers dict?
OP | Posted 2022-8-4 12:57:37
Last edited by tommyyu on 2022-8-4 15:24


Also, why is the output this far off?
[Screenshot: 屏幕截图 2022-08-04 115600.jpg]
Posted 2022-8-4 13:00:34
tommyyu posted on 2022-8-4 12:57:
Also, why is the output this far off?

What data are you trying to scrape? I only changed the fetching part; I didn't touch a single character of your extraction code.
Posted 2022-8-4 14:46:58
The method name is misspelled:
re.compile()
OP | Posted 2022-8-4 15:26:31
临时号 posted on 2022-8-4 13:00:
I only changed the fetching part; I didn't touch a single character of your extraction code

That URL is for the Baidu Baike crawler; the code follows 小甲鱼's book... (it matches all the links in that article)
Did I mistype the code?