Stevan posted on 2022-4-22 00:44:49

Encoding problem (garbled Chinese text) crashes my scraper, please help

I'm teaching myself bs4 and hit an encoding-related error. Hoping someone can help.
# Site URL: https://sanguo.5000yan.com/
# Source code below
import requests
from bs4 import BeautifulSoup

if __name__ == '__main__':
    url = 'https://sanguo.5000yan.com/'
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.88 Safari/537.36'}
    req = requests.get(url=url, headers=headers).content  # using .content here because .text comes out garbled
    soup = BeautifulSoup(req, 'lxml')
    san_list = soup.select('.sidamingzhu-list-mulu li')
    fp = open('./sanguo.txt', 'w', encoding='utf-8')
    for li in san_list:
        title = li.a.string
        list_url = li.a['href']
        page_text = requests.get(url=list_url, headers=headers).content  # garbled whether I use .content or .text here
        soup1 = BeautifulSoup(page_text, 'lxml')
        result = soup.find('div', 'class_="grap"').text
        fp.write(title + ':' + result + '\n')  # running it errors here: AttributeError: 'NoneType' object has no attribute 'text'
        print(title, 'scraped successfully!')

isdkz posted on 2022-4-22 00:44:50

Stevan posted on 2022-4-22 12:53
# I figured out the fix for the garbled Chinese: requests.get().content.decode('utf-8'), i.e. get the raw bytes first, then decode them as UTF-8
# ...

import requests
from bs4 import BeautifulSoup

if __name__ == '__main__':
    url = 'https://sanguo.5000yan.com/'
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.88 Safari/537.36'}
    req = requests.get(url=url, headers=headers).content  # using .content here because .text comes out garbled
    soup = BeautifulSoup(req, 'lxml')
    san_list = soup.select('.sidamingzhu-list-mulu li')
    fp = open('./sanguo.txt', 'w', encoding='utf-8')
    for li in san_list:
        title = li.a.string
        list_url = li.a['href']
        resp = requests.get(url=list_url, headers=headers)
        resp.encoding = 'utf-8'                        # tell requests the page is UTF-8
        soup1 = BeautifulSoup(resp.text, 'lxml')
        result = soup1.find('div', class_="grap").text
        fp.write(title + ':' + result + '\n')
        print(title, 'scraped successfully!')
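Setting resp.encoding works. It is also worth noting why passing raw bytes (.content) to BeautifulSoup often avoids the problem by itself: given bytes, bs4 sniffs the charset (e.g. from a <meta charset> tag) before parsing. A minimal offline sketch, using the stdlib html.parser so neither the network nor lxml is needed:

```python
from bs4 import BeautifulSoup

# UTF-8 bytes with a <meta charset> declaration, like a real page body
html_bytes = ('<html><head><meta charset="utf-8"></head>'
              '<body><p>三国演义</p></body></html>').encode('utf-8')

# Given bytes (not a str), bs4 detects the encoding on its own,
# here via the meta tag, so the text comes out intact
soup = BeautifulSoup(html_bytes, 'html.parser')
print(soup.p.text)  # 三国演义
```

This is why the index page parsed fine with .content even before any explicit decoding.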

Stevan posted on 2022-4-22 12:53:11

# I figured out the fix for the garbled Chinese: requests.get().content.decode('utf-8'), i.e. get the raw bytes first, then decode them as UTF-8
# Source code
import requests
from bs4 import BeautifulSoup

if __name__ == '__main__':
    url = 'https://sanguo.5000yan.com/'
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.88 Safari/537.36'}
    req = requests.get(url=url, headers=headers).content
    soup = BeautifulSoup(req, 'lxml')
    san_list = soup.select('.sidamingzhu-list-mulu li')
    fp = open('./sanguo.txt', 'w', encoding='utf-8')
    for li in san_list:
        title = li.a.string
        list_url = li.a['href']
        page_text = requests.get(url=list_url, headers=headers).content.decode('utf-8')
        soup1 = BeautifulSoup(page_text, 'lxml')
        result = soup1.find('div', class_='grap')
        result1 = result.text
        # print(result1)

        fp.write(title + ':' + result1 + '\n')
        print(title, 'scraped successfully!')
    # attaching the run results; this took me two hours of digging through references
# hope my problem and its fix help anyone who hits something similar
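The fix above can be demonstrated offline: the same UTF-8 bytes come back intact through .decode('utf-8') but turn to mojibake through Latin-1, which is what requests falls back to when the server sends no charset header. A small sketch:

```python
raw = '三国演义'.encode('utf-8')     # the bytes the server actually sends

mojibake = raw.decode('iso-8859-1')  # roughly what .text does with no charset info
correct = raw.decode('utf-8')        # what .content.decode('utf-8') does

print(mojibake)  # unreadable garbage
print(correct)   # 三国演义
```

So the "garbled Chinese" was never corruption on the wire, only a wrong codec applied at decode time.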

LYF511 posted on 2022-4-23 15:04:54

Uh... the requests module can handle the encoding for you!
import requests

url = 'https://sanguo.5000yan.com/'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.88 Safari/537.36'}
res = requests.get(url=url, headers=headers)
print(res.text)  # prints mojibake at this point
res.encoding = "utf-8"  # set the encoding
print(res.text)  # now prints correctly
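The effect of setting res.encoding can also be shown without touching the network, by filling in a Response object by hand (note: _content is a private requests attribute, used here purely as a stand-in for real downloaded bytes):

```python
import requests

resp = requests.models.Response()
resp._content = '三国演义'.encode('utf-8')  # pretend these bytes came off the wire
resp.encoding = 'iso-8859-1'                # requests' fallback when no charset header is sent
garbled = resp.text                         # decoded with the wrong codec

resp.encoding = 'utf-8'                     # the fix from this thread
print(resp.text)  # 三国演义
```

.text is recomputed from .content on each access, which is why changing .encoding after the request still fixes the output.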
Also, wrap your code in code tags next time, or people will give you grief.