Help: scraped Chinese text comes out garbled ("English") after encoding/decoding
Last edited by 937135952 on 2020-7-25 20:37

Code:
import requests
from bs4 import BeautifulSoup


def JinRuYeMian(wangzi):
    html = wangzi
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Language': 'en-US,en;q=0.5',
        'Accept-Encoding': 'gzip',
        'DNT': '1',
        'Connection': 'close'
    }
    page = requests.get(html, headers=headers)
    soup_obj = BeautifulSoup(page.content, 'html.parser')
    txt_content = soup_obj.find(id='mainNewsContent')
    jiema = txt_content.encode('GB2312')
    jiema2 = jiema.decode('GB2312')
    print(jiema2)
    c = int(1)
    file_name = str(c) + '攻略' + '.txt'        # '攻略' means "guide"
    with open(file_name, 'w') as temp:
        temp.write(str(jiema.decode('GB2312')))
    print('攻略' + file_name + '已经完成下载')   # "guide <file> finished downloading"
    c = c + 1


if __name__ == '__main__':
    JinRuYeMian('https://3gmfw.cn//article/html2/2020/04/26/523599.html')
Output:
<li><span class="list-icon2">11</span><a href="/article/html2/2020/04/26/523590.html" title="°Ù±ä´óÕì̽ÀÇÈË֮ѪÐ×ÊÖÊÇË£¿ °Ù±ä´óÕì̽ÀÇÈË֮Ѫ¹¥ÂÔ">°Ù±ä´óÕì̽ÀÇÈË֮ѪÐ×ÊÖÊÇË£¿ °Ù±ä´óÕì̽ÀÇÈË֮Ѫ¹¥ÂÔ</a></li>
<li><span class="list-icon2">12</span><a href="/article/html2/2020/04/26/523589.html" title="°Ù±ä´óÕì̽ԡ»ðÐ×ÊÖÊÇË£¿ °Ù±ä´óÕì̽ԡ»ð¹¥ÂÔ">°Ù±ä´óÕì̽ԡ»ðÐ×ÊÖÊÇË£¿ °Ù±ä´óÕì̽ԡ»ð¹¥ÂÔ</a></li>
The page content is Chinese and I want to scrape it, but it comes out as this kind of string and I don't know how to fix it. Any advice would be appreciated.
Last edited by xiaosi4081 on 2020-7-24 17:13
txt_content = soup_obj.find(id='mainNewsContent')
jiema = txt_content.encode('utf-8')
jiema2 = jiema.decode('utf-8')
print(jiema2)
c = 1
file_name = str(c) + '.txt'
with open(file_name, 'w') as temp:
    temp.write(str(jiema.decode('utf-8')))
Please post the complete code.

That's not English; that's the page's character encoding showing through (mojibake).
Post the complete code so we can confirm which encoding the site uses before deciding on the encode format.
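To see what that means in practice, here is a minimal sketch (not from the thread; apparent_encoding is the encoding requests guesses from the raw bytes, while r.encoding is what it assumed from the HTTP headers). It also reproduces the garbled pattern from the output above:

import requests

url = 'https://3gmfw.cn//article/html2/2020/04/26/523599.html'
r = requests.get(url, headers={'User-Agent': 'firefox'})
print(r.encoding)            # what requests assumed from the HTTP headers
print(r.apparent_encoding)   # what the raw bytes actually look like

# GBK bytes decoded with a Western codec produce exactly this kind of garbage:
print('百变大侦探'.encode('gbk').decode('latin-1'))   # -> °Ù±ä´óÕì... (matches the output above)

If apparent_encoding reports GB2312/GBK while r.encoding says something else, that mismatch is the source of the garbled text.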
Twilight6 posted on 2020-7-24 17:41:
Please post the complete code.

Sure thing, thanks everyone.

suchocolate posted on 2020-7-24 20:08:
That's not English; that's the page's character encoding ...

Sure thing, thanks everyone.
Code: (the complete code, identical to the original post above)

xiaosi4081 posted on 2020-7-24 17:12:
txt_content = soup_obj.find(id='mainNewsContent') ...

That doesn't seem to work, and I did post the complete code... could you take another look?

Last edited by suchocolate on 2020-7-26 20:15
I'm not that familiar with soup; try xpath:
import requests
from lxml import etree


def main(url):
    headers = {'User-Agent': 'firefox'}
    r = requests.get(url, headers=headers)
    r.encoding = 'gbk'   # tell requests the site's real encoding before reading r.text
    html = etree.HTML(r.text)
    result = html.xpath('//div[@id="mainNewsContent"]/p/text()')
    print(result)


if __name__ == '__main__':
    main('https://3gmfw.cn/article/html2/2020/04/26/523599.html')
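A follow-up note, since the original goal was writing the text to a .txt file: pass an explicit encoding to open(), otherwise Python uses the platform default (the local ANSI codepage on Windows), which can raise UnicodeEncodeError or produce new mojibake. A minimal sketch that could follow the xpath call inside main(); the filename is just an example:

text = '\n'.join(result)    # join the <p> text nodes into one string
with open('攻略.txt', 'w', encoding='utf-8') as f:
    f.write(text)           # utf-8 on disk, independent of the OS default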
suchocolate posted on 2020-7-26 20:13:
I'm not that familiar with soup; try xpath. ...

Thanks!

937135952 posted on 2020-7-28 16:30:
result = html.xpath('//div[@id="mainNewsContent"]/p/text()')
What does the /p in there mean? And when I switched to a different URL on the same site, the result just came back blank.
For the page in this thread, the story text sits in the <p> nodes under that div, which is why the xpath selects p.
If you switch pages, you have to study the new page's HTML structure. That's how scraping goes: you have to tailor it to your own needs.
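One possible direction for the blank result (an assumption on my part, not confirmed in the thread): the other page may not keep its text in direct <p> children of the div. A broader selector grabs every text node under the div; whether it matches still has to be checked against that page's actual HTML:

# All text nodes anywhere under the div, not just in direct <p> children
# (assumes the other page uses the same id="mainNewsContent"; verify in the page source)
result = html.xpath('//div[@id="mainNewsContent"]//text()')
result = [t.strip() for t in result if t.strip()]   # drop whitespace-only nodes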