今天的我更强了 posted on 2020-6-29 14:46:31

Encoding question

import requests
import bs4

def open_url(url):
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36'}
    res = requests.get(url, headers=headers)
    print(res.encoding)
    return res

def find_soup(res):
    soup = bs4.BeautifulSoup(res.text, 'html.parser')
    movie = []
    targets = soup.find_all('div', class_='hd')
    for each in targets:
        movie.append(each.a.span.text)
    score = []
    targets1 = soup.find_all('span', class_='rating_num')
    for each in targets1:
        score.append(each.text)
    author = []
    targets2 = soup.find_all('div', class_='bd')
    for each in targets2:
        try:
            # split() returns a list, so index into it before calling strip()
            info = each.p.text.split('\n')
            author.append(info[1].strip() + info[2].strip())
        except (AttributeError, IndexError):
            continue
    result = []
    for i in range(len(movie)):
        # index each list; concatenating the lists themselves raises TypeError
        result.append(movie[i] + ' ' + score[i] + ' ' + author[i] + '\n')
    return result

def find_pages(res):
    soup = bs4.BeautifulSoup(res.text, 'html.parser')
    req = soup.find('span', class_='next').previous_sibling.previous_sibling.text
    return int(req)

def main():
    host = 'https://movie.douban.com/top250'
    res = open_url(host)
    depth = find_pages(res)
    result = []
    for i in range(depth):
        # build each page URL from the base; appending to url in place
        # would stack a new query string onto the old one every iteration
        url = host + '?start=%s&filter=' % str(i * 25)
        pages = open_url(url)
        result.extend(find_soup(pages))
    with open('豆瓣250.txt', 'w', encoding='utf-8') as f:
        f.writelines(result)

if __name__ == '__main__':
    main()


In the last part, with open('豆瓣250.txt','w',encoding='utf-8') as f: — why does it need the encoding='utf-8' argument?

suchocolate posted on 2020-6-29 17:09:40

The default encoding of open() depends on the platform and is not necessarily utf-8; specifying utf-8 by hand prevents mojibake and files that can't be parsed later.
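To make that concrete, here is a minimal sketch (the demo file name and sample line are made up for illustration): it prints what open() would default to on the current platform, then pins the codec explicitly so the file reads back the same everywhere.

```python
import locale
import os
import tempfile

# Without an explicit encoding, open() uses the platform's preferred
# encoding -- on Chinese-locale Windows this is often gbk/cp936, not utf-8.
print(locale.getpreferredencoding(False))

# Passing encoding='utf-8' pins the codec, so writing and reading
# round-trip identically on any platform.
path = os.path.join(tempfile.gettempdir(), 'douban_demo.txt')
with open(path, 'w', encoding='utf-8') as f:
    f.write('肖申克的救赎 9.7\n')
with open(path, 'r', encoding='utf-8') as f:
    print(f.read())
```

If the file were written under a gbk default and later read with a utf-8 default (or vice versa), the read would either raise UnicodeDecodeError or produce garbled text.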

xiaofeiyu posted on 2020-6-29 20:55:23

Adding it is more rigorous. Without it, the data might be written with a codec such as ASCII, which cannot represent Chinese characters at all; encoding='utf-8' declares that utf-8 is used for the file.
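A quick check of that point, as a minimal sketch: ASCII covers only code points 0–127, so encoding Chinese text with it fails outright, while utf-8 round-trips it cleanly.

```python
text = '编码问题'

# ASCII cannot represent characters beyond U+007F, so encoding
# Chinese text with it raises UnicodeEncodeError.
try:
    text.encode('ascii')
    ascii_ok = True
except UnicodeEncodeError:
    ascii_ok = False

# UTF-8 can encode any Unicode code point; each of these Chinese
# characters becomes 3 bytes, and decoding recovers the original text.
utf8_bytes = text.encode('utf-8')

print(ascii_ok)
print(utf8_bytes.decode('utf-8'))
```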

今天的我更强了 posted on 2020-6-29 21:16:09

suchocolate posted on 2020-6-29 17:09
The default encoding of open() depends on the platform and is not necessarily utf-8; specifying utf-8 by hand prevents mojibake and files that can't be parsed later.

OK, thanks.