鱼C论坛

Views: 1768 | Replies: 2

Help with the exercise from Lesson 55

Posted on 2019-9-1 09:48:46

Last edited by 轻松kuma on 2019-9-1 09:49

Running 小甲鱼's code as shown in the lesson, I get the error below.
Source code:
import urllib.request
import urllib.parse
import re
from bs4 import BeautifulSoup

def main():
    keyword = input("请输入关键词:")
    # percent-encode the keyword as a query string, e.g. word=%E7%88%AC%E8%99%AB
    keyword = urllib.parse.urlencode({"word":keyword})
    #keyword = keyword.decode('ascii')
    response = urllib.request.urlopen("http://baike.baidu.com/search/word?%s" % keyword)
    html = response.read()
    soup = BeautifulSoup(html, "html.parser")

    # follow every link whose href contains "view" and print its text and URL
    for each in soup.find_all(href=re.compile("view")):
        content = ''.join([each.text])
        url2 = ''.join(["http://baike.baidu.com", each["href"]])
        response2 = urllib.request.urlopen(url2)
        html2 = response2.read()
        soup2 = BeautifulSoup(html2, "html.parser")
        if soup2.h2:
            # append the entry's <h2> subtitle when it exists
            content = ''.join([content, soup2.h2.text])
        content = ''.join([content, " -> ", url2])
        print(content)

if __name__ == "__main__":
    main()

Error traceback:
Traceback (most recent call last):
  File "C:\Users\Administrator\Desktop\我要学Python\test.py", line 28, in <module>
    main()
  File "C:\Users\Administrator\Desktop\我要学Python\test.py", line 19, in main
    response2 = urllib.request.urlopen(url2)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\urllib\request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\urllib\request.py", line 525, in open
    response = self._open(req, data)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\urllib\request.py", line 543, in _open
    '_open', req)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\urllib\request.py", line 503, in _call_chain
    result = func(*args)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\urllib\request.py", line 1345, in http_open
    return self.do_open(http.client.HTTPConnection, req)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\urllib\request.py", line 1317, in do_open
    encode_chunked=req.has_header('Transfer-encoding'))
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\http\client.py", line 1244, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\http\client.py", line 1255, in _send_request
    self.putrequest(method, url, **skips)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\http\client.py", line 1122, in putrequest
    self._output(request.encode('ascii'))
UnicodeEncodeError: 'ascii' codec can't encode characters in position 36-39: ordinal not in range(128)

I've tried quite a few things, such as adding
import importlib, sys
importlib.reload(sys)
or from urllib.parse import quote,
but the error still occurs. I'm a beginner and feeling pretty stuck; could someone more experienced point me in the right direction?
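For reference, a minimal sketch of one likely fix, based on the traceback above: some of the hrefs matched on the search page seem to contain raw Chinese characters, and http.client encodes the request line as ASCII (the request.encode('ascii') call in putrequest), which raises the UnicodeEncodeError. Percent-encoding url2 before calling urlopen avoids this; the example URL and the safe character set below are assumptions for illustration, not part of the original code.

import urllib.request
from urllib.parse import quote

# Assumed example of an href that still carries raw Chinese characters
url2 = "http://baike.baidu.com/item/网络爬虫"

# Percent-encode the non-ASCII characters. The characters listed in safe are
# left untouched, so "/", ":", "?", "=", "&", "#" and already-encoded "%XX"
# sequences are not re-encoded.
safe_url2 = quote(url2, safe="/:?=&#%")
print(safe_url2)   # http://baike.baidu.com/item/%E7%BD%91%E7%BB%9C%E7%88%AC%E8%99%AB

response2 = urllib.request.urlopen(safe_url2)
html2 = response2.read()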

Reply posted by facevoid on 2019-9-1 15:58:27
I'd suggest printing out url2 first and checking whether its contents look odd.
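For example, a quick check along these lines, dropped into the for-loop of the original main() (only a sketch of the suggestion above), would show whether some of the matched hrefs still contain raw non-ASCII characters:

for each in soup.find_all(href=re.compile("view")):
    url2 = ''.join(["http://baike.baidu.com", each["href"]])
    # repr() makes non-ASCII characters and stray whitespace visible
    print(repr(url2))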

Reply posted 2020-1-27 18:04:07
facevoid posted on 2019-9-1 15:58:
I'd suggest printing out url2 first and checking whether its contents look odd.

I ran into the same problem. url2 prints fine; what it prints is a bunch of stuff like this: http://baike.baidu.com/item/%E7% ... 461#viewPageContent
Is that where the problem is?
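If it helps: a URL that is already fully percent-encoded, like the one printed above, goes through urlopen without trouble; the UnicodeEncodeError from the first post only appears when the string still contains characters outside ASCII, which print() would show as readable Chinese rather than as %XX sequences. A quick way to tell the two cases apart (just a sketch, with made-up example URLs):

def is_ascii_only(url):
    # True: the request line can be encoded as ASCII, so urlopen is fine.
    # False: the URL still needs percent-encoding before the request.
    return all(ord(ch) < 128 for ch in url)

print(is_ascii_only("http://baike.baidu.com/item/%E7%BD%91"))   # True
print(is_ascii_only("http://baike.baidu.com/item/网络爬虫"))      # False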
