oneface posted on 2020-12-7 11:26:56

小甲鱼 Lesson 55, hands-on exercise 1

import urllib.request as ur
import urllib.parse as upa
from bs4 import BeautifulSoup as bso
import re



def main():
    keyword = input('Enter a keyword: ')
    keyword2 = upa.quote(keyword)   # non-ASCII keywords must be percent-encoded
    url = 'http://baike.baidu.com/item/%s' % keyword2
    http1 = ur.Request(url)
    # Request objects use add_header(); assigning to .addheaders has no effect
    http1.add_header('User-Agent',
                     'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36')
    html1 = ur.urlopen(http1).read().decode('utf-8')
    soup = bso(html1, 'html.parser')
    for i in soup.find_all(href=re.compile('item')):
        content = i.text
        url2 = 'http://baike.baidu.com' + upa.quote(i['href'])   # Question 1
        http2 = ur.Request(url2)
        http2.add_header('User-Agent',
                         'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36')
        html2 = ur.urlopen(http2).read().decode('utf-8')
        soup2 = bso(html2, 'html.parser')
        if soup2.h2:   # append the sub-page's subtitle when there is one
            content = ''.join([content, soup2.h2.text])
        content = ''.join([content, ' -> ', url2])
        print(content)


if __name__ == '__main__':
    main()


Question 1: why does that line need the encoding instead of using i['href'] directly? (Using it directly raises an error.)
url2 = 'http://baike.baidu.com' + upa.quote(i['href'])  -->  url2 = 'http://baike.baidu.com' + i['href']


Question 2: why does this code error on the second-to-last link when the keyword is "猪八戒", while other keywords are fine? "明天", for example, crawls all the way through.

笨鸟学飞 posted on 2020-12-7 12:41:23

===========Question 1:
quote()
Parameter type: a string.
Purpose: encodes a string into the %xx form.
By the standard, a URL may only contain a subset of ASCII characters (letters, digits, and a few symbols); anything else (such as Chinese characters) is not valid in a URL.
So any other character that appears in a URL has to be URL-encoded first.
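A quick illustration (the path below is the same kind of /item/ href the loop extracts; note that '/' is in quote()'s default safe set, so only the Chinese characters get escaped):

from urllib.parse import quote

# only the non-ASCII characters become %xx; '/' stays as-is
print(quote('/item/猪八戒'))   # -> /item/%E7%8C%AA%E5%85%AB%E6%88%92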
===========Question 2:
You can work that out yourself from the error message; if the site changed its pages, the code naturally has to change with it.
Or paste the error message here and we'll have a look.
Also, the requests module is much easier to use than urllib; pretty much everyone uses requests these days.
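For reference, here is a minimal sketch of the same first request done with requests (assuming the module is installed; the User-Agent is the same string used in the code above):

import requests
from urllib.parse import quote

keyword = '猪八戒'
url = 'http://baike.baidu.com/item/' + quote(keyword)
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 '
                         '(KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36'}
resp = requests.get(url, headers=headers)   # headers go in as a plain dict
resp.encoding = 'utf-8'                     # Baidu Baike pages are UTF-8
print(resp.status_code, len(resp.text))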

oneface posted on 2020-12-7 13:11:22

笨鸟学飞 posted on 2020-12-7 12:41
===========Question 1:
quote()
Parameter type: a string.


I'm going to study the requests module on my own. Below is the error message for the keyword '猪八戒':

Traceback (most recent call last):
File "D:\lib\urllib\request.py", line 1318, in do_open
    encode_chunked=req.has_header('Transfer-encoding'))
File "D:\lib\http\client.py", line 1239, in request
    self._send_request(method, url, body, headers, encode_chunked)
File "D:\lib\http\client.py", line 1285, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
File "D:\lib\http\client.py", line 1234, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
File "D:\lib\http\client.py", line 1026, in _send_output
    self.send(msg)
File "D:\lib\http\client.py", line 964, in send
    self.connect()
File "D:\lib\http\client.py", line 936, in connect
    (self.host,self.port), self.timeout, self.source_address)
File "D:\lib\socket.py", line 704, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
File "D:\lib\socket.py", line 745, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: getaddrinfo failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "E:\55讲.py", line 35, in <module>
    main()
File "E:\55讲.py", line 27, in main
    html2 = ur.urlopen(http2).read().decode('utf-8')
File "D:\lib\urllib\request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
File "D:\lib\urllib\request.py", line 526, in open
    response = self._open(req, data)
File "D:\lib\urllib\request.py", line 544, in _open
    '_open', req)
File "D:\lib\urllib\request.py", line 504, in _call_chain
    result = func(*args)
File "D:\lib\urllib\request.py", line 1346, in http_open
    return self.do_open(http.client.HTTPConnection, req)
File "D:\lib\urllib\request.py", line 1320, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error getaddrinfo failed>
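A hedged reading of this traceback: socket.gaierror: getaddrinfo failed means the hostname in url2 could not be resolved, not that a page was missing. One plausible cause (an assumption, since the failing href isn't shown) is that on the "猪八戒" page one of the matched links already carries an absolute URL, so prefixing it with 'http://baike.baidu.com' while quote() escapes its ':' mangles the host. A sketch of a guard using urllib.parse.urljoin, which handles relative and absolute hrefs alike; the example.com href is hypothetical:

import urllib.parse as upa

base = 'http://baike.baidu.com'
# one relative href (needs quoting) and one already-absolute href (must not be prefixed again)
for href in ['/item/猪八戒', 'http://www.example.com/other']:
    # keeping ':' in the safe set leaves absolute URLs intact; urljoin then picks the right base
    url2 = upa.urljoin(base, upa.quote(href, safe='/:'))
    print(url2)
# http://baike.baidu.com/item/%E7%8C%AA%E5%85%AB%E6%88%92
# http://www.example.com/other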