import urllib.request
import urllib.parse
import re
from bs4 import BeautifulSoup

def main():
    keyword = input("Enter a keyword: ")
    # Percent-encode the keyword so it can be embedded in the URL
    keyword = urllib.parse.urlencode({"word": keyword})
    response = urllib.request.urlopen("http://baike.baidu.com/search/word?%s" % keyword)
    html = response.read()
    soup = BeautifulSoup(html, "html.parser")
    # Collect every link whose href contains "view" (the entry pages)
    for each in soup.find_all(href=re.compile("view")):
        content = ''.join([each.text])
        url2 = ''.join(["http://baike.baidu.com", each["href"]])
        # Open each linked entry and append its subtitle (the h2), if present
        response2 = urllib.request.urlopen(url2)
        html2 = response2.read()
        soup2 = BeautifulSoup(html2, "html.parser")
        if soup2.h2:
            content = ''.join([content, soup2.h2.text])
        content = ''.join([content, " -> ", url2])
        print(content)

if __name__ == "__main__":
    main()
This is the exercise from lecture 55. My question is about these lines:

keyword = input("Enter a keyword: ")
keyword = urllib.parse.urlencode({"word": keyword})
response = urllib.request.urlopen("http://baike.baidu.com/search/word?%s" % keyword)
The page we want is just "http://baike.baidu.com/search/word?keyword", isn't it? So why call urlencode first? Shouldn't it be enough to splice the keyword string into the URL directly?
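To make the question concrete, here is a minimal sketch of what that urlencode call actually produces (the keyword value is my own example):

import urllib.parse

keyword = "猪"  # example non-ASCII keyword, my own choice
print(urllib.parse.urlencode({"word": keyword}))
# prints: word=%E7%8C%AA  (the UTF-8 bytes, percent-encoded, so the URL is ASCII-safe)

# Splicing the raw string in would instead yield a non-ASCII URL such as
#   http://baike.baidu.com/search/word?word=猪
# which urllib.request.urlopen cannot send as-is (it raises UnicodeEncodeError).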
Compare this with the following snippet from the Youdao Translate example in an earlier lesson:
data = {}
data['type'] = 'AUTO'
data['i'] = content
data['doctype'] = 'json'
data['xmlVersion'] = '1.6'
data['keyfrom'] = 'fanyi.web'
data['ue'] = 'UTF-8'
data['typoResult'] = 'true'
# urlencode the form fields, then encode to bytes (POST data must be bytes)
data = urllib.parse.urlencode(data).encode('utf-8')
# passing a data argument makes this a POST request
req = urllib.request.Request(url, data)
There, data holds the content to be translated after the page is opened, the text you would normally type into the page yourself, and that part I can understand. But in the code above we are merely opening a page, so why is urllib.parse.urlencode({"word": keyword}) still needed?
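To put the two cases side by side, here is a minimal sketch of my understanding so far. In both cases urlencode percent-encodes the parameters; for GET they go into the URL's query string, while for POST they are additionally encoded to bytes and passed as data (the POST URL below is a placeholder, since the lesson's Youdao URL is not shown above):

import urllib.parse
import urllib.request

params = urllib.parse.urlencode({"word": "猪"})  # same encoding step in both cases

# GET (the Baike case): the encoded pairs ride in the URL's query string
req_get = urllib.request.Request("http://baike.baidu.com/search/word?%s" % params)
print(req_get.get_method())   # GET

# POST (the Youdao case): the same encoding, then .encode() to bytes,
# passed as the data argument; supplying data switches the method to POST
url = "http://example.com/translate"  # placeholder, not the real lesson URL
body = urllib.parse.urlencode({"i": "hello", "type": "AUTO"}).encode("utf-8")
req_post = urllib.request.Request(url, body)
print(req_post.get_method())  # POST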