Write a crawler that asks the user for a keyword, crawls the Baidu Baike entry for “网络爬虫” (web crawler, link -> http://baike.baidu.com/view/284853.htm), and prints every link whose href contains “view”, in the format shown below.
This is the answer, but I have a question about one line:

response = urllib.request.urlopen("http://baike.baidu.com/search/word?%s" % keyword)

What is the /search/word?%s % keyword part doing? I checked in a browser, and searching for a keyword on Baidu Baike does not produce a URL containing /search/word.
```python
import urllib.request
import urllib.parse
import re
from bs4 import BeautifulSoup

def main():
    keyword = input("Enter a keyword: ")
    keyword = urllib.parse.urlencode({"word": keyword})
    response = urllib.request.urlopen("http://baike.baidu.com/search/word?%s" % keyword)
    html = response.read()
    soup = BeautifulSoup(html, "html.parser")
    for each in soup.find_all(href=re.compile("view")):
        content = ''.join([each.text])
        url2 = ''.join(["http://baike.baidu.com", each["href"]])
        response2 = urllib.request.urlopen(url2)
        html2 = response2.read()
        soup2 = BeautifulSoup(html2, "html.parser")
        if soup2.h2:
            content = ''.join([content, soup2.h2.text])
        content = ''.join([content, " -> ", url2])
        print(content)

if __name__ == "__main__":
    main()
```
The %s here is plain % string formatting: the keyword you type is first converted to URL (percent) encoding with urlencode, and the result is then formatted into the /search/word URL. That URL is Baidu Baike's server-side search endpoint, which is why you don't see it in the browser's address bar; this is how the script performs the search.
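The two steps can be checked in isolation. A minimal sketch (the sample keyword is arbitrary):

```python
from urllib.parse import urlencode

# Step 1: urlencode percent-encodes the keyword as a key=value pair,
# so non-ASCII characters become %XX escapes
params = urlencode({"word": "爬虫"})
print(params)  # word=%E7%88%AC%E8%99%AB

# Step 2: % formatting splices that pair into the search URL
url = "http://baike.baidu.com/search/word?%s" % params
print(url)     # http://baike.baidu.com/search/word?word=%E7%88%AC%E8%99%AB
```

Without the urlencode step, a Chinese keyword would be sent as raw bytes and the server might not interpret it correctly.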
The code needs to be changed as follows to run correctly today:
```python
import urllib.request
import urllib.parse
import re
from bs4 import BeautifulSoup

def main():
    keyword = input("Enter a keyword: ")
    keyword = urllib.parse.urlencode({"word": keyword})
    response = urllib.request.urlopen("http://baike.baidu.com/search/word?%s" % keyword)
    html = response.read()
    soup = BeautifulSoup(html, "html.parser")
    # entry links now contain "item" instead of "view";
    # skip the first 7 matches, which are navigation links
    for each in soup.find_all(href=re.compile("item"))[7:]:
        content = ''.join([each.text])
        url2 = ''.join(["http://baike.baidu.com", each["href"]])
        response2 = urllib.request.urlopen(url2)
        html2 = response2.read()
        soup2 = BeautifulSoup(html2, "html.parser")
        if soup2.h2:
            content = ''.join([content, soup2.h2.text])
        content = ''.join([content, " -> ", url2])
        print(content)

if __name__ == "__main__":
    main()
```
The keyword to match in the href attribute is now item, and the first 7 matching tags are not entry links, so the slice [7:] skips them and iteration starts from the 8th match.
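The href filter can be verified on a tiny snippet without hitting the network. A minimal sketch (the sample HTML is made up for illustration):

```python
import re
from bs4 import BeautifulSoup

html = """
<a href="/item/Python">Python</a>
<a href="/view/284853.htm">old-style entry link</a>
<a href="/item/%E7%88%AC%E8%99%AB">爬虫</a>
<a href="#">no match</a>
"""
soup = BeautifulSoup(html, "html.parser")
# find_all(href=re.compile("item")) keeps only tags whose href
# attribute matches the regex "item"
links = soup.find_all(href=re.compile("item"))
print([a["href"] for a in links])  # ['/item/Python', '/item/%E7%88%AC%E8%99%AB']
```

The old-style /view/ link is filtered out, which is exactly why the original answer's re.compile("view") no longer finds any entries.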