import urllib.request
import re

url = 'https://www.biduo.cc/biquge/39_39888/c13353637.html'
headers = {
    'Accept-Language': 'zh-CN',
    'Cache-Control': 'no-cache',
    'Connection': 'Keep-Alive',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.18363'
}
req = urllib.request.Request(url=url, headers=headers)
response = urllib.request.urlopen(req)
# The site serves GBK-encoded pages; skip any undecodable bytes
html = response.read().decode('gbk', 'ignore')

# Each paragraph of the chapter ends with "<br><br> ", so match up to that
regular = re.compile('.*?<br><br> ')
for each in regular.findall(html):
    each = each[:-32]       # trim the trailing markup from each match
    print(' ', end='')      # paragraph indent
    if ' ' in each:
        print(each.split(';')[-1])
    else:
        print(each)
I threw together a crawler that scrapes a chapter of a web novel. Could some expert refactor it into something more concise and usable so I can learn from it?
import urllib.request
import re

url = 'https://www.biduo.cc/biquge/39_39888/c13353637.html'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.18363'
}
response = urllib.request.urlopen(urllib.request.Request(url=url, headers=headers))
html = response.read().decode('gbk', 'ignore')
for each in re.findall('.*?<br><br> ', html):
    each = each[:-32]
    print(' ', (each.split(';')[-1] if ' ' in each else each))
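The magic numbers in both versions (`each[:-32]`, the `;` split) are fragile because they hand-trim HTML markup and entities. A more robust approach is to isolate the chapter container once and let the standard library decode entities like `&nbsp;`. A minimal, stdlib-only sketch of the parsing step, assuming the chapter text lives in a `<div id="content">` (that selector is a guess and should be checked against the real page; the `<br><br>` paragraph separator comes from the original regex):

```python
import re
from html import unescape

def extract_chapter(html: str) -> str:
    # Grab the chapter container; the div id "content" is an assumption
    m = re.search(r'<div id="content">(.*?)</div>', html, re.S)
    if not m:
        return ""
    body = m.group(1)
    # <br><br> separates paragraphs on the page (per the original regex)
    paragraphs = re.split(r"(?:<br\s*/?>\s*)+", body)
    # unescape() turns &nbsp; into U+00A0, which strip() then removes,
    # so no manual each[:-32] trimming is needed
    return "\n".join(p for p in (unescape(p).strip() for p in paragraphs) if p)

# Offline demo on a tiny HTML fragment:
sample = '<div id="content">&nbsp;&nbsp;第一段<br><br>&nbsp;&nbsp;第二段</div>'
print(extract_chapter(sample))
```

Because the parsing is a pure function of the HTML string, it can be tested without hitting the site; the fetched `html` from either version above can be fed straight into `extract_chapter`.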