After running the code below, the result is just None and no scraped content appears. Could any fellow forum members help? Many thanks!

Code:
import requests
import bs4

def open_url(url):
    headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36 Edg/88.0.705.56'}
    res = requests.get(url, headers=headers)
    return res

def find_data(res):
    soup = bs4.BeautifulSoup(res.text, 'html.parser')
    content = soup.find(id="Cnt-Main-Article-QQ")
    print(content)

def main():
    url = 'https://www.chyxx.com/industry/201901/711262.html'
    res = open_url(url)
    find_data(res)

if __name__ == '__main__':
    main()
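For context on the symptom: BeautifulSoup's find() returns None whenever no element matches the given id, and printing None is exactly what happens here. A minimal sketch with a made-up one-line page (the div content is illustrative):

```python
import bs4

# Hypothetical tiny page: only the id "contentBody" exists on it.
html = '<div id="contentBody">article text</div>'
soup = bs4.BeautifulSoup(html, 'html.parser')

# An id that is not present anywhere on the page -> find() returns None.
print(soup.find(id="Cnt-Main-Article-QQ"))  # None
# An id that does exist -> find() returns the matching tag.
print(soup.find(id="contentBody").text)     # article text
```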
This post was last edited by qq1151985918 on 2021-1-30 23:28
I just looked into it: Cnt-Main-Article-QQ is an article id used by some Tencent News pages, so it doesn't exist on this site and find() returns nothing. Take a look at my revised version and see if it's what you wanted:
import requests
import bs4

def open_url(url):
    headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36 Edg/88.0.705.56'}
    res = requests.get(url, headers=headers)
    res.encoding = "gbk"  # the page is GBK-encoded; without this the Chinese text is garbled
    return res

def find_data(res):
    soup = bs4.BeautifulSoup(res.text, 'html.parser')
    content = soup.find(id="contentBody")  # the article-body id actually used on chyxx.com
    print(content.text)

def main():
    url = 'https://www.chyxx.com/industry/201901/711262.html'
    res = open_url(url)
    find_data(res)

if __name__ == '__main__':
    main()
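On the encoding line: rather than hardcoding "gbk", you can let requests guess the charset from the response body with res.encoding = res.apparent_encoding, which is more robust if the site ever changes. A small local sketch of why the right codec matters (no network needed; the sample string is illustrative):

```python
# Simulate bytes a GBK-encoded server would send for some Chinese text.
raw = "爬虫".encode('gbk')

# Decoding with the correct codec recovers the original text.
print(raw.decode('gbk'))      # 爬虫

# Decoding with the wrong codec produces mojibake, not the original text.
print(raw.decode('latin-1'))  # garbled characters
```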