```python
# _*_ coding:utf-8
# author: 小军
# @Time: 1/30/2021 3:12 PM
# @File: requests网页采集器.py
import requests

if __name__ == '__main__':
    headers = {'Uers_Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36 Edg/88.0.705.53'}
    url = 'https://www.sogou.com/web?'
    content = input('Enter search term: ')
    param = {'query': content}
    response = requests.get(url=url, params=param, headers=headers)
    page_text = response.text
    FileName = content + '.html'
    with open(FileName, 'w', encoding='utf-8') as fp:
        fp.write(page_text)
    print(FileName, 'saved successfully!')
```
The saved .html file shows 404 Not Found when opened in a browser.
Why is that, folks?!

Yes, the request was blocked by the site's anti-crawling check.
The cause is a typo in the spoofed request header: the key was written as 'Uers_Agent' instead of 'User-Agent', so requests fell back to its default Python user agent and the server rejected the request. The corrected version also drops the trailing '?' from the URL, since requests appends the query string itself.
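One way to confirm the typo without sending any traffic is to prepare the request through a `requests.Session` and inspect the headers that would actually go on the wire (a minimal sketch; the UA string is shortened here for readability):

```python
import requests

# Prepare (but do not send) the same request to inspect its on-wire headers.
session = requests.Session()
prepared = session.prepare_request(requests.Request(
    'GET', 'https://www.sogou.com/web',
    params={'query': 'test'},
    headers={'Uers_Agent': 'Mozilla/5.0'},  # the misspelled key from the post
))

# The typo key is sent verbatim as an unknown header...
print('Uers_Agent' in prepared.headers)   # True
# ...while the real User-Agent falls back to requests' default
# ('python-requests/x.y.z'), which search engines commonly block.
print(prepared.headers['User-Agent'])
```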
```python
# _*_ coding:utf-8
# author: 小军
# @Time: 1/30/2021 3:12 PM
# @File: requests网页采集器.py
import requests

if __name__ == '__main__':
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36 Edg/88.0.705.53'}
    url = 'https://www.sogou.com/web'
    content = input('Enter search term: ')
    param = {'query': content}
    response = requests.get(url=url, params=param, headers=headers)
    page_text = response.text
    file_name = content + '.html'
    with open(file_name, 'w', encoding='utf-8') as fp:
        fp.write(page_text)
    print(file_name, 'saved successfully!')
```
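One further hardening worth considering (my addition, not part of the original post): the script writes `response.text` to disk unconditionally, so a blocked request silently saves the error page. `response.raise_for_status()` turns HTTP errors into exceptions instead. The sketch below builds a synthetic `Response` so it runs without network access:

```python
import requests

# Synthetic Response (no network call) showing how raise_for_status()
# turns a 404 into an exception instead of silently saving the page.
resp = requests.Response()
resp.status_code = 404
resp.url = 'https://www.sogou.com/web?query=test'

try:
    resp.raise_for_status()
    saved = True                      # would write the file here
except requests.HTTPError as err:
    saved = False
    print('refusing to save:', err)

print(saved)   # False
```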