Python web scraping, lecture 2 ("hands-on"): what does "405 Not Allowed" mean?
Is it because 360 Translate also blocks crawler access now? When I got to the hands-on scraping lesson, the live comments (danmaku) said Youdao Translate no longer lets crawlers in, but 360 Translate still does, so I switched to 360 Translate. Then I got this error:
Traceback (most recent call last):
File "C:\Users\IWMAI\Desktop\爬虫实验.py.py", line 36, in <module>
response = urllib.request.urlopen(url, data)
File "C:\Users\IWMAI\AppData\Local\Programs\Python\Python39\lib\urllib\request.py", line 214, in urlopen
return opener.open(url, data, timeout)
File "C:\Users\IWMAI\AppData\Local\Programs\Python\Python39\lib\urllib\request.py", line 523, in open
response = meth(req, response)
File "C:\Users\IWMAI\AppData\Local\Programs\Python\Python39\lib\urllib\request.py", line 632, in http_response
response = self.parent.error(
File "C:\Users\IWMAI\AppData\Local\Programs\Python\Python39\lib\urllib\request.py", line 561, in error
return self._call_chain(*args)
File "C:\Users\IWMAI\AppData\Local\Programs\Python\Python39\lib\urllib\request.py", line 494, in _call_chain
result = func(*args)
File "C:\Users\IWMAI\AppData\Local\Programs\Python\Python39\lib\urllib\request.py", line 641, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 405: Not Allowed
Source code:
import urllib.request
import urllib.parse
url = "http://s.qhupdate.com/so/vertical_click.gif?p=&u=http%3A%2F%2Ffanyi.so.com%2F&id=144965027.875403218865349400.1609334695518.6006&guid=144965027.875403218865349400.1609334695518.6006&pro=fanyi&value=0&mod=pcfanyi&src=&q=%E5%A4%A7%E6%B2%99%E9%9B%95&abv=&type=trans&t=1609468480087"
data = {}
data['p'] = ' '
data['u'] = 'http://fanyi.so.com/'
data['id'] = '144965027.875403218865349400.1609334695518.6006'
data['guid'] = '144965027.875403218865349400.1609334695518.6006'
data['pro'] = 'fanyi'
data['value'] = '0'
data['mod'] = 'pcfanyi'
data['src'] = ' '
data['q'] = '大沙雕'
data['abv'] = ' '
data['type'] = 'trans'
data['t'] = '1609468480087'
data = urllib.parse.urlencode(data).encode('utf-8')
response = urllib.request.urlopen(url, data)
html = response.read().decode('utf-8')
print(html)
Could someone more experienced please explain? Thanks!
(Sorry, if I attach screenshots the post goes over the length limit…)
Baidu, Youdao, Bing and 360 can all be scraped; you just need to find the right URL for what you want:
import requests

def main():
    # Call the search endpoint that the 360 Translate web page itself uses,
    # with browser-like headers so the request is not rejected.
    url = 'http://fanyi.so.com/index/search'
    params = {'eng': '0', 'validate': '', 'gnore_trans': '0', 'query': '大沙雕'}
    headers = {'user-agent': 'firefox', 'origin': 'http://fanyi.so.com', 'pro': 'fanyi', 'Referer': 'http://fanyi.so.com/'}
    r = requests.get(url, headers=headers, params=params)
    r.encoding = 'utf-8'
    print(r.json()['data']['fanyi'])

if __name__ == '__main__':
    main()
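As for the 405 itself: passing a data argument to urllib.request.urlopen() switches the request from GET to POST, and the URL in the question is a click-tracking .gif on s.qhupdate.com rather than the translation API, which most likely rejects POST, so the server answers 405 Method Not Allowed. Below is a rough, untested sketch of the same GET request done with urllib (which the original code used), assuming the same fanyi.so.com/index/search endpoint, parameters, headers and JSON layout as the requests example above:

# Sketch only, not an official API: endpoint, parameters and headers are
# copied from the requests example above; the JSON structure is assumed.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    'eng': '0',
    'validate': '',
    'gnore_trans': '0',   # parameter names taken as-is from the example above
    'query': '大沙雕',
})
# Keep the parameters in the query string so the request stays a GET;
# passing them via the `data` argument would turn urlopen() into a POST.
req = urllib.request.Request(
    'http://fanyi.so.com/index/search?' + params,
    headers={
        'User-Agent': 'firefox',
        'Origin': 'http://fanyi.so.com',
        'Pro': 'fanyi',
        'Referer': 'http://fanyi.so.com/',
    },
)
with urllib.request.urlopen(req) as response:
    result = json.loads(response.read().decode('utf-8'))
print(result['data']['fanyi'])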