newbison posted on 2022-7-31 09:29:35

How do I deal with an SSLError in a very simple scraper?

The error is below. I already added verify=False, but it still fails. How can I fix this? Thanks.
requests.exceptions.SSLError: HTTPSConnectionPool(host='xiaohua.zol.com.cn', port=443): Max retries exceeded with url: /new/3.html (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', '', 'unsafe legacy renegotiation disabled')])")))

import requests
from lxml import etree
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"}

# Parse a listing page
def parse_page(page_url):
    resp = requests.get(page_url, headers=headers, verify=False)
    print(resp.text)

def main():
    page_url = "https://xiaohua.zol.com.cn/new/3.html"
    parse_page(page_url)

if __name__ == "__main__":
    main()

Twilight6 posted on 2022-7-31 21:04:21


Try disabling SSL verification globally:

# Globally disable certificate verification
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
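Note that "unsafe legacy renegotiation disabled" is an OpenSSL 3.x handshake policy, not a certificate problem, which is why verify=False does not help; and the global ssl override above does not reach requests, which builds its own context through urllib3. A minimal sketch of the usual workaround, mounting a custom adapter that re-enables legacy server renegotiation (the adapter name is mine, and the 0x4 flag is OpenSSL's OP_LEGACY_SERVER_CONNECT; this sketch has not been tested against this particular site):

```python
import ssl
import requests
from requests.adapters import HTTPAdapter


class LegacySSLAdapter(HTTPAdapter):
    """Transport adapter that re-enables OpenSSL legacy renegotiation,
    which OpenSSL 3.x disables by default."""

    def init_poolmanager(self, *args, **kwargs):
        ctx = ssl.create_default_context()
        # 0x4 == OP_LEGACY_SERVER_CONNECT; the named constant
        # ssl.OP_LEGACY_SERVER_CONNECT only exists on newer Pythons
        ctx.options |= 0x4
        kwargs["ssl_context"] = ctx
        return super().init_poolmanager(*args, **kwargs)


session = requests.Session()
session.mount("https://", LegacySSLAdapter())
```

After mounting, fetch pages through `session.get(...)` instead of `requests.get(...)` so the custom context is actually used.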

newbison posted on 2022-8-1 06:42:57

requests.exceptions.SSLError: HTTPSConnectionPool(host='xiaohua.zol.com.cn', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', '', 'unsafe legacy renegotiation disabled')])")))

Thanks for the reply. I added those two SSL lines, but it still doesn't work.

z5560636 posted on 2022-8-1 10:03:24

The code runs fine when I test it. Maybe you've been sending too many requests and the server is rejecting you?
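If the server really is rejecting repeated requests, pacing them helps. A minimal throttling sketch (the function, the 2-second default, and the injectable `get` parameter are illustrative choices, with `get` meant to be something like `requests.get` or a session's `get`):

```python
import time


def fetch_pages(get, page_numbers, delay=2.0):
    """Yield the HTML of each listing page, pausing between
    requests so the server is less likely to cut us off.

    `get` is a callable like requests.get; injecting it also
    makes the function easy to test without network access.
    """
    for n in page_numbers:
        url = f"https://xiaohua.zol.com.cn/new/{n}.html"
        resp = get(url, headers={"User-Agent": "Mozilla/5.0"})
        resp.raise_for_status()  # surface HTTP errors early
        yield resp.text
        time.sleep(delay)  # back off before the next request
```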