zttwm posted on 2021-9-24 03:41:12

Beginner crawler proxy-setup help: requests can't connect whenever it goes through a proxy

This post was last edited by zttwm on 2021-9-24 12:17

As the title says: I just learned web scraping and couldn't wait to use it at work, but I ran into trouble setting up proxies, so I used Baidu (https) as a test target. The proxy IPs were scraped from http://www.ip3366.net/, filtered down to the HTTPS proxies, and then tested against Baidu; any returned status code counted as the proxy working. However, with 29 proxies and 10 attempts each, every single HTTPS proxy failed to connect, without so much as a status code coming back.
I rushed to search Baidu, and the only "solution" I could find was "don't use a proxy". Don't use a proxy? Don't use a proxy!!!!


The error messages are as follows:

Traceback (most recent call last):
File "D:\test\venv\lib\site-packages\urllib3\connectionpool.py", line 696, in urlopen
    self._prepare_proxy(conn)
File "D:\test\venv\lib\site-packages\urllib3\connectionpool.py", line 964, in _prepare_proxy
    conn.connect()
File "D:\test\venv\lib\site-packages\urllib3\connection.py", line 364, in connect
    conn = self._connect_tls_proxy(hostname, conn)
File "D:\test\venv\lib\site-packages\urllib3\connection.py", line 501, in _connect_tls_proxy
    socket = ssl_wrap_socket(
File "D:\test\venv\lib\site-packages\urllib3\util\ssl_.py", line 453, in ssl_wrap_socket
    ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls)
File "D:\test\venv\lib\site-packages\urllib3\util\ssl_.py", line 495, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock)
File "C:\Users\erdd\AppData\Local\Programs\Python\Python39\lib\ssl.py", line 500, in wrap_socket
    return self.sslsocket_class._create(
File "C:\Users\erdd\AppData\Local\Programs\Python\Python39\lib\ssl.py", line 1040, in _create
    self.do_handshake()
File "C:\Users\erdd\AppData\Local\Programs\Python\Python39\lib\ssl.py", line 1309, in do_handshake
    self._sslobj.do_handshake()
FileNotFoundError: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\test\venv\lib\site-packages\requests\adapters.py", line 439, in send
    resp = conn.urlopen(
File "D:\test\venv\lib\site-packages\urllib3\connectionpool.py", line 755, in urlopen
    retries = retries.increment(
File "D:\test\venv\lib\site-packages\urllib3\util\retry.py", line 574, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='www.baidu.com', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', FileNotFoundError(2, 'No such file or directory')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\test\test2.py", line 6, in <module>
    resp=requests.get(url,headers=head,proxies=prox,verify=False)
File "D:\test\venv\lib\site-packages\requests\api.py", line 75, in get
    return request('get', url, params=params, **kwargs)
File "D:\test\venv\lib\site-packages\requests\api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
File "D:\test\venv\lib\site-packages\requests\sessions.py", line 542, in request
    resp = self.send(prep, **send_kwargs)
File "D:\test\venv\lib\site-packages\requests\sessions.py", line 655, in send
    r = adapter.send(request, **kwargs)
File "D:\test\venv\lib\site-packages\requests\adapters.py", line 510, in send
    raise ProxyError(e, request=request)
requests.exceptions.ProxyError: HTTPSConnectionPool(host='www.baidu.com', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', FileNotFoundError(2, 'No such file or directory')))

Or this:

Traceback (most recent call last):
File "D:\test\venv\lib\site-packages\urllib3\connection.py", line 174, in _new_conn
    conn = connection.create_connection(
File "D:\test\venv\lib\site-packages\urllib3\util\connection.py", line 96, in create_connection
    raise err
File "D:\test\venv\lib\site-packages\urllib3\util\connection.py", line 86, in create_connection
    sock.connect(sa)
ConnectionRefusedError: No connection could be made because the target machine actively refused it

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\test\venv\lib\site-packages\urllib3\connectionpool.py", line 696, in urlopen
    self._prepare_proxy(conn)
File "D:\test\venv\lib\site-packages\urllib3\connectionpool.py", line 964, in _prepare_proxy
    conn.connect()
File "D:\test\venv\lib\site-packages\urllib3\connection.py", line 358, in connect
    conn = self._new_conn()
File "D:\test\venv\lib\site-packages\urllib3\connection.py", line 186, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x000001CD1AB9D280>: Failed to establish a new connection: No connection could be made because the target machine actively refused it

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\test\venv\lib\site-packages\requests\adapters.py", line 439, in send
    resp = conn.urlopen(
File "D:\test\venv\lib\site-packages\urllib3\connectionpool.py", line 755, in urlopen
    retries = retries.increment(
File "D:\test\venv\lib\site-packages\urllib3\util\retry.py", line 574, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='www.baidu.com', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001CD1AB9D280>: Failed to establish a new connection: No connection could be made because the target machine actively refused it')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\test\test2.py", line 6, in <module>
    resp=requests.get(url,headers=head,proxies=prox,verify=False)
File "D:\test\venv\lib\site-packages\requests\api.py", line 75, in get
    return request('get', url, params=params, **kwargs)
File "D:\test\venv\lib\site-packages\requests\api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
File "D:\test\venv\lib\site-packages\requests\sessions.py", line 542, in request
    resp = self.send(prep, **send_kwargs)
File "D:\test\venv\lib\site-packages\requests\sessions.py", line 655, in send
    r = adapter.send(request, **kwargs)
File "D:\test\venv\lib\site-packages\requests\adapters.py", line 510, in send
    raise ProxyError(e, request=request)
requests.exceptions.ProxyError: HTTPSConnectionPool(host='www.baidu.com', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001CD1AB9D280>: Failed to establish a new connection: No connection could be made because the target machine actively refused it')))



To rule out a problem with the proxy servers themselves, I set up a proxy machine directly on my LAN and tried that instead, and I still got the errors above:

import requests

# the proxy machine on my LAN; note the https:// scheme in the proxy URL
prox = {'https': 'https://192.168.1.205:7890'}
url = 'https://www.baidu.com'   # I swapped a pile of URLs in here; as long as it's https, none of them work
head = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:92.0) Gecko/20100101 Firefox/92.0'}
resp = requests.get(url, headers=head, proxies=prox, verify=False)
print(resp.text)


That's the code...

Here's the test code I used as well:
import requests
import time
from lxml import etree

head = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:92.0) Gecko/20100101 Firefox/92.0'}
ip_porxy_list = []

# First half: collect IP and port of the HTTPS proxies from ip3366.net (print output added)
for ip_source_page in range(1, 11):
    time.sleep(1)
    url_ip_source = f'http://www.ip3366.net/?stype=1&page={ip_source_page}'
    resp_ip = requests.get(url_ip_source, headers=head)
    resp_tmp = etree.HTML(resp_ip.content)
    # walk the table row by row instead of counting flattened <td> texts
    for row in resp_tmp.xpath('/html/body/div/div/table/tbody/tr'):
        cells = [c.strip() for c in row.xpath('./td/text()')]
        if 'HTTPS' in cells:
            ip_data = cells[0]   # first column: IP address
            ip_port = cells[1]   # second column: port
            ip_proxy = f'{ip_data}:{ip_port}'
            ip_porxy_list.append(ip_proxy)
            print(ip_porxy_list)
    resp_ip.close()

# Second half: the test. Any returned status code counts as success,
# but the result was a screen full of "failed".
url = 'https://www.baidu.com'
for proxip in ip_porxy_list:
    prox = {'https': f'https://{proxip}'}
    for i in range(10):
        try:
            resp = requests.get(url, headers=head, proxies=prox, verify=False, timeout=5)
            if resp.status_code == 200:
                print(resp.text)
                print(proxip)
                resp.close()
                break
            else:
                print(resp.status_code)
                resp.close()
        except requests.exceptions.RequestException:
            print('failed')

I found the cause myself (by this point the replies had reached floor 4): the proxy URL used https://; with http:// the proxy works. But connecting to the proxy over http surely isn't secure, and since I've set up a bounty, could some expert help me figure out how to connect to the proxy over https?
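For reference, a minimal sketch of the two proxy forms, reusing the LAN address 192.168.1.205:7890 from above. The scheme in the proxy URL decides how requests talks to the proxy itself: http:// opens a plain connection and sends a CONNECT tunnel (the https:// target is still encrypted end to end), while https:// makes urllib3 attempt a TLS handshake with the proxy port itself, which only works if the proxy really serves TLS on that port (and needs urllib3 >= 1.26 for TLS-in-TLS); otherwise you get exactly the handshake / "Cannot connect to proxy" errors shown above.

import requests

url = 'https://www.baidu.com'
head = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:92.0) Gecko/20100101 Firefox/92.0'}

# Variant 1: plain-HTTP connection to the proxy (the form that worked).
# The https target is still reached through an end-to-end TLS tunnel via CONNECT;
# only the CONNECT line (host:port) is visible to the proxy.
prox_http = {'https': 'http://192.168.1.205:7890'}
print(requests.get(url, headers=head, proxies=prox_http, timeout=5).status_code)

# Variant 2: TLS to the proxy itself (an "HTTPS proxy"). Only works if the proxy
# actually serves TLS on that port with a certificate the client trusts.
prox_https = {'https': 'https://192.168.1.205:7890'}
# print(requests.get(url, headers=head, proxies=prox_https, timeout=5).status_code)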

suchocolate posted on 2021-9-24 03:41:13

zttwm posted on 2021-9-24 12:18
After discussing it privately with an expert I've more or less found the cause, but I have a new question, so I edited the question.

To verify that a proxy actually works, check whether the IP the remote server sees when you access it through the proxy is the proxy's IP, not just whether a status code comes back. The code below is one way to test a proxy:

import requests


def main():
    proxy = {'http': 'http://ip:port', 'https': 'http://ip:port'}  # fill in your proxy; the 'https' key is what gets used for an https URL
    url = 'https://my.ip.cn/api/index?ip=&type=0'
    headers = {'user-agent': 'firefox', 'Referer': 'https://my.ip.cn/', 'X-Requested-With': 'XMLHttpRequest'}
    r = requests.get(url, headers=headers, proxies=proxy)
    print(r.text)


if __name__ == '__main__':
    main()
Also, for https proxies it depends on whether the proxy you're using supports them; if it does, just change http to https according to your proxy provider's information.
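Building on that check, here is a minimal sketch that decides pass/fail automatically by comparing the origin IP the target reports with the proxy host. It assumes an IP-echo service is reachable (httpbin.org/ip returns the caller's apparent IP as JSON), and proxy_works is just a hypothetical helper name:

import requests


def proxy_works(proxy_host: str, proxy_port: int, timeout: int = 5) -> bool:
    """Return True if the target site sees the proxy's IP as the request origin."""
    proxy_url = f'http://{proxy_host}:{proxy_port}'
    proxies = {'http': proxy_url, 'https': proxy_url}
    try:
        # httpbin.org/ip echoes the caller's apparent IP as JSON: {"origin": "..."}
        r = requests.get('https://httpbin.org/ip', proxies=proxies, timeout=timeout)
        return proxy_host in r.json().get('origin', '')
    except requests.exceptions.RequestException:
        return False


if __name__ == '__main__':
    # example: the LAN proxy from the original post
    print(proxy_works('192.168.1.205', 7890))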

wp231957 posted on 2021-9-24 06:44:26

Free proxies are almost all unusable, so if you want to play with proxies you'll have to spend money.

zttwm posted on 2021-9-24 08:48:31

wp231957 posted on 2021-9-24 06:44
Free proxies are almost all unusable, so if you want to play with proxies you'll have to spend money.

Paid ones are just the same; my self-built proxy server doesn't work either. Put it in the browser and it runs fine, but as soon as the crawler uses it there's no response.

suchocolate posted on 2021-9-24 10:13:42

"Any returned status code counts as the proxy working": that's not necessarily true. And judging from the errors, the proxy simply isn't working.
Is your self-built proxy on the same LAN?

zttwm posted on 2021-9-24 12:18:15

suchocolate posted on 2021-9-24 10:13
"Any returned status code counts as the proxy working": that's not necessarily true. And judging from the errors, the proxy simply isn't working.
Is your self-built proxy on the same ...

After discussing it privately with an expert I've more or less found the cause, but I have a new question, so I edited the question.