鱼C论坛

 找回密码
 立即注册
查看: 2392|回复: 2

爬虫访问HTTPS网站的出错urlopen error [Errno 11001] getaddrinfo failed

[复制链接]
发表于 2020-2-3 18:19:02 | 显示全部楼层 |阅读模式

马上注册,结交更多好友,享用更多功能^_^

您需要 登录 才可以下载或查看,没有账号?立即注册

x
本帖最后由 nottoday0927 于 2020-2-3 19:36 编辑

按照小甲鱼的视频写了段爬取图片的代码
因为煎蛋好像是加密了 所以换了个有规律的网站爬猫猫图

检查过能过正确获取图片地址,网上说是由于https的加密无法被requests请求成功
想问问各位大佬!有什么办法可以解决???

代码如下:

import urllib.request
import os
import random

def open_url(url):
    req = urllib.request.Request(url)
    req.add_header('User-Agent','Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36')
   
##    iplist =['39.137.69.10:80','59.56.28.198:80','39..106.223.134:80','39.1137.63.10:8080']
##    ip = random.choice(iplist)
##    proxy_support = urllib.request.ProxyHandler({'http:':ip})
##    opener= urllib.request.build_opener(proxy_support)
##    opener.open(url)
   
    response = urllib.request.urlopen(url)
    html = response.read()
    return html

      
def get_page(url):
    html = open_url(url).decode('utf-8')
   
    a = html.find('page-cur') + 10
    b = html.find('<',a) #页码
   
    return (html[a:b])
   
def find_imgs(page_url):
    html = open_url(page_url).decode('utf-8')
    img_addrs = []

    a = html.find('img src=')#找不到find会返回-1
    #b = html.find('.jpg',a,a+255)
   
    while a != -1:#若存在img
        b = html.find('.jpg',a,a+255)
        if b != -1:#若存在jpg
            img_addrs.append('https://www.'+html[a+11:b+4])
        else:
            b = a + 11

        a = html.find('img src=',b)#寻找下一个jpg,起始位置在先前的jpg之后
        
##        for each in img_addrs:
##            print(each)
        
        return img_addrs

def save_imgs(folder,img_addrs):
    for each in img_addrs:
        filename = each.split('/')[-1]
        with open(filename,'wb') as f:
            img = open_url(each)
            f.write(img)

def downloadcat(folder = 'CAT',pages = 3):
    os.chdir('E://study//python')
    os.mkdir(folder)

    url = 'https://www.ivsky.com/tupian/mao_t124/'
    page_num = int(get_page(url))

    for i in range(pages):
        page_num -= i
        page_url = url +'index_'+ str(page_num) +'.html'
        img_addrs = find_imgs(page_url)#将图片地址保存
        save_imgs(folder,img_addrs)

if __name__ == '__main__':
    downloadcat()
        


错误原因:
Traceback (most recent call last):
  File "F:\python3\lib\urllib\request.py", line 1318, in do_open
    encode_chunked=req.has_header('Transfer-encoding'))
  File "F:\python3\lib\http\client.py", line 1239, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "F:\python3\lib\http\client.py", line 1285, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "F:\python3\lib\http\client.py", line 1234, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "F:\python3\lib\http\client.py", line 1026, in _send_output
    self.send(msg)
  File "F:\python3\lib\http\client.py", line 964, in send
    self.connect()
  File "F:\python3\lib\http\client.py", line 1392, in connect
    super().connect()
  File "F:\python3\lib\http\client.py", line 936, in connect
    (self.host,self.port), self.timeout, self.source_address)
  File "F:\python3\lib\socket.py", line 704, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
  File "F:\python3\lib\socket.py", line 745, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\study\python\downloadcat.py", line 70, in <module>
    downloadcat()
  File "E:\study\python\downloadcat.py", line 67, in downloadcat
    save_imgs(folder,img_addrs)
  File "E:\study\python\downloadcat.py", line 53, in save_imgs
    img = open_url(each)
  File "E:\study\python\downloadcat.py", line 15, in open_url
    respondse = urllib.request.urlopen(url)
  File "F:\python3\lib\urllib\request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "F:\python3\lib\urllib\request.py", line 526, in open
    response = self._open(req, data)
  File "F:\python3\lib\urllib\request.py", line 544, in _open
    '_open', req)
  File "F:\python3\lib\urllib\request.py", line 504, in _call_chain
    result = func(*args)
  File "F:\python3\lib\urllib\request.py", line 1361, in https_open
    context=self._context, check_hostname=self._check_hostname)
  File "F:\python3\lib\urllib\request.py", line 1320, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 11001] getaddrinfo failed>


小甲鱼最新课程 -> https://ilovefishc.com
回复

使用道具 举报

发表于 2020-2-3 19:15:49 | 显示全部楼层
  1. import requests
  2. from lxml import etree

  3. url="https://www.ivsky.com/tupian/mao_t124/"
  4. headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3947.100 Safari/537.36"}
  5. reponse=requests.get(url=url,headers=headers)
  6. print(reponse)
复制代码


能返回200啊
e:\>python ex20.py
<Response [200]>
小甲鱼最新课程 -> https://ilovefishc.com
回复 支持 反对

使用道具 举报

发表于 2020-2-4 19:46:39 | 显示全部楼层
建议学习一下web,大概了解一下网页结构
小甲鱼最新课程 -> https://ilovefishc.com
回复 支持 反对

使用道具 举报

您需要登录后才可以回帖 登录 | 立即注册

本版积分规则

小黑屋|手机版|Archiver|鱼C工作室 ( 粤ICP备18085999号-1 | 粤公网安备 44051102000585号)

GMT+8, 2025-10-2 04:35

Powered by Discuz! X3.4

© 2001-2023 Discuz! Team.

快速回复 返回顶部 返回列表