954321 posted on 2020-3-23 21:30:09

A mzitu.com image scraper I wrote myself. My IP got banned earlier for crawling too fast, so I added delays inside the loops to slow it down and keep it stable.

import requests
import re
import time

headers = {
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.116 Safari/537.36'
      }

# Process a listing page
def listpage(page):
    url = f'https://www.mzitu.com/page/{page}/'  # f-string so the page number is filled in
    response = requests.get(url, headers=headers)
    # Extract the detail-page URLs
    detailurl = re.findall('<a href="(.*?)" target="_blank">', response.text)
    return detailurl

# Process a detail page (each gallery spreads its images over numbered sub-pages)
def detailpage(url):
    for page_no in range(1, 6):
        url2 = url + "/" + str(page_no)
        response = requests.get(url2, headers=headers)
        images = re.findall('<img src="(.*?)" alt', response.text)
        for img in images:
            print(img)
            time.sleep(2)  # throttle so the IP does not get banned again
            savedata(img)
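The two `re.findall` patterns used in `listpage` and `detailpage` can be sanity-checked offline against a small HTML snippet; the sample markup below is made up for illustration and only mimics the tags the regexes target:

```python
import re

# Hypothetical markup shaped like the listing and detail pages
sample = ('<a href="https://www.mzitu.com/12345" target="_blank">gallery</a>\n'
          '<img src="https://i.example.com/pic.jpg" alt="pic">')

# Same patterns as in listpage() and detailpage()
links = re.findall('<a href="(.*?)" target="_blank">', sample)
images = re.findall('<img src="(.*?)" alt', sample)
```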

# Bypass the site's hotlink protection by sending a Referer header
picreferer = {
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.116 Safari/537.36',
'referer': 'https://www.mzitu.com/mm/'
}
# Save an image to disk
def savedata(img):
    filename = 'D:/图片/' + img.split('/')[-1]
    print(filename)
    response = requests.get(img, headers=picreferer)
    with open(filename, 'wb') as f:
        f.write(response.content)

def main():
    for page in range(10, 12):  # crawls pages 10 and 11 (range end is exclusive); change as needed
        time.sleep(1)
        res = listpage(page)
        for url in res:
            time.sleep(1)  # pause between galleries to stay under the rate limit
            detailpage(url)

# Entry point
if __name__ == '__main__':
    main()
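The fixed `time.sleep(1)` and `time.sleep(2)` pauses above keep a constant pace; randomized exponential backoff spreads requests out less predictably, which is gentler on the server. A minimal sketch, where the `backoff_delays` helper is my own and not part of the original script:

```python
import random

def backoff_delays(base=1.0, factor=2.0, retries=4, jitter=0.5):
    """Yield increasing delays (base, base*factor, ...) plus random jitter."""
    for attempt in range(retries):
        yield base * (factor ** attempt) + random.uniform(0, jitter)

# Usage sketch: sleep for each delay between retried requests
# for delay in backoff_delays():
#     time.sleep(delay)
#     ... retry the request ...
```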

954321 posted on 2020-3-24 12:21:24

D:/图片/
If you don't have this folder, you need to create it first.
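Alternatively, the script can create that folder itself at startup; a minimal sketch, where the `ensure_dir` helper name is my own and not in the original script:

```python
import os

def ensure_dir(path):
    """Create the save folder (and any parents) if it does not already exist."""
    os.makedirs(path, exist_ok=True)  # exist_ok avoids an error when it is already there
    return path

# Call once before crawling so savedata() can write files:
# ensure_dir('D:/图片/')
```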