I just finished the lesson on scraping images from jandan.net (煎蛋网). The site has changed since 小甲鱼 recorded the video, so a lot of things no longer match the tutorial and I didn't follow his code exactly; I also feel the modularization still isn't done well.
I found a proxy IP on my own. Since I'm only using a single IP, I added the time module to pause between requests and avoid getting banned.
import requests
import bs4
import time
import os

# Request headers
headers = {}
headers['User-Agent'] = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36'

# Proxy IP (give both schemes a key, otherwise the http:// pages below bypass the proxy)
proxies = {'http': '175.146.211.249:9999', 'https': '175.146.211.249:9999'}

# Starting URL
url = 'http://i.jandan.net/ooxx'

# The current jandan URLs no longer carry a "page=N" parameter.
# Get the next page's address: every page's source contains a link to the older page.
def get_site(url):
    response = requests.get(url, headers=headers, proxies=proxies)
    soup = bs4.BeautifulSoup(response.text, 'html.parser')
    item = soup.find('a', title='Older Comments')
    link = item.get('href')          # href is protocol-relative ("//..."), so prepend the scheme
    return 'http:' + link

# Download every image on one page; the image addresses are in the page source.
def download_mm(url):
    response = requests.get(url, headers=headers, proxies=proxies)
    soup = bs4.BeautifulSoup(response.text, 'html.parser')
    items = soup.select('div.commenttext a')
    for each in items:
        link = each['href']
        res = requests.get('http:' + link, headers=headers, proxies=proxies)
        mm_img = res.content
        with open('mm-{}.jpg'.format(link[-9:-5]), 'wb') as f:
            f.write(mm_img)

folder = input('Enter a folder name: ')
os.mkdir(folder)
os.chdir(folder)
n = int(input('How many pages of images to crawl: '))
for i in range(n):
    download_mm(url)
    url = get_site(url)
    time.sleep(1)    # pause between pages so the single IP is less likely to get banned
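
Since the script runs everything through one proxy IP with a fixed one-second pause, a natural next step is to spread requests over several proxies and randomize the delay. Below is a minimal sketch of that idea; the helper name fetch and the proxy_pool list are made up for illustration, and the addresses in it are placeholders, not working proxies.

import random
import time
import requests

# Placeholder proxy addresses -- substitute real ones you have found yourself.
proxy_pool = [
    '175.146.211.249:9999',
    '123.123.123.123:8888',
]

def fetch(url, headers):
    # Pick a proxy at random for this request.
    ip = random.choice(proxy_pool)
    proxies = {'http': ip, 'https': ip}
    response = requests.get(url, headers=headers, proxies=proxies, timeout=10)
    # A 1-3 second random pause looks less regular than a fixed sleep(1).
    time.sleep(random.uniform(1, 3))
    return response

get_site and download_mm could then call fetch() instead of requests.get directly, so every page and image request goes out through a randomly chosen proxy with a randomized pause.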