小甲鱼's tutorial was recorded several years ago, so forum members (鱼油) currently working through Lesson 56, the jandan.net crawler, will run into quite a few problems. I found two errors in the video in total.
The first error is a certificate-verification failure that appeared after the Sina (新浪) servers were upgraded; I already posted the fix for that one in my previous thread: 煎蛋网爬虫错误1 (jandan.net crawler error #1)
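(For reference only: the usual workaround for that kind of certificate error is to hand urlopen an unverified SSL context. This is my assumption, not necessarily the exact fix used in the linked post; see that thread for details.)

import ssl
import urllib.request

# Assumption: common workaround for the certificate-verification error;
# the linked post may use a different approach.
context = ssl._create_unverified_context()   # skip certificate checks
response = urllib.request.urlopen('https://jandan.net/ooxx', context=context)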
Now for the second error:
If you write the code exactly as shown in 小甲鱼's video tutorial, you will inevitably get a "403 Forbidden" response. This is caused by a small oversight in the tutorial. Let's look at the faulty code:
import urllib.request
import os

def url_open(url):
    req = urllib.request.Request(url)
    req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0')
    response = urllib.request.urlopen(url)
    html = response.read()
    return html

def get_page(url):
    html = url_open(url).decode('utf-8')
    a = html.find('current-comment-page') + 23
    b = html.find(']', a)
    print('当前页面编号为:%d' % int(html[a:b]))
    return html[a:b]

def find_imgs(url):
    html = url_open(url).decode('utf-8')
    img_addrs = []

    a = html.find('img src=')
    while a != -1:
        b = html.find('.jpg', a, a + 100)
        if b != -1:
            img_addrs.append(html[a+9:b+4])
        else:
            b = a + 9
        a = html.find('img src=', b)
    for each in img_addrs:
        print(each)
    return img_addrs

def save_imgs(folder, img_addrs):
    for each in img_addrs:
        filename = each.split('/')[-1]
        with open(filename, 'wb') as f:
            img = url_open(each)
            f.write(img)

def downloadgirls(folder='girls', pages=10):
    os.mkdir(folder)
    os.chdir(folder)
    url = 'https://jandan.net/ooxx'
    page_num = int(get_page(url))
    for i in range(pages):
        page_num -= i
        page_url = url + '/page-' + str(page_num) + '#comments'
        img_addrs = find_imgs(page_url)
        save_imgs(folder, img_addrs)

if __name__ == '__main__':
    downloadgirls()
Did you spot the problem?
Here is the fix:
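As far as the code above shows, the oversight is in url_open(): the Request object req is created and given a User-Agent header, but urlopen() is then called with the bare url string, so the header is never actually sent, and jandan.net rejects the anonymous request with 403. A minimal corrected url_open() (only this function changes; the rest of the script stays as above):

def url_open(url):
    req = urllib.request.Request(url)
    req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0')
    # Pass the Request object (which carries the User-Agent header),
    # not the bare URL string, so the server sees a browser-like request.
    response = urllib.request.urlopen(req)
    html = response.read()
    return html

With this change the User-Agent header actually reaches the server, and the "403 Forbidden" error should disappear.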