[Requesting featured] The second error in 小甲鱼's scraper tutorial, and how to fix it
小甲鱼's tutorial was recorded several years ago, so 鱼油 (forum members) working through Lesson 56 — scraping jandan.net — will run into quite a few problems. I found two errors in the video in total. The first is a certificate-check error that appeared after Sina upgraded its servers; I already posted the fix in an earlier thread, linked here: 煎蛋网爬虫错误1
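For readers who can't follow that link: I'm summarizing from memory rather than re-checking the earlier thread, but the workaround commonly used for that kind of certificate error is to hand urlopen an SSL context with verification disabled (treat this as an assumption about what the earlier post covers, not a quote from it):

```python
import ssl
import urllib.request

# Common workaround for a "CERTIFICATE_VERIFY_FAILED" error (assumption:
# this is the certificate-check error the earlier post deals with).
# Disabling verification is acceptable for a classroom exercise,
# but unsafe for anything real.
context = ssl._create_unverified_context()

# Usage (needs network access, so left commented out):
# html = urllib.request.urlopen('https://jandan.net/ooxx', context=context).read()
```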
Now for the second error:
If you write the code exactly as in 小甲鱼's video, you will inevitably hit a "403 Forbidden". This comes from a small oversight on 小甲鱼's part. Let's look at the problematic code (the forum software ate the bracketed slice expressions and the indentation in the original post; both are restored below):
import urllib.request
import os

def url_open(url):
    req = urllib.request.Request(url)
    req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0')
    response = urllib.request.urlopen(url)
    html = response.read()
    return html

def get_page(url):
    html = url_open(url).decode('utf-8')
    # locate the current page number between '[' and ']'
    a = html.find('current-comment-page') + 23
    b = html.find(']', a)
    print('Current page number: %d' % int(html[a:b]))
    return html[a:b]

def find_imgs(url):
    html = url_open(url).decode('utf-8')
    img_addrs = []
    a = html.find('img src=')
    while a != -1:
        b = html.find('.jpg', a, a + 100)
        if b != -1:
            # slice from just past 'img src="' through '.jpg'
            img_addrs.append(html[a+9:b+4])
        else:
            b = a + 9
        a = html.find('img src=', b)
    for each in img_addrs:
        print(each)
    return img_addrs

def save_imgs(folder, img_addrs):
    for each in img_addrs:
        filename = each.split('/')[-1]
        with open(filename, 'wb') as f:
            img = url_open(each)
            f.write(img)

def downloadgirls(folder='girls', pages=10):
    os.mkdir(folder)
    os.chdir(folder)
    url = 'https://jandan.net/ooxx'
    page_num = int(get_page(url))
    for i in range(pages):
        page_num -= i
        page_url = url + '/page-' + str(page_num) + '#comments'
        img_addrs = find_imgs(page_url)
        save_imgs(folder, img_addrs)

if __name__ == '__main__':
    downloadgirls()
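A side note before the reveal: the magic number 23 in get_page is easy to misread. jandan's page contains a fragment shaped roughly like `<span class="current-comment-page">[137]</span>` (illustrative — the live markup may differ), and `find('current-comment-page') + 23` skips the 20 characters of the class name plus `">[` to land on the first digit, while `find(']', a)` marks the end. A quick demo on a mock fragment:

```python
# Mock fragment in the shape jandan's page used at the time (assumed, not fetched live).
html = '<span class="current-comment-page">[137]</span>'

# len('current-comment-page') == 20, and '">[' adds 3 more characters,
# so +23 lands exactly on the first digit of the page number.
a = html.find('current-comment-page') + 23
b = html.find(']', a)          # the bracket closing the page number
print(int(html[a:b]))          # → 137
```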
Did you spot the problem?
Here is the fix:
**** Hidden Message *****
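The solution itself sits behind the forum's hidden-reply gate, so here is my own sketch of what it almost certainly says: url_open builds a Request and attaches a browser User-Agent, but then opens the bare url string, so the header never goes out and jandan.net answers 403. The one-line fix is to open req instead:

```python
import urllib.request

def url_open(url):
    req = urllib.request.Request(url)
    req.add_header('User-Agent',
                   'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) '
                   'Gecko/20100101 Firefox/49.0')
    # The fix: pass the Request object, not the bare url string.
    # urlopen(url) would build a fresh Request without our header,
    # so the site sees Python's default User-Agent and returns 403.
    response = urllib.request.urlopen(req)
    return response.read()
```

With that single change, the 403 goes away; the rest of the script can stay as written.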
zxr951211 (2016-12-13 07:16): Just ran into this problem myself — could you take a look at it?
老王他师父 (2016-12-13 12:45): The filename is probably the issue; the file can't be saved/written.
zxr951211: I was away from the forum for a while — I actually found the answer in another thread that same afternoon; it was a remaining problem in my own code. Thanks anyway!

Errors readers reported in the replies:

Traceback (most recent call last):
  File "C:\Users\samash\Desktop\1.py", line 37, in <module>
    downloadgirls()
  File "C:\Users\samash\Desktop\1.py", line 27, in downloadgirls
    page_num = int(get_page(url))
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'

Traceback (most recent call last):
  File "E:\workspace\py\download_pic.py", line 74, in <module>
    download_pic()
  File "E:\workspace\py\download_pic.py", line 65, in download_pic
    page_num = int(get_page(url))
  File "E:\workspace\py\download_pic.py", line 14, in get_page
    html = url_open().decode('utf-8')
  File "E:\workspace\py\download_pic.py", line 6, in url_open
    req = urllib.request.Request(url)
NameError: name 'url' is not defined