[Help] How do I fix the 403 error when scraping girl pics from Jandan (jandan.net)?
import urllib.request
import os
import base64
import re

def get_page(url):
    req = urllib.request.Request(url)
    req.add_header('User-Agent:', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36')
    response = urllib.request.urlopen(url)
    html = response.read().decode('utf-8')
    a = r'<span class="current-comment-page">[(\d+)]</span>'
    page_list = re.findall(a, html)
    print(page_list)

def base_num(page_num):
    pass

def find_pic_url(page_url):
    pass

def save_pic(folder, pic_url):
    pass

def download_mm(folder='mm', pages=10):
    # create a folder
    os.mkdir(folder)
    os.chdir(folder)
    url = 'http://jandan.net/ooxx'
    # get the current page number
    page_num = int(get_page(url))
    for i in range(pages):
        page_num -= i
        # base64-encode the page number
        base_num(page_num)
        # build the URL for that page
        page_url = url + str(base_num) + '#comments'
        # get the image URLs as a list and save the pictures
        pic_url = find_pic_url(page_url)
        save_pic(folder, pic_url)

if __name__ == '__main__':
    download_mm()
This is the code. When I test the first function it immediately returns 403. Can someone show me how to fix it?

This post was last edited by Twilight6 on 2020-8-10 17:01.
Change response = urllib.request.urlopen(url) to response = urllib.request.urlopen(req),
and remove the colon from the User-Agent string in the header (the header name should be 'User-Agent', not 'User-Agent:'):
def get_page(url):
    req = urllib.request.Request(url)
    req.add_header('User-Agent',
                   'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36')
    response = urllib.request.urlopen(req)
    html = response.read().decode('utf-8')
    a = r'<span class="current-comment-page">[(\d+)]</span>'
    page_list = re.findall(a, html)
    print(page_list)
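For anyone hitting the same 403, here is a minimal standalone sketch of the same idea. It passes the headers dict straight to Request instead of calling add_header (the two are equivalent); the with statement and the trailing print are my own additions for a quick test, not part of the reply above.

import urllib.request

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                  'AppleWebKit/537.36 (KHTML, like Gecko) '
                  'Chrome/84.0.4147.105 Safari/537.36'
}
# Build the Request with the headers attached, then open the Request object
# itself; opening the bare url string ignores those headers and triggers the 403.
req = urllib.request.Request('http://jandan.net/ooxx', headers=headers)
with urllib.request.urlopen(req) as response:
    html = response.read().decode('utf-8')

print(html[:200])  # quick sanity check that a page came back instead of an error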
A lot of people on the forum have this exact problem: they build req but then never actually use it. What code is everyone using as a template?

Twilight6 posted on 2020-8-10 16:56:
Change response = urllib.request.urlopen(url) to response = urllib.request.urlopen(req),
as well as the he ...
My regular expression is fine, isn't it? So why is it still throwing an error?
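One thing worth checking (just a guess at what the error is): inside a regular expression, unescaped square brackets define a character class, so [(\d+)] matches a single character rather than the bracketed page number. If the page still renders the number as [56] inside that span (an assumption on my part, I have not re-checked the current HTML), the brackets would need escaping, roughly like this:

import re

# Hypothetical snippet standing in for the real page source.
html = '<span class="current-comment-page">[56]</span>'

# Unescaped brackets form a character class and never match the bracketed number.
print(re.findall(r'<span class="current-comment-page">[(\d+)]</span>', html))    # []

# Escaping the brackets captures the page number as intended.
print(re.findall(r'<span class="current-comment-page">\[(\d+)\]</span>', html))  # ['56']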