Hi fellow fish friends, I've been following 小甲鱼's Python videos and reached the crawler section. Modeling it on 小甲鱼's code, I wrote the simplest possible crawler to grab images from the meizitu site, but it always crashes at the step where the image is saved:

    with open(filename,'wb') as f:
FileNotFoundError: [Errno 2] No such file or directory: '2015a/05/24/limg.jpg'
I searched around online; someone said the path passed to open() has to be an absolute path, so I tried switching to an absolute path, but it still fails. Hoping an expert can explain!
The crawler code is below, about as simple as it gets:
# Crawl images from the meizitu site
import requests
import os

head = {}
head['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'
url = "https://meizitu.com/a/4510.html"

def GetHtml(url):
    req = requests.get(url, headers=head)
    html = req.text
    return html

def FindImgsAddr(html):
    ImgAddr = []
    a = html.find('img src=')
    while a != -1:
        b = html.find('.jpg', a, a+255)
        if b != -1:
            ImgAddr.append(html[a+9:b+4])
        else:
            b = a + 9
        a = html.find("img src=", b)
    return ImgAddr

#print(FindImgsAddr(GetHtml(url)))  # print the image addresses

def DownloadImgs():
    Imgaddr = FindImgsAddr(GetHtml(url))
    for each in Imgaddr:
        #print(each)  # check whether the image address is correct
        filename = each.split("uploads/")[-1]
        #print(filename)  # the image's file name
        with open(filename, 'wb') as f:
            img = requests.get(each)
            f.write(img.content)
        print(filename + " saved")

if __name__ == '__main__':
    DownloadImgs()

'''
Error report:
    with open(filename,'wb') as f:
FileNotFoundError: [Errno 2] No such file or directory: '2015a/05/24/limg.jpg'
'''
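A likely cause, judging from the traceback: the filename '2015a/05/24/limg.jpg' contains directory components, and open(..., 'wb') will never create missing parent directories for you, whether the path is relative or absolute. That is why switching to an absolute path did not help. A minimal sketch of one possible fix, using a hypothetical helper save_bytes (not from the original code) that creates the directories with os.makedirs before opening the file:

```python
import os
import tempfile

def save_bytes(path, data):
    # Hypothetical helper: open() won't create missing parent
    # directories, so create them first with os.makedirs.
    parent = os.path.dirname(path)
    if parent:
        os.makedirs(parent, exist_ok=True)  # no error if dirs already exist
    with open(path, 'wb') as f:
        f.write(data)

# Demo in a temporary directory so nothing is left behind.
with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, '2015a/05/24/limg.jpg')
    save_bytes(target, b'demo-bytes')
    print(os.path.exists(target))  # True
```

Alternatively, if you don't care about keeping the date-based folder structure, flattening the name with filename = each.split('/')[-1] would also avoid the error, since the file then lands directly in the current working directory.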