This post was last edited by 加冰青柠檬 on 2020-4-3 14:48.
This is my first post. I've been following 小甲鱼's courses for a while now. I originally started learning Python scraping for a university mathematical modeling competition; after getting hooked, I reviewed C with him, then went through his data structures and algorithms course, and then the zero-to-beginner Python course. Now I've finally come back to my original goal and scraped the recommended images from the mzitu site. This is the first scraper I've completed on my own. The pictures themselves aren't the point; I mainly want to trade notes on technique with everyone, so please point out anything I've gotten wrong.
Module list:
requests
BeautifulSoup
lxml
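The two third-party modules above (plus the lxml parser backend) can be installed with pip. This setup step is not from the original post; note that the package providing `bs4` is published as `beautifulsoup4`:

```shell
pip install requests beautifulsoup4 lxml
```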
import requests
from bs4 import BeautifulSoup
import os

def get_home_url(home_url):
    # Fetch one listing page and return the lazy-loaded <img> tags
    headers = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.100 Safari/537.36"}
    res = requests.get(home_url, headers=headers)
    soup = BeautifulSoup(res.text, "lxml")
    soup_mm_url = soup.select(".postlist #pins .lazy")
    return soup_mm_url

def get_mm_url(soup_mm_url):
    # Map each image's alt text (used later as the filename) to its real URL
    load_url = {}
    for count in soup_mm_url:
        load_url[count.get("alt")] = count.get("data-original")
    save_img(load_url)

def os_op():
    # Create the download directory and switch into it
    # (exist_ok avoids a crash when the directory already exists)
    os.makedirs("XXOO_max", exist_ok=True)
    os.chdir("XXOO_max")

def save_img(load_url):
    for i in load_url:
        url = load_url[i]
        # The referer header is needed so the image server accepts the request
        headers_mm = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.100 Safari/537.36",
                      "referer": "https://www.mzitu.com/"}
        resp = requests.get(url, headers=headers_mm)
        filename = i + ".jpg"
        with open(filename, "wb") as fp:
            fp.write(resp.content)

def main():
    os_op()
    for page in range(2, 10):
        home_url = "https://www.mzitu.com/page/" + str(page)
        print("Scraping images from page %d" % page)
        print(home_url)
        soup_mm_url = get_home_url(home_url)
        get_mm_url(soup_mm_url)

if __name__ == "__main__":
    main()
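To make the selector step concrete, here is a standalone sketch of what `soup.select(".postlist #pins .lazy")` does, run against a tiny hand-written HTML snippet. The class and id names follow the selector used above; the real page's markup may differ, and the stdlib-backed `html.parser` stands in for `lxml` so the example needs no extra dependency:

```python
from bs4 import BeautifulSoup

# Minimal HTML mimicking the structure the scraper targets (invented for
# illustration; not copied from the real site).
html = """
<div class="postlist">
  <ul id="pins">
    <li><img class="lazy" alt="gallery-1" data-original="https://example.com/1.jpg"></li>
    <li><img class="lazy" alt="gallery-2" data-original="https://example.com/2.jpg"></li>
  </ul>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
# CSS selector: elements with class "lazy" inside id "pins" inside class "postlist"
imgs = soup.select(".postlist #pins .lazy")
load_url = {img.get("alt"): img.get("data-original") for img in imgs}
print(load_url)
# → {'gallery-1': 'https://example.com/1.jpg', 'gallery-2': 'https://example.com/2.jpg'}
```

This is exactly the dictionary that `get_mm_url` builds before handing it to `save_img`.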
One more thing: after the scrape finished, my laptop's internet connection died, ??? — surely my router didn't worry itself sick... I'm on my phone's hotspot right now. Does anyone here have a good fix?
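On the connection dropping: one guess (purely a guess, not a diagnosis) is that firing requests back-to-back can trip server-side limits or strain the home connection. A common mitigation is a small random delay between downloads. Here is a minimal sketch; `polite_get` is a hypothetical helper name of mine, not something from the script above:

```python
import time
import random

def polite_get(session, url, min_delay=0.5, max_delay=2.0, **kwargs):
    # Sleep a random interval first so successive requests are spaced out
    time.sleep(random.uniform(min_delay, max_delay))
    # A timeout keeps one stalled download from hanging the whole run
    return session.get(url, timeout=10, **kwargs)
```

In `save_img`, replacing the bare `requests.get(url, headers=headers_mm)` with a shared `requests.Session()` plus this wrapper would space out the downloads and reuse one connection.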