Enough talk, screenshots first:

While downloading a GHO system image, I came across a site called 技术员联盟 (http://www.jsgho.net). It has an image tab, and when I opened it, wow, nothing but pretty girls (clothed ones). I had just been learning web crawling, so it seemed like good practice. After poking around, I found that all of the pictures live under www.jsgho.net/image/. Checking the page source, each image URL sits in the <img src> attribute inside a <li></li> tag. One small catch: the addresses are relative, so after extracting them you have to prepend the site address. Time to get to work. (The full list of page URLs is attached.) I'm new to this and the code is rough, so please point out anything I got wrong. Thanks.
import requests
from bs4 import BeautifulSoup
import re
import os


def getHTMLText(url):
    """Fetch a page and return its text, or '' if the request fails."""
    try:
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                                 '(KHTML, like Gecko) Chrome/78.0.3904.97 Safari/537.36'}
        r = requests.get(url, headers=headers)
        r.raise_for_status()
        r.encoding = 'utf-8'
        return r.text
    except requests.RequestException:
        return ''
def get_img(html):
    """Save every .jpg referenced by an <li><img src=...> tag on the page."""
    if not html:
        return
    # The src values in the page source are relative paths such as /image/xxx.jpg
    imglist = re.findall(r'<li><img src="([^"]*\.jpg)"', html)
    # Use the page title as the folder name for this gallery
    soup = BeautifulSoup(html, 'lxml')
    name = soup.title.text
    path = os.path.join(r"D:\Pictures", name)
    os.makedirs(path, exist_ok=True)
    os.chdir(path)
    for each in imglist:
        filename = each.split('/')[-1]
        # Prepend the site address, because the extracted src is only a relative path
        each = "http://www.jsgho.net" + each
        r = requests.get(each)
        with open(filename, 'wb') as f:
            f.write(r.content)
def main(url):
    html = getHTMLText(url)
    get_img(html)


# sitemap.txt holds one gallery page URL per line
with open("C://Users//Administrator//Desktop//sitemap.txt") as f:
    urls = f.readlines()

for url in urls:
    url = url.strip('\n')
    main(url)
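Since the parsing step relies on a hand-written regex, here is a minimal alternative sketch that pulls the same <li><img src> URLs with BeautifulSoup and joins the relative paths with urljoin instead of string concatenation. It assumes the gallery pages keep the layout described above; the function name extract_img_urls and the base parameter are just illustrative, not part of the original script.

# A sketch only: same extraction as the re.findall call in get_img, assuming
# the pages keep the <li><img src="..."> structure described in the post.
from urllib.parse import urljoin

from bs4 import BeautifulSoup


def extract_img_urls(html, base="http://www.jsgho.net"):
    soup = BeautifulSoup(html, 'lxml')
    urls = []
    for img in soup.select('li img[src]'):      # every <img> that sits inside an <li>
        src = img['src']
        if src.lower().endswith('.jpg'):        # keep only the .jpg files, like the regex
            urls.append(urljoin(base, src))     # /image/a.jpg -> http://www.jsgho.net/image/a.jpg
    return urls

If you swapped this in for the re.findall line in get_img, the download loop would no longer need to prepend the site address, since urljoin already returns absolute URLs.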