Is there any way to make the downloads faster?
As the title says.
With 6 threads running, the CPU is pegged at 100%, yet downloading 20 ebooks takes 3 minutes at best.
That seems far too long on a 300M broadband line; downloading a single book by hand is basically instant.
Hoping someone experienced can point me in the right direction: what do I need to do to speed this up?
Code below:
import requests
import parsel
from concurrent.futures import ThreadPoolExecutor as pool
def get_link(url, headers):
    target = "http://www.qishus.com/"
    res = requests.get(url, headers=headers)
    res.encoding = "gbk"
    res = parsel.Selector(res.text)
    # return the detail-page links prefixed with the site root
    # (NOTE: the selector below is a placeholder guess; adjust it to the
    # list page's actual link layout)
    return [target + href for href in res.xpath('//a/@href').getall()]
def get_txt(link):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Safari/537.36',
    }
    ress = requests.get(link, headers=headers)
    ress.encoding = "gbk"
    ress = parsel.Selector(ress.text)
    # TXT download address
    txt = ress.xpath('//*[@id="downAddress"]/a/@href').get()
    # print(txt)
    # ebook title; [:-7] drops the page title's fixed 7-character suffix
    title = ress.xpath('//*[@id="downInfoArea"]/dt/text()').get()
    # print(title[:-7])
    # author name -- getall() returns a list, so take the first entry and
    # strip it (concatenating the list itself to a string raises a TypeError;
    # assuming the author is the first matching <dd> text)
    name = ress.xpath('//dd[@class="downInfoRowL"]/text()').getall()[0].strip()
    # print(name)
    # book title (unused below)
    bookname = title[:-7] + ".txt"
    # book title and author name combined
    filename = title[:-7] + '-' + name + ".txt"
    print(filename)
    # download the TXT ebook to disk
    with open('./电子书/' + filename, "wb") as f:
        f.write(requests.get(txt).content)
    # replace part of the ebook's content: swap the last line for the site tagline
    s = '更多精彩,更多好书,尽在恒星文学'
    # read the file back as GBK (assumption: the downloaded TXT is GBK-encoded,
    # like the site's pages)
    with open('./电子书/' + filename, "r", encoding="gbk", errors="ignore") as f:
        lists = f.readlines()
    lists[-1] = s
    # for each in lists:
    #     with open('./电子书2/' + filename, "a+") as f:
    #         f.write(str(each))
    # print(f"{title[:-7]} -----> converted!")
    # rewrite the file
    with open('./电子书/' + filename, "w+", encoding="gbk", errors="ignore") as f:
        f.writelines(lists)
if __name__ == "__main__":
    url = "http://www.qishus.com/xuanhuan/list1_1.html"
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Safari/537.36',
    }
    link = get_link(url, headers)
    # get_txt(link, headers)
    # start the thread pool; note that map() swallows worker exceptions
    # unless its result iterator is consumed
    pl = pool(max_workers=6)
    pl.map(get_txt, link)
    pl.shutdown()
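(For reference: one quick way to check whether those 3 minutes go to the server or to the script is to time a single request round-trip. A minimal sketch, separate from the script above; timed_get is a made-up helper name, and only requests plus the standard library are used:

import time
import requests

HEADERS = {'User-Agent': 'Mozilla/5.0'}

def timed_get(url):
    # time one request round-trip; returns (seconds, response)
    start = time.perf_counter()
    res = requests.get(url, headers=HEADERS)
    return time.perf_counter() - start, res

elapsed, res = timed_get("http://www.qishus.com/xuanhuan/list1_1.html")
print(f"list page: {elapsed:.2f}s, {len(res.content)} bytes")

If a single page already takes several seconds, the server round-trip, not the thread count, is what dominates.)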
Uh, this...

Kayko posted on 2021-7-12 11:55:
    Uh, this...

One thing you need to understand: more threads does not mean faster downloads. If the site can't keep up with your requests, piling on threads won't help. On top of that, a sudden burst of frequent requests makes the server respond more slowly: the faster and more often you hit it, the slower the responses get, unless the site has very high bandwidth, and the servers hosting these small novel sites usually don't. And this site's server is in the US, so in short: there's no real fix.
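That said, a couple of client-side changes can at least avoid wasted work: reuse TCP connections with a requests.Session (one per worker thread, since Session is not documented as thread-safe) and stream the TXT to disk in chunks instead of buffering the whole book in memory. A minimal sketch of the pattern; get_session and download_one are illustrative names, not part of the original script:

import os
import threading
import requests

HEADERS = {'User-Agent': 'Mozilla/5.0'}
_local = threading.local()

def get_session():
    # one Session per worker thread: keeps TCP connections to the host alive,
    # saving a handshake on every request after the first
    if not hasattr(_local, "session"):
        _local.session = requests.Session()
        _local.session.headers.update(HEADERS)
    return _local.session

def download_one(txt_url, filename):
    # stream the file to disk in chunks instead of holding it all in memory
    os.makedirs('./电子书', exist_ok=True)
    with get_session().get(txt_url, stream=True, timeout=30) as res:
        res.raise_for_status()
        with open('./电子书/' + filename, "wb") as f:
            for chunk in res.iter_content(chunk_size=64 * 1024):
                f.write(chunk)

Past the server's capacity, extra threads just queue up behind each other, so calling download_one from a pool of 3-4 workers is a reasonable ceiling here.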