Before I added coroutines, I could fetch the chapter URLs from the href attributes and get each chapter's text just fine, but after adding coroutines it stopped working.
My understanding of coroutines is still shaky and I can't tell where I went wrong. Help!!

import asyncio
import aiohttp
import aiofiles
import requests
from lxml import etree
# Get the chapter URLs
def get_url():
    url = "https://www.zanghaihua.org/book/40627/"
    headers = {
        "user-agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36 SLBrowser/8.0.0.12022 SLBChan/8"
    }
    http = requests.get(url, headers=headers)
    http.encoding = "gbk"
    parse_html = etree.HTML(http.text)
    hrefs = parse_html.xpath("//li/a/@href")[16:87]
    return hrefs

# Fetch one chapter's text
async def get_text(url):
    headers = {
        "user-agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36 SLBrowser/8.0.0.12022 SLBChan/8"
    }
    async with aiohttp.ClientSession() as session:
        async with session.get(url, headers=headers) as res:
            res.encoding = "utf-8"
            parse_html = await res.text()
            parse_html = etree.HTML(parse_html)
            content = parse_html.xpath('//div[@class="content"]/text()')
            title = parse_html.xpath('//div[@class="reader-main"]/h1/text()')
            # print(title)
            # print(content)
            async with aiofiles.open(f"{title}.text", mode="w", coding="utf-8") as f:
                await f.write(content)

# Create the coroutine tasks
async def get_task(href_list):
    tasks = []
    for href in href_list:
        t = asyncio.create_task(get_text(href))
        tasks.append(t)
    await asyncio.wait(tasks)

def main():
    href_lists = get_url()
    asyncio.run(get_task(href_lists))
Judging from your code, you've started learning async coroutines and applied them to a scraping task. Here are some suggestions to help you fix it.
1. aiohttp's response object has no requests-style encoding attribute, so setting res.encoding = "utf-8" has no effect. Pass the encoding to res.text() instead: await res.text(encoding='utf-8').
2. aiofiles.open() takes an encoding parameter, not coding (see the short standalone demo after this list).
3. title and content are lists, not strings; they must be converted to strings before being written to the file.
4. Also, the original script defines main() but never calls it, so nothing runs; the fixed version below adds the usual if __name__ == "__main__" guard.
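To see points 2 and 3 in isolation, here is a minimal standalone sketch (the filename demo.txt and the sample list are made up for illustration):

import asyncio
import aiofiles

async def demo():
    content = ["line one", "line two"]  # xpath() returns a list of strings, like your content
    # coding="utf-8" would raise TypeError (unexpected keyword argument);
    # the correct parameter name, as in the built-in open(), is encoding=
    async with aiofiles.open("demo.txt", mode="w", encoding="utf-8") as f:
        # write() expects a str; passing the list itself raises TypeError,
        # so join the list into one string first
        await f.write("\n".join(content))

asyncio.run(demo())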
Here is the corrected code:
import asyncio
import aiohttp
import aiofiles
import requests
from lxml import etree
# Get the chapter URLs
def get_url():
    url = "https://www.zanghaihua.org/book/40627/"
    headers = {
        "user-agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36 SLBrowser/8.0.0.12022 SLBChan/8"
    }
    http = requests.get(url, headers=headers)
    http.encoding = "gbk"  # the index page is GBK-encoded
    parse_html = etree.HTML(http.text)
    hrefs = parse_html.xpath("//li/a/@href")[16:87]
    return hrefs

# Fetch one chapter's text and save it to a file
async def get_text(url):
    headers = {
        "user-agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36 SLBrowser/8.0.0.12022 SLBChan/8"
    }
    async with aiohttp.ClientSession() as session:
        async with session.get(url, headers=headers) as res:
            # pass the encoding to text() instead of setting res.encoding
            parse_html = await res.text(encoding='utf-8')
            parse_html = etree.HTML(parse_html)
            content = parse_html.xpath('//div[@class="content"]/text()')
            title = parse_html.xpath('//div[@class="reader-main"]/h1/text()')
            # title[0] takes the string out of the one-element list;
            # "\n".join(content) turns the list of lines into a single string
            async with aiofiles.open(f"{title[0]}.txt", mode="w", encoding="utf-8") as f:
                await f.write("\n".join(content))

# Create one task per chapter and wait for them all
async def get_task(href_list):
    tasks = []
    for href in href_list:
        t = asyncio.create_task(get_text(href))
        tasks.append(t)
    await asyncio.wait(tasks)

def main():
    href_lists = get_url()
    asyncio.run(get_task(href_lists))

if __name__ == "__main__":
    main()
With these changes the code should run and fetch every chapter's text. Hope this helps! If you have other questions, feel free to ask.
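One extra suggestion, not strictly needed for correctness: the fixed code fires all 71 requests at once, and some sites throttle or block that. Here is a minimal sketch of capping concurrency with asyncio.Semaphore; the limit of 5 is an arbitrary assumption, and get_text_limited is a hypothetical wrapper around the corrected get_text() above:

import asyncio

async def get_text_limited(sem, url):
    # only the semaphore's limit of coroutines may run this body at once
    async with sem:
        await get_text(url)  # reuses the corrected get_text() unchanged

async def get_task(href_list):
    sem = asyncio.Semaphore(5)  # assumed limit; tune it for the target site
    tasks = [asyncio.create_task(get_text_limited(sem, href)) for href in href_list]
    await asyncio.wait(tasks)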