Scraping Zhihu
Hi everyone. I'm trying to scrape the answers under a Zhihu question, but it looks like I'm being blocked by anti-crawling measures and I don't know how to get around it. Here is my code:

import urllib.request
from lxml import etree
def creat_request():
    url = 'https://www.zhihu.com/question/291457090/answer/572425905'
    headers = {
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36'
    }
    request = urllib.request.Request(url=url, headers=headers)
    return request

def get_content(request):
    response = urllib.request.urlopen(request)
    content = response.read().decode('utf-8')
    return content

def down_load(content):
    tree = etree.HTML(content)
    name_list = tree.xpath('//*[@id="QuestionAnswers-answers"]//div[@class="List-item"]//div[@class="RichContent-inner"]//p//text()')
    for i in range(len(name_list)):
        name = name_list[i]  # was `name = name_list`, which printed the whole list on every pass
        print(name)
    with open("攀枝花对四川的认同感.txt", 'w', encoding='utf-8') as fp:
        fp.write(str(name_list))

if __name__ == '__main__':
    request = creat_request()
    content = get_content(request)
    down_load(content)

You're not being blocked by anti-crawling measures; your xpath is wrong. Drop the leading //*[@id="QuestionAnswers-answers"]//div[@class="List-item"].

月下孤井 posted on 2022-5-20 17:04
You're not being blocked by anti-crawling measures; your xpath is wrong. Drop the leading //*[@id="QuestionAnswers-answers"]//div[@class="List-item"].
I changed the xpath and it still returns an empty list. With the old xpath path I could locate the content I wanted using the browser's xpath plugin.

I ran it on my end: print(content) inside the down_load function shows the correct value, but after the xpath there is nothing. With the corrected xpath the results all come out fine here.
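A follow-up note for anyone debugging this kind of empty-list result: Zhihu renders much of an answer page with JavaScript, so the first thing to check is whether the raw HTML returned by urllib even contains the answer markup. If it does, the shortened xpath suggested above can be verified offline against a small sample before touching the live page. A minimal sketch (the sample HTML below is made up for illustration and only mimics the class names from the thread):

```python
from lxml import etree

def has_answer_nodes(html):
    # If this is False, the answer markup never reached the static HTML,
    # so no xpath run on it can ever match -- the content is rendered
    # client-side and a plain urllib fetch won't see it.
    return 'RichContent-inner' in html

def extract_paragraphs(html):
    # The shortened xpath suggested in the reply above.
    tree = etree.HTML(html)
    return tree.xpath('//div[@class="RichContent-inner"]//p//text()')

# Made-up sample mimicking Zhihu's answer markup:
sample = '<div class="RichContent-inner"><p>first</p><p>second</p></div>'
print(has_answer_nodes(sample))    # True
print(extract_paragraphs(sample))  # ['first', 'second']
```

Running has_answer_nodes on the real response first separates the two failure modes: a wrong xpath versus a page whose answers were never in the HTML to begin with.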