Python crawler: scraping a second-level page
Following the approach from this tutorial (https://zhuanlan.zhihu.com/p/34206711), I'm trying to scrape the entries and their corresponding links from a web page. The URL I need to scrape only shows ten results by default, so this method also only gets me ten items. The page has a "Show More" button, and expanding it reveals over a hundred items, but the URL does not change after expanding, so even with everything expanded the scrape still returns only ten items.
Could someone help me modify this program, or show me how to get a URL that includes all the expanded items so I can feed it into this program? URL: https://www.nytimes.com/search?dropmab=false&endDate=20190131&query=&sections=Obituaries%7Cnyt%3A%2F%2Fsection%2F91a0dd36-11a8-5a06-9b6e-3a3d2f281894&startDate=20190101&types=article
Total newbie here, and this is my first question, so I'm not sure I've explained it clearly… 😓
# In:
from requests_html import HTMLSession
# In:
session = HTMLSession()
# In:
url = 'https://www.nytimes.com/search?dropmab=false&endDate=20190131&query=&sections=Obituaries%7Cnyt%3A%2F%2Fsection%2F91a0dd36-11a8-5a06-9b6e-3a3d2f281894&startDate=20190101&types=article'
# In:
r = session.get(url)
# In:
print(r.html.text)
# In:
r.html.links
# In:
r.html.absolute_links
# In:
sel = '#site-content > div > div:nth-child(2) > div.css-46b038 > ol > li:nth-child(1) > div > div > div > a'
# In:
results = r.html.find(sel)
# In:
results
# In:
results[0].text
# In:
results[0].absolute_links
# In:
def get_text_link_from_sel(sel):
    mylist = []
    try:
        results = r.html.find(sel)
        for result in results:
            mytext = result.text
            mylink = list(result.absolute_links)
            mylist.append((mytext, mylink))
        return mylist
    except:
        return None
# In:
list(results[0].absolute_links)
# In:
print(get_text_link_from_sel(sel))
# In:
sel = '#site-content > div > div> div > ol > li > div > div > div > a'
# In:
print(get_text_link_from_sel(sel))
# In:
import pandas as pd
# In:
df = pd.DataFrame(get_text_link_from_sel(sel))
# In:
df
# In:
df.to_csv('output.csv', encoding='gbk', index=False)
Is the site hosted on a server outside the country? I can't open your target URL.
After clicking the load-more button, you can see that a POST request is sent to https://samizdat-graphql.nytimes.com/graphql/v2, and it returns content in JSON format:
{
    "data": {
        # ... omitted ...
        }, {
            "node": {
                "__typename": "BodegaResult",
                "node": {
                    "id": "QXJ0aWNsZTpueXQ6Ly9hcnRpY2xlLzdiOTE1YTM4LTg5N2QtNWQ5Yi1hNWJkLTVkOWQ1YzY2NmUxNA==",
                    "__typename": "Article",
                    "url": "https://www.nytimes.com/2019/01/28/obituaries/peter-magowan-dead.html",
                    "uri": "nyt://article/7b915a38-897d-5d9b-a5bd-5d9d5c666e14",
                    "promotionalHeadline": "Peter Magowan, Giants Fan Turned Giants’ Owner, Is Dead at 76",
                    "promotionalSummary": "While running Safeway Stores, Mr. Magowan headed a group that bought the team and kept it from moving from San Francisco to Florida.",
                    # ... omitted ...
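Once this JSON is in hand, pulling the headline/link pairs out of it is just dictionary traversal. Since parts of the response are omitted above, the sketch below does not assume the exact nesting path; it simply walks the whole structure and collects every node whose __typename is "Article" (field names taken from the visible excerpt):

# Minimal sketch: walk the response JSON and collect every Article node,
# so we don't need to guess the parts of the nesting that were omitted above.
def collect_articles(obj, found=None):
    if found is None:
        found = []
    if isinstance(obj, dict):
        if obj.get("__typename") == "Article" and "url" in obj:
            found.append((obj.get("promotionalHeadline"), obj["url"]))
        for value in obj.values():
            collect_articles(value, found)
    elif isinstance(obj, list):
        for item in obj:
            collect_articles(item, found)
    return found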
So the new content is all in this response. Now that the flow is clear, all you need to do is send a POST request to that address. Note that the body of this POST request is a payload, i.e. JSON:
{
    "operationName": "SearchRootQuery",
    "variables": {
        "first": 10,
        "sort": "best",
        "beginDate": "20190101",
        "endDate": "20190131",
        "filterQuery": "((section_uri: \"nyt://section/91a0dd36-11a8-5a06-9b6e-3a3d2f281894\")) AND ((data_type: \"article\"))",
        "sectionFacetFilterQuery": "((data_type: \"article\"))",
        "typeFacetFilterQuery": "((section_uri: \"nyt://section/91a0dd36-11a8-5a06-9b6e-3a3d2f281894\"))",
        "sectionFacetActive": true,
        "typeFacetActive": true,
        "cursor": "YXJyYXljb25uZWN0aW9uOjk="
    },
    "extensions": {
        "persistedQuery": {
            "version": 1,
            "sha256Hash": "d632cb222f6a7dd48349d5789975e28fbf7017963fe1ec16fd20fa868b335842"
        }
    }
}
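Here is a minimal sketch of sending that payload with requests. Whether the endpoint also demands extra request headers (cookies, tokens, etc.) is not visible in the capture above, so if a plain POST gets rejected, copy the remaining headers from the browser's DevTools.

import requests

graphql_url = 'https://samizdat-graphql.nytimes.com/graphql/v2'

payload = {
    "operationName": "SearchRootQuery",
    "variables": {
        "first": 10,
        "sort": "best",
        "beginDate": "20190101",
        "endDate": "20190131",
        "filterQuery": '((section_uri: "nyt://section/91a0dd36-11a8-5a06-9b6e-3a3d2f281894")) AND ((data_type: "article"))',
        "sectionFacetFilterQuery": '((data_type: "article"))',
        "typeFacetFilterQuery": '((section_uri: "nyt://section/91a0dd36-11a8-5a06-9b6e-3a3d2f281894"))',
        "sectionFacetActive": True,
        "typeFacetActive": True,
        "cursor": "YXJyYXljb25uZWN0aW9uOjk="
    },
    "extensions": {
        "persistedQuery": {
            "version": 1,
            "sha256Hash": "d632cb222f6a7dd48349d5789975e28fbf7017963fe1ec16fd20fa868b335842"
        }
    }
}

# json=payload makes requests serialize the dict and set Content-Type: application/json;
# data=json.dumps(payload) plus an explicit header would do the same thing.
resp = requests.post(graphql_url, json=payload)
resp.raise_for_status()
print(collect_articles(resp.json()))  # helper from the earlier sketch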
So all you need to do is build the matching nested dict in Python and then json.dumps it as the request data.
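To get past the default ten results (the original question), the two knobs in "variables" are "first" (batch size) and "cursor" (where the next batch starts). The captured cursor "YXJyYXljb25uZWN0aW9uOjk=" base64-decodes to "arrayconnection:9", which suggests the cursor simply encodes an offset. The sketch below builds cursors that way and keeps requesting until nothing new comes back; that offset scheme is an inference from the one captured value, not documented behaviour, so verify it against a couple of real requests. It reuses graphql_url, payload and collect_articles from the sketches above.

import base64
import copy

def offset_cursor(offset):
    # "arrayconnection:<offset>" re-encoded in base64, mirroring the captured cursor.
    return base64.b64encode(f"arrayconnection:{offset}".encode()).decode()

def fetch_all(base_payload, step=10):
    # Request batches of `step` results until a batch adds nothing new.
    articles, seen, offset = [], set(), None
    while True:
        page = copy.deepcopy(base_payload)
        page["variables"]["first"] = step
        if offset is None:
            page["variables"].pop("cursor", None)   # first batch: start from the top
        else:
            page["variables"]["cursor"] = offset_cursor(offset)
        resp = requests.post(graphql_url, json=page)
        resp.raise_for_status()
        new = [a for a in collect_articles(resp.json()) if a[1] not in seen]
        if not new:
            return articles
        articles.extend(new)
        seen.update(url for _, url in new)
        offset = step - 1 if offset is None else offset + step

The result is a list of (headline, url) tuples, so it drops straight into the same pandas step as before, e.g. pd.DataFrame(fetch_all(payload), columns=['title', 'link']).to_csv('output.csv', index=False).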