哈岁NB posted on 2023-1-17 11:06:34

Web scraper

import requests
from lxml import html
etree = html.etree

url = 'http://fund.eastmoney.com/data/rankhandler.aspx'
header = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/5.36"

}

parms = {
    "op": "ph",
    "dt": "kf",
    "ft": "zq",
    "rs":"",
    "gs": "0",
    "sc": "1nzf",
    "st": "desc",
    "sd": "2022-01-17",
    "ed": "2023-01-17",
    "qdii": "041|",
    "tabSubtype": "041,,,,,",
    "pi": "1",
    "pn": "50",
    "dx": "1",
    "v": "0.5745516544026257"

}

response = requests.get(url=url,headers=header,params=parms)

page = response.text
print(page)

Hey everyone, why does this return var rankData ={ErrCode:-999,Data:"无访问权限"} ("no access permission") as soon as I try to scrape it?

tommyyu posted on 2023-1-17 11:14:09

Last edited by tommyyu on 2023-1-17 11:17

Does this page really contain nothing but "var rankData ={ErrCode:-999,Data:"无访问权限"}"? Why do I also see this when I open it myself?

isdkz posted on 2023-1-17 11:14:59

Doesn't your link also carry other parameters? Include the complete set of parameters in the URL when making the request -- just right-click there and copy the URL.
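(A minimal sketch of what "carrying the full parameters" looks like with requests; the three parameters below are only an illustrative subset of the real query string. Pasting the complete URL copied from the browser or passing the parameters via params builds the same request either way.)

import requests

# Full URL copied from the browser, query string included (shortened here
# to three illustrative parameters).
full = requests.Request(
    "GET",
    "http://fund.eastmoney.com/data/rankhandler.aspx?op=ph&dt=kf&ft=zq",
).prepare()

# The same request built from the base URL plus a params dict.
split = requests.Request(
    "GET",
    "http://fund.eastmoney.com/data/rankhandler.aspx",
    params={"op": "ph", "dt": "kf", "ft": "zq"},
).prepare()

print(full.url)
print(split.url)  # both print the same request URL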

哈岁NB posted on 2023-1-17 11:17:35

isdkz posted on 2023-1-17 11:14
Doesn't your link also carry other parameters? Include the complete set of parameters in the URL when making the request -- just right-click there and copy the URL.

I did include all the parameters, and it still shows var rankData ={ErrCode:-999,Data:"无访问权限"}.

哈岁NB posted on 2023-1-17 11:17:52

tommyyu posted on 2023-1-17 11:14
Does this page really contain nothing but "var rankData ={ErrCode:-999,Data:"无访问权限"}"? Why do I also see this when I ...

I have no idea either.

isdkz posted on 2023-1-17 11:21:51

哈岁NB posted on 2023-1-17 11:17
I did include all the parameters, and it still shows var rankData ={ErrCode:-999,Data:"无访问权限"}

Which page did you open to produce this request? I'll help you debug it.

isdkz posted on 2023-1-17 11:33:17

Add a Referer header. This field tells the server which page the request was initiated from; it is an anti-hotlinking measure -- the server only accepts requests initiated from specific pages.
import requests
from lxml import html
etree = html.etree

url = 'http://fund.eastmoney.com/data/rankhandler.aspx'
header = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/5.36",
   'Referer':'http://fund.eastmoney.com/data/fundranking.html'            # 加上这个

}

parms = {
    "op": "ph",
    "dt": "kf",
    "ft": "zq",
    "rs":"",
    "gs": "0",
    "sc": "1nzf",
    "st": "desc",
    "sd": "2022-01-17",
    "ed": "2023-01-17",
    "qdii": "041|",
    "tabSubtype": "041,,,,,",
    "pi": "1",
    "pn": "50",
    "dx": "1",
    "v": "0.5745516544026257"

}

response = requests.get(url=url,headers=header,params=parms)

page = response.text
print(page)

isdkz posted on 2023-1-17 11:34:41

import requests
from lxml import html
etree = html.etree

url = 'http://fund.eastmoney.com/data/rankhandler.aspx'
header = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/5.36",
    'Referer':'http://fund.eastmoney.com/data/fundranking.html'         # 加上这个,说明你是从哪个链接发起请求的,有一些url只允许从特定的链接发起请求

}

parms = {
    "op": "ph",
    "dt": "kf",
    "ft": "zq",
    "rs":"",
    "gs": "0",
    "sc": "1nzf",
    "st": "desc",
    "sd": "2022-01-17",
    "ed": "2023-01-17",
    "qdii": "041|",
    "tabSubtype": "041,,,,,",
    "pi": "1",
    "pn": "50",
    "dx": "1",
    "v": "0.5745516544026257"

}

response = requests.get(url=url,headers=header,params=parms)

page = response.text
print(page)
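A side note on the response format: the error shown above comes back as a JavaScript assignment, var rankData ={ErrCode:-999,Data:"无访问权限"}, rather than plain JSON. Below is a minimal sketch, assuming the successful response is wrapped the same way, that strips the var rankData = prefix so the payload can be inspected; extract_rank_data is just a hypothetical helper, and because the keys in the object literal are unquoted, json.loads on the raw text may fail.

import re

def extract_rank_data(page: str) -> str:
    # Pull out whatever sits on the right-hand side of "var rankData = ...",
    # dropping an optional trailing semicolon. This only extracts the raw
    # object-literal text; it does not parse it.
    match = re.search(r'var\s+rankData\s*=\s*(.*?);?\s*$', page, flags=re.S)
    return match.group(1) if match else page

# Demonstrated on the error string from this thread:
sample = 'var rankData ={ErrCode:-999,Data:"无访问权限"}'
print(extract_rank_data(sample))  # -> {ErrCode:-999,Data:"无访问权限"}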

哈岁NB posted on 2023-1-17 11:40:29

isdkz posted on 2023-1-17 11:34


Great, it works now!