鱼C论坛 (FishC Forum)

Views: 2642 | Replies: 10

[Solved] Scraping JSON

Posted on 2023-2-11 11:21:06

How should I write an XPath that extracts the JSON string following __INITIAL_STATE__?
from time import sleep

import requests
from selenium import webdriver
from lxml import html

etree = html.etree

bro = webdriver.Chrome('D:/技能/chromedriver.exe')
bro.get('https://xiaoyuan.zhaopin.com/job/CC407288330J40383568908')
sleep(2)

cookies = bro.get_cookies()
print(cookies)
bro.quit()

# Turn the Selenium cookie list into a dict that requests can use
dic = {}
for cookie in cookies:
    dic[cookie['name']] = cookie['value']

# Query the job-search API
header = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36",
}
url1 = 'https://xiaoyuan.zhaopin.com/api/sou?S_SOU_FULL_INDEX=java&S_SOU_POSITION_SOURCE_TYPE=&pageIndex=1&S_SOU_POSITION_TYPE=2&S_SOU_WORK_CITY=&S_SOU_JD_INDUSTRY_LEVEL=&S_SOU_COMPANY_TYPE=&S_SOU_REFRESH_DATE=&order=&pageSize=30&_v=0.06047107&at=9790578095794e1b9cc693485ef05237&rt=6448b0d50c2d460eb823575593f5a909&cityId=&jobTypeId=&jobSource=&industryId=&companyTypeId=&dateSearchTypeId=&x-zp-page-request-id=fcf1dcda72444dc6b8a17609bdb3a02f-1676083831807-687308&x-zp-client-id=b242c663-f23a-4571-aca7-de919a057afe'
response = requests.get(url=url1, headers=header, cookies=dic).json()  # note: .json() must be called with parentheses
job_list = response['data']['data']['list']
for i in job_list:
    name = i['name']
    number = i['number']
    job_url = 'https://xiaoyuan.zhaopin.com/job/' + number

    page = requests.get(url=job_url, headers=header, cookies=dic).text
    tree = etree.HTML(page)
    # Parse the detail page
    information = tree.xpath('/html/body/script[8]/text()')
    print(information)
    break
That's what I wrote, but the output is an empty list.
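When an absolute XPath like /html/body/script[8] comes back empty, a quick way to find the right index is to enumerate the script tags and look for the one containing __INITIAL_STATE__. A minimal sketch with a toy page standing in for the real HTML (the real page would come from requests, and lxml's etree.HTML, used above, is more forgiving of real-world markup than the stdlib XML parser used here):

```python
import xml.etree.ElementTree as ET

# Toy stand-in for the fetched page source; the real page would be
# downloaded with requests as in the code above.
page = (
    '<html><body>'
    '<script>var a = 1;</script>'
    '<script>__INITIAL_STATE__={"x": 1}</script>'
    '</body></html>'
)

root = ET.fromstring(page)
state_index = None
for idx, script in enumerate(root.findall('./body/script'), start=1):
    if script.text and '__INITIAL_STATE__' in script.text:
        state_index = idx  # XPath indices are 1-based: use script[state_index]
        break

print(state_index)  # → 2
```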
小甲鱼最新课程 -> https://ilovefishc.com

Posted on 2023-2-11 11:39:11
from time import sleep

import requests
from selenium import webdriver
from lxml import html

etree = html.etree

bro = webdriver.Chrome('D:/技能/chromedriver.exe')
bro.get('https://xiaoyuan.zhaopin.com/job/CC407288330J40383568908')
sleep(2)

cookies = bro.get_cookies()
print(cookies)
bro.quit()

# Turn the Selenium cookie list into a dict that requests can use
dic = {}
for cookie in cookies:
    dic[cookie['name']] = cookie['value']

# Query the job-search API
header = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36",
}
url1 = 'https://xiaoyuan.zhaopin.com/api/sou?S_SOU_FULL_INDEX=java&S_SOU_POSITION_SOURCE_TYPE=&pageIndex=1&S_SOU_POSITION_TYPE=2&S_SOU_WORK_CITY=&S_SOU_JD_INDUSTRY_LEVEL=&S_SOU_COMPANY_TYPE=&S_SOU_REFRESH_DATE=&order=&pageSize=30&_v=0.06047107&at=9790578095794e1b9cc693485ef05237&rt=6448b0d50c2d460eb823575593f5a909&cityId=&jobTypeId=&jobSource=&industryId=&companyTypeId=&dateSearchTypeId=&x-zp-page-request-id=fcf1dcda72444dc6b8a17609bdb3a02f-1676083831807-687308&x-zp-client-id=b242c663-f23a-4571-aca7-de919a057afe'
response = requests.get(url=url1, headers=header, cookies=dic).json()  # note: .json() must be called with parentheses
job_list = response['data']['data']['list']
for i in job_list:
    name = i['name']
    number = i['number']
    job_url = 'https://xiaoyuan.zhaopin.com/job/' + number

    page = requests.get(url=job_url, headers=header, cookies=dic).text
    tree = etree.HTML(page)
    # Parse the detail page
    information = tree.xpath('/html/body/script[6]/text()')  # the index should be 6, not 8
    print(information)
    break

OP | Posted on 2023-2-11 12:31:39

Then how do I pull out just the JSON string after __INITIAL_STATE__, i.e. only the JSON, without the __INITIAL_STATE__= prefix?

Posted on 2023-2-11 12:38:22 | Best answer
哈岁NB posted on 2023-2-11 12:31:
Then how do I pull out just the JSON string after __INITIAL_STATE__, only the JSON, without the __INITIAL_STA ...


from time import sleep

import requests
from selenium import webdriver
from lxml import html

etree = html.etree

bro = webdriver.Chrome('D:/技能/chromedriver.exe')
bro.get('https://xiaoyuan.zhaopin.com/job/CC407288330J40383568908')
sleep(2)

cookies = bro.get_cookies()
print(cookies)
bro.quit()

# Turn the Selenium cookie list into a dict that requests can use
dic = {}
for cookie in cookies:
    dic[cookie['name']] = cookie['value']

# Query the job-search API
header = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36",
}
url1 = 'https://xiaoyuan.zhaopin.com/api/sou?S_SOU_FULL_INDEX=java&S_SOU_POSITION_SOURCE_TYPE=&pageIndex=1&S_SOU_POSITION_TYPE=2&S_SOU_WORK_CITY=&S_SOU_JD_INDUSTRY_LEVEL=&S_SOU_COMPANY_TYPE=&S_SOU_REFRESH_DATE=&order=&pageSize=30&_v=0.06047107&at=9790578095794e1b9cc693485ef05237&rt=6448b0d50c2d460eb823575593f5a909&cityId=&jobTypeId=&jobSource=&industryId=&companyTypeId=&dateSearchTypeId=&x-zp-page-request-id=fcf1dcda72444dc6b8a17609bdb3a02f-1676083831807-687308&x-zp-client-id=b242c663-f23a-4571-aca7-de919a057afe'
response = requests.get(url=url1, headers=header, cookies=dic).json()  # note: .json() must be called with parentheses
job_list = response['data']['data']['list']
for i in job_list:
    name = i['name']
    number = i['number']
    job_url = 'https://xiaoyuan.zhaopin.com/job/' + number

    page = requests.get(url=job_url, headers=header, cookies=dic).text
    tree = etree.HTML(page)
    # Parse the detail page
    information = tree.xpath('/html/body/script[6]/text()')[0].removeprefix('__INITIAL_STATE__=')  # changed here
    print(information)
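Positional XPaths like script[6] break whenever the page layout shifts. As a more robust alternative (a sketch with a toy page, not the live zhaopin.com response), the __INITIAL_STATE__ assignment can be matched with a regex and parsed with json.loads, which also turns the string into a usable dict:

```python
import json
import re

# Toy page source; in the thread this would be the `page` string fetched
# with requests from the job detail URL.
page = '<script>__INITIAL_STATE__={"job": {"name": "java dev", "number": "42"}}</script>'

# Non-greedy match from the assignment up to the closing </script> tag
match = re.search(r'__INITIAL_STATE__\s*=\s*(\{.*?\})\s*</script>', page, re.S)
state = None
if match:
    state = json.loads(match.group(1))  # a dict, not a raw string
    print(state['job']['name'])  # → java dev
```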

OP | Posted on 2023-2-11 12:43:17

Got it, thanks a lot!

OP | Posted on 2023-2-11 15:57:03

Why does running it raise "'lxml.etree._ElementUnicodeResult' object has no attribute 'removeprefix'"? I searched online and couldn't find anything about it.

Posted on 2023-2-11 16:16:09
哈岁NB posted on 2023-2-11 15:57:
Why does running it raise "'lxml.etree._ElementUnicodeResult' object has no attribute 'removeprefix'"? ...

Have you changed the code? It runs fine on my machine. That error occurs because only strings have the removeprefix method, which suggests what you got back isn't a string.

OP | Posted on 2023-2-11 16:26:11
Last edited by 哈岁NB on 2023-2-11 16:29
from time import sleep

import requests
from selenium import webdriver
from lxml import html

etree = html.etree

bro = webdriver.Chrome('D:/技能/chromedriver.exe')
bro.get('https://xiaoyuan.zhaopin.com/job/CC407288330J40383568908')
sleep(2)

cookies = bro.get_cookies()
print(cookies)
bro.quit()

# Turn the Selenium cookie list into a dict that requests can use
dic = {}
for cookie in cookies:
    dic[cookie['name']] = cookie['value']

# Query the job-search API
header = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36",
}
url1 = 'https://xiaoyuan.zhaopin.com/api/sou?S_SOU_FULL_INDEX=java&S_SOU_POSITION_SOURCE_TYPE=&pageIndex=1&S_SOU_POSITION_TYPE=2&S_SOU_WORK_CITY=&S_SOU_JD_INDUSTRY_LEVEL=&S_SOU_COMPANY_TYPE=&S_SOU_REFRESH_DATE=&order=&pageSize=30&_v=0.06047107&at=9790578095794e1b9cc693485ef05237&rt=6448b0d50c2d460eb823575593f5a909&cityId=&jobTypeId=&jobSource=&industryId=&companyTypeId=&dateSearchTypeId=&x-zp-page-request-id=fcf1dcda72444dc6b8a17609bdb3a02f-1676083831807-687308&x-zp-client-id=b242c663-f23a-4571-aca7-de919a057afe'
response = requests.get(url=url1, headers=header, cookies=dic).json()  # note: .json() must be called with parentheses
job_list = response['data']['data']['list']
for i in job_list:
    name = i['name']
    number = i['number']
    job_url = 'https://xiaoyuan.zhaopin.com/job/' + number

    page = requests.get(url=job_url, headers=header, cookies=dic).text
    tree = etree.HTML(page)
    # Parse the detail page
    information = tree.xpath('/html/body/script[6]/text()')[0].removeprefix('__INITIAL_STATE__=')
    print(information)
isdkz posted on 2023-2-11 16:16:
Have you changed the code? It runs fine on my machine. That error occurs because only strings have the removeprefix method, which suggests what you got ...

I haven't changed it. The returned type is <class 'lxml.etree._ElementUnicodeResult'>.
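As a side note, _ElementUnicodeResult is itself a subclass of str, so the type is not the real problem here. A quick check, using a hypothetical str subclass as a stand-in for it:

```python
import sys

class SmartString(str):  # hypothetical stand-in for lxml's _ElementUnicodeResult
    pass

s = SmartString('__INITIAL_STATE__={"a": 1}')
print(isinstance(s, str))          # → True: it is a string after all
print(hasattr(s, 'removeprefix'))  # True on Python 3.9+, False on 3.8
```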

OP | Posted on 2023-2-11 16:36:37
isdkz posted on 2023-2-11 16:16:
Have you changed the code? It runs fine on my machine. That error occurs because only strings have the removeprefix method, which suggests what you got ...

It seems removeprefix only exists from Python 3.9; I'm on 3.8.

Posted on 2023-2-11 17:21:14
哈岁NB posted on 2023-2-11 16:36:
It seems removeprefix only exists from Python 3.9; I'm on 3.8.

I forgot about that: removeprefix was only added in a later version (3.9). If you can't use it, just slice with an index instead.
from time import sleep

import requests
from selenium import webdriver
from lxml import html

etree = html.etree

bro = webdriver.Chrome('D:/技能/chromedriver.exe')
bro.get('https://xiaoyuan.zhaopin.com/job/CC407288330J40383568908')
sleep(2)

cookies = bro.get_cookies()
print(cookies)
bro.quit()

# Turn the Selenium cookie list into a dict that requests can use
dic = {}
for cookie in cookies:
    dic[cookie['name']] = cookie['value']

# Query the job-search API
header = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36",
}
url1 = 'https://xiaoyuan.zhaopin.com/api/sou?S_SOU_FULL_INDEX=java&S_SOU_POSITION_SOURCE_TYPE=&pageIndex=1&S_SOU_POSITION_TYPE=2&S_SOU_WORK_CITY=&S_SOU_JD_INDUSTRY_LEVEL=&S_SOU_COMPANY_TYPE=&S_SOU_REFRESH_DATE=&order=&pageSize=30&_v=0.06047107&at=9790578095794e1b9cc693485ef05237&rt=6448b0d50c2d460eb823575593f5a909&cityId=&jobTypeId=&jobSource=&industryId=&companyTypeId=&dateSearchTypeId=&x-zp-page-request-id=fcf1dcda72444dc6b8a17609bdb3a02f-1676083831807-687308&x-zp-client-id=b242c663-f23a-4571-aca7-de919a057afe'
response = requests.get(url=url1, headers=header, cookies=dic).json()  # note: .json() must be called with parentheses
job_list = response['data']['data']['list']
for i in job_list:
    name = i['name']
    number = i['number']
    job_url = 'https://xiaoyuan.zhaopin.com/job/' + number

    page = requests.get(url=job_url, headers=header, cookies=dic).text
    tree = etree.HTML(page)
    # Parse the detail page
    information = tree.xpath('/html/body/script[6]/text()')[0][18:]  # len('__INITIAL_STATE__=') == 18
    print(information)
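The [0][18:] slice works because len('__INITIAL_STATE__=') is 18, but a small named helper makes the intent clearer and works on any Python version. A sketch (the function name strip_prefix is illustrative, not from the thread):

```python
def strip_prefix(s: str, prefix: str) -> str:
    """Behave like str.removeprefix (added in Python 3.9) on any version."""
    return s[len(prefix):] if s.startswith(prefix) else s

raw = '__INITIAL_STATE__={"ok": true}'
print(strip_prefix(raw, '__INITIAL_STATE__='))  # → {"ok": true}

# Unlike a hard-coded slice, a missing prefix leaves the string intact:
print(strip_prefix('no prefix here', '__INITIAL_STATE__='))  # → no prefix here
```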

OP | Posted on 2023-2-11 19:10:41
isdkz posted on 2023-2-11 17:21:
I forgot about that: removeprefix was only added in a later version (3.9). If you can't use it, just slice with an index instead ...

Got it, thanks a lot!
