鱼C论坛


[Solved] Question

Posted on 2020-4-17 17:55:53

I'm getting this error: KeyError: 'foreignList'

Posted on 2020-4-17 17:57:16
???

Posted on 2020-4-17 17:57:24
Can you be more specific? Post your code!

Posted on 2020-4-17 17:58:37
A value error? Could it be that the type assigned to foreignList doesn't match the operation being applied to it?

OP | Posted on 2020-4-17 18:00:55
Here's the code:
  1. # -*- coding: utf-8 -*-
  2. # Import modules
  3. import json
  4. import requests
  5. import pandas as pd
  6. import csv
  7. # Fetch the data
  8. ## First crawl everything and inspect the structure to decide which data to clean up and save
  9. def catch_data1():
  10.     # url_1 holds today's real-time data for China's provinces and cities (it also has global data, but Tencent stopped updating that after the redesign)
  11.     url_1 = 'https://view.inews.qq.com/g2/getOnsInfo?name=disease_h5'
  12.     response = requests.get(url=url_1).json()
  13.     data_1 = json.loads(response['data'])
  14.     return data_1
  15. data_1 = catch_data1()

  16. def catch_data2():
  17.     # url_2 holds global real-time and historical data, plus China's historical and daily-increase data
  18.     url_2 = 'https://view.inews.qq.com/g2/getOnsInfo?name=disease_other'
  19.     data_2 = json.loads(requests.get(url=url_2).json()['data'])
  20.     return data_2
  21. data_2 = catch_data2()

  22. lastUpdateTime = data_1["lastUpdateTime"]  # Tencent's last update time
  23. directory = 'D:\\x4'  # output prefix; note there is no trailing backslash, so files land in D:\ with names starting "x4..."
  24. # Today's real-time data for China
  25. china_data = data_1["areaTree"][0]["children"]
  26. ## Today's real-time data for each Chinese city
  27. filename = directory + lastUpdateTime.split(' ')[0] + "_china_city_data.csv"
  28. with open(filename, "w+", encoding="utf_8_sig", newline="") as csv_file:
  29.     writer = csv.writer(csv_file)
  30.     header = ["province", "city_name", "total_confirm", "total_suspect", "total_dead", "total_heal",
  31.               "today_confirm", "lastUpdateTime"]
  32.     writer.writerow(header)
  33.     for j in range(len(china_data)):
  34.         province = china_data[j]["name"]  # province
  35.         city_list = china_data[j]["children"]  # list of cities in that province
  36.         for k in range(len(city_list)):
  37.             city_name = city_list[k]["name"]  # city name
  38.             total_confirm = city_list[k]["total"]["confirm"]  # total confirmed cases
  39.             total_suspect = city_list[k]["total"]["suspect"]  # total suspected cases
  40.             total_dead = city_list[k]["total"]["dead"]  # total deaths
  41.             total_heal = city_list[k]["total"]["heal"]  # total recoveries
  42.             today_confirm = city_list[k]["today"]["confirm"]  # cases confirmed today
  43.             data_row = [province, city_name, total_confirm, total_suspect, total_dead,
  44.                         total_heal, today_confirm, lastUpdateTime]
  45.             writer.writerow(data_row)
  46. ## Today's real-time data for each Chinese province
  47. filename = directory + lastUpdateTime.split(' ')[0] + "_china_province_data.csv"
  48. with open(filename, "w+", encoding="utf_8_sig", newline="") as csv_file:
  49.     writer = csv.writer(csv_file)
  50.     header = ["province", "total_confirm", "total_suspect", "total_dead", "total_heal",
  51.               "today_confirm", "lastUpdateTime"]
  52.     writer.writerow(header)
  53.     for i in range(len(china_data)):
  54.         province = china_data[i]["name"]  # province
  55.         total_confirm = china_data[i]["total"]["confirm"]  # total confirmed cases
  56.         total_suspect = china_data[i]["total"]["suspect"]  # total suspected cases
  57.         total_dead = china_data[i]["total"]["dead"]  # total deaths
  58.         total_heal = china_data[i]["total"]["heal"]  # total recoveries
  59.         today_confirm = china_data[i]["today"]["confirm"]  # cases confirmed today
  60.         data_row = [province, total_confirm, total_suspect, total_dead, total_heal, today_confirm, lastUpdateTime]
  61.         writer.writerow(data_row)
  62. # China's historical data and daily new cases
  63. chinaDayList = pd.DataFrame(data_2["chinaDayList"])  # China's historical data
  64. filename = directory + lastUpdateTime.split(' ')[0] + "_china_history_data.csv"
  65. header = ["date", "confirm", "suspect", "dead", "heal", "nowConfirm", "nowSevere", "deadRate", "healRate"]
  66. chinaDayList = chinaDayList[header]  # reorder the DataFrame columns
  67. chinaDayList.to_csv(filename, encoding="utf_8_sig", index=False)

  68. chinaDayAddList = pd.DataFrame(data_2["chinaDayAddList"])  # China's daily new cases
  69. filename = directory + lastUpdateTime.split(' ')[0] + "_china_DayAdd_data.csv"
  70. header = ["date", "confirm", "suspect", "dead", "heal", "deadRate", "healRate"]
  71. chinaDayAddList = chinaDayAddList[header]  # reorder the DataFrame columns
  72. chinaDayAddList.to_csv(filename, encoding="utf_8_sig", index=False)
  73. # Historical data for Hubei and non-Hubei
  74. def get_data_1():  # relies on the globals filename, w and hubei_notHhubei set below
  75.     with open(filename, "w+", encoding="utf_8_sig", newline="") as csv_file:
  76.         writer = csv.writer(csv_file)
  77.         header = ["date", "dead", "heal", "nowConfirm", "deadRate", "healRate"]  # define the header
  78.         writer.writerow(header)
  79.         for i in range(len(hubei_notHhubei)):
  80.             data_row = [hubei_notHhubei[i]["date"], hubei_notHhubei[i][w]["dead"], hubei_notHhubei[i][w]["heal"],
  81.                         hubei_notHhubei[i][w]["nowConfirm"], hubei_notHhubei[i][w]["deadRate"],
  82.                         hubei_notHhubei[i][w]["healRate"]]
  83.             writer.writerow(data_row)

  84. hubei_notHhubei = data_2["dailyHistory"]  # historical data for Hubei and non-Hubei
  85. for w in ["hubei", "notHubei"]:
  86.     filename = directory + lastUpdateTime.split(' ')[0] + "_" + w + "_history_data.csv"
  87.     get_data_1()

  88. # Daily new cases for Hubei and non-Hubei
  89. hubei_DayAdd = pd.DataFrame(data_2["dailyNewAddHistory"])  # Hubei vs. non-Hubei daily new cases
  90. filename = directory + lastUpdateTime.split(' ')[0] + "_hubei_notHubei_DayAdd_data.csv"
  91. hubei_DayAdd.to_csv(filename, encoding="utf_8_sig", index=False)
  92. # Daily new cases for Wuhan and non-Wuhan
  93. wuhan_DayAdd = data_2["wuhanDayList"]
  94. filename = directory + lastUpdateTime.split(' ')[0] + "_wuhan_notWuhan_DayAdd_data.csv"
  95. with open(filename, "w+", encoding="utf_8_sig", newline="") as csv_file:
  96.     writer = csv.writer(csv_file)
  97.     header = ["date", "wuhan", "notWuhan", "notHubei"]  # define the header
  98.     writer.writerow(header)
  99.     for i in range(len(wuhan_DayAdd)):
  100.         data_row = [wuhan_DayAdd[i]["date"], wuhan_DayAdd[i]["wuhan"]["confirmAdd"],
  101.                     wuhan_DayAdd[i]["notWuhan"]["confirmAdd"], wuhan_DayAdd[i]["notHubei"]["confirmAdd"]]
  102.         writer.writerow(data_row)
  103. # Global real-time and historical data
  104. ## Real-time data for every region worldwide
  105. global_data = data_2['foreignList']  # this lookup raises KeyError when 'foreignList' is missing from the response
  106. filename = directory + lastUpdateTime.split(' ')[0] + "_global_data.csv"
  107. with open(filename, "w+", encoding="utf_8_sig", newline="") as csv_file:
  108.     writer = csv.writer(csv_file)
  109.     header = ["country", "date", "total_confirm", "total_suspect", "total_dead", "total_heal",
  110.               "today_confirm", "lastUpdateTime"]
  111.     writer.writerow(header)
  112.     # Write China's row first
  113.     chinadate = lastUpdateTime.split(' ')[0][5:10].replace('-', '.')
  114.     chinaData = ["中国", chinadate, data_1["chinaTotal"]["confirm"], data_1["chinaTotal"]["suspect"],
  115.                  data_1["chinaTotal"]["dead"], data_1["chinaTotal"]["heal"],
  116.                  data_1["chinaAdd"]["confirm"], lastUpdateTime]
  117.     writer.writerow(chinaData)
  118.     # Then write the rows for the other countries and regions
  119.     for i in range(len(global_data)):
  120.         country = global_data[i]["name"]  # country or region
  121.         date = global_data[i]["date"]  # date
  122.         total_confirm = global_data[i]["confirm"]  # total confirmed cases
  123.         total_suspect = global_data[i]["suspect"]  # total suspected cases
  124.         total_dead = global_data[i]["dead"]  # total deaths
  125.         total_heal = global_data[i]["heal"]  # total recoveries
  126.         today_confirm = global_data[i]["confirmAdd"]  # cases confirmed today
  127.         data_row = [country, date, total_confirm, total_suspect, total_dead, total_heal, today_confirm, lastUpdateTime]
  128.         writer.writerow(data_row)
  129. ## Convert the country names to English where needed
  130. ## The Chinese-English name table Chinese_to_English.xlsx was downloaded from Baidu Baike; I added the English names for "日本本土" and "钻石号邮轮" myself, otherwise the merge misses them.
  131. world_name = pd.read_excel("Chinese_to_English.xlsx")  # read_excel takes no sep or encoding argument
  132. globaldata = pd.read_csv(filename, encoding="utf_8_sig")
  133. globaldata = pd.merge(globaldata, world_name, left_on="country", right_on="中文", how="inner")
  134. header = ["country", "英文", "date", "total_confirm", "total_suspect", "total_dead", "total_heal",
  135.           "today_confirm", "lastUpdateTime"]
  136. globaldata = globaldata[header]
  137. globaldata.to_csv(filename, encoding="utf_8_sig", index=False)
  138. ## Global historical totals (excluding China)
  139. globalDailyHistory = data_2["globalDailyHistory"]
  140. filename = directory + lastUpdateTime.split(' ')[0] + "_globalDailyHistory.csv"
  141. with open(filename, "w+", encoding="utf_8_sig", newline="") as csv_file:
  142.     writer = csv.writer(csv_file)
  143.     header = ["date", "total_dead", "total_heal", "newAddConfirm"]
  144.     writer.writerow(header)
  145.     for i in range(len(globalDailyHistory)):
  146.         date = globalDailyHistory[i]["date"]  # date
  147.         total_dead = globalDailyHistory[i]["all"]["dead"]  # total deaths
  148.         total_heal = globalDailyHistory[i]["all"]["heal"]  # total recoveries
  149.         newAddConfirm = globalDailyHistory[i]["all"]["newAddConfirm"]  # cases confirmed today
  150.         data_row = [date, total_dead, total_heal, newAddConfirm]
  151.         writer.writerow(data_row)
  152. ## Global real-time totals (excluding China)
  153. globalNow = data_2["globalStatis"]
  154. filename = directory + lastUpdateTime.split(' ')[0] + "_globalNow.csv"
  155. with open(filename, "w+", encoding="utf_8_sig", newline="") as csv_file:
  156.     writer = csv.writer(csv_file)
  157.     header = ["nowConfirm", "confirm", "heal", "dead", "lastUpdateTime"]
  158.     writer.writerow(header)
  159.     data_row = [globalNow["nowConfirm"], globalNow["confirm"], globalNow["heal"], globalNow["dead"], lastUpdateTime]
  160.     writer.writerow(data_row)
  161. # Today's real-time city data for South Korea, Italy, and mainland Japan
  162. global_data = data_2["foreignList"]
  163. dictt = {"韩国": "Korea", "意大利": "Italy", "日本本土": "Japan"}
  164. for j in dictt.keys():
  165.     filename = directory + lastUpdateTime.split(' ')[0] + "_" + dictt[j] + "_city_data.csv"
  166.     with open(filename, "w+", encoding="utf_8_sig", newline="") as csv_file:
  167.         writer = csv.writer(csv_file)
  168.         header = ["country", "city_name", "date", "nameMap", "total_confirm", "total_suspect", "total_dead",
  169.                   "total_heal", "confirmAdd", "lastUpdateTime"]
  170.         writer.writerow(header)
  171.         for k in range(len(global_data)):
  172.             if global_data[k]["name"] == j:
  173.                 city_list = global_data[k]["children"]  # list of cities in that country
  174.                 for h in range(len(city_list)):
  175.                     city_name = city_list[h]["name"]  # city name in Chinese
  176.                     date = city_list[h]["date"]  # date
  177.                     nameMap = city_list[h]["nameMap"]  # city name in English
  178.                     total_confirm = city_list[h]["confirm"]  # total confirmed cases
  179.                     total_suspect = city_list[h]["suspect"]  # total suspected cases
  180.                     total_dead = city_list[h]["dead"]  # total deaths
  181.                     total_heal = city_list[h]["heal"]  # total recoveries
  182.                     confirmAdd = city_list[h]["confirmAdd"]  # new confirmed cases
  183.                     data_row = [j, city_name, date, nameMap, total_confirm, total_suspect, total_dead, total_heal,
  184.                                 confirmAdd, lastUpdateTime]
  185.                     writer.writerow(data_row)

The error is reported at line 110.

Posted on 2020-4-17 18:04:38
Where's the code????

OP | Posted on 2020-4-17 18:08:02
txxcat posted on 2020-4-17 17:57:
Can you be more specific? Post your code!
(Same code as in the post above.)

OP | Posted on 2020-4-17 18:09:46

It's awaiting moderation.


Posted on 2020-4-17 18:17:04

Is that the only error you got?

Posted on 2020-4-17 18:37:05 | Best answer
data_2 only contains:
chinaDayList
chinaDayAddList
dailyNewAddHistory
dailyHistory
wuhanDayList
articleList
provinceCompare
cityStatis
nowConfirmStatis
There is no 'foreignList', which is why it raises the KeyError.
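
You can confirm this yourself by printing the keys the interface actually returns before indexing into it. A minimal sketch, using the same requests/json calls as catch_data2() in your code:

import json
import requests

# Fetch disease_other exactly the way catch_data2() does
url_2 = 'https://view.inews.qq.com/g2/getOnsInfo?name=disease_other'
data_2 = json.loads(requests.get(url=url_2).json()['data'])

# See which top-level keys are present right now
print(sorted(data_2.keys()))

# dict.get() returns None instead of raising KeyError when a key is absent
global_data = data_2.get('foreignList')
if global_data is None:
    print("'foreignList' is not in this response")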

OP | Posted on 2020-4-17 18:58:49
txxcat posted on 2020-4-17 18:37:
data_2 only contains:
chinaDayList
chinaDayAddList

So how exactly do I fix it?

Posted on 2020-4-17 21:37:36
xiaosi4081 posted on 2020-4-17 18:58:
So how exactly do I fix it?

That's a problem on the website's end. I'm afraid the only way to get it to run is to delete the code for the parts that are no longer returned. Several of the later items are missing too; they all seem to be the overseas parts.
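
If you'd rather not delete those sections outright, you could guard each one with a key check so the script just skips whatever the interface no longer returns. A rough sketch along those lines (the key names come from the posted code; the skip messages are my own):

import json
import requests

url_2 = 'https://view.inews.qq.com/g2/getOnsInfo?name=disease_other'
data_2 = json.loads(requests.get(url=url_2).json()['data'])

# Keys that the overseas sections of the script depend on
for key in ['foreignList', 'globalDailyHistory', 'globalStatis']:
    if key not in data_2:
        print("'%s' is missing from disease_other; skipping that section" % key)

if 'foreignList' in data_2:
    global_data = data_2['foreignList']
    # ... write _global_data.csv and the Korea/Italy/Japan city files here ...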
