Storing scraped data in a Redis database
Hi everyone, how should I change this scraper so that the data it fetches gets stored in Redis?

import requests
import logging

index_url = 'https://spa1.scrape.center/api/movie/?limit=10&offset={}'
detail_url = 'https://spa1.scrape.center/api/movie/{}/'
header = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36"
}
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s: %(message)s')

# send a request and return the parsed JSON (or None on failure)
def scrapy_api(index_url):
    logging.info('scraping {}...'.format(index_url))
    try:
        response = requests.get(url=index_url, headers=header, verify=False)
        if response.status_code == 200:
            return response.json()
        logging.error('got status code {} while scraping {}'.format(response.status_code, index_url))
    except requests.RequestException:
        logging.error('error while scraping {}'.format(index_url), exc_info=True)

# scrape one list page
def scrape_index(page):
    page_url = index_url.format(10 * (page - 1))
    return scrapy_api(page_url)

# build the detail URL and request it
def scrape_detail(id):
    url = detail_url.format(id)
    return scrapy_api(url)

def main():
    for page in range(1, 3):
        index_data = scrape_index(page)
        for item in index_data.get('results'):
            id = item.get('id')
            detail_data = scrape_detail(id)
            logging.info('detail data {}'.format(detail_data))

if __name__ == '__main__':
    main()

In your code, you need to import the redis module and create a connection to the Redis database. Then, inside the loop, store the scraped data into Redis.
First, you need to install the Python redis client, which you can do with:

pip install redis
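Once it is installed, you can quickly check that a Redis server is actually reachable before running the scraper. This is just a minimal sketch, assuming a local server on the default port 6379 and database 0:

import redis

r = redis.Redis(host='localhost', port=6379, db=0)
r.ping()   # raises redis.exceptions.ConnectionError if no server is listening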
Here is a simple example based on your script:
import requests
import logging
import redis

index_url = 'https://spa1.scrape.center/api/movie/?limit=10&offset={}'
detail_url = 'https://spa1.scrape.center/api/movie/{}/'
header = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36"
}
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s: %(message)s')

# create the redis connection (default local instance, database 0)
r = redis.Redis(host='localhost', port=6379, db=0)

def scrapy_api(index_url):
    logging.info('scraping {}...'.format(index_url))
    try:
        response = requests.get(url=index_url, headers=header, verify=False)
        if response.status_code == 200:
            return response.json()
        logging.error('got status code {} while scraping {}'.format(response.status_code, index_url))
    except requests.RequestException:
        logging.error('error while scraping {}'.format(index_url), exc_info=True)

def scrape_index(page):
    page_url = index_url.format(10 * (page - 1))
    return scrapy_api(page_url)

def scrape_detail(id):
    url = detail_url.format(id)
    return scrapy_api(url)

def main():
    for page in range(1, 3):
        index_data = scrape_index(page)
        for item in index_data.get('results'):
            id = item.get('id')
            detail_data = scrape_detail(id)
            logging.info('detail data {}'.format(detail_data))
            # save the detail data to redis
            r.set(f'movie:{id}', str(detail_data))

if __name__ == '__main__':
    main()
Here, each movie's detail record is stored in Redis with the key movie:{id} and the detail data as the value. str() is used to convert the dict into a string, because Redis cannot store a dict object directly.
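One caveat: str(detail_data) produces Python repr text (single quotes, None, etc.) that is awkward to parse back later. A common alternative, shown here only as a sketch that reuses the r connection and the loop variables from the code above, is to serialize with json.dumps and read back with json.loads:

import json

# store: serialize the dict to a JSON string instead of using str()
r.set(f'movie:{id}', json.dumps(detail_data, ensure_ascii=False))

# read back later: redis-py returns bytes, which json.loads accepts directly
raw = r.get(f'movie:{id}')
if raw is not None:
    movie = json.loads(raw)
    print(movie.get('name'))   # e.g. print the title; field names depend on the API response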
This is a very basic example; depending on your actual needs you may want to process the data further, for example proper serialization and deserialization, as well as error handling.
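For instance, one way to harden the main loop (just a sketch, not the only way to do it) is to skip pages or details that failed to download and to catch Redis write errors:

def main():
    for page in range(1, 3):
        index_data = scrape_index(page)
        if not index_data:          # the list request failed, skip this page
            continue
        for item in index_data.get('results', []):
            id = item.get('id')
            detail_data = scrape_detail(id)
            if not detail_data:     # the detail request failed, skip this movie
                continue
            try:
                r.set(f'movie:{id}', str(detail_data))
            except redis.RedisError:
                logging.error('failed to write movie {} to redis'.format(id), exc_info=True)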
isdkz posted on 2023-6-4 16:34:
    In your code, you need to import the redis module and create a connection to the Redis database. Then, inside the loop, store the scraped data into Redis ...

Got it, thank you very much!