Posted on 2024-7-21 11:49:38
Hey, fish friend! Your code is like a soldier on a mission: it can crawl Baidu search results, but a few small bugs are making it report 0 results. Let's analyze and fix them step by step, shall we?
1. **Problem summary**:
- The result lists are never initialized.
- Incorrect string formatting.
- Baidu's page structure has changed, so the selectors no longer match any elements.
- The logic for resolving the real URL may be wrong.
2. **Fixing each item**:
First, make sure the empty lists are defined at the top of the function. Without them, the code raises a `NameError` the moment it tries to append a result.
Then fix the string formatting: the `format` calls are used slightly incorrectly.
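As a quick sanity check on the `.format()` calls used in the code below (plain Python, nothing Baidu-specific):

```python
page = 0
wait_seconds = 1.5

# {} substitutes the argument; {:.2f} formats a float to two decimal places
msg = "Crawling page {}".format(page + 1)
wait_msg = "Waiting {:.2f} seconds".format(wait_seconds)
print(msg)       # Crawling page 1
print(wait_msg)  # Waiting 1.50 seconds
```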
**Here's the fixed code:**
```python
import requests
from bs4 import BeautifulSoup
import pandas as pd
from time import sleep
import re
import random
import os

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
    "Connection": "keep-alive",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
    "Accept-Encoding": "gzip, deflate",
    "Host": "www.baidu.com",
    # Replace this with your own cookie string
    "Cookie": "PSTM=1701005830; BIDUPSID=3847DE4A87F6BBA415E599085D83A664; BAIDUID=15153A5F7368E797908F9D80E823DA32:FG=1; BD_UPN=12314753; H_WISE_SIDS_BFESS=60360; MCITY=-360%3A; BDUSS=lrTlh4NTFOV2d2aVFDTWk0V20ySGNIZDJKT09QMFA0cHFyWkJ5QkxDZkZ1Y0ZtRVFBQUFBJCQAAAAAAAAAAAEAAAAPPSBIWW9ya19oYXBweQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMUsmmbFLJpmQ2; BDUSS_BFESS=lrTlh4NTFOV2d2aVFDTWk0V20ySGNIZDJKT09QMFA0cHFyWkJ5QkxDZkZ1Y0ZtRVFBQUFBJCQAAAAAAAAAAAEAAAAPPSBIWW9ya19oYXBweQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMUsmmbFLJpmQ2; H_PS_PSSID=60449_60359_60468_60440_60491_60501; H_WISE_SIDS=60449_60359_60468_60440_60491_60501; BAIDUID_BFESS=15153A5F7368E797908F9D80E823DA32:FG=1; BA_HECTOR=8lah8l8l05a12524ak20a5a1152kgl1j9ovqd1u; ZFY=wQT92WwXTov0LZiyOtHB0sWFAGuHJDp:BwvWiwRx018g:C; delPer=0; BD_CK_SAM=1; PSINO=1; BDRCVFR=mk3SLVN4HKm; BDORZ=FFFB88E999055A3F8A630C64834BD6D0; H_PS_645EC=41244eUtCO11J%2FCYVZbKoFXLf9Q%2B2xUUYIMe%2FpRqCMbzuDBa9I%2FRH69TQ6cCalLJyL%2F7ag"
}


def get_real_url(v_url):
    """Resolve a Baidu /link?url=... redirect to the target URL."""
    r = requests.get(v_url, headers=headers, allow_redirects=False)
    if r.status_code == 302:
        # A 302 response carries the target in the Location header
        real_url = r.headers.get("location", "")
    else:
        # Fallback: some pages embed the target as  URL = '...'
        # re.findall returns a list, so take the first hit
        matches = re.findall(r"URL = '(.*?)'", r.text)
        real_url = matches[0] if matches else v_url
    print("real_url is:", real_url)
    return real_url


def search(v_keyword, v_result_file, v_max_pages):
    # Initialize the result lists up front; appending to undefined
    # names would raise a NameError
    kw_list = []
    page_list = []
    title_list = []
    href_list = []
    realurl_list = []
    desc_list = []
    site_list = []
    for page in range(v_max_pages):
        print("Crawling page {}".format(page + 1))
        wait_seconds = random.uniform(1, 2)
        print("Waiting {:.2f} seconds".format(wait_seconds))
        sleep(wait_seconds)
        url = 'https://www.baidu.com/s?wd=' + v_keyword + '&pn=' + str(page * 10)
        response = requests.get(url, headers=headers)
        response.encoding = 'utf-8'
        print("Status code: {}".format(response.status_code))
        soup = BeautifulSoup(response.text, 'html.parser')

        result_list = soup.find_all('div', class_='c-container')
        print("Crawling {}, found {} results".format(url, len(result_list)))
        for result in result_list:
            link = result.find("a")
            if link is None:
                # Skip containers that hold no link at all
                continue
            title = link.text
            print("Title:", title)
            href = link.get('href')
            realurl = get_real_url(v_url=href)
            try:
                desc = result.find(class_='c-abstract').text
            except AttributeError:
                desc = ""
            try:
                site = result.find(class_='c-showurl').text
            except AttributeError:
                site = ""

            kw_list.append(v_keyword)
            page_list.append(page + 1)
            title_list.append(title)
            href_list.append(href)
            realurl_list.append(realurl)
            desc_list.append(desc)
            site_list.append(site)

    # Build the DataFrame once, after all pages have been crawled
    df = pd.DataFrame(
        {
            'keyword': kw_list,
            'page': page_list,
            'title': title_list,
            'baidu_link': href_list,
            'real_link': realurl_list,
            'abstract': desc_list,
            'site': site_list,
        }
    )
    # Only write the header row when creating a new file
    header = not os.path.exists(v_result_file)
    df.to_csv(v_result_file, mode="a+", index=False, header=header, encoding='utf_8_sig')
    print("Data saved to: {}".format(v_result_file))


if __name__ == '__main__':
    keyword = input("Enter a keyword: ")
    max_pages = 5
    result_file = "baidu_{}_first_{}_pages.csv".format(keyword, max_pages)
    if os.path.exists(result_file):
        os.remove(result_file)
        print("Existing result file for this keyword removed")

    search(v_keyword=keyword, v_result_file=result_file, v_max_pages=max_pages)
```
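To make the `get_real_url` logic concrete: with `allow_redirects=False`, a 302 response puts the target in the `Location` header; otherwise the fallback is to scrape a `URL = '...'` assignment out of the response body. Since `re.findall` returns a list, the first element has to be taken explicitly. A minimal offline sketch of that fallback (the sample body here is made up for illustration):

```python
import re

# Hypothetical response body, in the style sometimes returned
# when the server answers without a 302 redirect
sample_body = "<script>var URL = 'https://example.com/target';</script>"

# re.findall returns a list of captured groups, so take the first hit
matches = re.findall(r"URL = '(.*?)'", sample_body)
real_url = matches[0] if matches else ""
print(real_url)  # https://example.com/target
```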
### Key fixes:
1. **List initialization**: make sure `kw_list`, `page_list`, and the other result lists are defined at the start of the function.
2. **String formatting**: corrected the `.format()` usage.
3. **Selector updates**: Baidu's HTML structure may have changed, so double-check the selectors you use.
4. **Real URL resolution**: fixed the redirect status-code check, changing `382` to `302`.
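For point 3, a quick way to verify the selectors is to run them against a saved copy of the results page and count the matches. A sketch against a made-up fragment (the class names `c-container` and `c-abstract` come from Baidu's historical markup and may need updating):

```python
from bs4 import BeautifulSoup

# Made-up fragment mimicking one Baidu result container
html = """
<div class="result c-container">
  <a href="https://www.baidu.com/link?url=abc">Example title</a>
  <div class="c-abstract">Example abstract</div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
# class_ matches any single class, so "c-container" hits
# elements whose class attribute is "result c-container" too
results = soup.find_all("div", class_="c-container")
print("matched containers:", len(results))
for r in results:
    print(r.find("a").get_text(strip=True))
    print(r.find(class_="c-abstract").get_text(strip=True))
```

If the count comes back 0 on a real page, the class names have changed and the selectors need to be adapted first.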
Give this revised code a try; it should be a clear improvement. If problems remain, feel free to keep asking. Happy crawling, and don't give Baidu a reason to think you're up to no good!
The reply above is from the FishC AI assistant. If it did not correctly answer your question, please continue to follow up.