There are a few problems in your code that need fixing. First, several variables are never initialized; second, you also need to check whether the network request actually returned a usable response before parsing it. Here is the corrected code:
```python
import requests
import bs4


def open_url(url):
    headers = {
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36'
    }
    # Check the HTTP status code; return None so the caller can tell the request failed
    try:
        res = requests.get(url, headers=headers)
        res.raise_for_status()
    except requests.exceptions.HTTPError as e:
        print("HTTP error occurred:", e)
        return None
    except Exception as e:
        print("Some other error occurred:", e)
        return None
    return res


def find_movies(res):
    soup = bs4.BeautifulSoup(res.text, 'html.parser')

    # Initialize the lists before appending to them
    movies = []
    ranks = []
    messages = []
    result = []

    # Movie titles
    targets = soup.find_all("div", class_="hd")
    for each in targets:
        movies.append(each.a.span.text)

    # Ratings
    targets = soup.find_all("span", class_="rating_num")
    for each in targets:
        ranks.append(' 评分:%s ' % each.text)

    # Additional details (director, cast, year, etc.)
    targets = soup.find_all("div", class_="bd")
    for each in targets:
        try:
            messages.append(each.p.text.strip().replace('\n', ' '))
        except AttributeError:
            continue

    # Combine the three lists into one line per movie
    length = len(movies)
    for i in range(length):
        result.append(movies[i] + ranks[i] + messages[i] + '\n')
    return result


def find_depth(res):
    # Read the last page number shown next to the "next page" link
    soup = bs4.BeautifulSoup(res.text, 'html.parser')
    depth = soup.find('span', class_='next').previous_sibling.previous_sibling.text
    return int(depth)


def main():
    host = "https://movie.douban.com/top250"
    res = open_url(host)
    if res is None:
        print("Failed to retrieve web page content.")
        return

    depth = find_depth(res)
    result = []
    for i in range(depth):
        url = host + '/?start=' + str(25 * i)
        res = open_url(url)
        if res is not None:
            result.extend(find_movies(res))

    with open("豆瓣TOP250电影.txt", "w", encoding="utf-8") as f:
        for each in result:
            f.write(each)


if __name__ == "__main__":
    main()
```
### Key points of the fix:
1. **Variable initialization**: `movies`, `ranks`, `messages`, and `result` are now initialized as empty lists inside the function before anything is appended to them.
2. **Exception handling for the network request**: `open_url` wraps the request in a `try`/`except` block and returns `None` when the request fails, instead of crashing on an unbound variable.
3. **String cleanup**: `strip()` removes the leading and trailing whitespace of each description, and `replace()` turns the embedded newlines into spaces.
4. **Response check**: before parsing, the code now verifies that the request succeeded, for example by checking whether the returned response is `None` (see the short sketch after this list).
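As a minimal, self-contained sketch of points 3 and 4 (the sample string and the standalone `print` calls are illustrative assumptions, not part of your script), this shows what the cleanup and the `None` check do on their own:

```python
# Minimal sketch: string cleanup (point 3) and response check (point 4).
# The raw string below is a made-up example; open_url is the helper defined above.

raw = "\n  肖申克的救赎\n  导演: 弗兰克·德拉邦特  \n"
clean = raw.strip().replace('\n', ' ')
print(clean)  # leading/trailing whitespace is gone and the text fits on one line

res = open_url("https://movie.douban.com/top250")
if res is None:
    # open_url returns None when the request raised an exception
    print("Request failed, nothing to parse.")
else:
    print("Fetched %d characters of HTML." % len(res.text))
```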
With these changes applied, the script should fetch each page correctly and write out the movie titles, ratings, and details as expected.