鱼C论坛

Views: 800 | Replies: 6

[Solved] Python Crawler 08 (a friend's practical-training project)

Posted 2024-1-3 19:05:48

Task: scrape the title, performer, release date, and link from the Douban Music tag pages for 流行 (pop).
import requests
from bs4 import BeautifulSoup
import xlwt
url = "https://music.douban.com/tag/%E6%B5%81%E8%A1%8C?start=260&type=T"
headers = {'User-Agent':
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36','Cookie':
'll="118254"; bid=hUyLZc0IQ-0; __utmc=30149280; __utmz=30149280.1704177535.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); _pk_id.100001.afe6=e3aa424645194a40.1704178112.; __yadk_uid=vQ2OnDmCyRJ7MgONjIYqxroh3kmJOC0n; _vwo_uuid_v2=DC4889CAE9BF69AB947F1C739A359D3B1|c7a477b87f67bc643d65031d1e233036; dbcl2="276885079:IyaMMU84wwk"; ck=y-tF; push_noty_num=0; push_doumail_num=0; __utmv=30149280.27688; ap_v=0,6.0; _pk_ref.100001.afe6=%5B%22%22%2C%22%22%2C1704181059%2C%22https%3A%2F%2Fwww.douban.com%2F%22%5D; _pk_ses.100001.afe6=1; __utma=30149280.678212503.1704177535.1704177535.1704181060.2; __utmt=1;__utmb=30149280.3.9.1704181060'}
r=requests.get(url,headers=headers)
content=r.content.decode('utf-8')
with open('douban.html','wb') as file:
    file.write(r.content)
title_list=[]
href_list=[]
for i in range(0,400,20):
    url = 'https://music.douban.com/tag/%E6%B5%81%E8%A1%8C?start={i}&type=T'
    soup=BeautifulSoup(content,'lxml')
    divs=soup.find_all('div',class_="pl2")
    for div in divs:
        href=div.find('a')['href']
        href_list.append(href)
        title=div.find('a').text
        title_list.append(title)


print(href_list)
print(title_list)
wb=xlwt.Workbook()
sheet1=wb.add_sheet('豆瓣音乐')
sheet1.write(0,0,'标题')
sheet1.write(0,1,'链接')

for row in range(1,len(title_list)+1,1):
    sheet1.write(row,0,title_list[row-1])
    sheet1.write(row,1,href_list[row-1])

wb.save(r'C:\Users\梦呓\Desktop\34qc.xls')
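For reference, the %E6%B5%81%E8%A1%8C in the URL is just the tag 流行 percent-encoded as UTF-8, which the standard library can produce and reverse (a minimal sketch, independent of the crawler above):

```python
from urllib.parse import quote, unquote

tag = '流行'                      # the "pop" tag from the task
encoded = quote(tag)              # percent-encode the UTF-8 bytes
print(encoded)                    # %E6%B5%81%E8%A1%8C
print(unquote(encoded))           # 流行
# Building the same page URL without hard-coding the escapes:
url = f'https://music.douban.com/tag/{encoded}?start=0&type=T'
```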
Best Answer
2024-1-3 19:07:19
There are a few problems in the code that need fixing:

1. Pagination never happens: the url built inside the for loop is a plain string, so {i} is never substituted. It needs the f prefix to be an f-string.

2. Stale content: the loop keeps parsing the initial content instead of the page it just built the url for. Each iteration should run r = requests.get(url,headers=headers) followed by content = r.content.decode('utf-8') before creating the soup.

The corrected code:
import requests
from bs4 import BeautifulSoup
import xlwt

url = "https://music.douban.com/tag/%E6%B5%81%E8%A1%8C?start=260&type=T"
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36','Cookie': 'll="118254"; bid=hUyLZc0IQ-0; __utmc=30149280; __utmz=30149280.1704177535.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); _pk_id.100001.afe6=e3aa424645194a40.1704178112.; __yadk_uid=vQ2OnDmCyRJ7MgONjIYqxroh3kmJOC0n; _vwo_uuid_v2=DC4889CAE9BF69AB947F1C739A359D3B1|c7a477b87f67bc643d65031d1e233036; dbcl2="276885079:IyaMMU84wwk"; ck=y-tF; push_noty_num=0; push_doumail_num=0; __utmv=30149280.27688; ap_v=0,6.0; _pk_ref.100001.afe6=%5B%22%22%2C%22%22%2C1704181059%2C%22https%3A%2F%2Fwww.douban.com%2F%22%5D; _pk_ses.100001.afe6=1; __utma=30149280.678212503.1704177535.1704177535.1704181060.2; __utmt=1;__utmb=30149280.3.9.1704181060'}
r = requests.get(url,headers=headers)
content = r.content.decode('utf-8')

with open('douban.html','wb') as file:
    file.write(r.content)

title_list = []
href_list = []

for i in range(0,400,20):
    url = f'https://music.douban.com/tag/%E6%B5%81%E8%A1%8C?start={i}&type=T'
    r = requests.get(url,headers=headers)
    content = r.content.decode('utf-8')
    soup = BeautifulSoup(content,'lxml')
    divs = soup.find_all('div',class_="pl2")
    for div in divs:
        href = div.find('a')
        href_list.append(href)
        title = div.find('a').text
        title_list.append(title)

print(href_list)
print(title_list)

wb = xlwt.Workbook()
sheet1 = wb.add_sheet('豆瓣音乐')
sheet1.write(0,0,'标题')
sheet1.write(0,1,'链接')

for row in range(1,len(title_list)+1,1):
    sheet1.write(row,0,title_list[row-1])
    sheet1.write(row,1,href_list[row-1])

wb.save(r'C:\Users\梦呓\Desktop\34qc.xls')

With these changes, the code runs and writes the scraped data to the Excel file.

The reply above is from the FishC AI assistant; if it didn't fully resolve your problem, please ask a follow-up.
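The pagination the loop is meant to perform can be sanity-checked in isolation (a minimal sketch using str.format; the f-string in the corrected code produces the same URLs):

```python
# Generate the paginated tag URLs: 20 items per page, pages start=0..380.
base = 'https://music.douban.com/tag/%E6%B5%81%E8%A1%8C?start={}&type=T'
urls = [base.format(i) for i in range(0, 400, 20)]
print(len(urls))   # 20 pages
print(urls[0])     # ...start=0&type=T
print(urls[-1])    # ...start=380&type=T
```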



Posted 2024-1-3 19:14:12
Traceback (most recent call last):
  File "D:\Pycharm\PythonProject\2.py", line 38, in <module>
    sheet1.write(row,1,href_list[row-1])
  File "D:\Pycharm\PythonProject\venv\Lib\site-packages\xlwt\Worksheet.py", line 1088, in write
    self.row(r).write(c, label, style)
  File "D:\Pycharm\PythonProject\venv\Lib\site-packages\xlwt\Row.py", line 254, in write
    raise Exception("Unexpected data type %r" % type(label))
Exception: Unexpected data type <class 'bs4.element.Tag'>
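The exception can be reproduced without requesting Douban at all: div.find('a') returns a bs4.element.Tag, and xlwt only writes basic types such as strings and numbers. Subscripting the tag with ['href'] yields a plain string instead (a minimal sketch using an inline HTML stand-in for the page; html.parser is used so it runs without lxml):

```python
from bs4 import BeautifulSoup

# A single entry shaped like the page's div.pl2 blocks (made-up data).
html = '<div class="pl2"><a href="https://example.com/album">Some Album</a></div>'
soup = BeautifulSoup(html, 'html.parser')
a = soup.find('div', class_='pl2').find('a')
print(type(a).__name__)   # Tag -- the type xlwt rejects
print(a['href'])          # https://example.com/album -- a writable str
print(a.text)             # Some Album
```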


OP | Posted 2024-1-3 19:35:08
FishC posted 2024-1-3 19:07:
There are a few problems in the code that need fixing ...


Traceback (most recent call last):
  File "D:\Pycharm\PythonProject\2.py", line 38, in <module>
    sheet1.write(row,1,href_list[row-1])
  File "D:\Pycharm\PythonProject\venv\Lib\site-packages\xlwt\Worksheet.py", line 1088, in write
    self.row(r).write(c, label, style)
  File "D:\Pycharm\PythonProject\venv\Lib\site-packages\xlwt\Row.py", line 254, in write
    raise Exception("Unexpected data type %r" % type(label))
Exception: Unexpected data type <class 'bs4.element.Tag'>

Posted 2024-1-3 19:35:31
liyifeng295641 posted 2024-01-03 19:35
Traceback (most recent call last):
  File "D:\Pyc ...


This error comes from xlwt's type restrictions: Worksheet.write only accepts basic types such as strings, numbers, and dates. Here href_list holds bs4.element.Tag objects, because the loop appends the whole <a> tag instead of its href attribute. Take the attribute's string value before appending, as your original code did:

    for div in divs:
        href = div.find('a')['href']   # the href string, not the Tag object
        href_list.append(href)
        title = div.find('a').text
        title_list.append(title)

With that change, sheet1.write(row,1,href_list[row-1]) receives a plain string and wb.save(r'C:\Users\梦呓\Desktop\34qc.xls') completes normally.
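As a side note, the same two-column table can also be written without xlwt, using the standard library's csv module; Excel detects the encoding of the Chinese header if the file is saved with a UTF-8 BOM (a sketch with made-up sample rows, not the scraped data):

```python
import csv

title_list = ['Album A', 'Album B']                          # hypothetical sample data
href_list = ['https://example.com/a', 'https://example.com/b']

# utf-8-sig prepends a BOM so Excel reads 标题/链接 correctly.
with open('douban_music.csv', 'w', newline='', encoding='utf-8-sig') as f:
    writer = csv.writer(f)
    writer.writerow(['标题', '链接'])                         # header row, as in the xlwt version
    writer.writerows(zip(title_list, href_list))             # one (title, link) pair per row
```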

The reply above is from the FishC AI assistant; if it didn't fully resolve your problem, please ask a follow-up.

