鱼C论坛 (FishC Forum)

Views: 1363 | Replies: 4

[Solved] Why does writing to a file keep going wrong?

Posted on 2022-6-11 20:13:30
Source code:
import requests
from bs4 import BeautifulSoup
from lxml import etree
import csv
i = open("introduction.csv",mode='w',encoding='utf-8',newline='')
csvwriter_introduction = csv.writer(i)

def download_one_page(url):
    # fetch the page source
    resp = requests.get(url)
    resp.encoding = 'utf-8'  # fix mojibake in the response text
    html = etree.HTML(resp.text)
    cla = html.xpath('/html/body/div[5]/div/div')[0]  # [0] needed because xpath() returns a list
    names = cla.xpath('./div[@class = "sea_a23 clearfix"]')
    contents = cla.xpath('./div[@class = "sea_a23 clearfix pt25"]')
    # extract each title and content
    for title in names:
        txt1 = title.xpath('./h2/a/text()')
        #csvwriter_title.writerow(txt1)
    for content in contents:
        feature = content.xpath('./div/h3/text()')
        introduction = content.xpath('./div/p//text()')
        # simple cleanup: remove \n\t, \n, spaces, 【】 brackets, \xa0 and ›
        intro = (item.replace("\n\t","").replace(", ","").replace("›","").replace("\n","").replace(" ","").replace("【厂家】","厂家").replace("\xa0","").replace("【产品分类】","产品分类") for item in introduction)
        introduction = (x.strip() for x in intro if x.strip()!='')
        # write the data to the file
        #csvwriter_introduction.writerows(feature)
        introduction = ''.join(introduction)
        csvwriter_introduction.writerows(introduction)
        #csvwriter_picture.writerow(picture)
        #print(feature)
        #print(introduction)
        #print(''.join(introduction))


if __name__ == '__main__':
    download_one_page('http://www.c-denkei.cn/index.php?d=home&c=goods&m=search&s=%E7%94%B5%E6%BA%90&c1=0&c2=0&c3=0&page=')
Printing with print() looks completely fine; it's only when I write to the file that things go wrong.
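The symptom can be reproduced in isolation. A minimal sketch (using an in-memory io.StringIO in place of the real CSV file): csv.writer's writerows() treats a string as an iterable of rows, and iterating a string yields single characters.

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf)

# writerows() iterates its argument and writes one row per element;
# iterating a string yields single characters, so each character
# becomes its own one-field row.
writer.writerows("abc")
print(buf.getvalue())  # 'a\r\nb\r\nc\r\n' -> one character per line
```

This is exactly the one-character-per-line output the question describes.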
Best answer
2022-6-12 22:41:12
writerows writes each element of an iterable as a separate row, and each element of a string is a single character,

so you ended up with one character per line. Here is your code with the fix:
import requests
from bs4 import BeautifulSoup
from lxml import etree
import csv
i = open("introduction.csv",mode='w',encoding='utf-8',newline='')
csvwriter_introduction = csv.writer(i)

def download_one_page(url):
    # fetch the page source
    resp = requests.get(url)
    resp.encoding = 'utf-8'  # fix mojibake in the response text
    html = etree.HTML(resp.text)
    cla = html.xpath('/html/body/div[5]/div/div')[0]  # [0] needed because xpath() returns a list
    names = cla.xpath('./div[@class = "sea_a23 clearfix"]')
    contents = cla.xpath('./div[@class = "sea_a23 clearfix pt25"]')
    # extract each title and content
    for title in names:
        txt1 = title.xpath('./h2/a/text()')
        #csvwriter_title.writerow(txt1)
    for content in contents:
        feature = content.xpath('./div/h3/text()')
        introduction = content.xpath('./div/p//text()')
        # simple cleanup: remove \n\t, \n, spaces, 【】 brackets, \xa0 and ›
        intro = (item.replace("\n\t","").replace(", ","").replace("›","").replace("\n","").replace(" ","").replace("【厂家】","厂家").replace("\xa0","").replace("【产品分类】","产品分类") for item in introduction)
        introduction = (x.strip() for x in intro if x.strip()!='')
        # write the data to the file
        #csvwriter_introduction.writerows(feature)
        introduction = ''.join(introduction)
        csvwriter_introduction.writerow([introduction])              # changed here
        #csvwriter_picture.writerow(picture)
        #print(feature)
        #print(introduction)
        #print(''.join(introduction))


if __name__ == '__main__':
    download_one_page('http://www.c-denkei.cn/index.php?d=home&c=goods&m=search&s=%E7%94%B5%E6%BA%90&c1=0&c2=0&c3=0&page=')
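The fix can be checked in isolation as well. A sketch (again with io.StringIO standing in for the real file): wrapping the joined string in a list makes writerow() treat it as a single one-field row.

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf)

record = "feature and introduction text"
# writerow() takes one row (a sequence of fields); a one-element
# list produces a single field written on a single line.
writer.writerow([record])
print(buf.getvalue())  # 'feature and introduction text\r\n'
```

One record now occupies exactly one line, instead of one character per line.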
Posted on 2022-6-11 23:38:22
Would this work?
import requests
from bs4 import BeautifulSoup
from lxml import etree
import csv
i = open("introduction.csv",mode='w',encoding='utf-8',newline='')
csvwriter_introduction = csv.writer(i)

def download_one_page(url):
    # fetch the page source
    resp = requests.get(url)
    resp.encoding = 'utf-8'  # fix mojibake in the response text
    html = etree.HTML(resp.text)
    cla = html.xpath('/html/body/div[5]/div/div')[0]  # [0] needed because xpath() returns a list
    names = cla.xpath('./div[@class = "sea_a23 clearfix"]')
    contents = cla.xpath('./div[@class = "sea_a23 clearfix pt25"]')
    # extract each title and content
    for title in names:
        txt1 = title.xpath('./h2/a/text()')
        #csvwriter_title.writerow(txt1)
    for content in contents:
        feature = content.xpath('./div/h3/text()')
        introduction = content.xpath('./div/p//text()')
        # simple cleanup: remove \n\t, \n, spaces, 【】 brackets, \xa0 and ›
        intro = (item.replace("\n\t","").replace(",","").replace("›","").replace("\n","").replace(" ","").replace("【厂家】","厂家").replace("\xa0","").replace("【产品分类】","产品分类") for item in introduction)
        introduction = (x.strip() for x in intro if x.strip()!='')
        # write the data to the file
        #csvwriter_introduction.writerows(feature)
        introduction = ''.join(introduction)
        i.write(introduction)  # switched to the file object's plain write()
        #csvwriter_picture.writerow(picture)
        #print(feature)
        #print(introduction)
        #print(''.join(introduction))


if __name__ == '__main__':
    download_one_page('http://www.c-denkei.cn/index.php?d=home&c=goods&m=search&s=%E7%94%B5%E6%BA%90&c1=0&c2=0&c3=0&page=')
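One caveat with plain file write(): it outputs exactly the string it is given and never appends a newline, so successive records run together on one line unless '\n' is added explicitly. A minimal sketch (io.StringIO standing in for the opened file):

```python
import io

f = io.StringIO()  # stands in for the opened output file
for record in ["record one", "record two"]:
    f.write(record + "\n")  # explicit newline: one record per line
print(f.getvalue())  # 'record one\nrecord two\n'
```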

OP | Posted on 2022-6-12 18:41:26

That puts everything on a single line; I want each record on its own line.


OP | Posted on 2022-6-13 19:17:45
isdkz replied on 2022-6-12 22:41:
writerows writes each element of an iterable as a separate row, and each element of a string is a single character,

so you ended up with one ...

Thanks a lot!
