Python crawler
Last edited by c皮皮o on 2021-7-17 20:36

list1 = ["<p>滚滚长江东逝水,浪花淘尽英雄。是非成败转头空。</p>", "<p>青山依旧在,几度夕阳红。</p>"]
list1 is of type <class 'bs4.element.ResultSet'>.
I tried .text, .get_text(), and .string, but each raised an error saying that a tag collection has no such method.
Could anyone tell me how to print this type as plain strings, i.e. strip the <p> tags and output just the text?
Here is my code:
import urllib.request
from bs4 import BeautifulSoup
url1="http://www.newxue.com/gkmz/sgyy/"
objects=urllib.request.Request(url1)
objects.add_header("User-Agent","Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36 Edg/91.0.864.67")
response=urllib.request.urlopen(objects)
html_content=response.read().decode("gbk")
soup1=BeautifulSoup(html_content,"lxml")
file=soup1.select(".xslttext>ul>li")
del file[0]  # assumption: drop the first <li> (a section header); a bare `del file` deletes the variable itself and breaks the loop below
for each in file:
    each_chapter = each.string
    url2 = each.a["href"]
    each_content = urllib.request.Request(url2)
    each_content.add_header("User-Agent",
                            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36 Edg/91.0.864.67")
    each_content1 = urllib.request.urlopen(each_content)
    each_content1 = each_content1.read().decode("gbk")
    soup2 = BeautifulSoup(each_content1, "lxml")
    p = soup2.select("#dashu_text>p")
    print(type(p))
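For reference, select() returns a ResultSet, which subclasses list; that is why the tag accessors fail on the collection itself but work on its elements. A minimal self-contained sketch, using an inline HTML fragment rather than the thread's actual page:

```python
from bs4 import BeautifulSoup

# select() returns a ResultSet, which is a list subclass: the tag
# accessors (.text, .get_text(), .string) live on the elements,
# not on the collection itself.
soup = BeautifulSoup("<p>滚滚长江东逝水</p><p>青山依旧在</p>", "html.parser")
p = soup.select("p")

print(isinstance(p, list))      # True
print([tag.text for tag in p])  # ['滚滚长江东逝水', '青山依旧在']
```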
Last edited by qq1151985918 on 2021-7-17 20:59
For a case like this, just use string replacement; it's the least effort.
list1 = ["<p>滚滚长江东逝水,浪花淘尽英雄。是非成败转头空。</p>", "<p>青山依旧在,几度夕阳红。</p>"]
for each in list1:
    print(each.replace("<p>", "").replace("</p>", ""))
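The literal replace() works here, but it breaks as soon as a tag carries attributes (e.g. <p class="x">). A slightly more general sketch, using a stdlib regex to strip any tag:

```python
import re

list1 = ["<p>滚滚长江东逝水,浪花淘尽英雄。是非成败转头空。</p>",
         "<p>青山依旧在,几度夕阳红。</p>"]

for each in list1:
    # Remove any tag, with or without attributes, not just a literal <p>/</p>
    print(re.sub(r"<[^>]+>", "", each))
```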
You hadn't posted the code at first, right? You're applying the methods to the wrong object: text, get_text(), and string are not called on the list p itself, but on the elements inside it.
See these examples:
for each in p:
    print(each.text)

for each in p:
    print(each.get_text())

for each in p:
    print(each.string)

Iterate over list1 to get each <p> tag; p.text then retrieves the tag's text content.

import urllib.request
from bs4 import BeautifulSoup
url1="http://www.newxue.com/gkmz/sgyy/"
objects=urllib.request.Request(url1)
objects.add_header("User-Agent","Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36 Edg/91.0.864.67")
response=urllib.request.urlopen(objects)
html_content=response.read().decode("gbk")
soup1=BeautifulSoup(html_content,"lxml")
file=soup1.select(".xslttext>ul>li")
del file[0]  # assumption: drop the first <li> (a section header); a bare `del file` deletes the variable itself
for each in file:
    each_chapter = each.string
    url2 = each.a["href"]
    each_content = urllib.request.Request(url2)
    each_content.add_header("User-Agent",
                            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36 Edg/91.0.864.67")
    each_content1 = urllib.request.urlopen(each_content)
    each_content1 = each_content1.read().decode("gbk")
    soup2 = BeautifulSoup(each_content1, "lxml")
    p = soup2.select("#dashu_text>p")
    text = ""  # renamed from `str` to avoid shadowing the built-in
    for each_p in p:
        # print(each_p.text)
        text += each_p.text
        text += '\n'  # make the output easier to read
    print(text)
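The difference between the three accessors matters once a tag has nested children. A small illustration on a standalone fragment (not from the thread's actual page):

```python
from bs4 import BeautifulSoup

# A <p> with a nested <b> child: .text and .get_text() concatenate all
# descendant text, while .string is None because the tag has more than
# one child node.
soup = BeautifulSoup("<p>青山<b>依旧</b>在</p>", "html.parser")
tag = soup.p

print(tag.text)        # 青山依旧在
print(tag.get_text())  # 青山依旧在
print(tag.string)      # None
```

This is why .string sometimes returns None even on a single tag, while .text keeps working.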
King丨小义 posted on 2021-7-17 20:50:
"Iterate over list1 to get each <p> tag; p.text then retrieves the tag's text content."

Awesome, I can't believe I didn't think of that.