How do I extract the URLs I need with bs4?
I want to extract the content of the href attributes below. The list below is stored in targets. I tried for each in targets: each.get('hrref') and it didn't work.
[<td class="classicLook0"><a href="detail.jsp?id=339884">
A0096
</a></td>, <td class="classicLook0">问答题</td>, <td class="classicLook0">-</td>, <td class="classicLook0">-</td>, <td class="classicLook0">-</td>, <td class="classicLook0"><a href="detail.jsp?id=339876">
A0088
</a></td>, <td class="classicLook0">问答题</td>, <td class="classicLook0">-</td>, <td class="classicLook0">-</td>, <td class="classicLook0">-</td>, <td class="classicLook0"><a href="detail.jsp?id=339868">
A0080
</a></td>, <td class="classicLook0">问答题</td>, <td class="classicLook0">-</td>, <td class="classicLook0">-</td>, <td class="classicLook0">-</td>, <td class="classicLook0"><a href="detail.jsp?id=339860">
A0072
</a></td>, <td class="classicLook0">问答题</td>, <td class="classicLook0">-</td>, <td class="classicLook0">-</td>, <td class="classicLook0">-</td>]
[<td class="classicLook1"><a href="detail.jsp?id=339883">
A0095
</a></td>, <td class="classicLook1">问答题</td>,, ]
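For context, a minimal sketch of why that attempt comes back empty, run on a fragment trimmed from the output pasted above: 'hrref' misspells the attribute name, and even with the correct spelling the href lives on the nested <a> tag, not on the <td> elements that find_all returned.
import bs4

# Fragment trimmed from the output pasted above.
html = '''<td class="classicLook0"><a href="detail.jsp?id=339884">A0096</a></td>
<td class="classicLook0">问答题</td>'''
soup = bs4.BeautifulSoup(html, 'html.parser')
for td in soup.find_all('td'):
    print(td.get('hrref'))       # None: misspelled attribute name
    print(td.get('href'))        # still None: href sits on the <a>, not the <td>
    if td.a is not None:         # the second cell contains no <a> at all
        print(td.a.get('href'))  # detail.jsp?id=339884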
Post your code, if that's convenient.
Data extraction - Beautiful Soup
Twilight6 posted on 2020-6-9 09:46:
Post your code, if that's convenient.
Data extraction - Beautiful Soup
import requests
import bs4
target_url = 'http://wlkc.jluzh.com/meol/common/question/questionbank/student/list.jsp?tagbug=client&cateId=27948&perm=3840&status=0&strStyle=new03'
tar_res = session.get(target_url, headers=headers)
soup = bs4.BeautifulSoup(tar_res.text, 'html.parser')
for i in range(8):
    targets = soup.find_all("td", class_="classicLook"+str(i))
    #print(targets)
for each in targets:
    print(each.a)
PYthofreeze posted on 2020-6-9 09:59:
import requests
import bs4
...That code isn't complete, is it?
Also, the way you've written the loop, doesn't that mean you only keep the data from the i = 7 round?
for i in range(8):
    targets = soup.find_all("td", class_="classicLook"+str(i))
    #print(targets)
for each in targets:
    print(each.a)
It should be changed to:
targets = []
for i in range(8):
    targets += soup.find_all("td", class_="classicLook"+str(i))
    #print(targets)
for each in targets:
    print(each.a)
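As a side note, the eight find_all calls can be collapsed into one, since class_ also accepts a compiled regular expression. A sketch that reuses the soup object from the code above and assumes (as confirmed later in the thread) that only classicLook0 through classicLook7 occur:
import re

# One find_all pass instead of eight: class_ accepts a compiled regex.
# Reuses the soup object built in the code above.
targets = soup.find_all("td", class_=re.compile(r"^classicLook[0-7]$"))
for each in targets:
    print(each.a)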
Twilight6 posted on 2020-6-9 10:19:
...That code isn't complete, is it?
Also, the way you've written the loop, doesn't that mean you only keep the data from the i = 7 round?
import requests
import bs4
proxies = {"http": "127.0.0.1:1080", "https": "127.0.0.1:1080"}
headers={'User-Agent':'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.25 Safari/537.36 Core/1.70.3756.400 QQBrowser/10.5.4039.400',
'Cookie': 'JSESSIONID=B43CB1A6DA8EB51A0192BEED03A959D8.TM4; DWRSESSIONID=GdCgKTLhs$Pz*W*VaM0i2Fsyman; uudid=cms5e5b099a-6147-27a0-a5dd-4be724a0b523; SF_cookie_2=67313298; radius=180.91.162.174'}
session = requests.Session()
res = session.get('http://wlkc.jluzh.com/meol/personal.do?menuId=0',headers=headers)
print(res)
target_url = 'http://wlkc.jluzh.com/meol/common/question/questionbank/student/list.jsp?tagbug=client&cateId=27948&perm=3840&status=0&strStyle=new03'
tar_res = session.get(target_url, headers=headers)
soup = bs4.BeautifulSoup(tar_res.text, 'html.parser')
for i in range(8):
    targets = soup.find_all("td", class_="classicLook"+str(i))
    #print(targets)
for each in targets:
    print(each.a)
Twilight6 posted on 2020-6-9 10:19:
...That code isn't complete, is it?
Also, the way you've written the loop, doesn't that mean you only keep the data from the i = 7 round?
The loop only goes up to 7 because I checked the page source, and only classicLook0 through classicLook7 appear.
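Rather than eyeballing the page source, the set of classes that actually occur can also be collected programmatically. A sketch, reusing the soup object from the code above:
# Collect every classicLook* class present in the page to confirm
# that only classicLook0 through classicLook7 appear.
seen = {c for td in soup.find_all("td")
        for c in td.get("class", [])
        if c.startswith("classicLook")}
print(sorted(seen))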
Twilight6 posted on 2020-6-9 10:19:
...That code isn't complete, is it?
Also, the way you've written the loop, doesn't that mean you only keep the data from the i = 7 round?
What I need to extract are the URLs in the href attributes; I don't know how to get them.
PYthofreeze posted on 2020-6-9 10:45:
What I need to extract are the URLs in the href attributes; I don't know how to get them.
import requests
import bs4
proxies = {"http": "127.0.0.1:1080", "https": "127.0.0.1:1080"}
headers={'User-Agent':'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.25 Safari/537.36 Core/1.70.3756.400 QQBrowser/10.5.4039.400',
'Cookie': 'JSESSIONID=B43CB1A6DA8EB51A0192BEED03A959D8.TM4; DWRSESSIONID=GdCgKTLhs$Pz*W*VaM0i2Fsyman; uudid=cms5e5b099a-6147-27a0-a5dd-4be724a0b523; SF_cookie_2=67313298; radius=180.91.162.174'}
session = requests.Session()
res = session.get('http://wlkc.jluzh.com/meol/personal.do?menuId=0',headers=headers)
print(res)
target_url = 'http://wlkc.jluzh.com/meol/common/question/questionbank/student/list.jsp?tagbug=client&cateId=27948&perm=3840&status=0&strStyle=new03'
tar_res = session.get(target_url, headers=headers)
soup = bs4.BeautifulSoup(tar_res.text, 'html.parser')
targets = []
for i in range(8):
    targets += soup.find_all("td", class_="classicLook"+str(i))
for each in targets:
    if each.a is None:
        continue
    print(each.a.get('href'))
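Worth noting: the printed hrefs are relative URLs (detail.jsp?id=...). To fetch them with session.get, they must first be resolved against the page they came from, for example with urllib.parse.urljoin. A sketch reusing target_url and targets from the code above:
from urllib.parse import urljoin

for each in targets:
    if each.a is None:
        continue
    # Resolve the relative 'detail.jsp?id=...' against the list page's URL.
    print(urljoin(target_url, each.a.get('href')))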
Twilight6 posted on 2020-6-9 10:49:
Why are there so many None values? There shouldn't be; this page should have around thirty hrefs.
PYthofreeze posted on 2020-6-9 10:53:
Why are there so many None values? There shouldn't be; this page should have around thirty hrefs.
I'm not sure about that; some nodes may be special. Inspect the page elements yourself and look for a pattern.
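For what it's worth, the output pasted in the first post already shows the pattern: each question row spans five classicLook cells and only the first contains an <a>, so about four out of five td elements have each.a equal to None. Searching for the anchors directly sidesteps this, though it may also catch links outside the table. A sketch, reusing the soup object from above:
# Keep only anchors that actually carry an href, skipping the
# link-less '-' and '问答题' cells entirely.
for a in soup.find_all("a", href=True):
    print(a.get("href"))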
Twilight6 posted on 2020-6-9 10:55:
I'm not sure about that; some nodes may be special. Inspect the page elements yourself and look for a pattern.
Mm-hm, thanks!
PYthofreeze posted on 2020-6-9 11:02:
Mm-hm, thanks!
No problem~ if this helped, remember to mark it as the best answer~
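For reference, the thread's fixes combined into one script. This is a sketch, not a tested implementation: the Cookie value is the session-specific one posted above and will have expired, and both URLs are taken from the posts as-is.
import re
import requests
import bs4
from urllib.parse import urljoin

# Session-specific headers copied from the thread; the Cookie will have expired.
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.25 Safari/537.36 Core/1.70.3756.400 QQBrowser/10.5.4039.400',
           'Cookie': 'JSESSIONID=B43CB1A6DA8EB51A0192BEED03A959D8.TM4; DWRSESSIONID=GdCgKTLhs$Pz*W*VaM0i2Fsyman; uudid=cms5e5b099a-6147-27a0-a5dd-4be724a0b523; SF_cookie_2=67313298; radius=180.91.162.174'}

session = requests.Session()
# Warm up the session (the personal.do request from the thread).
session.get('http://wlkc.jluzh.com/meol/personal.do?menuId=0', headers=headers)

target_url = 'http://wlkc.jluzh.com/meol/common/question/questionbank/student/list.jsp?tagbug=client&cateId=27948&perm=3840&status=0&strStyle=new03'
tar_res = session.get(target_url, headers=headers)
soup = bs4.BeautifulSoup(tar_res.text, 'html.parser')

# One pass over classicLook0-7, keeping only cells that contain a link,
# and resolving each relative href to an absolute URL.
for td in soup.find_all('td', class_=re.compile(r'^classicLook[0-7]$')):
    if td.a is not None:
        print(urljoin(target_url, td.a['href']))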