Crawling
Could someone take a look: in this situation, is it just impossible to crawl the resource? Right, resources you do not have permission to access naturally cannot be crawled. isdkz posted on 2023-3-4 17:42
OK, many thanks! isdkz posted on 2023-3-4 17:42
Right, resources you do not have permission to access naturally cannot be crawled.
Could you take a look at this code and tell me where it goes wrong? Why does it only crawl ten items and then fetch nothing more? Thanks a lot!

import requests
import re
from time import sleep
from lxml import html

etree = html.etree
header = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36"
}

# Parse the titles and the detail-page URLs
url_01 = 'https://www.ting456.com/book/d45049.html'
response = requests.get(url=url_01, headers=header).text
tree = etree.HTML(response)
li_list = tree.xpath('//*[@id="xima"]/div/li')
for li in li_list:
    title = li.xpath('./a/text()')[0]
    href = 'https://www.ting456.com' + li.xpath('./a/@href')[0]
    print(title, href)

    # Parse the dynamic page parameter out of the href
    hre = li.xpath('./a/@href')[0]
    pattern = re.compile(r'(/play/\d+?-\d-)(\d)(\.html)')
    for i in pattern.findall(hre):
        don_02 = f"{i[0]}{int(i[1]) + 1}{i[2]}"

    # Parse the dynamic parameter from the player script
    url_02 = href
    response = requests.get(url=url_02, headers=header).text
    tree = etree.HTML(response)
    page = tree.xpath('//*[@id="player"]/script/text()')[0]
    ex = 'now="(.+?)"'
    don_01 = re.findall(ex, page, re.S)[0]

    # Resolve the audio URL
    url_03 = 'https://www.ting456.com/js/player/play.php'
    parms = {
        "url": don_01,
        "from": "xima",
        "s": "undefined",
        "x": don_02
    }
    # sleep(1)
    result = requests.get(url=url_03, headers=header, params=parms).text
    ex = 'mp3:"(.*?)"'
    url_04 = re.findall(ex, result, re.S)[0]

    # Request the audio data and save it
    con = requests.get(url=url_04, headers=header).content
    pattern = re.compile(r'[\\/:*?"<>|]')
    title = pattern.sub("", title)
    name = title + '.mp3'
    with open(name, 'wb') as f:
        f.write(con)
    print(title, "saved")

哈岁NB posted on 2023-3-4 18:03
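An aside worth checking in the snippet above, which may or may not be related to the ten-item cutoff: the episode-number group in the pattern is a single \d, so it stops matching once the episode number in the URL reaches two digits. A minimal sketch with made-up URLs (only the pattern is taken from the code above):

```python
import re

# Single-digit episode group, as in the snippet above: matches episodes 0-9 only.
single = re.compile(r'(/play/\d+?-\d-)(\d)(\.html)')
# One-or-more-digit group: matches any episode number.
multi = re.compile(r'(/play/\d+?-\d-)(\d+)(\.html)')

urls = [f"/play/45049-0-{n}.html" for n in range(12)]  # episodes 0..11

matched_single = [u for u in urls if single.findall(u)]
matched_multi = [u for u in urls if multi.findall(u)]
print(len(matched_single))  # 10 -- two-digit episodes never match
print(len(matched_multi))   # 12
```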
Could you take a look at this code and tell me where it goes wrong? Why does it only crawl ten items and then fetch nothing more? Thanks a lot!
Did you get any error message? isdkz posted on 2023-3-4 20:33
Did you get any error message?
No error, the program just ends. isdkz posted on 2023-3-4 20:33
Did you get any error message?
That's right: the ones at the end with no permission are not crawled, but episodes eleven, twelve and so on can still be crawled. I don't know why my program stops at ten. url_04 only outputs 10 results, and I can't figure out why. isdkz posted on 2023-3-4 20:33
Did you get any error message?
Found the cause: line 31 had been written inside that for loop. 哈岁NB posted on 2023-3-4 20:57
Found the cause: line 31 had been written inside that for loop.
OK. I didn't get any results on my end, not sure whether it's just a slow connection or something else. isdkz posted on 2023-3-4 20:59
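For readers following along, the fix described above is purely an indentation change: a line placed inside an inner for loop runs once per match, and never runs for items with no matches at all. A toy sketch with made-up data (not the site code from this thread):

```python
# Toy data standing in for pages and their regex matches; page 'a' has none.
pages = ["a", "b", "c"]
matches_per_page = {"a": [], "b": [1, 2], "c": [3]}

fetched_inside = []  # simulated request placed inside the inner loop
fetched_after = []   # simulated request placed after the inner loop
for p in pages:
    for m in matches_per_page[p]:
        fetched_inside.append(p)  # runs once per match, never for 'a'
    fetched_after.append(p)       # runs exactly once per page
print(fetched_inside)  # ['b', 'b', 'c']
print(fetched_after)   # ['a', 'b', 'c']
```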
OK. I didn't get any results on my end, not sure whether it's just a slow connection or something else.
OK. Could you tell me what causes this error? From what I looked up, it seems to be from requesting too frequently:
Traceback (most recent call last):
  File "D:\技能\python\venv\lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "D:\技能\python\venv\lib\site-packages\urllib3\connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "D:\技能\python\venv\lib\site-packages\urllib3\connectionpool.py", line 444, in _make_request
    httplib_response = conn.getresponse()
  File "D:\Program Files\python\lib\http\client.py", line 1322, in getresponse
    response.begin()
  File "D:\Program Files\python\lib\http\client.py", line 303, in begin
    version, status, reason = self._read_status()
  File "D:\Program Files\python\lib\http\client.py", line 272, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\技能\python\venv\lib\site-packages\requests\adapters.py", line 489, in send
    resp = conn.urlopen(
  File "D:\技能\python\venv\lib\site-packages\urllib3\connectionpool.py", line 787, in urlopen
    retries = retries.increment(
  File "D:\技能\python\venv\lib\site-packages\urllib3\util\retry.py", line 550, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "D:\技能\python\venv\lib\site-packages\urllib3\packages\six.py", line 769, in reraise
    raise value.with_traceback(tb)
  File "D:\技能\python\venv\lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "D:\技能\python\venv\lib\site-packages\urllib3\connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "D:\技能\python\venv\lib\site-packages\urllib3\connectionpool.py", line 444, in _make_request
    httplib_response = conn.getresponse()
  File "D:\Program Files\python\lib\http\client.py", line 1322, in getresponse
    response.begin()
  File "D:\Program Files\python\lib\http\client.py", line 303, in begin
    version, status, reason = self._read_status()
  File "D:\Program Files\python\lib\http\client.py", line 272, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\技能\python\shengy\tunshi.py", line 34, in <module>
    response = requests.get(url=url_02, headers=header).text
  File "D:\技能\python\venv\lib\site-packages\requests\api.py", line 73, in get
    return request("get", url, params=params, **kwargs)
  File "D:\技能\python\venv\lib\site-packages\requests\api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "D:\技能\python\venv\lib\site-packages\requests\sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "D:\技能\python\venv\lib\site-packages\requests\sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
  File "D:\技能\python\venv\lib\site-packages\requests\adapters.py", line 547, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

哈岁NB posted on 2023-3-4 21:04
OK. Could you tell me what causes this error? From what I looked up, it seems to be from requesting too frequently.
It is. It should be fine if you wait a while. isdkz posted on 2023-3-4 21:09
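The "wait a while and retry" advice can also be automated. Below is a minimal sketch of retrying a request with exponential backoff; get_with_retry and its parameters are illustrative, not from the code in this thread. requests' exceptions inherit from IOError/OSError, so catching OSError also covers its ConnectionError:

```python
import random
import time

def get_with_retry(fetch, retries=3, base_delay=2.0):
    """Call fetch() and retry on connection errors with exponential backoff.

    fetch is a zero-argument callable performing the actual request,
    e.g. lambda: requests.get(url, headers=header).
    """
    for attempt in range(retries):
        try:
            return fetch()
        except OSError:
            if attempt == retries - 1:
                raise  # out of retries, re-raise the last error
            # Wait longer after each failure, plus a little jitter so
            # repeated runs do not hammer the server in lockstep.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
```

With this, the loop body would call get_with_retry(lambda: requests.get(url_02, headers=header)) instead of requests.get directly.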
It is. It should be fine if you wait a while.
OK, many thanks! isdkz posted on 2023-3-4 21:09
It is. It should be fine if you wait a while.
May I ask: now that I have finished learning basic crawling, what should I learn next? 哈岁NB posted on 2023-3-5 09:11
May I ask: now that I have finished learning basic crawling, what should I learn next?
To go further, you could study the HTTP protocol in more depth, and JS is also worth learning. isdkz posted on 2023-3-5 09:45
To go further, you could study the HTTP protocol in more depth, and JS is also worth learning.
OK, many thanks! isdkz posted on 2023-3-5 09:45
To go further, you could study the HTTP protocol in more depth, and JS is also worth learning.
One more question: if I want to find projects to practice on, where should I look? GitHub? 哈岁NB posted on 2023-3-5 09:50
One more question: if I want to find projects to practice on, where should I look? GitHub?
Right, GitHub is basically the place to look.
Here are some good collections of crawler projects:
https://www.zhihu.com/question/58151047
https://zhuanlan.zhihu.com/p/425241599 isdkz posted on 2023-3-5 09:57
Right, GitHub is basically the place to look.
Here are some good collections of crawler projects:
OK, many thanks!