I've just worked my way from the basics to web scraping and tried a simple scrape of the link for each rental listing on Lianjia's home page, but it always comes back as an empty list, and checking with len gives 0 as well. Is the request being blocked? Is there any way to fix it? I'm new to this, so a lot of it is still over my head and this simple attempt is the best I could do. Thanks a lot.
import requests
from bs4 import BeautifulSoup

def get_links(url):
    responce = requests.get(url)
    soup = BeautifulSoup(responce.text, 'lxml')
    links_div = soup.find_all('div', class_='pic-panel')
    links = [div.a.get('href') for div in links_div]
    return links

url = 'https://bj.lianjia.com/zufang/'
get_links(url)
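
A quick way to check whether the request is actually being blocked is to look at the status code and the length of the returned page; a very short body or a verification page usually means the site rejected the bare request. A minimal diagnostic sketch (the checks are just illustrative):

import requests
from bs4 import BeautifulSoup

url = 'https://bj.lianjia.com/zufang/'
response = requests.get(url)

print(response.status_code)    # 200 alone does not guarantee real listing data
print(len(response.text))      # a very short body often means an anti-crawler / verification page

soup = BeautifulSoup(response.text, 'lxml')
# If this prints 0, the class 'pic-panel' simply does not appear in the HTML that was returned
print(len(soup.find_all('div', class_='pic-panel')))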
I made a few changes for you and got it to crawl successfully:
import requests
from bs4 import BeautifulSoup

def get_links(url):
    # Pretend to be a normal browser; without a User-Agent the site returns a blocked/empty page
    headers = {'User-Agent': "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"}
    response = requests.get(url, headers=headers)
    soup = BeautifulSoup(response.text, 'lxml')
    # The listing titles now sit in <p class="content__list--item--title twoline"> elements
    links_div = soup.find_all('p', class_="content__list--item--title twoline")
    url_list = []
    for i in links_div:
        # hrefs on the list page are relative, so prepend the site root
        url_list.append('https://bj.lianjia.com' + i.a.get('href'))
    return url_list

url = 'https://bj.lianjia.com/zufang/'

if __name__ == '__main__':
    print(get_links(url))
I'm a beginner too, just got to this part like you did, and it took me half an hour of fiddling to get the crawl working.
For crawlers, I'd suggest always setting a User-Agent header; disguising the request as a browser is the most basic step for getting past anti-scraping checks.
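
If you go on to request each listing page from the links returned above, a requests.Session lets you set the User-Agent once and reuse it for every request. A minimal sketch, assuming get_links and url are defined as in the code above (the field extracted and the sleep interval are just illustrative):

import time
import requests
from bs4 import BeautifulSoup

session = requests.Session()
# Set the browser-like header once; every request made through the session reuses it
session.headers.update({'User-Agent': "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"})

def get_title(link):
    # Fetch one listing page and return its <title> text (just an example field)
    response = session.get(link, timeout=10)
    soup = BeautifulSoup(response.text, 'lxml')
    return soup.title.string if soup.title else ''

for link in get_links(url)[:5]:    # only the first few links, to stay polite
    print(get_title(link))
    time.sleep(1)                  # pause between requests to avoid hammering the site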