L嘉 posted on 2020-8-4 10:34:57

Crawler gurus, come take a look: this newbie has run into another intermediate problem

Last edited by L嘉 on 2020-8-4 11:20

Right now I can only scrape things like the titles. How should I write the part that scrapes the sub-pages? As shown in the picture below, I want to scrape the building count and household count from each sub-page, but the code I wrote gets no response at all. What is going on?


The code is at the end




# -*- coding: utf-8 -*-
"""
Created on Tue Aug 4 09:24:02 2020

@author: Administrator
"""

from lxml import etree
import requests
import csv
from multiprocessing.dummy import Pool as pl    # import a thread pool

def towrite(item):
    with open('balk.csv', 'a', encoding='utf-8') as csvfile:
        writer = csv.writer(csvfile)
        try:
            writer.writerow(item)
        except:
            print('write error!')


def spider(url):
    htm = requests.get(url, headers=headers)
    response = etree.HTML(htm.text)

    mingcheng = response.xpath('div/div/a/text()')  # community name

    zaishou = response.xpath('div/div/a/span/text()')  # on-sale count

    junjia = response.xpath('div/div/div/span/text()')  # average price

    dongshu = response.xpath('//*[@id="beike"]/div/div/div/div/div/div/span/text()')  # building count

    hushu = response.xpath('//*[@id="beike"]/div/div/div/div/div/div/span/text()')  # household count

    xiaoqu_item = [mingcheng, zaishou, junjia, dongshu, hushu]
    towrite(xiaoqu_item)
    print('Scraping community:', mingcheng)


if __name__ == '__main__':
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.92 Safari/537.3'}

    start_url = 'https://cd.ke.com/xiaoqu/damian/pg'
    pool = pl(4)
    all_url = []
    for x in range(1, 4):
        html = requests.get(start_url + str(x), headers=headers)
        selector = etree.HTML(html.text)
        xiaoqulist = selector.xpath('//*[@id="beike"]/div/div/div/div/ul/li')
        for xiaoqu in xiaoqulist:
            xiaoqu_url_houduan = xiaoqu.xpath('//*[@id="beike"]/div/div/div/div/ul/li/div/div/a')
            all_url.append(xiaoqu_url_houduan)
    pool.map(spider, all_url)
    pool.close()
    pool.join()
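A side note on the towrite helper: Python's csv module expects files opened with newline='', otherwise every row can be followed by a blank line on Windows. A self-contained sketch, using a temporary file instead of balk.csv:

```python
import csv
import os
import tempfile

# newline='' lets csv.writer control the row endings itself; without it,
# files written on Windows get an extra blank line after every row.
path = os.path.join(tempfile.mkdtemp(), 'balk.csv')
with open(path, 'a', encoding='utf-8', newline='') as f:
    csv.writer(f).writerow(['some community', '17235', '69', '5067'])

# Read it back the same way to confirm exactly one clean row was written.
with open(path, encoding='utf-8', newline='') as f:
    rows = list(csv.reader(f))
print(rows)  # [['some community', '17235', '69', '5067']]
```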

1q23w31 posted on 2020-8-4 10:53:12

Upload it with the code editor; copy-pasting can mangle the formatting

liuzhengyuan posted on 2020-8-4 11:05:11

Inside the spider function

add global headers, to turn headers into a global variable
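Besides making headers global, another option is to bind the extra argument up front with functools.partial, so pool.map still passes a single URL per call. A sketch with a dummy spider body (the real spider's logic is not reproduced here):

```python
from functools import partial
from multiprocessing.dummy import Pool

# Dummy spider that takes headers explicitly instead of relying on a global.
def spider(url, headers):
    return (url, headers['User-Agent'])

headers = {'User-Agent': 'test-agent'}
pool = Pool(2)
# partial pre-binds headers, so pool.map can still hand over one URL at a time.
results = pool.map(partial(spider, headers=headers), ['u1', 'u2'])
pool.close()
pool.join()
print(results)  # [('u1', 'test-agent'), ('u2', 'test-agent')]
```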

L嘉 posted on 2020-8-4 11:20:33

1q23w31 posted on 2020-8-4 10:53
Upload it with the code editor; copy-pasting can mangle the formatting

OK, please take another look

L嘉 posted on 2020-8-4 11:22:32

liuzhengyuan posted on 2020-8-4 11:05
Inside the spider function

add global headers, to turn headers into a global variable

Still no response, the result is empty

Twilight6 posted on 2020-8-4 12:20:37

L嘉 posted on 2020-8-4 11:22
Still no response, the result is empty

Your xpath expressions are all wrong. Which fields are you actually trying to scrape?

What is zaishou?

What is junjia?

dongshu?

hushu?
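For reference, xpath rules anchored on attributes or label text are far less brittle than long positional chains like //*[@id="beike"]/div/div/div/... A self-contained sketch; the class names and markup below are invented for illustration and not taken from the live ke.com page:

```python
from lxml import etree

# Toy fragment imitating a detail page (made-up structure and class names).
html = '''
<div id="beike">
  <div class="infoItem">
    <span class="infoLabel">楼栋总数</span><span class="infoContent">69栋</span>
  </div>
  <div class="infoItem">
    <span class="infoLabel">房屋总数</span><span class="infoContent">5067户</span>
  </div>
</div>'''
tree = etree.HTML(html)

# Anchor on the label text, then step to the sibling value. This keeps working
# when the site inserts or removes wrapper divs, which breaks positional chains.
dongshu = tree.xpath('//span[text()="楼栋总数"]/following-sibling::span/text()')[0]
hushu = tree.xpath('//span[text()="房屋总数"]/following-sibling::span/text()')[0]
print(dongshu, hushu)  # 69栋 5067户
```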

L嘉 posted on 2020-8-4 13:03:05

Twilight6 posted on 2020-8-4 12:20
Your xpath expressions are all wrong. Which fields are you actually trying to scrape?

What is zaishou?


Mainly, I don't know how to scrape the sub-pages

Twilight6 posted on 2020-8-4 14:10:38

L嘉 posted on 2020-8-4 13:03
Mainly, I don't know how to scrape the sub-pages

I'll write it for you later, I'm a bit busy right now~

L嘉 posted on 2020-8-4 14:32:29

Twilight6 posted on 2020-8-4 14:10
I'll write it for you later, I'm a bit busy right now~

Thanks

johnnyb posted on 2020-8-4 15:01:41

Last edited by johnnyb on 2020-8-4 15:17

Use the rules I wrote. Personally tested, no problems at all.

import requests
from lxml import etree

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36'
}
url = 'https://cd.ke.com/xiaoqu/1611057941003/?fb_expo_id=343036232051900416'
r = requests.get(url=url, headers=headers, timeout=3)

tree = etree.HTML(r.text)
money = tree.xpath('//*[@id="beike"]/div/div/div/div/div/div/span/text()')  # reference price
WY = tree.xpath('//*[@id="beike"]/div/div/div/div/div/div/span/text()')[0].strip()  # property fee
WYGS = tree.xpath('//*[@id="beike"]/div/div/div/div/div/div/span/text()')  # property management company
KFS = tree.xpath('//*[@id="beike"]/div/div/div/div/div/div/span/text()')   # developer
tag = tree.xpath('//*[@id="beike"]/div/div/div/div/div/div/span/text()')   # building count
tag2 = tree.xpath('//*[@id="beike"]/div/div/div/div/div/div/span/text()')  # household count
print('Reference price: {0} yuan/m²'.format(money))
print('Property fee: {0}'.format(WY))
print('Property management company: {0}'.format(WYGS))
print('Developer: {0}'.format(KFS))
print('Total buildings: {0}'.format(tag))
print('Total households: {0}'.format(tag2))


Reference price: 17235 yuan/m²
Property fee: 1.29 to 5.2 yuan/m²/month
Property management company: 成都安达祥和置业有限公司
Developer: 成都志达房地产开发有限公司
Total buildings: 69
Total households: 5067
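One pitfall the snippet above runs into: xpath() always returns a list, so string methods like lstrip() cannot be called on the result directly. A tiny helper keeps the extraction safe even when a rule matches nothing:

```python
# xpath() returns a list of matches; calling .lstrip() on that list raises
# AttributeError. This helper takes the first hit, stripped, or a default.
def first(items, default=''):
    """Return the first xpath hit as a stripped string, or default if empty."""
    return items[0].strip() if items else default

print(first(['  1.29至5.2元/平米/月 ']))  # 1.29至5.2元/平米/月
print(first([], 'n/a'))  # n/a
```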

L嘉 posted on 2020-8-4 17:16:36

johnnyb posted on 2020-8-4 15:01
Use the rules I wrote. Personally tested, no problems at all.

Then how do I write it so it scrapes every community?

1q23w31 posted on 2020-8-4 17:26:19

Last edited by 1q23w31 on 2020-8-4 17:37

L嘉 posted on 2020-8-4 17:16
Then how do I write it so it scrapes every community?

# -*- coding: utf-8 -*-
"""
Created on Tue Aug 4 09:24:02 2020

@author: Administrator
"""

from lxml import etree
import requests
import csv
from multiprocessing.dummy import Pool as pl  # import a thread pool

def towrite(item):
    with open('balk.csv', 'a') as csvfile:
        writer = csv.writer(csvfile)
        try:
            writer.writerow(item)
        except:
            print('write error!')


def spider(url):
    htm = requests.get(url, headers=headers)
    response = etree.HTML(htm.text)

    mingcheng = response.xpath('//*[@id="beike"]/div/div/div/div/div/div/h1/@title')  # community name

    junjia = response.xpath('//*[@id="beike"]/div/div/div/div/div/div/span/text()')   # average price

    dongshu = response.xpath('//*[@id="beike"]/div/div/div/div/div/div/span/text()')  # building count

    hushu = response.xpath('//*[@id="beike"]/div/div/div/div/div/div/span/text()')    # household count

    xiaoqu_item = [mingcheng, junjia, dongshu, hushu]
    towrite(xiaoqu_item)
    print('Scraping community:', mingcheng)


if __name__ == '__main__':
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.92 Safari/537.3'}

    start_url = 'https://cd.ke.com/xiaoqu/damian/pg'
    pool = pl(4)
    all_url = []
    zaishou = []
    for x in range(1, 4):
        html = requests.get(start_url + str(x), headers=headers)
        selector = etree.HTML(html.text)
        xiaoqulist = selector.xpath('//*[@id="beike"]/div/div/div/div/ul')

        for xiaoqu in xiaoqulist:
            xiaoqu_url_houduan = xiaoqu.xpath('//*[@id="beike"]/div/div/div/div/ul/li/a/@href')
            price = xiaoqu.xpath('//*[@id="beike"]/div/div/div/div/ul/li/div/div/a/span/text()')
            all_url.extend(xiaoqu_url_houduan)
            zaishou.extend(price)

    pool.map(spider, all_url)
    pool.close()
    pool.join()

Not quite perfect: I can get everything except the on-sale count
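One detail worth noting in the code above: an xpath that starts with // searches the whole document even when called on a sub-element, so extending a list inside the loop can collect the same links multiple times. Deduplicating while preserving order is cheap (the URLs below are simulated):

```python
# Simulated duplicate URL list, as produced when a '//' xpath inside an
# element loop matches the whole document on every iteration.
all_url = ['/xiaoqu/1/', '/xiaoqu/2/', '/xiaoqu/1/', '/xiaoqu/2/']

# dict.fromkeys removes duplicates while keeping first-seen order.
unique_urls = list(dict.fromkeys(all_url))
print(unique_urls)  # ['/xiaoqu/1/', '/xiaoqu/2/']
```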

1q23w31 posted on 2020-8-4 18:17:42

Last edited by 1q23w31 on 2020-8-4 18:20

L嘉 posted on 2020-8-4 17:16
Then how do I write it so it scrapes every community?

# -*- coding: utf-8 -*-
"""
Created on Tue Aug 4 09:24:02 2020

@author: Administrator
"""

from lxml import etree
import requests
import csv
from multiprocessing.dummy import Pool as pl  # import a thread pool
import re

def towrite(item):
    with open('balk.csv', 'a') as csvfile:
        writer = csv.writer(csvfile)
        try:
            writer.writerow(item)
        except:
            print('write error!')


def spider(url):
    htm = requests.get(url, headers=headers)
    response = etree.HTML(htm.text)

    mingcheng = response.xpath('//*[@id="beike"]/div/div/div/div/div/div/h1/@title')  # community name

    zaishou = response.xpath('/html/head/meta/@content')
    # xpath returns a list of meta contents; join before running the regex
    # (the pattern matches the page's Chinese phrase "N second-hand listings on sale")
    zaishou = re.findall('在售二手房源.*?套', ''.join(zaishou))

    junjia = response.xpath('//*[@id="beike"]/div/div/div/div/div/div/span/text()')   # average price

    dongshu = response.xpath('//*[@id="beike"]/div/div/div/div/div/div/span/text()')  # building count

    hushu = response.xpath('//*[@id="beike"]/div/div/div/div/div/div/span/text()')    # household count

    xiaoqu_item = [mingcheng, zaishou, junjia, dongshu, hushu]
    towrite(xiaoqu_item)
    print('Scraping community:', mingcheng)


if __name__ == '__main__':
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.92 Safari/537.3'}

    start_url = 'https://cd.ke.com/xiaoqu/damian/pg'
    pool = pl(4)
    all_url = []

    for x in range(1, 4):
        html = requests.get(start_url + str(x), headers=headers)
        selector = etree.HTML(html.text)
        xiaoqulist = selector.xpath('//*[@id="beike"]/div/div/div/div/ul')

        for xiaoqu in xiaoqulist:
            xiaoqu_url_houduan = xiaoqu.xpath('//*[@id="beike"]/div/div/div/div/ul/li/a/@href')
            price = xiaoqu.xpath('//*[@id="beike"]/div/div/div/div/ul/li/div/div/a/span/text()')
            all_url.extend(xiaoqu_url_houduan)

    pool.map(spider, all_url)
    pool.close()
    pool.join()


Solved perfectly
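The meta-description trick used above can be tested in isolation. The description text below is made up to match the assumed format of the page, not copied from it:

```python
import re

# Sample <meta name="description"> content in the assumed format.
content = '大面某小区,均价17235元/平,在售二手房源23套,共69栋5067户。'

# The non-greedy pattern grabs the whole on-sale phrase; a capture group
# isolates just the number.
zaishou = re.findall('在售二手房源.*?套', content)
count = re.findall(r'在售二手房源(\d+)套', content)
print(zaishou, count)  # ['在售二手房源23套'] ['23']
```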

johnnyb posted on 2020-8-4 19:45:21

L嘉 posted on 2020-8-4 17:16
Then how do I write it so it scrapes every community?

Buddy, don't just come here freeloading. You have to learn to extrapolate; if everything gets written for you, what will you actually learn? Besides, you already know how to use Pool, and you still don't understand xpath rules? I don't believe it.

Twilight6 posted on 2020-8-4 20:58:18

Finished fixing it for you; this should achieve what you want:



# -*- coding: utf-8 -*-
"""
Created on Tue Aug 4 09:24:02 2020

@author: Administrator
"""

from lxml import etree
import requests
import csv
from multiprocessing.dummy import Pool as pl  # import a thread pool


def towrite(item):
    with open('balk.csv', 'a', encoding='utf-8') as csvfile:
        writer = csv.writer(csvfile)
        try:
            writer.writerow(item)
        except:
            print('write error!')


def spider(url):
    htm = requests.get(url, headers=headers)
    response = etree.HTML(htm.text)

    mingcheng = response.xpath('//div[@class="title"]/h1/text()')[0].strip()  # community name

    dongshu = response.xpath('//span[@class="xiaoquInfoContent"]/text()')  # building count

    hushu = response.xpath('//span[@class="xiaoquInfoContent"]/text()')    # household count

    # find the list-page batch whose name list contains this community,
    # then pull the price and on-sale count at the same position
    for i in xiaoquname:
        if mingcheng in i[0]:
            idx = i[0].index(mingcheng)
            zaishou = i[2][idx]
            junjia = i[1][idx]
            break

    xiaoqu_item = [mingcheng, zaishou, junjia, dongshu, hushu]
    towrite(xiaoqu_item)
    print('Scraping community:', mingcheng)


if __name__ == '__main__':
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.92 Safari/537.3'}

    start_url = 'https://cd.ke.com/xiaoqu/damian/pg'
    pool = pl(4)
    all_url = []
    xiaoquname = []
    for x in range(1, 4):
        html = requests.get(start_url + str(x), headers=headers)
        selector = etree.HTML(html.text)
        xiaoqulist = selector.xpath('//div[@class="info"]/div[@class="title"]/a/@href')
        name = selector.xpath("//a[@class='maidian-detail']/text()")
        jiage = selector.xpath("//div[@class='totalPrice']/span/text()")
        zaishous = selector.xpath("//a[@class='totalSellCount']/span/text()")
        xiaoquname.append([name, jiage, zaishous])
        for xiaoqu in xiaoqulist:
            all_url.append(xiaoqu)
    pool.map(spider, all_url)
    pool.close()
    pool.join()
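Matching detail pages back to list-page data by community name can misfire when names overlap. Since each detail URL is unique, keying the list-page data on URL avoids the search entirely; a sketch with made-up data:

```python
# List-page scrape keyed by detail URL (data here is invented), so the
# detail-page worker can look up price and on-sale count directly.
listing = {
    '/xiaoqu/1/': {'name': '上东阳光', 'junjia': '17235', 'zaishou': '23'},
    '/xiaoqu/2/': {'name': '东方丽景', 'junjia': '15980', 'zaishou': '7'},
}

def spider(url):
    info = listing[url]  # O(1) lookup, no substring matching needed
    return [info['name'], info['zaishou'], info['junjia']]

rows = [spider(u) for u in listing]
print(rows)  # [['上东阳光', '23', '17235'], ['东方丽景', '7', '15980']]
```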