私はり posted on 2021-9-25 13:02:13

Web scraper

I'm scraping BOSS直聘 (zhipin.com), but it returns an empty list and I don't know why. Also, I want to write pages 1 through 10 into ten separate CSV files — how should I do that?

import urllib.request
import urllib.parse
from lxml import etree
def creat_request(my_page):
    base_url = 'https://www.zhipin.com/c101270100/?'
    headers = {
      'Accept': ' text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
      # 'Accept-Encoding': ' gzip, deflate, br',
      'Accept-Language': ' zh,en-US;q=0.9,en;q=0.8,zh-CN;q=0.7',
      'Connection': ' keep-alive',
      'Cookie': ' acw_tc=0bcb2f0716324609378892612e49ac132e87df9e2107cbb14b08e84e449455; __g=-; Hm_lvt_194df3105ad7148dcf2b98a91b5e727a=1632460940; lastCity=100010000; __l=l=%2Fwww.zhipin.com%2Fc101270100%2F%3Fka%3Dsel-city-101270100&r=https%3A%2F%2Fcn.bing.com%2F&g=&s=3&friend_source=0&s=3&friend_source=0; __c=1632460940; __a=34040189.1632460940..1632460940.9.1.9.9; Hm_lpvt_194df3105ad7148dcf2b98a91b5e727a=1632461742; __zp_stoken__=f68bdWDVKZBhBNRMPdCExbkphQAJsIzQwI3ROPh9gXUpCAQYhd3ENZnFRIh87IjdMJkx%2BFx06Dy43OjZAYwE9CB9LDXIeIWlAQR5kD1QNLx9LKzFwTQZrYCAHenNLDxEMZHVMP2BODQ1gdEY%3D',
      'Host': ' www.zhipin.com',
      'sec-ch-ua': ' "Chromium";v="94", "Google Chrome";v="94", ";Not A Brand";v="99"',
      'sec-ch-ua-mobile': ' ?0',
      'sec-ch-ua-platform': ' "Windows"',
      'Sec-Fetch-Dest': ' document',
      'Sec-Fetch-Mode': ' navigate',
      'Sec-Fetch-Site': ' none',
      'Sec-Fetch-User': ' ?1',
      'Upgrade-Insecure-Requests': ' 1',
      'User-Agent': ' Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36',
    }
    data = {
      'page' : my_page,
      'ka' : 'page-'+ str(my_page),
    }
    data = urllib.parse.urlencode(data)
    url = base_url + data
    print(url)
    request = urllib.request.Request(url=url,headers=headers)
    return request

def get_content(request):
    response = urllib.request.urlopen(request)
    content = response.read().decode('utf-8')
    return content

def down_load(content):
    tree = etree.HTML(content)
    name_list = tree.xpath('//*[@id="main"]/div/div/ul/li//div/span/a/text()')
    place_list = tree.xpath('//*[@id="main"]/div/div/ul/li//div/span/span/text()')
    for i in range(len(name_list)):
      name = name_list[i]    # index into the list; assigning the whole list was a bug
      place = place_list[i]
    print(place_list)



if __name__ == '__main__':
    start_page = int(input("Enter start page: "))
    end_page = int(input("Enter end page: "))
    for my_page in range(start_page,end_page+1):
      request = creat_request(my_page)
      content = get_content(request)
      down_load(content)
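For the ten-CSV-files part of the question, one option is a small helper that the main loop calls once per page with the page number, so each page lands in its own file. A minimal sketch — the `save_page` name and the `boss_page_N.csv` naming scheme are my own, not from the original code, and dummy data stands in for the real xpath results:

```python
import csv
import os
import tempfile

def save_page(name_list, place_list, my_page, out_dir="."):
    """Write one page of scraped results to its own CSV file (boss_page_N.csv)."""
    path = os.path.join(out_dir, "boss_page_%d.csv" % my_page)
    # utf-8-sig writes a BOM so Excel displays Chinese text correctly
    with open(path, "w", newline="", encoding="utf-8-sig") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "place"])          # header row
        for name, place in zip(name_list, place_list):
            writer.writerow([name, place])          # one job per row
    return path

# Usage with dummy data in place of the real name_list / place_list:
out = save_page(["job A", "job B"], ["Chengdu", "Chengdu"], 1, tempfile.gettempdir())
print(out)
```

In the main loop you would pass `my_page` through, e.g. have `down_load` return the two lists and then call `save_page(name_list, place_list, my_page)`.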

wp231957 posted on 2021-9-30 11:00:35

The cookie is dynamic and only valid for a limited time.
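One quick way to check whether that is the cause of the empty list: test the returned HTML for the listing markup before parsing it. A rough heuristic sketch — the assumption that the anti-bot/verification page lacks the `id="main"` container is mine and not verified against the live site:

```python
def looks_blocked(html):
    # Heuristic (assumption): a normal listing page contains the
    # id="main" container that the xpath expressions target; the
    # verification page served when the Cookie header has expired
    # does not. If this returns True, refresh the Cookie value from
    # a logged-in browser session before retrying.
    return 'id="main"' not in html

print(looks_blocked("<html><body>please verify</body></html>"))  # True
```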

私はり posted on 2021-9-30 13:35:40

wp231957 posted on 2021-9-30 11:00
The cookie is dynamic and only valid for a limited time.

Got it, thanks.