fytfytf posted on 2020-7-25 20:20:19

How do I crawl pages that require login?

import requests
import re
from lxml import html

def open_url(keyword):
    item={'q':keyword,'sort':'sale-desc'}
    url='https://s.taobao.com/search'
    head={'user-agent':'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36 LBBROWSER',
    'referer':'https://s.taobao.com/search?q=%E8%8B%B9%E6%9E%9C&sort=sale-desc',
    'cookie':'cna=uIlSFyf4fzECAXPHSV7FUkTZ; cad=17385008a71-5802733657771572870001; cap=b1a8; cnaui=4204342242; aimx=Q9iiF5j4dTgCAbeUfzwibvYP_1595664963; cdpid=Vy64NQpQyFoeBA%253D%253D; tbsa=4d1bbc60a3b803d614638b54_1595666242_38; atpsida=e00833b49dcba05591c9044f_1595666242_51; atpsidas=2b032abfd9b96e6dd9efbcfd_1595666242_57; sca=18f4a5de; aui=4204342242'}
    res=requests.get(url,params=item)
    return res
def main():
    keyword=input('Enter a search keyword: ')
    res=open_url(keyword)
    with open('taobao.txt','w',encoding='utf-8') as f:
      f.write(res.text)
   
if __name__=='__main__':
    main()

I'm trying to scrape Taobao, but without logging in every search returns the same canned data. How do I crawl pages that require login? Any help appreciated.

xiaosi4081 posted on 2020-7-25 20:26:00

This post was last edited by xiaosi4081 on 2020-7-25 20:30

import requests
import re
from lxml import html

def open_url(keyword):
    item={'q':keyword,'sort':'sale-desc'}
    url='https://s.taobao.com/search'
    head={'user-agent':'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36 LBBROWSER',
    'referer':'https://s.taobao.com/search?q=%E8%8B%B9%E6%9E%9C&sort=sale-desc',
    'cookie':'cna=uIlSFyf4fzECAXPHSV7FUkTZ; cad=17385008a71-5802733657771572870001; cap=b1a8; cnaui=4204342242; aimx=Q9iiF5j4dTgCAbeUfzwibvYP_1595664963; cdpid=Vy64NQpQyFoeBA%253D%253D; tbsa=4d1bbc60a3b803d614638b54_1595666242_38; atpsida=e00833b49dcba05591c9044f_1595666242_51; atpsidas=2b032abfd9b96e6dd9efbcfd_1595666242_57; sca=18f4a5de; aui=4204342242'}
    res=requests.get(url,headers=head,params=item)
    return res
def main():
    keyword=input('Enter a search keyword: ')
    res=open_url(keyword)
    with open('taobao.txt','w',encoding='utf-8') as f:
      f.write(res.text)
   
if __name__=='__main__':
    main()
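One detail that trips people up here: with requests.get, the search dict should go in params= (it gets URL-encoded into the query string, ?q=...&sort=...), whereas data= sends it as a request body, which the search endpoint ignores. A minimal stdlib sketch of what params= produces (the keyword 'apple' is just an example):

```python
from urllib.parse import urlencode

# What requests does with params=: encode the dict into the URL's query string
item = {'q': 'apple', 'sort': 'sale-desc'}
url = 'https://s.taobao.com/search?' + urlencode(item)
print(url)  # https://s.taobao.com/search?q=apple&sort=sale-desc
```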

fytfytf posted on 2020-7-25 20:30:02

xiaosi4081 posted on 2020-7-25 20:26
You probably grabbed the wrong cookie.
After logging in, go to the page you want to crawl, then right-click >> Inspect (Inspect Element)



I did add the cookie, but it doesn't seem to make any difference.

xiaosi4081 posted on 2020-7-25 20:31:34

This post was last edited by xiaosi4081 on 2020-7-25 20:34

fytfytf posted on 2020-7-25 20:30
I did add the cookie, but it doesn't seem to make any difference.

You built head, but you never actually used it: there is no headers argument in your requests.get call.

Change it to match mine:
import requests
import re
from lxml import html

def open_url(keyword):
    item={'q':keyword,'sort':'sale-desc'}
    url='https://s.taobao.com/search'
    head={'user-agent':'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36 LBBROWSER',
    'referer':'https://s.taobao.com/search?q=%E8%8B%B9%E6%9E%9C&sort=sale-desc',
    'cookie':'cna=uIlSFyf4fzECAXPHSV7FUkTZ; cad=17385008a71-5802733657771572870001; cap=b1a8; cnaui=4204342242; aimx=Q9iiF5j4dTgCAbeUfzwibvYP_1595664963; cdpid=Vy64NQpQyFoeBA%253D%253D; tbsa=4d1bbc60a3b803d614638b54_1595666242_38; atpsida=e00833b49dcba05591c9044f_1595666242_51; atpsidas=2b032abfd9b96e6dd9efbcfd_1595666242_57; sca=18f4a5de; aui=4204342242'}
    res=requests.get(url,headers=head,params=item)
    return res
def main():
    keyword=input('Enter a search keyword: ')
    res=open_url(keyword)
    with open('taobao.txt','w',encoding='utf-8') as f:
      f.write(res.text)
   
if __name__=='__main__':
    main()
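As a side note, instead of sending the whole cookie as one header, the raw string copied from DevTools can be split into a dict and passed through requests' cookies= parameter. A small sketch (the helper name and the sample cookie values are just placeholders, not real session data):

```python
def cookie_str_to_dict(cookie_str):
    """Split a raw 'k1=v1; k2=v2' Cookie header string into a dict."""
    cookies = {}
    for pair in cookie_str.split('; '):
        # partition on the first '=' only, since cookie values may contain '='
        key, sep, value = pair.partition('=')
        if sep:
            cookies[key] = value
    return cookies

# placeholder values; use the cookie from your own logged-in session
raw = 'cna=abc123; cnaui=4204342242'
print(cookie_str_to_dict(raw))  # {'cna': 'abc123', 'cnaui': '4204342242'}
# then: requests.get(url, headers=head, cookies=cookie_str_to_dict(raw))
```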

Please mark this as the best answer

One word of warning:
cookies carry your login session; if you paste yours into code you share, you can easily leak your account information
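A sketch of one way to keep it out of the script, reading the cookie from an environment variable instead (the variable name TAOBAO_COOKIE is just an example):

```python
import os

def load_cookie(env_var='TAOBAO_COOKIE'):
    """Read the login cookie from the environment so it never
    gets hardcoded into a script you might share."""
    cookie = os.environ.get(env_var)
    if not cookie:
        raise RuntimeError(env_var + ' is not set; export it in your shell after logging in')
    return cookie

# shell:  export TAOBAO_COOKIE='cna=...; cnaui=...'
# python: head = {'user-agent': '...', 'cookie': load_cookie()}
```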

fytfytf posted on 2020-7-25 20:41:23

xiaosi4081 posted on 2020-7-25 20:31
You built head, but you never actually used it: there is no headers argument in your requests.get call.

Change it to match mine:


It works now, thanks a lot!