Taobao image crawler returns no results
I'm using a regex but it doesn't extract any data at all:

import urllib.request
import re
import random
keyname="python"
uapools=[
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.79 Safari/537.36 Edge/14.14393",
"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.22 Safari/537.36 SE 2.X MetaSr 1.0",
"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Maxthon 2.0)",
]
def ua(uapools):
    # pick a random User-Agent and install an opener that sends it
    thisua=random.choice(uapools)
    print(thisua)
    headers=("User-Agent",thisua)
    opener=urllib.request.build_opener()
    opener.addheaders=[headers]
    urllib.request.install_opener(opener)

for i in range(1,101):
    # each Taobao result page advances the s parameter by 44 items
    url="https://s.taobao.com/search?q="+keyname+"&s="+str((i-1)*44)
    ua(uapools)
    data=urllib.request.urlopen(url).read().decode("utf-8","ignore")
    pat='"pic_url":"//(.*?)"'
    imglist=re.compile(pat).findall(data)
    print(imglist)
You need to add a cookie, otherwise the site returns a 302 redirect to the login page.
You can copy the cookie straight from your browser and add it to the headers.
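For example, a minimal sketch of attaching the copied cookie with the same build_opener / addheaders approach used above; my_cookie is a placeholder to replace with the string copied from your browser:

import urllib.request

# placeholder: paste the Cookie value copied from the browser's request headers
my_cookie = "paste_your_cookie_string_here"
opener = urllib.request.build_opener()
opener.addheaders = [("User-Agent", "Mozilla/5.0"), ("Cookie", my_cookie)]
urllib.request.install_opener(opener)
data = urllib.request.urlopen("https://s.taobao.com/search?q=python").read().decode("utf-8", "ignore")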
suchocolate posted on 2020-12-2 22:21
How do I do that? I've never used cookies before. Could you walk me through it? Thanks.
不能懒 posted on 2020-12-3 19:34
1. Open the browser, press F12 to open the developer tools, and select the Network tab.
2. Filter by HTML.
3. Click the request that matches the page URL.
4. Select Headers.
5. Copy the cookie from the request headers.
6. Add it to the code:
from urllib import request
headers = {'User-Agent': 'Firefox',
'cookie': 'cna=2kChF9tyHG8CAXrghaJMtGY7; isg=BEVFsn7WtgxtZ5ICe-VW1XudV4F_AvmU9-aQK0eqbHyL3mVQD1ZcZNf37IoohRFM; _m_h5_tk=8c38a31824bc2944f60aa6bc53485b5f_1606804667427; _m_h5_tk_enc=ffe57cdccf388a1c247ba670f331ace5; sgcookie=E100iK59LQVMU98z61v6O3WcgM2%2B%2BtZc58ibg0vptIcN2ecLaqHT%2Bcjp3C1mBXhL9VubWjBsBPTYf%2BsxBBqMIDN%2FWg%3D%3D; uc3=lg2=U%2BGCWk%2F75gdr5Q%3D%3D&vt3=F8dCuf2EVlTlBdvF36U%3D&nk2=EEs72J5%2BoFnMGdk%3D&id2=Uoe9bjWUagrH; lgc=suchocolate; uc4=nk4=0%40EpJ8FUSwe1%2FcF99h7%2FnAqjUeA%2FK6gQ%3D%3D&id4=0%40UO%2B7boGwLEdLUssfs7T9IG5OrsM%3D; tracknick=suchocolate; _cc_=WqG3DMC9EA%3D%3D; mt=ci=22_1; thw=cn; l=eBPX-NlPOPoJAah_BOfZ-urza77OGIOYYuPzaNbMiOCP_M5H51gPWZRtU6LMCn1Nh6KeR3R1CkJJBeYBq6CKnxv92j-la_kmn; tfstk=cCYNBbX_5V3aRabRSN_4cKmra6IOaHWcFy5PjntmJL6bTcSO3smUH62a065qQCSG.; enc=N72icSPj%2Br%2BfWT47Fyv2PZXNi%2BWKhbt6vZGyEssxkwtgkBsOPIwm%2B4dPO9yz%2BTOOoG5ZLcL0%2BBltZ2cbQ5FnGQ%3D%3D; xlly_s=1; hng=CN%7Czh-CN%7CCNY%7C156; JSESSIONID=09C8E073732AD66E6E68DE8A592C63F9; cookie2=19d1fffb60c3ac10f0f9cbd9adc86128; t=f69ac286332fa5c6bc610d6afd7606da; _tb_token_=ee38045813153; uc1=cookie14=Uoe0az9oUTduQQ%3D%3D'}
req = request.Request('http://httpbin.org/get', headers=headers)
r = request.urlopen(req)
print(r.read().decode('utf-8'))
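httpbin.org/get simply echoes the request back as JSON, so the printed response should list the Cookie and User-Agent headers; that makes it easy to confirm the cookie is actually being sent before pointing the request at Taobao.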
suchocolate posted on 2020-12-3 20:49
Could you help me work that into my code?
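A minimal sketch of how the cookie could be folded into the original loop, assuming my_cookie holds the string copied from the browser (placeholder below); the site may still throttle requests or change its markup, so the pic_url pattern is not guaranteed to keep matching:

import urllib.request
import re
import random

keyname = "python"
# placeholder: paste the Cookie value copied from the browser here
my_cookie = "paste_your_cookie_string_here"
uapools = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.79 Safari/537.36 Edge/14.14393",
    "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.22 Safari/537.36 SE 2.X MetaSr 1.0",
]

def ua(uapools):
    # install an opener that sends a random User-Agent plus the copied cookie
    thisua = random.choice(uapools)
    opener = urllib.request.build_opener()
    opener.addheaders = [("User-Agent", thisua), ("Cookie", my_cookie)]
    urllib.request.install_opener(opener)

for i in range(1, 101):
    url = "https://s.taobao.com/search?q=" + keyname + "&s=" + str((i - 1) * 44)
    ua(uapools)
    data = urllib.request.urlopen(url).read().decode("utf-8", "ignore")
    imglist = re.compile('"pic_url":"//(.*?)"').findall(data)
    print(imglist)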