Douban url login problem

Posted on 2022-11-20 20:37:14

import re
import urllib.request
import urllib.parse
from http.cookiejar import CookieJar

# Douban login url
loginurl = 'https://www.douban.com/accounts/login'
cookie = CookieJar()
# Pass a CookieJar instance to the handler (the original passed the class itself)
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cookie))

data = {
    "form_email": "your email",
    "form_password": "your password",
    "source": "index_nav"
}

response = opener.open(loginurl, urllib.parse.urlencode(data).encode('utf-8'))

# If we are sent back to the login page, a captcha is required
if response.geturl() == "https://www.douban.com/accounts/login":
    html = response.read().decode()

    # Captcha image url
    imgurl = re.search('<img id="captcha_image" src="(.+?)" alt="captcha" class="captcha_image"/>', html)
    if imgurl:
        url = imgurl.group(1)
        # Save the captcha image to the current directory
        res = urllib.request.urlretrieve(url, 'v.jpg')

        # Get the captcha-id parameter
        captcha = re.search('<input type="hidden" name="captcha-id" value="(.+?)"/>', html)

        if captcha:
            vcode = input('Enter the captcha shown in the image: ')
            data["captcha-solution"] = vcode
            data["captcha-id"] = captcha.group(1)
            data["user_login"] = "登录"

            # Submit the captcha for verification
            response = opener.open(loginurl, urllib.parse.urlencode(data).encode('utf-8'))

            # On success we are redirected to the home page
            if response.geturl() == "http://www.douban.com/":
                print('Login successful!')


Traceback (most recent call last):
  File "C:/Users/Administrator/AppData/Local/Programs/Python/Python38/111.py", line 20, in <module>
    response = opener.open(loginurl, urllib.parse.urlencode(data).encode('utf-8'))
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 531, in open
    response = meth(req, response)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 640, in http_response
    response = self.parent.error(
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 569, in error
    return self._call_chain(*args)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 502, in _call_chain
    result = func(*args)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 418:
A whole wall of red text.

Posted on 2022-11-20 20:40:05
This post was last edited by suchocolate on 2022-11-21 09:00

You need to add a header. The tutorial you are following is a bit dated; I suggest learning web scraping with requests instead.
from urllib import request

url = 'https://www.douban.com/accounts/login'
# Send a browser-style User-Agent so the server does not reject urllib's default one
headers = {'User-Agent': 'Firefox'}
req = request.Request(url, headers=headers)
r = request.urlopen(req)
print(r.read().decode('utf-8'))

from urllib import request

url = 'https://www.douban.com/accounts/login'
opener = request.build_opener()
# Attach the User-Agent header to every request made through this opener
opener.addheaders = [('User-agent', 'Firefox')]
response = opener.open(url)
r = response.read().decode('utf-8')
print(r)
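
Since the reply recommends requests, here is a minimal sketch of the same idea with that library: a Session keeps cookies between requests and a custom User-Agent avoids the 418 response. The endpoint and form field names are taken from the original post; whether Douban still accepts this form is an assumption, so treat it as a sketch rather than a working login.

import requests

# Sketch only: endpoint and form fields copied from the original post;
# it is assumed, not verified, that Douban still accepts this form.
login_url = 'https://www.douban.com/accounts/login'

session = requests.Session()                            # keeps cookies across requests
session.headers.update({'User-Agent': 'Mozilla/5.0'})   # avoid the 418 anti-bot response

data = {
    'form_email': 'your email',
    'form_password': 'your password',
    'source': 'index_nav',
}

response = session.post(login_url, data=data)
print(response.status_code, response.url)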