哈哈哈ha1 posted on 2021-10-30 17:05:28

A question about web crawlers

Here is my code. I want to fetch the page source of the websites listed in a file.
import urllib.request
import chardet

def main():
    i = 0
    with open('C:\\Users\\Lenovo\\Desktop\\urls.txt', 'r') as f:
        # Read the URLs to visit.
        # urls.txt holds one URL per line,
        # so split on the newline character '\n'.
        urls = f.read().splitlines()
    for each_url in urls:
        responrs = urllib.request.urlopen(urls)
        html = responrs.read()
        # Detect the page's character encoding.
        encode = chardet.detect(html)['encoding']
        if encode == 'GB2312':
            encode = 'GBK'
        i += 1
        filename = "url_%d" % i
        with open(filename, 'w', encoding=encode) as each_file:
            each_file.write(html.decode(encode, 'ignore'))

if __name__ == '__main__':
    main()


Here is the error I get:
Traceback (most recent call last):
File "C:/Users/Lenovo/PycharmProjects/pythonProject8/venv/Scripts/爬虫课后作业.py", line 27, in <module>
    main()
File "C:/Users/Lenovo/PycharmProjects/pythonProject8/venv/Scripts/爬虫课后作业.py", line 13, in main
    responrs = urllib.request.urlopen(urls)
File "C:\Program Files\Python38\lib\urllib\request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
File "C:\Program Files\Python38\lib\urllib\request.py", line 515, in open
    req.timeout = timeout
AttributeError: 'list' object has no attribute 'timeout'

Could some expert help me figure out what's wrong, and maybe show how to fix it?

sh1t灬 posted on 2021-10-31 12:18:11

responrs = urllib.request.urlopen(urls)
This line is the problem: urls is the whole list of URLs to crawl; inside the for loop you should use each_url.
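
In other words, pass the loop variable to urlopen, something like:

for each_url in urls:
    responrs = urllib.request.urlopen(each_url)
    html = responrs.read()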

哈哈哈ha1 posted on 2021-11-1 14:55:27

sh1t灬 posted on 2021-10-31 12:18
responrs = urllib.request.urlopen(urls)
This line is the problem: urls is the whole list of URLs to crawl; inside the for loop you should use ea ...

After changing that, I get a new error:
Traceback (most recent call last):
File "C:/Users/Lenovo/PycharmProjects/pythonProject8/venv/Scripts/爬虫课后作业.py", line 27, in <module>
    main()
File "C:/Users/Lenovo/PycharmProjects/pythonProject8/venv/Scripts/爬虫课后作业.py", line 13, in main
    responrs = urllib.request.urlopen(each_url)
File "C:\Program Files\Python38\lib\urllib\request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
File "C:\Program Files\Python38\lib\urllib\request.py", line 531, in open
    response = meth(req, response)
File "C:\Program Files\Python38\lib\urllib\request.py", line 640, in http_response
    response = self.parent.error(
File "C:\Program Files\Python38\lib\urllib\request.py", line 569, in error
    return self._call_chain(*args)
File "C:\Program Files\Python38\lib\urllib\request.py", line 502, in _call_chain
    result = func(*args)
File "C:\Program Files\Python38\lib\urllib\request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 418:

suchocolate posted on 2021-11-1 15:23:59

哈哈哈ha1 posted on 2021-11-1 14:55
After changing that, I get a new error:
Traceback (most recent call last):
File "C:/Users/Lenovo/PycharmProjects/py ...

You need to set a request header to get past the site's anti-crawler check.
headers = {'User-Agent': 'Mozilla'}
for each_url in urls:
    # Wrap the URL in a Request object that carries the User-Agent header
    req = urllib.request.Request(each_url, headers=headers)
    responrs = urllib.request.urlopen(req)
    html = responrs.read()
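
Putting both fixes together, the whole script would look something like this (the 'Mozilla' User-Agent string and the url_%d filenames are just placeholders, and the utf-8 fallback is an assumption for when chardet can't detect the encoding):

import urllib.request
import chardet

def main():
    headers = {'User-Agent': 'Mozilla'}  # placeholder User-Agent string
    with open('C:\\Users\\Lenovo\\Desktop\\urls.txt', 'r') as f:
        urls = f.read().splitlines()
    for i, each_url in enumerate(urls, start=1):
        # Send a User-Agent header so the server doesn't answer with HTTP 418
        req = urllib.request.Request(each_url, headers=headers)
        response = urllib.request.urlopen(req)
        html = response.read()
        # Guess the page encoding; treat GB2312 as its superset GBK
        encode = chardet.detect(html)['encoding']
        if encode == 'GB2312':
            encode = 'GBK'
        elif encode is None:
            encode = 'utf-8'  # assumed fallback when detection fails
        with open('url_%d' % i, 'w', encoding=encode) as each_file:
            each_file.write(html.decode(encode, 'ignore'))

if __name__ == '__main__':
    main()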