Urgent!! What libraries and functions do I need to scrape data from a web page and write it into Excel?
As the title says: I need to scrape part of the data from a web page and write that data into Excel. Could anyone tell me which libraries and functions I need to learn? The page to scrape is 51job, and the data to extract is the company names.
The URL is: https://search.51job.com/list/030700,000000,0000,00,9,06%252C07%252C08%252C09%252C10,%2B,2,1.html?lang=c&postchannel=0000&workyear=99&cotype=99&degreefrom=99&jobterm=99&companysize=99&ord_field=0&dibiaoid=0&line=&welfare=
As shown in the screenshot:
I want to write all of these company names into Excel.
How should I go about it? Thanks.
This post was last edited by comeheres on 2020-7-6 17:09
For simple data like this, just use the Chrome extension Instant Data Scraper; it can grab over a thousand rows in under a minute.
The latest version is 0.2.3 (to install: open chrome://extensions/ in Chrome, enable Developer mode in the top-right corner, and drag the crx file into the browser window; the new Edge browser can also install it):
comeheres posted on 2020-7-6 17:08
For simple data like this, just use the Chrome extension Instant Data Scraper; it can grab over a thousand rows in under a minute.

Thanks! I've been using Web Scraper to crawl data; it works well and can also crawl across pages. This one doesn't seem to have as many features as Web Scraper, but it really is much faster. I'll dig into it some more.

Three libraries: requests, lxml, openpyxl
import requests
from lxml import etree
from openpyxl import Workbook


def main():
    url = 'https://search.51job.com/list/030700,000000,0000,00,9,06%252C07%252C08%252C09%252C10,%2B,2,1.html?lang=c&postchannel=0000&workyear=99&cotype=99&degreefrom=99&jobterm=99&companysize=99&ord_field=0&dibiaoid=0&line=&welfare='
    headers = {'user-agent': 'firefox'}
    r = requests.get(url, headers=headers)
    r.encoding = 'gbk'  # the page is GBK-encoded
    # with open('r.txt', 'w') as f:
    #     f.write(r.text)
    html = etree.HTML(r.text)
    result = html.xpath('//span[@class="t2"]/a[@target="_blank"]/@title')
    wb = Workbook()
    ws = wb.active
    for n, v in enumerate(result):
        ws.cell(row=n + 1, column=1, value=v)
    wb.save('job51.xlsx')


if __name__ == '__main__':
    main()

comeheres posted on 2020-7-6 17:08
For simple data like this, just use the Chrome extension Instant Data Scraper; it can grab over a thousand rows in under a minute.

Thanks, the extension works great, much appreciated!
But it seems to only scrape a single page; can it not automatically scrape the following pages?

suchocolate posted on 2020-7-6 21:08
Three libraries: requests, lxml, openpyxl

Thanks! I tweaked it a little, and it can now capture 200 pages of data.
import requests
from lxml import etree
from openpyxl import Workbook


def main():
    row_1 = 0
    wb = Workbook()
    ws = wb.active
    for num in range(1, 201):
        url = ('https://search.51job.com/list/030700,000000,0000,00,9,05%252C06%252C07%252C08%252C09,%2B,2,'
               + str(num)
               + '.html?lang=c&postchannel=0000&workyear=99&cotype=99&degreefrom=99&jobterm=99&companysize=99&ord_field=0&dibiaoid=0&line=&welfare=')
        # proxy to mask the client IP (the dict key must be the URL scheme, e.g. "http", not "url")
        proxies = {"http": 'http://115.221.242.206:9999'}
        # spoof the browser User-Agent
        headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36"}
        r = requests.get(url, headers=headers, proxies=proxies)
        r.encoding = 'gbk'
        html = etree.HTML(r.text)
        result = html.xpath('//span[@class="t2"]/a[@target="_blank"]/@title')
        for n, v in enumerate(result):
            ws.cell(row=row_1 + 1, column=1, value=v)
            row_1 += 1
    wb.save('job51.xlsx')


if __name__ == '__main__':
    main()
inver11 posted on 2020-7-7 17:44
Thanks! I tweaked it a little, and it can now capture 200 pages of data.

Boss, how did you put colors in your forum signature? {:5_92:}

Life is short, I use Python.
This post was last edited by inver11 on 2020-7-7 18:47

suchocolate posted on 2020-7-6 21:08
Three libraries: requests, lxml, openpyxl

Boss, could I add you on WeChat or QQ?
I'd like to ask a couple more questions:
1. The company names scraped so far may be short/brand names, so I also need to follow the next-level link to find each company's real name. But I don't understand the HTML fields well enough to identify them.
2. Can the job title and salary be written into the same cell, and saved in the same file as the company name? The spreadsheet would have two columns: the first for the company name, the second for the job title and salary.
Could you point me in the right direction?
Later on I may also want to add more features, e.g. taking the saved file and automatically searching the web for each company's contact details by name and saving those too.

inver11 posted on 2020-7-7 16:04
Thanks, the extension works great, much appreciated!
But it seems to only scrape a single page; can it not automatically scrape the following pages?

Before starting it, you first need to locate the next-page button:
after opening the extension, there is a Locate "Next" button; click it, then click the next-page button on the web page.

This post was last edited by inver11 on 2020-7-7 19:11
comeheres posted on 2020-7-7 18:54
Before starting it, you first need to locate the next-page button:
after opening the extension, there is a Locate "Next" button; click it, then go to the ...

I tried that, but it only scrapes up to the second page's data and then stops. Let me take a screenshot to show you.
inver11 posted on 2020-7-7 18:42
Boss, could I add you on WeChat or QQ?
I'd like to ask a couple more questions:
1. The company names scraped so far may be short names, so I also need ...
import requests
from lxml import etree
from openpyxl import Workbook


def main():
    row = 1
    wb = Workbook()
    ws = wb.active
    for num in range(1, 5):
        url = ('https://search.51job.com/list/030700,000000,0000,00,9,05%252C06%252C07%252C08%252C09,%2B,2,'
               + str(num)
               + '.html?lang=c&postchannel=0000&workyear=99&cotype=99&degreefrom=99&jobterm=99&companysize=99&ord_field=0&dibiaoid=0&line=&welfare=')
        # proxy to mask the client IP (the dict key must be the URL scheme, e.g. "http")
        proxies = {"http": 'http://115.221.242.206:9999'}
        # spoof the browser User-Agent
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36"}
        r = requests.get(url, headers=headers, proxies=proxies)
        r.encoding = 'gbk'
        html = etree.HTML(r.text)
        result = html.xpath('//div[(@class="el") and not(@id)]')
        for item in result:
            # company name (xpath returns a list, so take the first match)
            cpy = item.xpath('./span[@class="t2"]/a/@title')
            ws.cell(row=row, column=1, value=cpy[0] if cpy else '')
            # monthly salary
            sly = item.xpath('./span[@class="t4"]/text()')
            if sly:
                ws.cell(row=row, column=2, value=sly[0])
            # job title
            pos = item.xpath('./p/span/a/@title')
            ws.cell(row=row, column=3, value=pos[0] if pos else '')
            row = row + 1
    wb.save('job51.xlsx')


if __name__ == '__main__':
    main()
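On question 2 from earlier (putting the job title and salary into the same cell instead of two separate columns): one simple option is to join the two strings before writing the cell. A minimal standalone sketch, with made-up values (the names `pos` and `sly` follow the thread's code, but nothing here touches the 51job page):

```python
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
pos = 'Python Developer'   # job title (made-up value)
sly = '10k-15k/month'      # salary (made-up value)
ws.cell(row=1, column=1, value='Some Company Ltd.')
# join the job title and salary into one cell, separated by " | "
ws.cell(row=1, column=2, value=f'{pos} | {sly}')
wb.save('demo.xlsx')
```

The separator is arbitrary; anything that makes the two parts easy to split again later (e.g. " | ") works.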
This post was last edited by inver11 on 2020-7-8 10:55

suchocolate posted on 2020-7-7 20:11
Boss, what does the `and not(@id)` part of this line mean? I searched Baidu for ages and still couldn't figure it out:
result = html.xpath('//div[(@class="el") and not(@id)]')
inver11 posted on 2020-7-8 09:21
Boss, what does the `and not(@id)` part of this line mean? I searched Baidu for ages and still couldn't figure it out.

It selects the div nodes whose class attribute is "el" but that do not have an id attribute.
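A quick way to see what `not(@id)` does is to run the same expression against a tiny hand-written snippet (made-up HTML, not the 51job page):

```python
from lxml import etree

# two divs share class="el", but only the first carries an id attribute
html = etree.HTML('''
<body>
  <div class="el" id="header">header row</div>
  <div class="el">data row</div>
</body>
''')
# the predicate keeps a div only if class="el" AND it has no id attribute
nodes = html.xpath('//div[(@class="el") and not(@id)]')
print([n.text for n in nodes])  # only the second div matches
```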
suchocolate posted on 2020-7-8 09:59
It selects the div nodes whose class attribute is "el" but that do not have an id attribute.

Got it, thanks a lot!

suchocolate posted on 2020-7-7 20:11
I modified it again: it now also enters each company's detail page to get the company's business-license (registered) name, because the company name obtained from the listing page may only be a brand name.
import requests
from lxml import etree
from openpyxl import Workbook


def test(url_test):
    # fetch the company detail page and get the business-license name
    proxies = {"http": 'http://115.221.242.206:9999'}
    headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36"}
    r_test = requests.get(url_test, headers=headers, proxies=proxies)
    r_test.encoding = 'gbk'
    html = etree.HTML(r_test.text)
    result_test = html.xpath('//span[@class="icon_det"]/@title')
    if not result_test:
        result_test = html.xpath('//h1/@title')
    if not result_test:
        # fall back to the page title, trimming the trailing site-name suffix
        result_test = html.xpath('//head/title/text()')
        return result_test[0][:-3] if result_test else ''
    return result_test[0]


def main():
    row = 1
    wb = Workbook()
    ws = wb.active
    for num in range(1, 3):
        url = ('https://search.51job.com/list/030700,000000,0000,00,9,05%252C06%252C07%252C08%252C09,%2B,2,'
               + str(num)
               + '.html?lang=c&postchannel=0000&workyear=99&cotype=99&degreefrom=99&jobterm=99&companysize=99&ord_field=0&dibiaoid=0&line=&welfare=')
        # proxy to mask the client IP
        proxies = {"http": 'http://115.221.242.206:9999'}
        # spoof the browser User-Agent
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36"}
        r = requests.get(url, headers=headers, proxies=proxies)
        r.encoding = 'gbk'
        html = etree.HTML(r.text)
        result = html.xpath('//div[(@class="el") and not(@id)]')
        for item in result:
            # company: follow the detail-page link to get the license name
            cpy_url = item.xpath('./span[@class="t2"]/a/@href')
            print(cpy_url)
            cpy = test(cpy_url[0]) if cpy_url else ''
            ws.cell(row=row, column=1, value=cpy)
            # monthly salary
            sly = item.xpath('./span[@class="t4"]/text()')
            if sly:
                ws.cell(row=row, column=2, value=sly[0])
            # job title
            pos = item.xpath('./p/span/a/@title')
            ws.cell(row=row, column=3, value=pos[0] if pos else '')
            row = row + 1
    wb.save('job_51_2.xlsx')


if __name__ == '__main__':
    main()
suchocolate posted on 2020-7-7 20:11
I added two functions:
one to look up the company's business-license name
one to get the total page count
It scrapes the data successfully, but there is one problem: it is far too slow.
import requests
from lxml import etree
from openpyxl import Workbook


# ##################### get the current total page count ######################
def num_1():
    url = 'https://search.51job.com/list/030700,000000,0000,00,9,06%252C07%252C08%252C09%252C10,%2B,2,1.html?lang=c&postchannel=0000&workyear=99&cotype=99&degreefrom=99&jobterm=99&companysize=99&ord_field=0&dibiaoid=0&line=&welfare='
    headers = {'user-agent': 'firefox'}
    r = requests.get(url, headers=headers)
    r.encoding = 'gbk'
    html = etree.HTML(r.text)
    result = html.xpath('//div[@class="p_in"]/input[@type="hidden"]/@value')
    # xpath returns a list; return the first value (the page count as a string)
    return result[0]


# ####################### get the business-license name ####################
def test(url_test):
    proxies = {"http": 'http://115.221.242.206:9999'}
    headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36"}
    r_test = requests.get(url_test, headers=headers, proxies=proxies)
    r_test.encoding = 'gbk'
    html = etree.HTML(r_test.text)
    result_test = html.xpath('//span[@class="icon_det"]/@title')
    if not result_test:
        result_test = html.xpath('//h1/@title')
    if not result_test:
        # fall back to the page title, trimming the trailing site-name suffix
        result_test = html.xpath('//head/title/text()')
        return result_test[0][:-3] if result_test else ''
    return result_test[0]


def main():
    row = 1
    wb = Workbook()
    ws = wb.active
    num_2 = int(num_1()) + 1
    for num in range(1, num_2):
        url = ('https://search.51job.com/list/030700,000000,0000,00,9,05%252C06%252C07%252C08%252C09,%2B,2,'
               + str(num)
               + '.html?lang=c&postchannel=0000&workyear=99&cotype=99&degreefrom=99&jobterm=99&companysize=99&ord_field=0&dibiaoid=0&line=&welfare=')
        # proxy to mask the client IP
        proxies = {"http": 'http://115.221.242.206:9999'}
        # spoof the browser User-Agent
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36"}
        r = requests.get(url, headers=headers, proxies=proxies)
        r.encoding = 'gbk'
        html = etree.HTML(r.text)
        result = html.xpath('//div[(@class="el") and not(@id)]')
        for item in result:
            # company
            cpy_url = item.xpath('./span[@class="t2"]/a/@href')
            cpy = test(cpy_url[0]) if cpy_url else ''
            ws.cell(row=row, column=1, value=cpy)
            # monthly salary
            sly = item.xpath('./span[@class="t4"]/text()')
            if sly:
                ws.cell(row=row, column=2, value=sly[0])
            # job title
            pos = item.xpath('./p/span/a/@title')
            ws.cell(row=row, column=3, value=pos[0] if pos else '')
            row = row + 1
    wb.save('job_51_2.xlsx')


if __name__ == '__main__':
    main()
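On the speed problem: most of the time goes into the one-detail-page-per-row requests, which run strictly one after another. A common fix is to fetch the detail pages concurrently with a thread pool. A rough sketch using the standard library's concurrent.futures; here `fetch_name` and the example URLs are placeholders standing in for the thread's test() function and the real cpy_url links:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_name(url):
    # placeholder for test(): fetch one detail page and return the license name
    return 'company-' + url.rsplit('/', 1)[-1]

# placeholder detail-page URLs collected from one listing page
urls = ['https://example.com/co/%d' % i for i in range(8)]

# run up to 5 fetches at a time; pool.map keeps results in input order,
# so they still line up with the listing rows when written to Excel
with ThreadPoolExecutor(max_workers=5) as pool:
    names = list(pool.map(fetch_name, urls))
print(names)
```

With real requests, keep max_workers modest (and perhaps add a small delay per request) so the site does not block or throttle the scraper.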