lengyue869 posted on 2024-3-29 14:52:54

How to scrape information from a web page

Last edited by lengyue869 on 2024-3-30 15:41

This URL contains multiple groups of data. How do I scrape each group into Excel?

FishC posted on 2024-3-29 14:53:17

To scrape information from a web page, you can use Python's requests library to send an HTTP request and fetch the page content, then parse that content with the BeautifulSoup library.

First, you need to install the requests and beautifulsoup4 libraries. You can install both from the command line with the following command:

pip install requests beautifulsoup4

Next, you can use code like the following to fetch the page content and parse out the data you need:

import requests
from bs4 import BeautifulSoup
import xlwt

# Send an HTTP GET request to fetch the page content
url = 'https://xinchuangka.com//vniao/4EEBFBF5'
response = requests.get(url)
html = response.text

# Create an Excel workbook
workbook = xlwt.Workbook(encoding='utf-8')
sheet = workbook.add_sheet('data')

# Parse the page with BeautifulSoup
soup = BeautifulSoup(html, 'html.parser')

# Find the element wrapping each data group
data_elements = soup.find_all('div', class_='data-group')

# Iterate over the data groups
for i, data_element in enumerate(data_elements):
    # Get the title and content of each group
    title = data_element.find('div', class_='data-title').text.strip()
    content = data_element.find('div', class_='data-content').text.strip()

    # Write the data into the Excel sheet
    sheet.write(i, 0, title)
    sheet.write(i, 1, content)

# Save the Excel file
workbook.save('data.xls')

The code above writes the title and content of each data group on the page into an Excel file. Note that it assumes each group is wrapped in `<div class="data-group">`; if the actual page structure differs, adjust the selectors accordingly. Also, the code uses the xlwt library to create and save the Excel file; if you need the xlsx format, use the openpyxl library instead.
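
For reference, a minimal sketch of the same scrape saved with openpyxl instead of xlwt (same assumed `data-group` page structure as above, not verified against the actual site):

import requests
from bs4 import BeautifulSoup
from openpyxl import Workbook

url = 'https://xinchuangka.com//vniao/4EEBFBF5'
html = requests.get(url).text
soup = BeautifulSoup(html, 'html.parser')

# openpyxl creates xlsx workbooks; the first sheet is active by default
workbook = Workbook()
sheet = workbook.active
sheet.title = 'data'

# Same assumed structure as the xlwt example above
for i, data_element in enumerate(soup.find_all('div', class_='data-group'), start=1):
    title = data_element.find('div', class_='data-title').text.strip()
    content = data_element.find('div', class_='data-content').text.strip()
    sheet.cell(row=i, column=1, value=title)   # openpyxl rows/columns are 1-based
    sheet.cell(row=i, column=2, value=content)

workbook.save('data.xlsx')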

The reply above is from the FishC AI assistant. If it did not answer your question correctly, please ask a follow-up.

lengyue869 posted on 2024-3-29 15:55:56

Hey guys, don't just keep placing orders; give me some ideas!

FishC posted on 2024-3-29 15:56:11

lengyue869 posted on 2024-03-29 15:55:
Hey guys, don't just keep placing orders; give me some ideas!

When scraping information from a web page, the general steps are:

1. Use Python's requests library to send an HTTP request and fetch the page content.

2. Use a suitable parsing library, such as BeautifulSoup or Scrapy, to parse the page content and extract the target data.

3. Based on the page structure and the characteristics of the target data, write the code that pulls out the fields you need.

4. Process and save the extracted data. You can save it to a text file, a CSV file, an Excel file, a database, and so on.

Here is a simple example demonstrating how to scrape data from a page:

import requests
from bs4 import BeautifulSoup

# Send an HTTP GET request to fetch the page content
url = 'https://example.com'
response = requests.get(url)
html = response.text

# Parse the page with BeautifulSoup
soup = BeautifulSoup(html, 'html.parser')

# Locate the elements containing the target data and extract it
data_elements = soup.find_all('div', class_='data')
for data_element in data_elements:
    title = data_element.find('h2').text
    content = data_element.find('p').text

    # Process and save the data here
    print(f'Title: {title}')
    print(f'Content: {content}')

In practice, you also need to watch out for anti-scraping measures such as captchas and request-rate limits. You should also follow the site's crawling rules and avoid putting excessive load on the server.
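
As a minimal sketch of the rate-limiting point, a polite crawl loop can send an identifying User-Agent, pause between requests, and handle failures gracefully; the URL and page range below are placeholders:

import time
import requests

headers = {'User-Agent': 'Mozilla/5.0'}  # identify the client; placeholder value

for page in range(1, 6):  # hypothetical page range
    try:
        response = requests.get(f'https://example.com/list?page={page}',
                                headers=headers, timeout=10)
        response.raise_for_status()  # raise on HTTP error status codes
    except requests.RequestException as e:
        print(f'Request for page {page} failed: {e}')
        continue
    # ... parse response.text here ...
    time.sleep(1)  # pause between requests so the server is not hammered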

The reply above is from the FishC AI assistant. If it did not answer your question correctly, please ask a follow-up.

不二如是 posted on 2024-3-29 16:08:05

Best not to scrape indiscriminately.

lengyue869 posted on 2024-3-29 16:14:04

不二如是 posted on 2024-3-29 16:08:
Best not to scrape indiscriminately.

It's a personal website, so it shouldn't really matter, right?

lengyue869 posted on 2024-3-30 15:34:30

Last edited by lengyue869 on 2024-4-4 00:37

PFZQ
%%time
import requests
import json
import pandas as pd
from datetime import datetime
import winreg

current_time = datetime.now().strftime("%m%d___%H.%M")

# Desktop path from the registry; QueryValueEx returns a (value, type) tuple
key = winreg.OpenKey(winreg.HKEY_CURRENT_USER,
                     r'Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders')
deskPath = winreg.QueryValueEx(key, "Desktop")[0]

url = "https://xinchuangka.com/shop/shop/getAccount"
myheaders = {
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36"
}

data_list1 = []
data_list2 = []

lastpage = 0
page = 50
allowed_regions = ["卡拉曼达", "暗影岛", "征服之海", "诺克萨斯", "战争学院", "雷瑟守备", "艾欧尼亚", "黑色玫瑰"]

for page_num in range(page):
    mydata = {"goodsid": "1765", "page": page_num + 1, "userid": "62", "type": "new"}
    html = requests.post(url, headers=myheaders, data=mydata).text
    text = json.loads(html)['data']

    if len(text) > 0:
        lastpage += 1

        for item in text:
            area = item['number']['2']
            if area not in allowed_regions:
                continue  # skip regions outside the allowed list
            name = item['number']['3']
            detail = item['number']['4']

            # The forum swallowed the bracketed expressions in the next few lines;
            # reconstructed by guesswork, assuming `detail` is a "|"-separated string
            # of "key:value" pairs such as "英雄:xx|皮肤:xx|...|最后游戏:YYYY-MM-DD"
            lst1 = detail.strip()
            if "英雄:" not in lst1:
                lst1 = "英雄:" + lst1
            lst = [s for s in lst1.split("|") if ":" in s]

            # Split each "key:value" pair into a dict entry
            data_dict = {"大区": area, "ID": name,
                         **{s[:s.find(":")]: s[s.find(":") + 1:] for s in lst}}
            data_dict['页码'] = page_num + 1  # record the page number

            if '最后游戏' in data_dict:
                data_dict['最后游戏'] = data_dict['最后游戏'][:10]
                last_game_date = pd.to_datetime(data_dict['最后游戏'], format="%Y-%m-%d")
                days_difference = (pd.Timestamp.now() - last_game_date).days
                if days_difference > 20:
                    data_list2.append(data_dict)
            data_list1.append(data_dict)

df1 = pd.DataFrame(data_list1)
df2 = pd.DataFrame(data_list2)

# The original column list was swallowed by the forum; assumed to be the numeric fields
numeric_columns = ["等级", "英雄", "皮肤", "单", "组", "精粹"]
df1[numeric_columns] = df1[numeric_columns].apply(pd.to_numeric, errors='ignore')
df2[numeric_columns] = df2[numeric_columns].apply(pd.to_numeric, errors='ignore')

columns = ["页码", "大区", "ID", "等级", "英雄", "皮肤", "单", "组", "精粹", "最后游戏"]
# The ascending flags were also swallowed; assumed to be all-descending
df1 = df1[columns].sort_values(by=["最后游戏", "皮肤", "等级"], ascending=[False, False, False])
df2 = df2[columns].sort_values(by=["最后游戏", "皮肤", "等级"], ascending=[False, False, False])

with pd.ExcelWriter(f"{deskPath}\\PF___{current_time}.xlsx", engine='xlsxwriter') as writer:
    df1.to_excel(writer, index=False, sheet_name='Sheet2')
    df2.to_excel(writer, index=False, sheet_name='Sheet1')

print(f"{lastpage} pages in total; data saved to the desktop.")




All Area
%%time
import requests
import json
import pandas as pd
from datetime import datetime
import winreg

current_time = datetime.now().strftime("%m%d___%H.%M")

# Desktop path from the registry; QueryValueEx returns a (value, type) tuple
key = winreg.OpenKey(winreg.HKEY_CURRENT_USER,
                     r'Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders')
deskPath = winreg.QueryValueEx(key, "Desktop")[0]

url = "https://xinchuangka.com/shop/shop/getAccount"
myheaders = {
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36"
}

data_list = []

lastpage = 0
page = 200
lst_goods = ["1734", "1736", "1739", "1741", "1743", "1744", "1750", "1751"]

for goodsid in lst_goods:
    for page_num in range(page):
        mydata = {"goodsid": goodsid, "page": page_num + 1, "userid": "62", "type": "new"}
        html = requests.post(url, headers=myheaders, data=mydata).text
        text = json.loads(html)['data']

        if len(text) > 0:
            lastpage += 1

            for item in text:
                area = item['number']['2']
                name = item['number']['3']
                detail = item['number']['4']

                # Reconstructed as in the PFZQ script above (the forum swallowed
                # the bracketed expressions); assumes "key:value" pairs joined by "|"
                lst1 = detail.strip()
                if "英雄:" not in lst1:
                    lst1 = "英雄:" + lst1
                lst = [s for s in lst1.split("|") if ":" in s]

                data_dict = {"大区": area, "ID": name,
                             **{s[:s.find(":")]: s[s.find(":") + 1:] for s in lst}}
                data_dict['页码'] = page_num + 1  # record the page number

                # Keep only accounts idle for more than 30 days
                if '最后游戏' in data_dict:
                    data_dict['最后游戏'] = data_dict['最后游戏'][:10]
                    last_game_date = pd.to_datetime(data_dict['最后游戏'], format="%Y-%m-%d")
                    days_difference = (pd.Timestamp.now() - last_game_date).days
                    if days_difference > 30:
                        data_list.append(data_dict)

df = pd.DataFrame(data_list)

# Assumed numeric fields (the original list was swallowed by the forum)
numeric_columns = ["等级", "英雄", "皮肤", "单", "组", "精粹"]
df[numeric_columns] = df[numeric_columns].apply(pd.to_numeric, errors='ignore')

columns = ["页码", "大区", "ID", "等级", "英雄", "皮肤", "单", "组", "精粹", "最后游戏"]
# The ascending flags were swallowed; assumed to be all-descending
df = df[columns].sort_values(by=["皮肤", "最后游戏", "等级"], ascending=[False, False, False])

with pd.ExcelWriter(f"{deskPath}\\All___{current_time}.xlsx", engine='xlsxwriter') as writer:
    # sht_name = df['大区']  # unused leftover from the original post
    df.to_excel(writer, index=False, sheet_name='Sheet1')

print(f"{lastpage} pages in total; data saved to the desktop.")


ALL EZ
%%time
import requests
import json
import pandas as pd
from datetime import datetime
import winreg

current_time = datetime.now().strftime("%m%d__%H.%M")

# Desktop path from the registry; QueryValueEx returns a (value, type) tuple
key = winreg.OpenKey(winreg.HKEY_CURRENT_USER,
                     r'Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders')
deskPath = winreg.QueryValueEx(key, "Desktop")[0]

url = "https://luck.92.edri.mobi/shop/shop/getAccount"
myheaders = {
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36"
}

data_list = []

lastpage = 0
page = 200
dic_goods = {"67305": "1802", "67310": "1807", "67311": "1808", "67320": "1817", "67328": "1825",
             "67321": "1818", "67327": "1824", "67313": "1810", "67316": "1813", "67318": "1815"}
# dic_goods = {"67305": "1802"}
for agent_goodsid, goodsid in dic_goods.items():
    for page_num in range(page):
        mydata = {"agent_goodsid": agent_goodsid, "goodsid": goodsid, "page": page_num + 1,
                  "userid": "959", "type": "new"}
        html = requests.post(url, headers=myheaders, data=mydata).text
        text = json.loads(html)['data']

        if len(text) > 0:
            lastpage += 1

            for item in text:
                area = item['number']['2']
                name = item['number']['3']
                detail = item['number']['4']

                # Reconstructed as in the PFZQ script above (the forum swallowed
                # the bracketed expressions); assumes "key:value" pairs joined by "|"
                lst1 = detail.strip()
                if "英雄:" not in lst1:
                    lst1 = "英雄:" + lst1
                lst = [s for s in lst1.split("|") if ":" in s]

                data_dict = {"大区": area, "ID": name,
                             **{s[:s.find(":")]: s[s.find(":") + 1:] for s in lst}}
                data_dict['页码'] = page_num + 1  # record the page number
                data_list.append(data_dict)

df = pd.DataFrame(data_list)

# Assumed numeric fields (the original list was swallowed by the forum)
numeric_columns = ["等级", "英雄", "皮肤", "单", "组"]
df[numeric_columns] = df[numeric_columns].apply(pd.to_numeric, errors='ignore')
# The filter column was swallowed as well; "皮肤" is a guess
df = df[df["皮肤"] > 400]

columns = ["页码", "大区", "ID", "等级", "英雄", "皮肤", "单", "组"]
df = df[columns].sort_values(by=["皮肤", "等级"], ascending=[False, False])  # flags assumed

with pd.ExcelWriter(f"{deskPath}\\EZ__All__{current_time}.xlsx", engine='xlsxwriter') as writer:
    df.to_excel(writer, index=False, sheet_name='Sheet1')

print(f"{lastpage} pages in total; data saved to the desktop.")


MX_New
import requests
import json
import pandas as pd
from datetime import datetime
import winreg, sys
import random, time

# Desktop path from the registry; QueryValueEx returns a (value, type) tuple
key = winreg.OpenKey(winreg.HKEY_CURRENT_USER,
                     r'Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders')
deskPath = winreg.QueryValueEx(key, "Desktop")[0]

url = "https://xinchuangka.com/shop/shop/getAccount"

# Pool of User-Agents to pick from at random
user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36",
]

data_list = []
lastpage = 0
allowed_regions = ["卡拉曼达", "暗影岛", "征服之海", "诺克萨斯", "战争学院", "雷瑟守备", "艾欧尼亚", "黑色玫瑰"]
lst_goods = ["2129", "2131", "2134", "2136", "2138", "2139", "2145", "2146"]
# lst_goods = ["2156"]  # skins-only goods id

for goodsid in lst_goods:
    # Get the total page count for this goods id (ceiling division, 10 items per page)
    mydata = {"goodsid": goodsid, "page": "1", "userid": "62", "type": "new"}
    myheaders = {"user-agent": random.choice(user_agents)}
    resp = json.loads(requests.post(url, headers=myheaders, data=mydata).text)
    page = -(-resp['count'] // 10)

    if page == 0:
        continue
    else:
        # The index into 'data' was swallowed by the forum; [0] restored by guesswork
        daqu = resp['data'][0]['number']['2']
        print(f"{daqu}, page: {page}")

    for page_num in range(page):
        mydata = {"goodsid": goodsid, "page": page_num + 1, "userid": "62", "type": "new"}
        myheaders = {"user-agent": random.choice(user_agents)}
        html = requests.post(url, headers=myheaders, data=mydata).text
        text = json.loads(html)['data']
        lastpage += 1

        for item in text:
            area = item['number']['2']
            name = item['number']['3']
            detail = item['number']['4']

            if area not in allowed_regions:
                continue  # skip regions outside the allowed list

            # Reconstructed as in the PFZQ script above (the forum swallowed
            # the bracketed expressions); assumes "key:value" pairs joined by "|"
            lst1 = detail.strip()
            if "英雄:" not in lst1:
                lst1 = "英雄:" + lst1
            lst = [s for s in lst1.split("|") if ":" in s]

            data_dict = {"大区": area, "ID": name,
                         **{s[:s.find(":")]: s[s.find(":") + 1:] for s in lst}}
            data_dict['页码'] = page_num + 1  # record the page number
            data_list.append(data_dict)
        # Add a delay so the requests are not sent too frequently
        time.sleep(random.uniform(0.1, 0.3))

df = pd.DataFrame(data_list)
if df.empty:
    print("No data; exiting.")
    sys.exit()

# Assumed numeric fields (the original list was swallowed by the forum)
numeric_columns = ["等级", "英雄", "皮肤", "单", "组", "精粹"]
df[numeric_columns] = df[numeric_columns].apply(pd.to_numeric, errors='ignore')
# The filter column was swallowed as well; "皮肤" is a guess
df = df[df["皮肤"] > 300]

columns = ["页码", "大区", "ID", "等级", "英雄", "皮肤", "单", "组", "精粹"]
df = df[columns].sort_values(by=["皮肤", "等级"], ascending=[False, False])  # flags assumed

exl_name = "MX__PF" if goodsid == "2156" else "MX"

current_time = datetime.now().strftime("%m%d__%H.%M")
with pd.ExcelWriter(f"{deskPath}\\{exl_name}__{current_time}.xlsx", engine='xlsxwriter') as writer:
    df.to_excel(writer, index=False, sheet_name='Sheet1')

print(f"{lastpage} pages in total; data saved to the desktop.")
import winsound
winsound.Beep(440, 1000)  # beep when finished



JS
https://mx.youyoupay.com/shop/0C5BA23E
// Define a function that clicks a link element repeatedly with a delay
function clickLinkMultipleTimesWithDelay(numClicks, delay) {
    // Grab the first link element on the page
    var linkElement = document.querySelector('a');

    // If the link element exists, start clicking it
    if (linkElement) {
        var i = 0;
        function clickNext() {
            if (i < numClicks) {
                linkElement.click(); // simulate a click on the link
                i++;
                setTimeout(clickNext, delay); // wait, then click again
            }
        }
        clickNext();
    } else {
        console.log('Link element not found.');
    }
}

// Click the link element 10 times, 500 milliseconds apart
clickLinkMultipleTimesWithDelay(10, 500);
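
(This snippet is presumably meant to be pasted into the browser's developer-tools console while the shop page above is open; it repeatedly clicks the first `<a>` element it finds, so adjust the querySelector if a different link is the target.)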


Sub ExtractGoodsInfo()
    Dim html As String
    Dim startPos As Long, endPos As Long
    Dim idStartPos As Long, idEndPos As Long
    Dim nameStartPos As Long, nameEndPos As Long
    Dim id As String, name As String
    Dim rowNum As Long
    Dim arr As Variant, ar As Variant
    Dim iRow As Long, i As Long
    Dim bYN As Boolean
    Dim str1 As String

    ' Read the HTML source from cell A1
    html = Range("A1").Value

    ' Clear the result columns and force text formatting
    Range("B:F").ClearContents
    Range("B:F").NumberFormatLocal = "@"

    ' Initialise the output row counter
    rowNum = 2

    ' Extract the goods entries
    startPos = 1
    Do
        ' Extract the goods ID (test for a match before adding the offset,
        ' otherwise the "not found" check below could never fire)
        idStartPos = InStr(startPos, html, "value=""")
        If idStartPos = 0 Then Exit Do ' no more IDs, stop
        idStartPos = idStartPos + Len("value=""")
        idEndPos = InStr(idStartPos, html, """", vbBinaryCompare)
        id = Mid(html, idStartPos, idEndPos - idStartPos)

        ' Extract the goods name
        nameStartPos = InStr(idEndPos, html, ">") + 1
        nameEndPos = InStr(nameStartPos, html, "<")
        name = Trim(Mid(html, nameStartPos, nameEndPos - nameStartPos))

        ' Strip the 4-character shop prefix (米花/阿九/萌新/白云) from the name
        If InStr(name, "米花") Or InStr(name, "阿九") Or InStr(name, "萌新") Or InStr(name, "白云") Then
            name = Mid(name, 5)
        End If

        ' Write to the sheet
        Cells(rowNum, 2).Value = CStr(id)
        Cells(rowNum, 3).Value = name

        ' Advance the row counter and the search position
        rowNum = rowNum + 1
        startPos = nameEndPos

        ' Stop once the non-target regions appear
        If InStr(name, "男爵领域") Or InStr(name, "巨龙之巢") Or InStr(name, "皮城警备") Then Exit Do

    Loop

    ' Delete rows whose name does not contain one of the target regions
    arr = Array("卡拉曼达", "暗影岛", "征服之海", "诺克萨斯", "战争学院", "雷瑟守备", "艾欧尼亚", "黑色玫瑰", "皮肤")
    iRow = Cells(Rows.Count, 3).End(xlUp).Row
    For i = iRow To 2 Step -1
        bYN = False
        For Each ar In arr
            If InStr(Cells(i, 3).Value, ar) Then
                bYN = True
                Exit For
            End If
        Next

        If bYN = False Then Rows(i).Delete
    Next

    ' Build a comma-separated, quoted list of the remaining IDs in F5
    iRow = Cells(Rows.Count, 3).End(xlUp).Row

    str1 = ""
    For i = 2 To iRow
        str1 = str1 & "," & """" & Cells(i, 2).Text & """"
    Next

    Range("F5").NumberFormatLocal = "@"
    Range("F5") = Mid(str1, 2)

End Sub
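
(Judging from the code, this macro is presumably used by pasting the shop page's HTML source into cell A1 and then running it: the goods IDs and names land in columns B and C, rows outside the target regions are deleted, and the quoted ID list ends up in F5.)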
