鱼C论坛


Error when training a BERT model

Posted on 2024-11-2 23:24:52

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[50], line 4
      1 print('Training Start!')
      2 print('=' * 100)
----> 4 train(model,
      5     device,
      6     train_dataloader,
      7     valid_dataloader,
      8     CFG.epochs,
      9     loss_fn,
     10     optimizer,
     11     metric)
     13 del model,train_dataloader, valid_dataloader
     14 gc.collect()

Cell In[49], line 17, in train(model, device, train_dataloader, valid_dataloader, epochs, loss_fn, optimizer, metric)
     14 train_step = 0
     15 pbar = tqdm(train_dataloader)#tqdm参数是一个iterable
---> 17 for batch in pbar: # you can also write like "for batch in tqdm(train_dataloader"
     18     optimizer.zero_grad() # initialize
     19     train_step += 1

File /opt/conda/lib/python3.10/site-packages/tqdm/notebook.py:250, in tqdm_notebook.__iter__(self)
    248 try:
    249     it = super().__iter__()
--> 250     for obj in it:
    251         # return super(tqdm...) will not catch exception
    252         yield obj
    253 # NB: except ... [ as ...] breaks IPython async KeyboardInterrupt

File /opt/conda/lib/python3.10/site-packages/tqdm/std.py:1181, in tqdm.__iter__(self)
   1178 time = self._time
   1180 try:
-> 1181     for obj in iterable:
   1182         yield obj
   1183         # Update and possibly print the progressbar.
   1184         # Note: does not call self.update(1) for speed optimisation.

File /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:630, in _BaseDataLoaderIter.__next__(self)
    627 if self._sampler_iter is None:
    628     # TODO(https://github.com/pytorch/pytorch/issues/76750)
    629     self._reset()  # type: ignore[call-arg]
--> 630 data = self._next_data()
    631 self._num_yielded += 1
    632 if self._dataset_kind == _DatasetKind.Iterable and \
    633         self._IterableDataset_len_called is not None and \
    634         self._num_yielded > self._IterableDataset_len_called:

File /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:1344, in _MultiProcessingDataLoaderIter._next_data(self)
   1342 else:
   1343     del self._task_info[idx]
-> 1344     return self._process_data(data)

File /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:1370, in _MultiProcessingDataLoaderIter._process_data(self, data)
   1368 self._try_put_index()
   1369 if isinstance(data, ExceptionWrapper):
-> 1370     data.reraise()
   1371 return data

File /opt/conda/lib/python3.10/site-packages/torch/_utils.py:706, in ExceptionWrapper.reraise(self)
    702 except TypeError:
    703     # If the exception takes multiple arguments, don't try to
    704     # instantiate since we don't know how to
    705     raise RuntimeError(msg) from None
--> 706 raise exception

AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 309, in _worker_loop
    data = fetcher.fetch(index)  # type: ignore[possibly-undefined]
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 52, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/tmp/ipykernel_31/3318300818.py", line 15, in __getitem__
    text = self.df.loc[idx]['text'] # extracting text from each row
AttributeError: 'list' object has no attribute 'loc'


The error is shown above; the code that triggers it is below:
print('Training Start!')
print('=' * 100)

train(model,
    device,
    train_dataloader,
    valid_dataloader,
    CFG.epochs,
    loss_fn,
    optimizer,
    metric)

del model,train_dataloader, valid_dataloader
gc.collect()

Posted on 2024-11-2 23:25:01
Based on the error message, the problem is in the `__getitem__` method: `self.df` is a `list` object rather than something that supports `.loc` (such as a `pandas.DataFrame`).

To fix this, check the implementation of `__getitem__` in your dataset class and make sure `self.df` really is a `pandas.DataFrame`. Here is one possible correction:

import pandas as pd
from torch.utils.data import Dataset

class YourDatasetClass(Dataset):
    def __init__(self, dataframe):
        self.df = dataframe

    def __getitem__(self, idx):
        # Make sure self.df is a DataFrame
        if isinstance(self.df, pd.DataFrame):
            text = self.df.loc[idx, 'text_column_name']  # replace 'text_column_name' with your actual column name
            # ... further processing ...
            return text
        else:
            raise TypeError("self.df should be a pandas DataFrame")


Make sure that the object passed in when the dataset class is constructed is a `pandas.DataFrame`, not a `list`. If `self.df` comes from somewhere else, check the type of the object being passed.
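For example, a quick type check before building the dataset can catch this early. This is only a sketch; the names `train_data`, `LLMDataset`, and the constructor signature are assumptions about the notebook, not code from this thread:

```python
import pandas as pd

# train_data stands in for whatever object is currently passed to the dataset.
if not isinstance(train_data, pd.DataFrame):
    # If it arrived as a list of rows or dicts, convert it to a DataFrame first.
    train_data = pd.DataFrame(train_data)

print(type(train_data), len(train_data))
train_dataset = LLMDataset(train_data, is_grad=True, tokenizer=tokenizer)  # assumed constructor signature
```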

The reply above is from the FishC AI assistant; if it did not resolve your question, please follow up.

OP | Posted on 2024-11-3 10:33:10
Quote: FishC, 2024-11-2 23:25:
Based on the error message, the problem is in the `__getitem__` method: `self.df` is a `list` object rather than a ...
Training Start!
====================================================================================================
/tmp/ipykernel_31/152112840.py:32: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  'token_type_ids':torch.tensor(encoded_dict['token_type_ids'],dtype=torch.long).unsqueeze(dim=0)}
[the same UserWarning is repeated several more times]
  0%  0/229 [00:00<?, ?it/s]
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[58], line 4
      1 print('Training Start!')
      2 print('=' * 100)
----> 4 train(model,
      5     device,
      6     train_dataloader,
      7     valid_dataloader,
      8     CFG.epochs,
      9     loss_fn,
     10     optimizer,
     11     metric)
     13 del model,train_dataloader, valid_dataloader
     14 gc.collect()

Cell In[39], line 17, in train(model, device, train_dataloader, valid_dataloader, epochs, loss_fn, optimizer, metric)
     14 train_step = 0
     15 pbar = tqdm(train_dataloader)#tqdm参数是一个iterable
---> 17 for batch in pbar: # you can also write like "for batch in tqdm(train_dataloader"
     18     optimizer.zero_grad() # initialize
     19     train_step += 1

File /opt/conda/lib/python3.10/site-packages/tqdm/notebook.py:250, in tqdm_notebook.__iter__(self)
    248 try:
    249     it = super().__iter__()
--> 250     for obj in it:
    251         # return super(tqdm...) will not catch exception
    252         yield obj
    253 # NB: except ... [ as ...] breaks IPython async KeyboardInterrupt

File /opt/conda/lib/python3.10/site-packages/tqdm/std.py:1181, in tqdm.__iter__(self)
   1178 time = self._time
   1180 try:
-> 1181     for obj in iterable:
   1182         yield obj
   1183         # Update and possibly print the progressbar.
   1184         # Note: does not call self.update(1) for speed optimisation.

File /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:630, in _BaseDataLoaderIter.__next__(self)
    627 if self._sampler_iter is None:
    628     # TODO(https://github.com/pytorch/pytorch/issues/76750)
    629     self._reset()  # type: ignore[call-arg]
--> 630 data = self._next_data()
    631 self._num_yielded += 1
    632 if self._dataset_kind == _DatasetKind.Iterable and \
    633         self._IterableDataset_len_called is not None and \
    634         self._num_yielded > self._IterableDataset_len_called:

File /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:1344, in _MultiProcessingDataLoaderIter._next_data(self)
   1342 else:
   1343     del self._task_info[idx]
-> 1344     return self._process_data(data)

File /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:1370, in _MultiProcessingDataLoaderIter._process_data(self, data)
   1368 self._try_put_index()
   1369 if isinstance(data, ExceptionWrapper):
-> 1370     data.reraise()
   1371 return data

File /opt/conda/lib/python3.10/site-packages/torch/_utils.py:706, in ExceptionWrapper.reraise(self)
    702 except TypeError:
    703     # If the exception takes multiple arguments, don't try to
    704     # instantiate since we don't know how to
    705     raise RuntimeError(msg) from None
--> 706 raise exception

KeyError: Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/pandas/core/indexes/base.py", line 3805, in get_loc
    return self._engine.get_loc(casted_key)
  File "index.pyx", line 167, in pandas._libs.index.IndexEngine.get_loc
  File "index.pyx", line 196, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 2606, in pandas._libs.hashtable.Int64HashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 2630, in pandas._libs.hashtable.Int64HashTable.get_item
KeyError: 1287

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 309, in _worker_loop
    data = fetcher.fetch(index)  # type: ignore[possibly-undefined]
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 52, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/tmp/ipykernel_31/152112840.py", line 15, in __getitem__
    text = self.df.loc[idx]['cleaned'] # extracting text from each row
  File "/opt/conda/lib/python3.10/site-packages/pandas/core/indexing.py", line 1191, in __getitem__
    return self._getitem_axis(maybe_callable, axis=axis)
  File "/opt/conda/lib/python3.10/site-packages/pandas/core/indexing.py", line 1431, in _getitem_axis
    return self._get_label(key, axis=axis)
  File "/opt/conda/lib/python3.10/site-packages/pandas/core/indexing.py", line 1381, in _get_label
    return self.obj.xs(label, axis=axis)
  File "/opt/conda/lib/python3.10/site-packages/pandas/core/generic.py", line 4301, in xs
    loc = index.get_loc(key)
  File "/opt/conda/lib/python3.10/site-packages/pandas/core/indexes/base.py", line 3812, in get_loc
    raise KeyError(key) from err
KeyError: 1287


The error is shown above.

Posted on 2024-11-3 10:33:19
Quote: Terence888, 2024-11-03 10:33:
Training Start!
================================= ...


Based on the error message, `KeyError: 1287` means that the row label `1287` was looked up with `.loc` but does not exist in the DataFrame's index. This usually happens when the dataset's length and its index labels do not match, for example after splitting or filtering the DataFrame without resetting the index.

Here are some possible fixes:

1. **Check the dataset size**: make sure the dataset is as large as you expect. You can confirm this by printing its length:

  
print(len(self.df))


2. **Check the index range**: in `__getitem__`, make sure the index `idx` is valid for the dataset. You can add a check inside the method:

  
def __getitem__(self, idx):
    if idx >= len(self.df):
        raise IndexError("Index out of range")
    # continue processing


3. **Dataset splitting**: if the DataFrame was split (for example into train and validation sets) without resetting its index, or if a split ended up empty or of an unexpected size, `.loc[idx]` will look up labels that no longer exist. Check the splitting code and make sure every split has data and a clean 0-based index; see the sketch after this list.

4. **DataLoader settings**: check the `batch_size` and `shuffle` arguments of the `DataLoader` and make sure they are compatible with the dataset size.

5. **Debug output**: add debug prints inside `__getitem__` to help locate the problem:

  
def __getitem__(self, idx):
    print(f"Accessing index: {idx}")
    # continue processing


These steps should help you narrow down and fix the problem. If it persists, check the rest of the data pipeline to make sure the data is complete and consistent.
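A very common cause of exactly this `KeyError` is splitting the DataFrame while keeping the original row labels, so that `__len__` is correct but `.loc[idx]` fails for some `idx`. A minimal sketch of the usual fix, assuming a `train_test_split`-style split and the hypothetical variable names `df`, `train_df`, `valid_df`:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical split; the actual splitting code is not shown in this thread.
train_df, valid_df = train_test_split(df, test_size=0.2, random_state=42)

# After the split the row labels are no longer 0..len-1, so inside __getitem__
# self.df.loc[idx] can raise KeyError (e.g. 1287) even though idx < len(self.df).
# Resetting the index restores contiguous 0-based labels.
train_df = train_df.reset_index(drop=True)
valid_df = valid_df.reset_index(drop=True)

# Alternatively, use positional indexing inside the dataset instead of .loc:
# text = self.df.iloc[idx]['cleaned']
```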

The reply above is from the FishC AI assistant; if it did not resolve your question, please follow up.

OP | Posted on 2024-11-3 10:38:44
Quote: FishC, 2024-11-3 10:33:
Based on the error message, `KeyError: 1287` means that the row label `1287` was looked up with `.loc` but does not exist in the DataFrame's index. This ...
from torch.utils.data import Dataset
import torch

# Define the dataset
class LLMDataset(Dataset):
    def __init__(self, df, is_grad, tokenizer):
        self.df = df  # pandas.DataFrame
        self.is_grad = is_grad  # True: train/valid, False: test
        self.tokenizer = tokenizer

    def __len__(self):
        return len(self.df)  # number of samples

    def __getitem__(self, idx):
        text = self.df.loc[idx]['cleaned']  # extracting text from each row

        encoded_dict = self.tokenizer.encode_plus(
            text,
            add_special_tokens=True,     # automatically add special tokens (CLS and SEP) around each text
            padding='max_length',        # pad to max_length
            truncation=True,             # truncate sentences longer than max_length
            max_length=84,               # given to the max_length of tokenized text
            return_tensors='pt',         # PyTorch
            return_attention_mask=True,  # needed by the model so attention ignores the padding positions
        )

        if self.is_grad:  # training/validation set
            # [batch,1,max_len(84)] -> [batch,max_len]: squeeze removes the extra dimension
            return {'input_ids': encoded_dict['input_ids'].squeeze(),
                    'attention_mask': encoded_dict['attention_mask'].squeeze(),
                    'token_type_ids': torch.tensor(encoded_dict['token_type_ids'], dtype=torch.long).unsqueeze(dim=0)}
        else:  # test set
            # [batch,1,max_len(84)] -> [batch,max_len]
            return {'input_ids': encoded_dict['input_ids'].squeeze(),
                    'attention_mask': encoded_dict['attention_mask'].squeeze()}


What is wrong with the dataset code above?

Posted on 2024-11-3 10:54:58
Quote: Terence888, 2024-11-03 10:38:
from torch.utils.data import Dataset
import torch ...


There are a few issues to note in the code you posted:

1. **Use of `self.df.loc`**:
   - In `__getitem__`, `text = self.df.loc[idx]['cleaned']` uses chained indexing; prefer `self.df.loc[idx, 'cleaned']`. More importantly, `.loc` looks up the row *label* `idx`, so if the DataFrame was split or filtered without `reset_index(drop=True)`, some labels will be missing and you get the `KeyError` seen above.

2. **Handling of `encoded_dict`**:
   - `encoded_dict` is a dictionary with several keys (`input_ids`, `attention_mask`, `token_type_ids`). With `return_tensors='pt'`, each value is already a tensor of shape `(1, max_length)`, so each key should be handled individually and squeezed to `(max_length,)`.

3. **Use of `torch.tensor`**:
   - `torch.tensor(encoded_dict['token_type_ids'], dtype=torch.long)` re-wraps a value that is already a tensor; that re-wrapping is what produces the copy-construct `UserWarning` in your log. In addition, `.unsqueeze(dim=0)` gives `token_type_ids` an extra dimension while the other two fields are squeezed; it should be `.squeeze()`d so all three have the same shape.

Here is a corrected version of the code:

from torch.utils.data import Dataset
import torch

class LLMDataset(Dataset):
    def __init__(self, df, is_grad, tokenizer):
        self.df = df
        self.is_grad = is_grad
        self.tokenizer = tokenizer

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        # 'text_column' stands for the column holding the text; in this thread it is 'cleaned'
        text = self.df.loc[idx, 'text_column']

        encoded_dict = self.tokenizer.encode_plus(
            text,
            add_special_tokens=True,
            padding='max_length',
            truncation=True,
            max_length=84,
            return_tensors='pt',
            return_attention_mask=True,
        )

        if self.is_grad:
            return {
                'input_ids': encoded_dict['input_ids'].squeeze(),
                'attention_mask': encoded_dict['attention_mask'].squeeze(),
                'token_type_ids': torch.tensor(encoded_dict['token_type_ids'], dtype=torch.long).squeeze()
            }
        else:
            return {
                'input_ids': encoded_dict['input_ids'].squeeze(),
                'attention_mask': encoded_dict['attention_mask'].squeeze()
            }


Make sure `__getitem__` accesses the DataFrame column correctly and handles each key of `encoded_dict` individually.
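For completeness, here is a usage sketch showing how the dataset and DataLoader fit together; the variable name `train_df` and the `bert-base-uncased` checkpoint are assumptions, not code from this thread:

```python
import pandas as pd
from torch.utils.data import DataLoader
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')  # assumed checkpoint

# Reset the index after any split so .loc[idx] works for every idx in range(len(df)).
train_df = train_df.reset_index(drop=True)

train_dataset = LLMDataset(train_df, is_grad=True, tokenizer=tokenizer)
train_dataloader = DataLoader(train_dataset, batch_size=32, shuffle=True, num_workers=2)

batch = next(iter(train_dataloader))
print(batch['input_ids'].shape)       # torch.Size([32, 84])
print(batch['attention_mask'].shape)  # torch.Size([32, 84])
print(batch['token_type_ids'].shape)  # torch.Size([32, 84])
```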

The reply above is from the FishC AI assistant; if it did not resolve your question, please follow up.

OP | Posted on 2024-11-3 11:54:19
Quote: FishC, 2024-11-3 10:54:
There are a few issues to note in the code you posted:

1. **Use of `self.df.loc`**: ...
'input_ids': encoded_dict['input_ids'].squeeze(),
'attention_mask': encoded_dict['attention_mask'].squeeze(),
'token_type_ids': torch.tensor(encoded_dict['token_type_ids'], dtype=torch.long).squeeze()


What shapes do `input_ids`, `attention_mask`, and `token_type_ids` have?

Posted on 2024-11-3 11:54:28
Quote: Terence888, 2024-11-03 11:54:
'input_ids': encoded_dict['input_ids'].squeeze(),
             ...


In the context of the BERT model and the `transformers` library, the three fields have the following shapes:

1. **`input_ids`**: the token IDs of the input text. `encode_plus(..., return_tensors='pt')` on a single text returns a tensor of shape `(1, max_length)` (here `max_length=84`); after `.squeeze()` in `__getitem__` it becomes `(max_length,)`, and the DataLoader then stacks samples into `(batch_size, max_length)`.

2. **`attention_mask`**: marks which positions are real tokens (1) and which are padding (0). It has the same shape as `input_ids` at each stage.

3. **`token_type_ids`**: distinguishes the two segments in sentence-pair tasks (all zeros for a single sentence). It also has the same shape as `input_ids` at each stage.

In your code, it seems like there might be a misunderstanding in how `encoded_dict` is being used. The `encoded_dict` returned by `tokenizer.encode_plus` is a dictionary containing these keys, and you should access them individually. Here's a corrected version of how you might handle these tensors:

{
    'input_ids': encoded_dict['input_ids'].squeeze(),
    'attention_mask': encoded_dict['attention_mask'].squeeze(),
    'token_type_ids': encoded_dict['token_type_ids'].squeeze()
}


This assumes that `encoded_dict` contains these keys, which is typical when using `encode_plus` with `return_tensors='pt'`.
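As a quick check, here is a small sketch that prints the shapes at each stage (the `bert-base-uncased` checkpoint is an assumption; any BERT tokenizer that returns token type IDs behaves the same way):

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')  # assumed checkpoint

encoded_dict = tokenizer.encode_plus(
    "a single example sentence",
    add_special_tokens=True,
    padding='max_length',
    truncation=True,
    max_length=84,
    return_tensors='pt',
    return_attention_mask=True,
)

print(encoded_dict['input_ids'].shape)            # torch.Size([1, 84])
print(encoded_dict['token_type_ids'].shape)       # torch.Size([1, 84])
print(encoded_dict['input_ids'].squeeze().shape)  # torch.Size([84]); this is what __getitem__ should return
# A DataLoader with batch_size=32 then collates these per-sample tensors into torch.Size([32, 84]).
```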

The reply above is from the FishC AI assistant; if it did not resolve your question, please follow up.
