嘴角向上 posted on 2020-4-6 20:51:06

Why does this error appear? Is my computer out of memory?

Here is the code; the dataset has about 110,000 rows.
import pandas as pd  # load the dataset
url = r"C:\Users\zmj佳佳佳\Desktop\第六步离散化测试.csv"
df = pd.read_csv(url, header=None, low_memory=False)
df.columns=["sub_grade","dti","delinq_2yrs","earliest_cr_line","fico_range_low","inq_last_6mths",
            "mths_since_last_delinq","pub_rec","revol_bal","revol_util","mths_since_last_major_derog",
            "tot_cur_bal","open_acc_6m","open_il_12m","open_il_24m","mths_since_rcnt_il","open_rv_12m",
            "open_rv_24m","max_bal_bc","all_util","inq_last_12m","acc_open_past_24mths","avg_cur_bal",
            "bc_open_to_buy","mo_sin_old_il_acct","mo_sin_old_rev_tl_op","mo_sin_rcnt_rev_tl_op","mo_sin_rcnt_tl",
         "mort_acc","mths_since_recent_bc_dlq","mths_since_recent_inq","mths_since_recent_revol_delinq",
            "num_accts_ever_120_pd","num_actv_bc_tl","num_actv_rev_tl","num_bc_sats","num_bc_tl",
            "num_rev_accts","num_rev_tl_bal_gt_0","num_tl_90g_dpd_24m","num_tl_op_past_12m","pct_tl_nvr_dlq",
            "pub_rec_bankruptcies"]
# split the dataset into training and test sets
from sklearn.model_selection import train_test_split

from sklearn.ensemble import RandomForestClassifier
x, y = df.iloc[:, 1:].values, df.iloc[:, 0].values
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.3, random_state = 0)
feat_labels = df.columns[1:]  # feature names only; column 0 ("sub_grade") is the target
forest = RandomForestClassifier(n_estimators=10000, random_state=0, n_jobs=-1)
forest.fit(x_train, y_train)
print("accuracy:", forest.score(x_test, y_test))
# feature importance evaluation
import numpy as np
importances = forest.feature_importances_
indices = np.argsort(importances)[::-1]
for f in range(x_train.shape[1]):
    print("%2d) %-*s %f" % (f + 1, 30, feat_labels[indices[f]], importances[indices[f]]))
Traceback (most recent call last):
File "C:\Users\zmj佳佳佳\Desktop\uci葡萄酒随机森林特征选择 - 副本.py", line 21, in <module>
    forest.fit(x_train, y_train)
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\site-packages\sklearn\ensemble\_forest.py", line 377, in fit
    trees = Parallel(n_jobs=self.n_jobs, verbose=self.verbose,
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\site-packages\joblib\parallel.py", line 1017, in __call__
    self.retrieve()
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\site-packages\joblib\parallel.py", line 909, in retrieve
    self._output.extend(job.get(timeout=self.timeout))
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\multiprocessing\pool.py", line 768, in get
    raise self._value
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\multiprocessing\pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\site-packages\joblib\_parallel_backends.py", line 608, in __call__
    return self.func(*args, **kwargs)
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\site-packages\joblib\parallel.py", line 255, in __call__
    return [func(*args, **kwargs)
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\site-packages\joblib\parallel.py", line 255, in <listcomp>
    return [func(*args, **kwargs)
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\site-packages\sklearn\ensemble\_forest.py", line 165, in _parallel_build_trees
    tree.fit(X, y, sample_weight=curr_sample_weight, check_input=False)
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\site-packages\sklearn\tree\_classes.py", line 873, in fit
    super().fit(
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\site-packages\sklearn\tree\_classes.py", line 367, in fit
    builder.build(self.tree_, X, y, sample_weight, X_idx_sorted)
File "sklearn\tree\_tree.pyx", line 146, in sklearn.tree._tree.DepthFirstTreeBuilder.build
File "sklearn\tree\_tree.pyx", line 244, in sklearn.tree._tree.DepthFirstTreeBuilder.build
File "sklearn\tree\_tree.pyx", line 739, in sklearn.tree._tree.Tree._add_node
File "sklearn\tree\_tree.pyx", line 711, in sklearn.tree._tree.Tree._resize_c
File "sklearn\tree\_utils.pyx", line 41, in sklearn.tree._utils.safe_realloc
MemoryError: could not allocate 33554432 bytes

qiuyouzhi posted on 2020-4-6 20:52:03

Yes, a MemoryError usually means you have run out of memory.

乘号 posted on 2020-4-6 20:53:35

Not enough memory.

嘴角向上 posted on 2020-4-6 20:56:07

qiuyouzhi posted on 2020-4-6 20:52
Yes, a MemoryError usually means you have run out of memory.

What do I do about insufficient memory? Is it my computer's RAM that's short, or the C drive? Can I move the Python installation to another drive?

qiuyouzhi posted on 2020-4-6 20:57:07

嘴角向上 posted on 2020-4-6 20:56
What do I do about insufficient memory? Is it my computer's RAM that's short, or the C drive? Can I move the Python installation to another drive?

Drives don't have RAM...
You could switch to another computer, or install a few more RAM sticks.

嘴角向上 posted on 2020-4-6 20:57:11

乘号 posted on 2020-4-6 20:53
Not enough memory.

Do I really have to change computers? I'm in despair.
I don't understand this stuff; I just need it to write a paper.

嘴角向上 posted on 2020-4-6 21:21:39

乘号 posted on 2020-4-6 20:53
Not enough memory.

A classmate told me 33554432 bytes is only about 0.03 GB, so how can memory be short? Very strange. Is the problem with my data?
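The unit conversion itself is easy to sanity-check in Python:

```python
# Verify the conversion of the failed allocation size from the MemoryError.
nbytes = 33554432          # the size reported in the traceback
print(nbytes / 1024**2)    # 32.0 -> 32 MiB
print(nbytes / 1024**3)    # 0.03125 -> roughly the 0.03 GB quoted above
```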

嘴角向上 posted on 2020-4-6 21:25:18

qiuyouzhi posted on 2020-4-6 20:57
Drives don't have RAM...
You could switch to another computer, or install a few more RAM sticks.

But a classmate told me 33554432 bytes is only about 0.03 GB, which is hardly any memory. Is there a problem with my data, or with the code?

qiuyouzhi posted on 2020-4-6 21:31:23

嘴角向上 posted on 2020-4-6 21:25
But a classmate told me 33554432 bytes is only about 0.03 GB, which is hardly any memory. Is there a problem with my data, or with the code?

Are you sure?
https://baijiahao.baidu.com/s?id=1588676262181198926&wfr=spider&for=pc

嘴角向上 posted on 2020-4-6 21:42:31

qiuyouzhi posted on 2020-4-6 21:31
Are you sure?
https://baijiahao.baidu.com/s?id=1588676262181198926&wfr=spider&for=pc

Then for my 110,000-row dataset, with my current 4G64 machine, how many RAM sticks do I need to install before it will work?

qiuyouzhi posted on 2020-4-6 21:44:42

嘴角向上 posted on 2020-4-6 21:42
Then for my 110,000-row dataset, with my current 4G64 machine, how many RAM sticks do I need to install before it will work?

Install one first and see; if that's not enough, fill every slot.
If it still fails... you're out of luck.
Just a suggestion.

txxcat posted on 2020-4-6 22:21:09

33554432 / (1024**2) = 32 MB. If an allocation of a mere 32 MB fails, it doesn't add up: even 32-bit Python can work with more than 2 GB of data, so "not enough memory" alone doesn't explain this.
The problem is probably this line:
forest = RandomForestClassifier(n_estimators=10000, random_state=0, n_jobs=-1)
I found an explanation online: to build a random forest, the first step is choosing the number of trees, tuned through the model's n_estimators parameter. A larger n_estimators is generally better, but memory use and training/prediction time grow with it, and the marginal benefit diminishes, so you should pick the largest value your memory/time budget can bear. In sklearn, n_estimators defaults to 10.
I suggest you try a smaller n_estimators and see.
If you can, also try another computer, ideally with 64-bit Python, and check whether the same error occurs, to narrow the problem down further.
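A minimal sketch of what I mean, with random stand-in data since I don't have your CSV (the 42-feature shape and the n_estimators=100 choice are just assumptions to illustrate):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the real ~110k-row CSV: 1000 rows, 42 features,
# 5 classes, so the snippet runs on its own.
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 42))
y = rng.integers(0, 5, size=1000)

x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.3, random_state=0)

forest = RandomForestClassifier(
    n_estimators=100,  # far fewer trees than 10000; raise only if accuracy demands it
    n_jobs=1,          # parallel workers multiply peak memory; keep to 1 on a 4 GB machine
    random_state=0,
)
forest.fit(x_train, y_train)
print("accuracy:", forest.score(x_test, y_test))
```

With 100 trees the peak memory is roughly two orders of magnitude below the 10000-tree run, and on most tabular problems the accuracy difference is small.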

嘴角向上 posted on 2020-4-7 20:32:30

txxcat posted on 2020-4-6 22:21
33554432 / (1024**2) = 32 MB. If an allocation of a mere 32 MB fails, it doesn't add up...
The problem is probably ...

Thanks! I tried reducing the number of trees and it works now.