Here is the code (the data sample has about 110,000 rows), followed by the error it throws:
import pandas as pd  # load the dataset
url = r"C:\Users\zmj佳佳佳\Desktop\第六步离散化测试.csv"
df = pd.read_csv(url, header=None, low_memory=False)
df.columns=["sub_grade","dti","delinq_2yrs","earliest_cr_line","fico_range_low","inq_last_6mths",
"mths_since_last_delinq","pub_rec","revol_bal","revol_util","mths_since_last_major_derog",
"tot_cur_bal","open_acc_6m","open_il_12m","open_il_24m","mths_since_rcnt_il","open_rv_12m",
"open_rv_24m","max_bal_bc","all_util","inq_last_12m","acc_open_past_24mths","avg_cur_bal",
"bc_open_to_buy","mo_sin_old_il_acct","mo_sin_old_rev_tl_op","mo_sin_rcnt_rev_tl_op","mo_sin_rcnt_tl",
"mort_acc","mths_since_recent_bc_dlq","mths_since_recent_inq","mths_since_recent_revol_delinq",
"num_accts_ever_120_pd","num_actv_bc_tl","num_actv_rev_tl","num_bc_sats","num_bc_tl",
"num_rev_accts","num_rev_tl_bal_gt_0","num_tl_90g_dpd_24m","num_tl_op_past_12m","pct_tl_nvr_dlq",
"pub_rec_bankruptcies"]
# Split the dataset into training and test sets
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
x, y = df.iloc[:, 1:].values, df.iloc[:, 0].values
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.3, random_state = 0)
feat_labels = df.columns[1:]
forest = RandomForestClassifier(n_estimators=10000, random_state=0, n_jobs=-1)
forest.fit(x_train, y_train)
print("准确率:",forest.score(x_test,y_test))
#特征重要性评估
import numpy as np
importances = forest.feature_importances_
indices = np.argsort(importances)[::-1]
for f in range(x_train.shape[1]):
    print("%2d) %-*s %f" % (f + 1, 30, feat_labels[indices[f]], importances[indices[f]]))
Traceback (most recent call last):
File "C:\Users\zmj佳佳佳\Desktop\uci葡萄酒随机森林特征选择 - 副本.py", line 21, in <module>
forest.fit(x_train, y_train)
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\site-packages\sklearn\ensemble\_forest.py", line 377, in fit
trees = Parallel(n_jobs=self.n_jobs, verbose=self.verbose,
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\site-packages\joblib\parallel.py", line 1017, in __call__
self.retrieve()
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\site-packages\joblib\parallel.py", line 909, in retrieve
self._output.extend(job.get(timeout=self.timeout))
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\multiprocessing\pool.py", line 768, in get
raise self._value
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\multiprocessing\pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\site-packages\joblib\_parallel_backends.py", line 608, in __call__
return self.func(*args, **kwargs)
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\site-packages\joblib\parallel.py", line 255, in __call__
return [func(*args, **kwargs)
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\site-packages\joblib\parallel.py", line 255, in <listcomp>
return [func(*args, **kwargs)
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\site-packages\sklearn\ensemble\_forest.py", line 165, in _parallel_build_trees
tree.fit(X, y, sample_weight=curr_sample_weight, check_input=False)
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\site-packages\sklearn\tree\_classes.py", line 873, in fit
super().fit(
File "C:\Users\zmj佳佳佳\AppData\Local\Programs\Python\Python38\lib\site-packages\sklearn\tree\_classes.py", line 367, in fit
builder.build(self.tree_, X, y, sample_weight, X_idx_sorted)
File "sklearn\tree\_tree.pyx", line 146, in sklearn.tree._tree.DepthFirstTreeBuilder.build
File "sklearn\tree\_tree.pyx", line 244, in sklearn.tree._tree.DepthFirstTreeBuilder.build
File "sklearn\tree\_tree.pyx", line 739, in sklearn.tree._tree.Tree._add_node
File "sklearn\tree\_tree.pyx", line 711, in sklearn.tree._tree.Tree._resize_c
File "sklearn\tree\_utils.pyx", line 41, in sklearn.tree._utils.safe_realloc
MemoryError: could not allocate 33554432 bytes
33554432 / (1024**2) = 32 MB, so it fails trying to allocate just 32 MB. Even 32-bit Python can handle more than 2 GB of data, so it makes no sense that memory should run out.
The problem is probably this line: forest = RandomForestClassifier(n_estimators=10000, random_state=0, n_jobs=-1)
I found this via a Baidu search: to build a random forest model, the first step is deciding the number of trees in the forest, which is tuned through the model's n_estimators parameter. A larger n_estimators is generally better, but memory usage and training/prediction time grow with it and the marginal benefit diminishes, so you should pick the largest n_estimators your memory/time budget can bear. In sklearn, n_estimators defaults to 10.
I suggest you try a smaller value of n_estimators and see whether the error goes away; a rough sketch is below.
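As a minimal sketch (the starting values 200 / 20 / 50 below are just assumptions to get something that fits in memory, not tuned settings), you could shrink the forest and cap the size of each tree, then grow n_estimators again step by step once it runs:

from sklearn.ensemble import RandomForestClassifier

# A much smaller forest; 100-500 trees is usually enough for stable
# feature-importance rankings.
forest = RandomForestClassifier(
    n_estimators=200,      # instead of 10000
    max_depth=20,          # cap tree depth so each tree's node arrays stay small
    min_samples_leaf=50,   # coarser leaves -> far fewer nodes per tree
    random_state=0,
    n_jobs=2,              # fewer parallel workers -> fewer trees built and held at once
)
forest.fit(x_train, y_train)
print("Accuracy:", forest.score(x_test, y_test))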
If you can, also try another machine, ideally running 64-bit Python, and see whether the same error appears there, to narrow the problem down further. You can check which build you are running with the snippet after this.
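A quick way to check whether your interpreter is a 32-bit or 64-bit build (a 32-bit process only has a few GB of address space in total, so a 32 MB allocation can still fail once the rest is used up):

import struct
import sys
import platform

print("Python build:", struct.calcsize("P") * 8, "bit")  # 32 on 32-bit builds, 64 on 64-bit builds
print("sys.maxsize:", sys.maxsize)                        # 2**31 - 1 on 32-bit, 2**63 - 1 on 64-bit
print(platform.architecture())                            # e.g. ('64bit', 'WindowsPE')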