[Help] Out-of-memory error when importing a 40 GB CSV with pandas
My code:
trips = pd.read_csv('D:\BA302\data source\Trips.csv')
I'm running it in a Jupyter Notebook.
For handling large datasets, I've read about pandas' IO tools and approaches using a for loop, but my skills are limited and I couldn't quite follow the examples online.
Could someone show me how to write code that can import this 40 GB file?
Here is the full error output:
---------------------------------------------------------------------------
MemoryError Traceback (most recent call last)
<ipython-input-3-f832ad24d0d6> in <module>
----> 1 trips = pd.read_csv('D:\BA302\data source\Trips.csv')
D:\software\anaconda\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision)
683 )
684
--> 685 return _read(filepath_or_buffer, kwds)
686
687 parser_f.__name__ = name
D:\software\anaconda\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds)
461
462 try:
--> 463 data = parser.read(nrows)
464 finally:
465 parser.close()
D:\software\anaconda\lib\site-packages\pandas\io\parsers.py in read(self, nrows)
1152 def read(self, nrows=None):
1153 nrows = _validate_integer("nrows", nrows)
-> 1154 ret = self._engine.read(nrows)
1155
1156 # May alter columns / col_dict
D:\software\anaconda\lib\site-packages\pandas\io\parsers.py in read(self, nrows)
2057 def read(self, nrows=None):
2058 try:
-> 2059 data = self._reader.read(nrows)
2060 except StopIteration:
2061 if self._first_chunk:
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader.read()
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory()
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._read_rows()
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._convert_column_data()
pandas\_libs\parsers.pyx in pandas._libs.parsers._maybe_upcast()
MemoryError:

Reply: Wow, 40 GB of data. I don't know the answer myself, but bumping the thread; this does seem genuinely hard.

Reply: Your machine doesn't have enough RAM? Not much you can do about that. Use a database instead.

Reply (last edited by Stubborn on 2021-1-16 12:57): I'd also like to know how to read a file this big. You could try the nrows parameter to read a limited number of rows.
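Building on the nrows suggestion: read_csv also accepts a chunksize parameter, which makes it return an iterator of DataFrames so the whole file never has to fit in memory at once. A minimal sketch, using a small in-memory CSV in place of the real 40 GB Trips.csv (the trip_id/duration columns are made up for illustration):

```python
import io
import pandas as pd

# Tiny in-memory CSV standing in for the 40 GB Trips.csv; with the
# real file you would pass the path instead of the StringIO buffer.
csv_data = "trip_id,duration\n" + "\n".join(f"{i},{i * 2}" for i in range(10))

total_rows = 0
total_duration = 0
# chunksize=4 means each iteration yields a DataFrame of at most 4 rows,
# so only one chunk is ever held in memory at a time.
for chunk in pd.read_csv(io.StringIO(csv_data), chunksize=4):
    # Process each chunk here (filter, aggregate, write out, ...),
    # keeping only the small running result in memory.
    total_rows += len(chunk)
    total_duration += chunk["duration"].sum()

print(total_rows, total_duration)  # 10 90
```

With the real file, pick a chunk size in the hundreds of thousands or millions of rows, and write each processed chunk to disk or a database rather than accumulating DataFrames.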
Reply: You could write your own context manager to make this more convenient, or just use a plain with block to read the file line by line, processing a batch once you've accumulated a certain number of rows.
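A minimal sketch of the line-by-line with approach described above, again with a small in-memory CSV standing in for the real file; the batch size of 2 is only for demonstration:

```python
import csv
import io

# Small in-memory CSV standing in for the real file; with the actual
# 40 GB file you would use `open(path, newline='')` instead.
csv_data = "trip_id,duration\n1,10\n2,20\n3,30\n4,40\n5,50\n"

BATCH_SIZE = 2          # tune this to what fits comfortably in memory
batches_processed = 0
rows_seen = 0
batch = []

with io.StringIO(csv_data) as f:
    reader = csv.reader(f)
    next(reader)                    # skip the header row
    for row in reader:
        batch.append(row)
        if len(batch) >= BATCH_SIZE:
            # Process the batch here (aggregate, write out, ...),
            # then discard it so memory stays bounded.
            batches_processed += 1
            rows_seen += len(batch)
            batch.clear()
    if batch:                       # don't forget the final partial batch
        batches_processed += 1
        rows_seen += len(batch)

print(batches_processed, rows_seen)  # 3 5
```

The same loop body works unchanged on the real file; only the with line changes.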