Question
I do a lot of statistical work and use Python as my main language. Some of the data sets I work with though can take 20GB of memory, which makes operating on them using in-memory functions in numpy, scipy, and PyIMSL nearly impossible. The statistical analysis language SAS has a big advantage here in that it can operate on data from hard disk as opposed to strictly in-memory processing. But, I want to avoid having to write a lot of code in SAS (for a variety of reasons) and am therefore trying to determine what options I have with Python (besides buying more hardware and memory).
I should clarify that approaches like map-reduce will not help in much of my work because I need to operate on complete sets of data (e.g. computing quantiles or fitting a logistic regression model).
Recently I started playing with h5py and think it is the best option I have found for allowing Python to act like SAS and operate on data from disk (via hdf5 files), while still being able to leverage numpy/scipy/matplotlib, etc. I would like to hear if anyone has experience using Python and h5py in a similar setting and what they have found. Has anyone been able to use Python in "big data" settings heretofore dominated by SAS?
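To make the h5py approach concrete, here is a minimal sketch of the pattern (filenames and sizes are illustrative, chosen small enough to run quickly): data is written to an HDF5 file block by block, and later reads pull back only the slices you index, so the full dataset never has to fit in memory.

```python
import h5py
import numpy as np

# Create an HDF5 file with a dataset written one block at a time, so only
# one block is ever resident in memory. The same pattern scales to GBs.
BLOCK = 100_000
with h5py.File("example.h5", "w") as f:
    dset = f.create_dataset("measurements", shape=(1_000_000,), dtype="f8")
    for start in range(0, dset.shape[0], BLOCK):
        dset[start:start + BLOCK] = np.random.default_rng(start).normal(size=BLOCK)

# Later, operate on slices straight from disk -- h5py only reads what you index.
with h5py.File("example.h5", "r") as f:
    dset = f["measurements"]
    partial_mean = dset[:BLOCK].mean()  # loads just the first block
    print(f"mean of first block: {partial_mean:.4f}")
```

Slicing an `h5py.Dataset` returns a numpy array, so downstream numpy/scipy/matplotlib code works on each block unchanged.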
Buying more hardware/memory certainly can help, but from an IT perspective it is hard for me to sell Python to an organization that needs to analyze huge data sets when Python (or R, or MATLAB etc) need to hold data in memory. SAS continues to have a strong selling point here because while disk-based analytics may be slower, you can confidently deal with huge data sets. So, I am hoping that Stackoverflow-ers can help me figure out how to reduce the perceived risk around using Python as a mainstay big-data analytics language.
Recommended answer
We use Python in conjunction with h5py, numpy/scipy and boost::python to do data analysis. Our typical datasets have sizes of up to a few hundred GBs.
Advantages of HDF5:
- Data can be inspected conveniently with the h5view application, with h5py/ipython, and with the h5* command-line tools
- APIs are available for many platforms and languages
- Data can be structured using groups
- Data can be annotated using attributes
- Worry-free built-in data compression
- I/O on individual datasets is fast
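The groups, attributes, and compression mentioned above are all available directly through h5py's creation API. A small sketch (names like `experiment_01` and `signal` are illustrative):

```python
import h5py
import numpy as np

with h5py.File("structured.h5", "w") as f:
    # Groups organize data hierarchically, like directories in a filesystem.
    grp = f.create_group("experiment_01")
    # Built-in gzip compression is requested at dataset creation time.
    sig = grp.create_dataset(
        "signal",
        data=np.arange(10_000, dtype="f4"),
        compression="gzip",
        compression_opts=4,
    )
    # Attributes attach metadata to the dataset itself.
    sig.attrs["units"] = "mV"
    sig.attrs["sample_rate_hz"] = 44100

with h5py.File("structured.h5", "r") as f:
    sig = f["experiment_01/signal"]
    print(sig.attrs["units"], sig.compression)  # mV gzip
```

Decompression is transparent on read, so consuming code does not need to know whether a dataset was compressed.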
HDF5 pitfalls:
- Performance degrades if an h5 file contains too many datasets/groups (> 1000), because traversing them is very slow; on the other hand, I/O is fast for a few large datasets
- Advanced (SQL-like) data queries are clumsy to implement and slow (consider SQLite in that case)
- HDF5 is not thread-safe in all cases: one has to ensure the library was compiled with the right options
- Changing h5 datasets (resizing, deleting, etc.) inflates the file size (in the best case) or is impossible (in the worst case); the whole h5 file has to be copied to flatten it again
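The resizing pitfall is worth illustrating: a dataset can only grow if it was created with `maxshape`, and even then, shrinking or deleting data does not reclaim file space; repacking means copying everything to a fresh file (e.g. with the `h5repack` command-line tool). A minimal sketch of an appendable dataset (sizes illustrative):

```python
import h5py
import numpy as np

with h5py.File("growable.h5", "w") as f:
    # maxshape=(None,) makes the first axis unlimited; chunking is
    # required for resizable datasets.
    dset = f.create_dataset("log", shape=(0,), maxshape=(None,),
                            dtype="f8", chunks=(1024,))
    for batch in range(3):
        new = np.full(1024, batch, dtype="f8")
        # Grow the dataset, then write into the newly added tail.
        dset.resize((dset.shape[0] + new.shape[0],))
        dset[-new.shape[0]:] = new
    print(dset.shape)  # (3072,)
```

Without `maxshape`, the same `resize` call raises an exception, which is the "impossible in the worst case" part of the pitfall above.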
這篇關(guān)于有使用 h5py 在 Python 中對(duì)大數(shù)據(jù)進(jìn)行分析工作的經(jīng)驗(yàn)嗎?的文章就介紹到這了,希望我們推薦的答案對(duì)大家有所幫助,也希望大家多多支持html5模板網(wǎng)!