I am running DBSCAN on a dataset of 400,000 data points. Here is the error I get:
Traceback (most recent call last):
  File "/myproject/DBSCAN_section.py", line 498, in perform_dbscan_on_data
    db = DBSCAN(eps=2, min_samples=5).fit(data)
  File "/usr/local/Python/2.7.13/lib/python2.7/site-packages/sklearn/cluster/dbscan_.py", line 266, in fit
    **self.get_params())
  File "/usr/local/Python/2.7.13/lib/python2.7/site-packages/sklearn/cluster/dbscan_.py", line 138, in dbscan
    return_distance=False)
  File "/usr/local/Python/2.7.13/lib/python2.7/site-packages/sklearn/neighbors/base.py", line 621, in radius_neighbors
    return_distance=return_distance)
  File "sklearn/neighbors/binary_tree.pxi", line 1491, in sklearn.neighbors.kd_tree.BinaryTree.query_radius (sklearn/neighbors/kd_tree.c:13013)
MemoryError
How can I fix this? Is there a limit on how many data points DBSCAN can process?
My code is based on the example at: http://scikit-learn.org/stable/modules/generated/sklearn.cluster.DBSCAN.html
My data is in X, Y coordinate format (a sample is below; see the loading sketch after it):
11.342276,11.163416
11.050597,10.745579
10.798838,10.559784
11.249279,11.445535
11.385767,10.989214
10.825875,10.530120
10.598493,11.236947
10.571042,10.830799
11.454966,11.295484
11.431454,11.200208
10.774908,11.102601
10.602692,11.395169
11.324441,11.088243
10.731538,10.695864
10.537385,10.923226
11.215886,11.391537
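For completeness, here is roughly how I load the coordinates and run the clustering. The file name and the loading code are simplified stand-ins for my actual script; only the DBSCAN call matches the traceback exactly:

    import numpy as np
    from sklearn.cluster import DBSCAN

    # Load the ~400K "x,y" rows into a dense (n_samples, 2) float array.
    # "points.csv" is a placeholder for my real input file.
    data = np.loadtxt('points.csv', delimiter=',')

    # This is the call that raises MemoryError (line 498 in the traceback).
    db = DBSCAN(eps=2, min_samples=5).fit(data)
    labels = db.labels_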
Should I convert my data to a sparse CSR matrix? If so, how?
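In case it matters, this is what I assume the CSR conversion would look like; I am not sure it would help, since my coordinate array has almost no zeros to exploit:

    from scipy.sparse import csr_matrix

    # Wrap the dense (n_samples, 2) coordinate array in a CSR matrix.
    # With essentially no zero entries, this probably saves no memory.
    data_sparse = csr_matrix(data)
    db = DBSCAN(eps=2, min_samples=5).fit(data_sparse)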