I just started working on text clustering of Japanese documents in Python 2. However, when I build a dictionary from the Japanese words/terms, the dictionary keys show up as Unicode escape sequences (u'\uXXXX') instead of Japanese characters. The code is as follows:
import numpy as np
import pandas as pd
import scipy.sparse as sp

# load data
allWrdMat10 = pd.read_csv("../../data/allWrdMat10.csv.gz",
                          encoding='CP932')

## set X as a CSR sparse matrix
X = np.array(allWrdMat10)
X = sp.csr_matrix(X)

## create dictionary mapping each term to its column index
dict_index = {t: i for i, t in enumerate(allWrdMat10.columns)}
freqrank = np.array(dict_index.values()).argsort()
X_transform = X[:, freqrank < 1000].transpose().toarray()
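In case the CSV is not available, here is a self-contained toy version of the same dictionary construction (the DataFrame and its column names are invented stand-ins for allWrdMat10):

```python
# -*- coding: utf-8 -*-
import pandas as pd

# Toy stand-in for allWrdMat10 (invented data): two rows,
# three Japanese term columns.
toy = pd.DataFrame([[1, 0, 2], [0, 3, 1]],
                   columns=[u'東京', u'大阪', u'京都'])

# Same dictionary construction as above: term -> column index.
dict_index = {t: i for i, t in enumerate(toy.columns)}
print(dict_index[u'東京'])  # 0
```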
The output of allWrdMat10.columns still shows the Japanese terms:
Index([u'?', u'.', u'・', u'%', u'0', u'1', u'10月', u'11月', u'12月', u'1つ',
       ...
       u'瀋陽', u'疆', u'盧', u'籠', u'絆', u'胚', u'諫早', u'趙', u'鉉', u'鎔基'],
      dtype='object', length=8655)
However, dict_index.keys() returns:
[u'\u77ed\u9283',
u'\u5efa\u3066',
u'\u4f0a',
u'\u5e73\u5b89',
u'\u6025\u9a30',
u'\u897f\u65e5\u672c',
u'\u5e03\u9663',
...]
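For reference, when I compare one key by hand (copying the first key above next to its literal characters), the two forms compare equal, so it seems to be only the way the list is displayed:

```python
# -*- coding: utf-8 -*-
# Hand-copied from the output above: the \uXXXX escapes and the
# literal characters are the same unicode string; only the list
# repr in Python 2 shows the escaped form.
key = u'\u77ed\u9283'
print(key == u'短銃')  # True
print(key)             # 短銃 (when the terminal encoding supports it)
```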
Is there any way to keep the Japanese words/terms as the dictionary keys? Or is there a way to convert these Unicode escapes back to Japanese words/terms? Thanks.