I'm trying to parse a heavy .txt file whose contents are the string representation of a dictionary, e.g. (an excerpt):
"{'A45171': {'Gen_n': 'Putative uncharacterized protein', 'Srce': 'UniProtKB', 'Ref': 'GO_REF:0000033', 'Tax': 'NCBITaxon:684364', 'Gen_bl': 'BATDEDRAFT_15336', 'Gen_id': 'UniProtKB:F4NTD6', 'Ev_n': 'IBA', 'GO_n': 'ergosterol biosynthetic process', 'GO': 'GO:0006696', 'Org': 'Batrachochytrium dendrobatidis JAM81', 'Type': 'protein', 'Ev_e': 'ECO:0000318', 'Con': 'GO_Central'}, 'A43886': {'Gen_n': 'Uncharacterized protein', 'Srce': 'UniProtKB', 'Ref': 'GO_REF:0000002', 'Tax': 'NCBITaxon:9823', 'Gen_bl': 'RDH8', 'Gen_id': 'UniProtKB:F1S3H8', 'Ev_n': 'IEA', 'GO_n': 'estrogen biosynthetic process', 'GO': 'GO:0006703', 'Org': 'Sus scrofa', 'Type': 'protein', 'Ev_e': 'ECO:0000501', 'Con': 'InterPro'}}"
I've tried the ast
module:

import ast

# read the whole file, then safely evaluate the Python dict literal
with open("Gene_Ontology/output_data/dic_gene_definitions.txt", "r") as f:
    dic_gene_definitions = ast.literal_eval(f.read())
The file weighs 22 MB, and when this doesn't crash, it runs very slowly.
I really want to be able to open files of around 500 MB...
I've also looked at the json
module, which parses much faster, but on a heavy dictionary string it crashes as well (not with short examples).
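For reference, here is a minimal sketch of the json attempt on the excerpt above. Note that json.loads() expects double-quoted JSON, so a Python-style dict string with single quotes raises json.JSONDecodeError unless the quotes are converted first (the naive replace() below only works because this sample contains no apostrophes inside values):

```python
import json

# small sample taken from the file excerpt above
text = "{'A45171': {'Gen_n': 'Putative uncharacterized protein'}}"

try:
    data = json.loads(text)  # fails: single quotes are not valid JSON
except json.JSONDecodeError:
    # naive workaround for this sample: swap the quote style
    data = json.loads(text.replace("'", '"'))

print(data['A45171']['Gen_n'])  # prints: Putative uncharacterized protein
```

So even before performance, the quoting difference alone can make json fail on this kind of file.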
Any solution...?
Thank you so much.