I have a list of dicts as follows:
[{'text': ['The', 'Fulton', 'County', 'Grand', ...], 'tags': ['AT', 'NP-TL', 'NN-TL', 'JJ-TL', ...]},
{'text': ['The', 'jury', 'further', 'said', ...], 'tags': ['AT', 'NN', 'RBR', 'VBD', ...]},
...]
Each dict represents one sentence: 'text' is its list of word tokens and 'tags' is the list of corresponding POS tags. This comes directly from the Brown corpus in the NLTK dataset, loaded using:
from nltk.corpus import brown
data = brown.tagged_sents()
data = {'text': [[word for word, tag in sent] for sent in data], 'tags': [[tag for word, tag in sent] for sent in data]}
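# (optional) sanity check: each entry is already a tokenized list, matching the dicts shown above
assert data['text'][0][:4] == ['The', 'Fulton', 'County', 'Grand']
assert data['tags'][0][:4] == ['AT', 'NP-TL', 'NN-TL', 'JJ-TL']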
import pandas as pd
df = pd.DataFrame(data, columns=["text", "tags"])
from sklearn.model_selection import train_test_split
train, val = train_test_split(df, test_size=0.2)
train.to_json("train.json", orient='records')
val.to_json("val.json", orient='records')
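For reference, the written files can be read back with pandas to confirm that each record keeps the per-sentence list structure shown above; this is just an optional sanity check, not part of the pipeline:
check = pd.read_json("train.json", orient='records')
print(check.iloc[0]['text'])  # list of word tokens for one (shuffled) sentence
print(check.iloc[0]['tags'])  # the matching list of POS tags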
I want to load these JSON files into a torchtext.data.TabularDataset using:
from torchtext import data
TEXT = data.Field(lower=True)
TAGS = data.Field(unk_token=None)
data_fields = [('text', TEXT), ('tags', TAGS)]
train, val = data.TabularDataset.splits(path='./', train='train.json', validation='val.json', format='json', fields=data_fields)
But it gives me this error:
/usr/local/lib/python3.6/dist-packages/torchtext/data/example.py in fromdict(cls, data, fields)
     17     def fromdict(cls, data, fields):
     18         ex = cls()
---> 19         for key, vals in fields.items():
     20             if key not in data:
     21                 raise ValueError("Specified key {} was not found in "

AttributeError: 'list' object has no attribute 'items'
Note that I don't want TabularDataset to tokenize the sentences for me, since they are already tokenized by NLTK. How do I approach this? (I cannot switch corpora to something I can load directly from torchtext.datasets; I have to use the Brown corpus.)