I'm collecting the text from the second column of a number of CSV files into one list, so that I can later run sentiment analysis on each item in that list. The code works for large CSV files, but the sentiment analysis on the list items takes too long, which is why I want to read only the first 200 rows of each CSV file. The code looks as follows:
import csv
import glob
import math
import nltk, string, numpy
from collections import defaultdict
from nltk.corpus import stopwords
import sentiment_mod as s

columns = defaultdict(list)
lijst = glob.glob('21cf/*.csv')
tweets1 = []

for item in lijst:
    stopwords_set = set(stopwords.words("english"))
    with open(item, encoding='latin-1') as d:
        reader1 = csv.reader(d)
        next(reader1)  # skip the header row
        for row in reader1:
            tweets1.append(row[2])

# strip URLs and @-mentions, lowercase, then drop tweets that are
# themselves a bare stopword or empty
words_cleaned = [" ".join(w for w in sentence.split()
                          if 'http' not in w and not w.startswith('@'))
                 for sentence in tweets1]
words_filtered = [e.lower() for e in words_cleaned]
words_without_stopwords = [tweet for tweet in words_filtered if tweet not in stopwords_set]
tweets1 = list(filter(None, words_without_stopwords))
How do I make the csv reader read only the first 200 rows of each CSV file?
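For illustration, this is roughly what I have in mind, a minimal sketch assuming itertools.islice can be used to cap the reader at 200 rows; I haven't verified this is the idiomatic way:

import csv
import glob
from itertools import islice

tweets1 = []
for item in glob.glob('21cf/*.csv'):
    with open(item, encoding='latin-1') as d:
        reader1 = csv.reader(d)
        next(reader1)  # skip the header row
        # islice stops iteration after 200 rows, so the rest of the
        # file is never parsed
        for row in islice(reader1, 200):
            tweets1.append(row[2])

Is this the right approach, or is there a better way?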