I'm trying to fetch the full text of tweets in Python using the Tweepy library and the Twitter API. I'm reading tweet.full_text for each result, but the request fails with a 403 Forbidden error.
According to the Twitter documentation, this can be caused by missing permissions for full API functionality. But I'm using the free tier of the API, and that should be enough.
Error
Forbidden Traceback (most recent call last)
<ipython-input-50-c4cf22e33ad4> in <cell line: 19>()
17 tweets = tweepy.Cursor(api.search_tweets, q="React", tweet_mode="extended", lang="en", result_type="popular").items(25)
18
---> 19 tweets_list = [tweet.full_text for tweet in tweets]
20
21
8 frames
/usr/local/lib/python3.9/dist-packages/tweepy/api.py in request(self, method, endpoint, endpoint_parameters, params, headers, json_payload, parser, payload_list, payload_type, post_data, files, require_auth, return_cursors, upload_api, use_cache, **kwargs)
263 raise Unauthorized(resp)
264 if resp.status_code == 403:
--> 265 raise Forbidden(resp)
266 if resp.status_code == 404:
267 raise NotFound(resp)
Forbidden: 403 Forbidden
453 - You currently have Essential access which includes access to Twitter API v2 endpoints only. If you need access to this endpoint, you’ll need to apply for Elevated access via the Developer Portal. You can learn more here: https://developer.twitter.com/en/docs/twitter-api/getting-started/about-twitter-api#v2-access-leve
Code
import tweepy
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from scipy.special import softmax
consumer_key = "xxx"
consumer_secret = "xxx"
access_token = "xx"
access_token_secret = "xxx"
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
tweets = tweepy.Cursor(api.search_tweets, q="React", tweet_mode="extended", lang="en", result_type="popular").items(25)
tweets_list = [tweet.full_text for tweet in tweets]
roberta = "cardiffnlp/twitter-roberta-base-sentiment"
model = AutoModelForSequenceClassification.from_pretrained(roberta)
tokenizer = AutoTokenizer.from_pretrained(roberta)
labels = ['Negative', 'Neutral', 'Positive']
for tweet in tweets_list:
    tweet_words = []
    for word in tweet.split(' '):
        if word.startswith('@') and len(word) > 1:
            word = '@user'
        elif word.startswith('http'):
            word = "http"
        tweet_words.append(word)
    tweet_proc = " ".join(tweet_words)
    encoded_tweet = tokenizer(tweet_proc, return_tensors='pt')
    output = model(**encoded_tweet)
    scores = output[0][0].detach().numpy()
    scores = softmax(scores)
    print(f"Tweet: {tweet}")
    for i in range(len(scores)):
        l = labels[i]
        s = scores[i]
        print(f"{l}: {s:.2f}")
    print("\n")
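For reference, the mention/URL masking in the loop above follows the preprocessing convention the cardiffnlp model card describes. Pulled out as a standalone helper (the name preprocess_tweet is my own), it behaves like this:

```python
def preprocess_tweet(tweet: str) -> str:
    """Mask @mentions as '@user' and URLs as 'http', matching the model's training format."""
    words = []
    for word in tweet.split(' '):
        if word.startswith('@') and len(word) > 1:
            word = '@user'
        elif word.startswith('http'):
            word = 'http'
        words.append(word)
    return ' '.join(words)
```

This part works fine on its own; the failure happens earlier, at the search call.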
I thought it was a problem with the keys and generated new ones, but unfortunately that did not help.