Although Twitter and its developer teams are coping with a go/no-go acquisition deal, they continue to improve a world-class product, one that is actively engaged with by politicians, press and programmers alike.
Its crisp 280-character limit ensures quality over quantity. A lesser-known tool from Twitter is its developer API, available to anyone who signs up for a Twitter developer account.
Once signed in, you'll notice the Essential subscription is free and starts with a whopping 500,000-tweet monthly cap, meaning you can fetch and run inference on half a million tweets per month. This is sufficient for beginner data needs, yet also adequate for senior professionals exploring use cases specific to their industry, domain or product.
With this perspective, let us dive into the code base and run a personal query to fetch data from the Twitter API.
I have created a repository with clear instructions in the README.md.
However, to untangle things a notch further, here are some preliminary steps, followed by an explanation of api_token and the custom inference (sentiment analysis via keyword count). Once the virtual environment folder is visible in your working directory, activate it.
Steps to create and activate a virtual environment with Python (ensure you are in your project directory/folder):
$ python -m venv env
$ source env/bin/activate  [Windows users: env\Scripts\activate.bat]
Now clone this GitHub repo with the following command:
(env) $ git clone https://github.com/zora89/twitter_api_v2_testing.git
Now open the GitHub repository in another tab and follow the steps highlighted in README.md, in particular those on installing the requirements. Use the following command:
(env) $ pip install -r requirements.txt
Another key step is to add your API credentials to the repository. Do this by adding a specific variable to a new file in the cloned repository:
(env) $ touch api_token.py
Go ahead and open this file in your code editor, then add the following line:
BEARER_TOKEN = "<YOUR_TOKEN_HERE>"
Remember to save api_token.py with your API key inside plain double quotes " " (curly quotes will break the script). Ensure the variable name remains BEARER_TOKEN.
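For context, Twitter API v2 authenticates app-only requests with an OAuth 2.0 Bearer token sent in an Authorization header against the documented recent-search endpoint. The sketch below shows how a script like tw_core_app.py might build that header from the token; the helper name and placeholder token here are illustrative, not taken from the repository:

```python
# In the repository this value comes from api_token.py; the placeholder
# below is just for illustration.
BEARER_TOKEN = "<YOUR_TOKEN_HERE>"

# Twitter's documented v2 recent-search endpoint.
SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

def build_headers(token: str) -> dict:
    """Twitter API v2 expects an OAuth 2.0 Bearer token header."""
    return {"Authorization": f"Bearer {token}"}

print(build_headers(BEARER_TOKEN))
```

A request library (such as requests) would then pass these headers along with the query parameters when calling SEARCH_URL.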
It is time to run our first Python script and fetch live data from Twitter.
(env) $ python3 tw_core_app.py
In case of errors, double-check all of the steps above, especially the following:
- env is set
- env is activated
- requirements installed
- Twitter developer account logged in
- API token from the Twitter developer portal added to api_token.py
- BEARER_TOKEN is a string value
With this you should be able to run tw_core_app.py in your terminal. The results will look like this:
Research Query: Ather
Tweets Scanned 625
'love' count is 21
'hate' count is 3
<<<<<<<<------------ TWEET VOLUME IN PAST 7 DAYS
Tweet Volume 48
Tweet Volume 64
Tweet Volume 66
Tweet Volume 98
Tweet Volume 100
Tweet Volume 102
Tweet Volume 114
Tweet Volume 35
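The daily "Tweet Volume" figures above are the kind of data Twitter's v2 recent tweet-counts endpoint returns, as one bucket per day with a tweet_count field. As an illustration (this helper is hypothetical, not from the repository), extracting the daily totals from a response shaped like that endpoint's JSON could look like:

```python
def daily_volumes(counts_json: dict) -> list:
    """Pull the per-day tweet totals out of a v2 tweet-counts response."""
    return [bucket["tweet_count"] for bucket in counts_json.get("data", [])]

# A response shaped like the v2 counts endpoint's documented JSON
# (start/end timestamps elided for brevity).
sample = {
    "data": [
        {"tweet_count": 48},
        {"tweet_count": 64},
        {"tweet_count": 66},
    ]
}
print(daily_volumes(sample))  # → [48, 64, 66]
```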
Let us grasp the query parameter in more detail.
query = 'Ather -is:retweet -#eximbank -btc -eth -nft -crypto -donation -donating -donate lang:en '
The query above starts with the main query term. This can be a word or words searched for in each tweet. To match exact statements or phrases, wrap them in double quotes:
query = ' "Ather vs Ola" '
The remaining query params are filters. For example:
-is:retweet -> ensures no retweet is included in the data
-btc -> ensures no 'btc'-related tweet is included in the data
A combination of filters and terms needs to be experimented with to arrive at the apt query term for your specific use case.
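To make experimenting easier, you could assemble the query string programmatically from a base term and a list of exclusions. This is a minimal sketch following the pattern shown above; the helper name is illustrative, not from the repository:

```python
def build_query(term: str, exclude: list) -> str:
    """Combine a search term with v2 query operators:
    drop retweets, exclude unwanted keywords, keep English tweets."""
    filters = " ".join(f"-{word}" for word in exclude)
    return f"{term} -is:retweet {filters} lang:en"

print(build_query("Ather", ["btc", "eth", "nft", "crypto"]))
# → Ather -is:retweet -btc -eth -nft -crypto lang:en
```

Swapping the exclusion list in and out this way keeps each experiment a one-line change.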
Once a query is set, you can play with the inference words in the tw_core_app.py file:
word_check_1 = "love"
word_check_2 = "hate"
The above variables take a string of one or more words. This lets you check and count the presence of these words in each tweet that was successfully fetched. Use cases could be:
- A brand checking how often negative/positive words are used with its product.
- An investor gauging the popularity or sentiment behind a service or product.
- Market researchers anticipating user experience & behaviour.
- Brand research
- Policy statistics
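The counting itself can be very simple. Here is a sketch in the spirit of word_check_1 and word_check_2: it counts how many fetched tweets mention each word, case-insensitively. The repository's tw_core_app.py may implement this differently; the function and sample tweets below are illustrative:

```python
def keyword_counts(tweets: list, words: list) -> dict:
    """Count, per keyword, how many tweets contain it (case-insensitive).
    Each tweet is counted at most once per keyword."""
    counts = {word: 0 for word in words}
    for text in tweets:
        lowered = text.lower()
        for word in words:
            if word.lower() in lowered:
                counts[word] += 1
    return counts

# Hypothetical fetched tweets, just to demonstrate the counting.
tweets = ["I love my Ather!", "hate the wait times", "Love love love"]
print(keyword_counts(tweets, ["love", "hate"]))  # → {'love': 2, 'hate': 1}
```

Note this is presence counting, not occurrence counting: "Love love love" adds one to 'love', not three, which matches the per-tweet framing above.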
Now go ahead and dive into the repository: create your environment, clone the repo and run it. Play with the query terms and use your analytics for the benefit of your organisation, or any other positive use case you can think of.
Do follow and share; it'll help drive such content ahead. Thanks for taking the time to read through. Do comment below if you have any issues; happy to assist. If everything worked, do star and follow the GitHub repository.
References:
- Twitter Developer Blogs
- Suhem Parack | Developer Advocate @ Twitter
- PyDelhi Community
