The reason BS4 says the element does not exist is that it is rendered by JavaScript: requests only fetches the initial HTML and doesn't execute scripts or make the XHR calls a real browser would. When you first open the page, it shows you a loading screen.
You should use Selenium with headless Chrome/Firefox to scrape JS pages with Python. With Selenium you can do something like this (a WebDriverWait is included so the scrape happens after the counter has actually rendered):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup

# The URL you want to scrape
url = 'https://tokcount.com/?user=mrsam993'

# Run Chrome headless
options = webdriver.ChromeOptions()
options.add_argument("--headless")

# Open the page in the browser
browser = webdriver.Chrome(options=options)
browser.get(url)

# Wait for the JS-rendered counter to appear before grabbing the HTML
WebDriverWait(browser, 20).until(
    EC.presence_of_element_located((By.CLASS_NAME, "odometer-value"))
)

# Parse the rendered HTML and close the browser
soup = BeautifulSoup(browser.page_source, "html.parser")
browser.quit()

links = soup.find_all('span', class_='odometer-value')
print(links)
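If you want the counter as one string rather than a list of tags, you can join the matches. This assumes each odometer-value span holds a single digit, which is how the odometer widget typically renders:
# Assumes each matched span contains one digit of the counter
count = ''.join(el.get_text() for el in links)
print(count)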
If you insist on using requests, open your browser's developer tools, go to the Network tab (filtered to XHR), inspect the requests the page makes for its data, and reproduce them yourself with requests. In Firefox the built-in developer tools (the successor to Firebug) are all you need for this.
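A minimal sketch of that approach is below; the endpoint is a hypothetical placeholder, so copy the real request URL (and any headers or cookies it needs) from the Network tab:
import requests

# Hypothetical endpoint -- replace with the real XHR URL from the Network tab
api_url = 'https://tokcount.com/REPLACE-WITH-REAL-XHR-PATH'

# Some sites refuse requests without a browser-like User-Agent
headers = {'User-Agent': 'Mozilla/5.0'}

response = requests.get(api_url, headers=headers)
response.raise_for_status()

# XHR endpoints usually return JSON you can read directly, no HTML parsing needed
print(response.json())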
Another option worth mentioning is requests-html (read the docs). It can execute the page's JavaScript for you with its render() method. Example using requests-html:
from requests_html import HTMLSession
from bs4 import BeautifulSoup

# The URL you want to scrape
url = 'https://tokcount.com/?user=mrsam993'

# Fetch the page
session = HTMLSession()
r = session.get(url)

# Execute the page's JavaScript (downloads Chromium via pyppeteer on first run);
# sleep gives the counter a moment to finish rendering
r.html.render(sleep=2)

# Parse the rendered HTML with BeautifulSoup
soup = BeautifulSoup(r.html.html, "html.parser")

links = soup.find_all('span', class_='odometer-value')
print(links)
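Note that requests-html can also select elements on its own, so BeautifulSoup is optional here; its CSS-selector find should give the same result:
# Same query using requests-html's own selector API
links = r.html.find('span.odometer-value')
print(links)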
See also: Web-scraping JavaScript page with Python and Scrape javascript-rendered content with Python.