(Amended answer in light of the clarifying comment: "in the while true cycle i want to check if a user's id is in the 'userlist' dictionary (as a key) and if not, add it to this dictionary with value 1. Then i want to rewrite the file with the new dictionary. The file is opened as soon as the program is launched, before the cycle."):
For robustly using on-disk data as though it were a dictionary, you should consider either one of the dbm modules or Python's built-in SQLite3 support.
A dbm file is simply a set of keys and values stored with transparently maintained indexing. Once you've opened your dbm file you use it exactly like you would any other Python dictionary (with strings as keys). Any changes are flushed and written before the file is closed. This is very simple, though it offers no special features for locking (or for managing consistency in the case where multiple processes might write to the file concurrently) and so on.
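A minimal sketch of that pattern, assuming the file is named "userlist.db" and the incoming user IDs are strings (both names are illustrative, not from the question):

```python
import dbm

# "c" opens the file for read/write, creating it if it doesn't exist yet.
# dbm keys and values must be str or bytes, so numeric IDs need str() first.
with dbm.open("userlist.db", "c") as userlist:
    user_id = "12345"                 # hypothetical incoming ID
    if user_id not in userlist:
        userlist[user_id] = "1"       # persisted when the file is closed
```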
On the other hand the incredibly powerful SQLite subsystem, which has been included in the Python standard library for many years, allows you to easily treat a local file as an SQL database management system ... with all of the features you'd expect from a client/server based system (foreign keys, data type and referential integrity constraint management, views and triggers, indexes, etc.).
In your case you could simply have a single table containing a single column. Connecting to that database (by its filename) would allow you to look up a user's ID with SELECT and add one with INSERT. As your application grows and changes you could add other columns to track when the account was created and when it was most recently used or checked (a couple of time/date stamp columns), and you could create other tables with related data (selected using JOINs, for example).
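A sketch of that single-table layout; the file, table, and column names ("userlist.sqlite3", "users", "user_id") are assumptions chosen for illustration:

```python
import sqlite3

conn = sqlite3.connect("userlist.sqlite3")
conn.execute("CREATE TABLE IF NOT EXISTS users (user_id TEXT PRIMARY KEY)")

def ensure_user(user_id):
    # INSERT OR IGNORE leans on the PRIMARY KEY constraint to skip duplicates.
    with conn:  # the connection as a context manager commits (or rolls back)
        conn.execute("INSERT OR IGNORE INTO users (user_id) VALUES (?)", (user_id,))

ensure_user("12345")
row = conn.execute(
    "SELECT user_id FROM users WHERE user_id = ?", ("12345",)
).fetchone()
print(row)  # ('12345',)
```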
(Original answer):
In general the process of storing any internal data structure as a file, or transmitting it over a network connection, is referred to as "serialization." The complementary process of loading or receiving such data and instantiating its contents into a new data structure is referred to (unsurprisingly) as "deserialization."
That's true of all programming languages.
There are many ways to serialize and deserialize data in Python. In particular we have the native (standard library) pickle module, which produces files (or strings) that are only intended for use with other processes running Python, or we can, as you said, use JSON ... the JavaScript Object Notation, which has become the de facto cross-language data structure serialization standard. (There are others such as YAML and XML ... but JSON has come to predominate).
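For a small dictionary such as the userlist described above, either module is essentially a one-call round trip; this sketch just shows the two side by side (the file names are arbitrary):

```python
import json
import pickle

userlist = {"12345": 1, "67890": 1}

# pickle: a binary, Python-only format
with open("userlist.pkl", "wb") as f:
    pickle.dump(userlist, f)
with open("userlist.pkl", "rb") as f:
    from_pickle = pickle.load(f)

# JSON: a textual, cross-language format
with open("userlist.json", "w") as f:
    json.dump(userlist, f)
with open("userlist.json") as f:
    from_json = json.load(f)

assert from_pickle == from_json == userlist
```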
The caveat about using JSON vs. pickle is that JavaScript (and a number of other programming and scripting languages) uses different semantics for some sorts of "dictionary" (associative array) keys than Python. In particular Python (and Ruby and Lua) treats keys such as "1" (a string containing the digit one) and 1 or 1.0 (numeric values equal to one) as distinct keys. JavaScript, Perl and some others treat such keys as "scalar" values, in which a string like "1" and the number 1 will evaluate to the same key.
There are some other nuances which can affect the fidelity of your serialization. But that's the easiest to understand. Dictionaries with strings as keys are fine ... mixtures of numeric and string keys are the most likely cause of any troubles you'll encounter using JSON serialization/deserialization in lieu of pickling.
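A small demonstration of that caveat: JSON object keys are always strings, so a Python dict whose keys mix the integer 1 and the string "1" does not survive the round trip intact.

```python
import json

data = {1: "numeric key", "1": "string key"}   # two distinct keys in Python
encoded = json.dumps(data)                     # '{"1": "numeric key", "1": "string key"}'
restored = json.loads(encoded)
print(restored)                                # {'1': 'string key'} -- the keys collided
```

Pickling the same dictionary would preserve both keys, since pickle records the Python types rather than converting keys to JSON strings.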