In comments I wrote
"Hello" and "World" are two separate tokens. That's a lexical analysis consideration. When they appear as consecutive tokens, they represent two parts of a single string literal. That's a semantic consideration -- i.e. what that combination of tokens means in C source code.
That describes a view of the question in terms of conventional, generic compiler construction. For example, the distinction is between what might be represented in a lex scanner definition and what would be handled in a yacc parser description (to put it in terms of the traditional tools).
In practice, C defines a larger and more detailed set of "translation phases" for building an executable program from C sources (C99 5.1.1.2). In C's particular model of the process, "Hello" and "World" are separate preprocessing tokens, identified in translation phase 3. Adjacent string literal preprocessing tokens are concatenated into a single token in translation phase 6. All (remaining) preprocessing tokens are converted to straight-up tokens in translation phase 7. The resulting tokens are then the input to the semantic analysis (also part of phase 7).
C does not require implementations to actually carry out translation (compilation) according to the given model, with all its separate phases, and many do not. C only requires that the end result be as if the implementation had behaved according to the model. In that sense, your question can only be answered "it depends". As far as a non-C-specific conceptualization of the implied question "what is a token" goes, however, I maintain that my original, short description provides a useful mental model.