As commented and explained in the two other answers (by John Zwinck and by user1118321), this is impossible without an external tool. It looks like you are dreaming of some (yet nonexistent) preprocessor directive #include_verbatim such that #include_verbatim "myfile.inc" would expand to a long string literal containing the contents of myfile.inc.
This does not exist yet. You might perhaps customize a compiler (e.g. using MELT if compiling with a recent GCC) so that it would, for example, process #pragma MAKE_VERBATIM_LITERAL_FROM_FILE_CONTENT(MYCONTENT,myfile.inc) by defining the preprocessor symbol MYCONTENT to be the literal content of myfile.inc; but this would need a significant effort and would be compiler-specific.
The most pragmatic solution is to accept using some external tool, e.g. a simple make rule transforming myfile.inc into myfile.inc.data so that you could #include "myfile.inc.data" appropriately. This would take a few minutes of your development time (e.g. with m4, awk, hexdump, or reswrap from the FOX toolkit).
If you don't want to depend on some external tool, make it internal to your project by coding an autonomous transform_to_hex_string.cpp program, compiled to transform_to_hex_string.bin inside your project, and add make rules handling it: building transform_to_hex_string.bin from transform_to_hex_string.cpp on one side, and running transform_to_hex_string.bin < myfile.inc > myfile.inc.data in another make rule. But it is still external to the compiler!
Customizing a compiler (be it GCC or LLVM) is compiler-specific (and probably version-specific) and would take much more effort (perhaps a week).
You might try to lobby some C++ standardization committee member to have such a language feature included in some future (post-C++17) standard.
Remember, however, that the C++ standard can be read as applying even to hypothetical implementations that have no files or directories at all: in the standard's wording, a C++11 compiler is required to process "translation units", not "source files" in the operating-system sense. A compiler handling source code from some database filled by some IDE would be standard-compliant, and such compilers existed in the previous century (perhaps VisualAge from IBM).
From the latest C++11 draft specification (n3337, §2.1 Separate translation):
The text of the program is kept in units called source files in this International Standard. A source file
together with all the headers (17.6.1.2) and source files included (16.2) via the preprocessing directive #include, less any source lines skipped by any of the conditional inclusion (16.1) preprocessing directives, is
called a translation unit. [ Note: A C++ program need not all be translated at the same time. — end note ]
[ Note: Previously translated translation units and instantiation units can be preserved individually or in
libraries. The separate translation units of a program communicate (3.5) by (for example) calls to functions
whose identifiers have external linkage, manipulation of objects whose identifiers have external linkage, or
manipulation of data files. Translation units can be separately translated and then later linked to produce
an executable program (3.5). — end note ]
Read also §2.2 Phases of translation of the C++11 standard, notably:
The precedence among the syntax rules of translation is specified by the following phases.
Physical source file characters are mapped, in an implementation-defined manner, to the basic source
character set [....]
Each instance of a backslash character (\) immediately followed by a new-line character is deleted,
splicing physical source lines to form logical source lines. [....]
The source file is decomposed into preprocessing tokens (2.5) and sequences of white-space characters (including comments). A source file shall not end in a partial preprocessing token or in a partial comment. Each comment is replaced by one space character. New-line characters are retained. Whether each nonempty sequence of white-space characters other than new-line is retained or replaced by one space character is unspecified. The process of dividing a source file's characters into preprocessing tokens is context-dependent. [ Example: see the handling of < within a #include preprocessing directive. — end example ]
Preprocessing directives are executed, macro invocations are expanded, and _Pragma unary operator expressions are executed. If a character sequence that matches the syntax of a universal-character-name is produced by token concatenation (16.3.3), the behavior is undefined. A #include preprocessing directive causes the named header or source file to be processed from phase 1 through phase 4, recursively. All preprocessing directives are then deleted.
See also the Wikipedia page on Quine (computing).
BTW, generating C++ code from external sources with external tools is a very common practice: Yacc (or GNU bison), Lex (or Flex), ANTLR, and MOC from Qt are very well-known examples (and MELT itself is translated to C++).