Historically, strings in the C language were just memory areas filled with characters. Consequently, when a string was passed to a function, it was passed as a pointer to its very first character: of type char *, for mutable strings, or char const *, if the function had no intent to modify the string's contents. Such strings were delimited with a zero character ((char)0, a.k.a. '\0') at the end, so for a string of length 3 you had to allocate at least four bytes of memory (three characters of the string itself plus the zero terminator); and if you only had a pointer to a string's start, the only way to learn its length was to iterate over it until you found the zero character (which is exactly what the standard function strlen did). Some standard functions accepted an extra parameter for the string size, if you knew it in advance (those starting with strn or, more primitive but more efficient, those starting with mem); others did not. To concatenate two strings, you first had to allocate a buffer large enough to hold the result, and so on.
The standard functions that process char pointers can still be found in the C++ standard library, under the <cstring> header (https://en.cppreference.com/w/cpp/header/cstring), and std::string has the synonymous methods c_str() and data() that return char pointers to its contents, should you need them.
When you write a program in C++, its main function can have the signature int main(int argc, char *argv[]), where argv is an array of char pointers containing any command-line arguments your program was run with.
Inefficient as it is, this scheme could still be regarded as an advantage over strings of limited capacity or plain fixed-size character arrays: in the mid-nineties, for instance, Borland introduced the PChar type in Turbo Pascal and added a unit that exported Pascal implementations of functions from C's string.h.