We can split the text line-by-line, then process each line individually, looking for a # at the start of the line and modifying the output in those cases. (The preg_replace_callback approach is better than this one, IMO; there's a sketch of it after the code below.)
$text = file_get_contents(...);
$lines = explode("\n", $text); // split on unix-style line breaks
$out = []; // for storing the modified lines
$i = 0;
foreach ($lines as $line) {
    // if the line starts with #, replace that first character with the incremented counter
    if (substr($line, 0, 1) === '#') {
        $i++;
        $line = $i . substr($line, 1);
    }
    // append the (sometimes modified) line to the output
    $out[] = $line;
}
// convert the array of lines back to a string
$fixedText = implode("\n", $out);
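For reference, here's a minimal sketch of the preg_replace_callback version (my own illustration of the idea, not code from that other answer):

$i = 0;
$fixedText = preg_replace_callback(
    '/^#/m', // a # at the start of any line (m = multiline mode)
    function ($matches) use (&$i) { // $matches is required by the API but unused here
        return (string) ++$i; // replace the # with the next counter value
    },
    $text
);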
See https://stackoverflow.com/a/28725803/802469 for a more robust version of the explode("\n", $text) line.
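If you'd rather not follow the link, a common robust split (not necessarily the exact code in that answer) handles all three line-ending styles:

$lines = preg_split('/\r\n|\r|\n/', $text); // Windows, old-Mac, and Unix line endings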
Personally, I like the $out = []; ...; implode(...) approach; I find it easier to work with and to visualize. An $out = ""; $out .= $line."\n"; approach would work just as well, as sketched below. I suspect any performance gain from string concatenation would be negligible, but it might lower your carbon footprint a small amount.
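For completeness, the string-building variant uses the same loop as above; note that it leaves a trailing newline that implode() would not:

$i = 0;
$fixedText = '';
foreach ($lines as $line) {
    if (substr($line, 0, 1) === '#') {
        $line = ++$i . substr($line, 1);
    }
    $fixedText .= $line . "\n"; // adds a newline after the last line, unlike implode()
}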
If you have serious performance/resource concerns (extremely large files), then the fopen()/while fgets() approach would be better, and fopen()/fwrite() could be used to append each output line to a file without using up memory for an output string.
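A minimal sketch of that streaming variant (input.txt and output.txt are hypothetical paths):

$in = fopen('input.txt', 'r');   // hypothetical input path
$out = fopen('output.txt', 'w'); // hypothetical output path
$i = 0;
while (($line = fgets($in)) !== false) {
    if (substr($line, 0, 1) === '#') {
        $i++;
        $line = $i . substr($line, 1);
    }
    fwrite($out, $line); // fgets() keeps the trailing newline, so none is added here
}
fclose($in);
fclose($out);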