I suddenly have to work with a database filled with lots of data, and the whole system is full of encoding problems: some tables are Latin1, others UTF-8, collations are mixed, and utf8_encode() calls are scattered randomly throughout the PHP files. It's a nightmare, and I'm trying to solve it.
OK, I've created a PHP script to update the charset and collation of the whole schema.
<?php
include("vars.php");

$servername = DB_HOST;
$username   = DB_USER;
$password   = DB_PASS;
$dbname     = DB_NAME;
$charset    = "utf8";
$collation  = "utf8_general_ci";

$conn = new mysqli($servername, $username, $password, $dbname);
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}

// Restrict to this schema; without the WHERE clause the query also returns
// system tables (mysql, information_schema, ...) that must not be altered.
$sql = "SELECT CONCAT('ALTER TABLE `', TABLE_NAME, '` CONVERT TO CHARACTER SET " . $charset . " COLLATE " . $collation . ";') AS stmt
        FROM INFORMATION_SCHEMA.TABLES
        WHERE TABLE_SCHEMA = '" . $conn->real_escape_string($dbname) . "'";

$result = $conn->query($sql);
if ($result->num_rows > 0) {
    while ($row = $result->fetch_row()) {
        if ($conn->query($row[0]) === false) {
            die("FAILED: " . $row[0] . " (" . $conn->error . ")");
        }
        echo $row[0] . " EXECUTED<br>";
    }
    echo "FINISHED.";
} else {
    echo "0 results";
}
$conn->close();
?>
The problem here is that lots of VARCHAR values are stored double-encoded, like this: "OBRAGEN ENGENHARIA E CONSTRUÇÕES LTDA" (which should read "CONSTRUÇÕES"), and the previous developer worked around it by calling utf8_encode() in multiple files. To get to a single standard, I've removed all the utf8_encode() calls so that everything relies on the database alone. How can I apply the equivalent of utf8_encode() to ALL the VARCHAR columns in the database? Is that possible?
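For reference, the only server-side pattern I've found so far for repairing values that show up as "Ç" is to round-trip the column through latin1 and raw bytes. The table and column names below are placeholders, and I'm not sure this matches my exact latin1/utf8 mismatch, so I'd run the expression in a SELECT first:

```sql
-- Hypothetical table/column; repeat per affected VARCHAR column.
-- Reinterpret the stored characters as latin1 bytes, then reread those
-- bytes as utf8, which collapses "Ç" back to "Ç".
UPDATE suppliers
SET company_name = CONVERT(CAST(CONVERT(company_name USING latin1) AS BINARY) USING utf8);
```

But doing this by hand for every VARCHAR column in every table is exactly what I'm trying to avoid.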