I have a Windows program (MFC, MBCS) that uses a RichEdit 2.0 control (basically a CRichEditView). When I set the text of the control to a string on Windows 10, the string renders correctly, but on older versions, say Server 2016, one character does not render correctly. It's impossible to provide a complete example, but you can get the gist from a few lines:
// m_rich is a CRichEditCtrl for this example...
CHARFORMAT cf = { sizeof(cf), CFM_FACE | CFM_SIZE, 0, 16 * 20 }; // 16 pt font, easy to see
_tcscpy_s(cf.szFaceName, _countof(cf.szFaceName), _T("Courier New"));
m_rich.SetDefaultCharFormat(cf);
SETTEXTEX st = { 0, 1200 }; // flags = ST_DEFAULT, codepage 1200 = UTF-16
WCHAR wsz[] = L"This is the symbol \u26a0 in the middle";
::SendMessage(m_rich.GetSafeHwnd(), EM_SETTEXTEX, (WPARAM) &st, (LPARAM) wsz);
On Windows 10, the rich edit displays the warning symbol (⚠), but on Server 2016 it renders as an empty box. Presumably that is because the font does not contain a glyph for the character.
After some digging, I found that on Windows 10 the rich edit control is smart enough to automatically switch the font face for the warning symbol character to "Segoe UI Symbol". On Server 2016 it doesn't do that substitution, and the character keeps its "Courier New" face.
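For what it's worth, this is roughly how I verified the substitution: select just the symbol in the control and read back its effective character format (position 19 is where \u26a0 lands in my test string; adjust for yours).

```cpp
// Select only the warning symbol and read back the character format
// the control is actually using for it.
m_rich.SetSel(19, 20);                  // \u26a0 is the 20th character
CHARFORMAT cf = { sizeof(cf) };
m_rich.GetSelectionCharFormat(cf);
// On Windows 10, cf.szFaceName comes back as "Segoe UI Symbol";
// on Server 2016 it is still "Courier New".
```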
Now, a user can put an HTML character reference like "&#x26A0;" into an MBCS string field. We then convert it to a Unicode string and put it in the rich edit.
I'd like to detect when one of these characters is not supported by the chosen font and, if so, change the font for just that character. Is there an easy way to do this?
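The direction I'm considering (not sure if it's the right one) is GDI's GetGlyphIndicesW: with the GGI_MARK_NONEXISTING_GLYPHS flag, any character the font can't map comes back as glyph index 0xFFFF. The helper names here (FontHasGlyph, UseFallbackFontAt) and the choice of "Segoe UI Symbol" as the fallback face are my own guesses, not something I've confirmed works on Server 2016. Note this checks one UTF-16 code unit at a time, so characters outside the BMP (surrogate pairs) would need different handling, e.g. Uniscribe's ScriptGetCMap.

```cpp
#include <windows.h>
#include <tchar.h>
#include <afxcmn.h>  // CRichEditCtrl

// Sketch: ask GDI whether hFont supplies a glyph for ch.
// GGI_MARK_NONEXISTING_GLYPHS makes unmapped characters return 0xFFFF.
bool FontHasGlyph(HDC hdc, HFONT hFont, WCHAR ch)
{
    HFONT hOld = (HFONT)::SelectObject(hdc, hFont);
    WORD glyphIndex = 0;
    DWORD result = ::GetGlyphIndicesW(hdc, &ch, 1, &glyphIndex,
                                      GGI_MARK_NONEXISTING_GLYPHS);
    ::SelectObject(hdc, hOld);
    return result != GDI_ERROR && glyphIndex != 0xFFFF;
}

// Sketch: select just the offending character in the rich edit and
// switch that one-character range to a symbol font.
void UseFallbackFontAt(CRichEditCtrl& rich, long pos)
{
    rich.SetSel(pos, pos + 1);
    CHARFORMAT cf = { sizeof(cf), CFM_FACE };
    _tcscpy_s(cf.szFaceName, _countof(cf.szFaceName), _T("Segoe UI Symbol"));
    rich.SetSelectionCharFormat(cf);
}
```

The idea would be to walk the string after EM_SETTEXTEX, call FontHasGlyph for each character against the default font, and call UseFallbackFontAt for any that fail. But that feels heavy-handed, so I'm hoping the control itself can be told to do the font fallback the way Windows 10 does.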