You haven't specified what you want to do with characters that are not ASCII and are not emojis, such as "á", "≥", and "カ".
If you want non-ASCII characters to be treated the same as emojis (so you're detecting whether the character is ASCII or not):
    function isAscii(char) {
      return char.charCodeAt(0) < 128;
    }

    console.log(
      isAscii('a'), // true
      isAscii('ç'), // false
      isAscii('😀'), // false
    )
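A side note on charCodeAt: it returns UTF-16 code units, so for characters outside the Basic Multilingual Plane (which includes most emojis), charCodeAt(0) gives you the high surrogate rather than the real code point. Surrogates are all ≥ 0xD800, so the ASCII check above still answers correctly, but codePointAt is the safer call if you ever need the actual code point. A minimal sketch (isAsciiByCodePoint is just a name I made up):

    function isAsciiByCodePoint(char) {
      // codePointAt(0) returns the full code point even when the
      // character is stored as a surrogate pair in UTF-16
      return char.codePointAt(0) < 128;
    }

    console.log(
      '😀'.charCodeAt(0), // 55357 (0xD83D, the high surrogate)
      '😀'.codePointAt(0), // 128512 (0x1F600, the actual code point)
      isAsciiByCodePoint('😀'), // false
    )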
But what you probably want is the opposite: to treat non-emoji characters the same as ASCII characters (so you're detecting whether the character is an emoji or not). To do this, you can use a Unicode property escape, as described in this answer:
    function isEmoji(char) {
      return /\p{Extended_Pictographic}/u.test(char)
    }

    console.log(
      isEmoji('a'), // false
      isEmoji('ç'), // false
      isEmoji('😀'), // true
    )
If, as your question states, you are sure that the character can only be one of the two, then the two approaches are equivalent.
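Either way, if the input can be a whole string rather than a single character, iterate by code point (for example with Array.from or for...of) so that surrogate pairs stay in one piece. A quick sketch, where countEmojis is just a name I made up:

    function countEmojis(str) {
      // Array.from splits the string by code point, so '😀' stays one
      // element instead of being split into two surrogate halves
      return Array.from(str).filter(c => /\p{Extended_Pictographic}/u.test(c)).length
    }

    console.log(
      countEmojis('hi 😀'), // 1
      countEmojis('héllo'), // 0
    )

Keep in mind that multi-code-point sequences like 👨‍👩‍👧 are several pictographic code points joined by zero-width joiners, so they count more than once here.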
But keep in mind the following cases:
- Are you sure that none of your users ever write anything in another language, like "fiancée"? The "é" is not ASCII, though you could get by with the Latin-1 Supplement: `char.charCodeAt(0) < 256`.
- Are you sure none of your users use macOS or iOS? Those platforms automatically convert straight quotation marks into "smart" quotes (“ ” and ‘ ’) as the user types. These aren't in ASCII or the Latin-1 Supplement.
- And many more cases!
So it's best to assume there will be characters that are neither emojis nor ASCII and choose accordingly, probably by treating non-ASCII and ASCII characters the same and emojis differently, but I don't know enough about your use case to say for sure.
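For example, if the goal is to handle emojis specially and pass everything else through untouched, the same property escape works on a whole string (stripEmojis is a hypothetical name for this sketch):

    function stripEmojis(str) {
      // the g flag removes every match; everything that isn't
      // Extended_Pictographic, ASCII or not, is left alone
      return str.replace(/\p{Extended_Pictographic}/gu, '')
    }

    console.log(
      stripEmojis('fiancée 😀 “hi”'), // "fiancée  “hi”"
    )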