
I need to do the following SQL comparison in Spark SQL: `Col1 COLLATE SQL_Latin1_General_CP1_CS_AS <> col2`

How can I achieve the same in Spark SQL without affecting the meaning of `SQL_Latin1_General_CP1_CS_AS`?

I tried using `upper`, but that only addresses case; I don't know how to handle accent sensitivity.

I don't want to change the meaning of this statement in SQL.
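
For what it's worth, a minimal PySpark sketch (sample data is made up) suggests the default behavior may already be what you want: Spark SQL compares strings by their binary (UTF-8) representation, which makes `<>` both case-sensitive and accent-sensitive, much like a CS_AS collation:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cs_as_demo").getOrCreate()

# Made-up sample rows: a case-only difference, an accent-only
# difference, and an exact match.
df = spark.createDataFrame(
    [("abc", "ABC"), ("cafe", "café"), ("abc", "abc")],
    ["col1", "col2"],
)
df.createOrReplaceTempView("t")

# Spark SQL's default binary comparison is case- AND accent-sensitive:
spark.sql("SELECT col1, col2, col1 <> col2 AS differs FROM t").show()
# abc  / ABC  -> true   (case difference detected)
# cafe / café -> true   (accent difference detected)
# abc  / abc  -> false
```

One caveat: binary comparison is stricter than a linguistic collation, so strings that differ only in Unicode normalization form (e.g., a precomposed `é` vs. `e` plus a combining accent) compare as different in Spark even though SQL Server's collation may treat them as equal.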

  • Why were you using that in the first place? That prevented your SQL Server query from using indexes, making this a *bad* query from the start. Were you trying to force the codepage? (very bad, don't repeat the bug). Or force case-sensitive comparisons? (use another computed and indexed column). Are you sure Spark SQL isn't case-sensitive by default? – Panagiotis Kanavos Jul 05 '23 at 07:39
  • 1
    Does this answer your question? [How to make SQL Spark Case Insensitive with field values](https://stackoverflow.com/questions/73941623/how-to-make-sql-spark-case-insensitive-with-field-values) – Panagiotis Kanavos Jul 05 '23 at 07:40
  • Spark SQL is case-sensitive. You need to do extra work to make it *in*sensitive. – Panagiotis Kanavos Jul 05 '23 at 07:45
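
The "extra work" the last comment mentions only applies if you want the *opposite* of CS_AS, i.e., a case- and accent-insensitive comparison. A hedged sketch of one way to do that (the `strip_accents` helper is a hypothetical UDF built for illustration, not a built-in Spark function):

```python
import unicodedata

from pyspark.sql import SparkSession
from pyspark.sql.functions import lower, udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("ci_ai_demo").getOrCreate()

# Hypothetical helper: decompose to NFD and drop combining marks,
# which strips accents; combined with lower() this neutralizes both
# accent and case differences.
@udf(StringType())
def strip_accents(s):
    if s is None:
        return None
    return "".join(
        c for c in unicodedata.normalize("NFD", s)
        if not unicodedata.combining(c)
    )

df = spark.createDataFrame([("Café", "cafe")], ["col1", "col2"])
df.select(
    (lower(strip_accents("col1")) != lower(strip_accents("col2"))).alias("differs")
).show()  # false: equal once case and accents are neutralized
```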

0 Answers