I'm using PL/JSON to parse data from MongoDB into Oracle DB. The packages work fine with Latin characters. However, whenever a JSON value contains Chinese characters, the resulting value in Oracle is completely corrupted (question marks, symbols, etc.). As an example, I use the following line to parse:
Remarks := json_ext.get_string(json(l_list.get(i)),'Remarks');
I'm aware that the json type and the get_string function use varchar2 rather than nvarchar2. More importantly, the Oracle DB instance supports Chinese: when I insert Chinese text directly into a table, no corruption occurs; it only gets corrupted when I parse a Chinese JSON file. So my questions are: does PL/JSON support Chinese characters? Do I need to modify the package myself to accommodate Chinese characters? If so, where exactly does it need to be fixed?
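To isolate whether PL/JSON itself is the problem, independent of the HTTP call, I plan to run a minimal test against a hard-coded JSON literal instead of the MongoDB response (variable names and the Chinese string below are just placeholders for illustration):

DECLARE
  l_obj     json;
  l_remarks varchar2(4000);
BEGIN
  -- hard-coded JSON literal with Chinese text, bypassing UTL_HTTP entirely
  l_obj     := json('{"Remarks":"中文测试"}');
  l_remarks := json_ext.get_string(l_obj, 'Remarks');
  DBMS_OUTPUT.PUT_LINE(l_remarks);
END;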
In addition, I ran the following:
select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
and it returns AL32UTF8.
Doesn't that mean my DB is configured for UTF-8? Or are there other things that need to be checked for this purpose?
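Should I also be checking the national character set, given that nvarchar2 variables are involved? I assume that would be a similar query:
select value from nls_database_parameters where parameter='NLS_NCHAR_CHARACTERSET';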
Update
Inspired by @jsumners' comment, I tried to track down the source of the corruption and found that it appears right at the beginning. That is, the snippet below runs before PL/JSON is used, to receive the HTTP response:
BEGIN
  LOOP
    UTL_HTTP.read_text(l_http_response, buf);
    DBMS_OUTPUT.PUT_LINE(buf);
    l_response_text := l_response_text || buf;
  END LOOP;
EXCEPTION
  WHEN UTL_HTTP.end_of_body THEN
    NULL;
END;
The line DBMS_OUTPUT.PUT_LINE(buf);
already prints corrupted JSON fields. I'm not sure whether this is related to the way I send and receive HTTP requests. Noteworthy: buf is declared as nvarchar2.
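One thing I am considering, based on my reading of the UTL_HTTP documentation (so treat this as an untested sketch), is declaring the character set of the response body before the read loop, so that UTL_HTTP converts it from UTF-8 itself:

-- untested sketch: tell UTL_HTTP the body is UTF-8 before reading it
UTL_HTTP.set_body_charset(l_http_response, 'UTF-8');
BEGIN
  LOOP
    UTL_HTTP.read_text(l_http_response, buf);
    l_response_text := l_response_text || buf;
  END LOOP;
EXCEPTION
  WHEN UTL_HTTP.end_of_body THEN
    NULL;
END;

Would that be the right place to fix this, or is the problem more likely the nvarchar2 buffer?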