This is the zoned-decimal format used by some ASCII COBOL compilers (Fujitsu, GnuCOBOL etc.).
In a recent answer I provided Java code for both mainframe and PC COBOL compilers.
Basically, the last digit holds both the sign and the digit value:

           characters   values       hex
positive   @, A - I     +0 to +9     x'40' - x'49'
negative   P, Q - Y     -0 to -9     x'50' - x'59'  (or is it p - y, x'70' - x'79', lower case?)
unsigned   0, 1 - 9      0 to  9     x'30' - x'39'
The last or low nibble (4 bits) always gives the decimal value; the first or high nibble gives the sign.
I am not sure whether the negative numbers are represented by upper- or lower-case letters; that would need testing.
Some compilers might not use the shifted positive values.
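As a rough illustration of the character ranges above, here is a minimal Java sketch (the class and method names are my own, and it assumes upper-case negative letters):

```java
// Hypothetical helper illustrating the ASCII zoned-decimal sign ranges.
public class ZonedSign {

    /** The low nibble of the character is the digit value in all three ranges. */
    static int digit(char c) {
        return c & 0x0f;
    }

    /** P..Y (x'50' - x'59') mark negative values; everything else is positive. */
    static int sign(char c) {
        char u = Character.toUpperCase(c);
        return (u >= 'P' && u <= 'Y') ? -1 : 1;
    }

    public static void main(String[] args) {
        System.out.println(sign('A') * digit('A')); // 'A' (x'41') = +1
        System.out.println(sign('Y') * digit('Y')); // 'Y' (x'59') = -9
        System.out.println(sign('5') * digit('5')); // '5' (x'35') =  5 (unsigned)
    }
}
```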
Also, in COBOL zoned decimal the decimal point is not stored; it is assumed.
If the COBOL copybook is
03 fld pic s99999.
then 121 is stored as 0012A.
If the COBOL copybook is (no sign)
03 fld pic 99999.
then 121 is stored as 00121.
But if the copybook is (v stands for an assumed decimal point)
03 fld pic s999v99.
then 123 is stored as 1230@.
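Putting the copybook examples together, here is a hedged Java sketch of decoding such a field back to a number (the `decode` method and its parameters are my own invention; it assumes upper-case negative letters):

```java
import java.math.BigDecimal;

// Sketch: decode an ASCII zoned-decimal string, given the number of
// assumed decimal places from the copybook (e.g. 2 for pic s999v99).
public class ZonedDecode {

    static BigDecimal decode(String field, int decimalPlaces) {
        char last = Character.toUpperCase(field.charAt(field.length() - 1));
        int sign  = (last >= 'P' && last <= 'Y') ? -1 : 1; // P..Y => negative
        int digit = last & 0x0f;                           // low nibble = digit value
        long value = Long.parseLong(field.substring(0, field.length() - 1)) * 10
                   + digit;
        return BigDecimal.valueOf(sign * value, decimalPlaces);
    }

    public static void main(String[] args) {
        System.out.println(decode("1230@", 2)); // pic s999v99 => 123.00
        System.out.println(decode("0012A", 0)); // pic s99999  => 121
        System.out.println(decode("0012Q", 0)); // pic s99999  => -121
    }
}
```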
So you must have the COBOL copybook; it would be better to decode in COBOL or use a commercial package.
Most open-source COBOL interface packages concentrate on mainframe EBCDIC COBOL. If Java is an option:
- JRecord can handle the format (you need to specify one of the PC dialects, Fujitsu or OpenCobol). It also has an example program, Cobol2Csv, to convert from COBOL to CSV. Cobol2Csv is not production ready.
Note: I am the author of JRecord.
To decode the number, the code could be (note that @ is +0 and P is -0, so the subtraction must use those as the base):

    lastDigit = toUpperCase(lastDigit);
    if (lastDigit >= '@' && lastDigit <= 'I') {        // positive: @ = +0, A..I = +1..+9
        sign = '+';
        lastDigit = lastDigit - '@' + '0';
    } else if (lastDigit >= 'P' && lastDigit <= 'Y') { // negative: P = -0, Q..Y = -1..-9
        sign = '-';
        lastDigit = lastDigit - 'P' + '0';
    } else {                                           // unsigned: already '0'..'9'
        sign = '+';
    }
Alternatively,

    digit = digit & x'0f';

always converts the character to a number in the range 0 to 9, for both EBCDIC and ASCII numbers / zoned decimal.
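For example, applying the nibble mask to the raw byte values directly (a small sketch; the ASCII and EBCDIC code points shown are the standard ones):

```java
// Sketch: the low nibble yields the digit for both ASCII and EBCDIC zoned bytes.
public class NibbleDemo {
    public static void main(String[] args) {
        System.out.println(0x35 & 0x0f); // ASCII  '5'             -> 5
        System.out.println(0x41 & 0x0f); // ASCII  'A' (+1 zoned)  -> 1
        System.out.println(0xF5 & 0x0f); // EBCDIC '5'             -> 5
        System.out.println(0xC1 & 0x0f); // EBCDIC 'A' (+1 zoned)  -> 1
        System.out.println(0xD1 & 0x0f); // EBCDIC 'J' (-1 zoned)  -> 1
    }
}
```

The sign still has to come from the high nibble (or the character range), but the digit extraction is encoding-independent.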