
I have many PDFs which, when text is copied from them, produce incorrect output because of a bad font-to-Unicode mapping.

Something like this: original text - निर्वाचक

Text obtained when copied: ननरररचक

I have gone through various answers:

Unable to copy exact hindi content from pdf

Read PDF using itextsharp where PDF language is non-English

Parsing a pdf(Devanagari script) using PDFminer gives incorrect output [duplicate]

I followed this answer and used qpdf to get the desired page content stream, and I got the CMaps when reading the result in Vim.

But now I'm facing the problem that every PDF has its own different /Font to /ToUnicode map, and I would like to have something like a global CMap for a particular script (such as Devanagari).

I am also thinking of globally replacing the streams of !&# characters (the byte codes in the content stream) with the Unicode text they map to.

I would love some ideas on how to solve this problem.
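
For reference, the /ToUnicode streams can also be dumped programmatically instead of via qpdf + Vim. A minimal sketch, assuming pikepdf is available (the `input.pdf` name is a placeholder, and exact attribute access may need adjusting for the pikepdf version in use):

```python
# Sketch: print every font's decompressed /ToUnicode CMap so the bfchar
# tables (like the two shown below) can be inspected or post-processed.
import pikepdf

with pikepdf.open("input.pdf") as pdf:               # placeholder file name
    for obj in pdf.objects:                          # every indirect object in the file
        if not isinstance(obj, pikepdf.Dictionary):
            continue
        if obj.get("/Type") == pikepdf.Name.Font and "/ToUnicode" in obj:
            cmap_bytes = obj.ToUnicode.read_bytes()  # decompressed CMap stream
            print(f"--- font {obj.get('/BaseFont', '(unnamed)')} ---")
            print(cmap_bytes.decode("latin-1"))
```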

The CMaps for two different PDFs are given below:

For PDF 1:

/CIDInit/ProcSet findresource begin
12 dict begin
begincmap
/CIDSystemInfo<<
/Registry (Adobe)
/Ordering (UCS)
/Supplement 0
>> def
/CMapName/Adobe-Identity-UCS def
/CMapType 2 def
1 begincodespacerange
<00> <FF>
endcodespacerange
68 beginbfchar
<01> <096D>
<02> <0967>
<03> <0917>
<04> <091C>
<05> <091C093E>
<06> <0928>
<07> <0020>
<08> <0938>
<09> <0935>
<0A> <091C>
<0B> <092E>
<0C> <0932>
<0D> <0915>
<0E> <092A>
<0F> <092A0942>
<10> <0930>
<11> <092A094D0930>
<12> <092E0941>
<13> <0916>
<14> <092A>
<15> <091A>
<16> <0924>
<17> <0925>
<18> <0928094D>
<19> <092F>
<1A> <0926>
<1B> <0921>
<1C> <0927>
<1D> <0927093F>
<1E> <0930>
<1F> <002E>
<20> <092B>
<21> <092B>
<22> <0915>
<23> <002F>
<24> <0968>
<25> <0966>
<26> <096E>
<27> <0928>
<28> <0936>
<29> <0923>
<2A> <0906>
<2B> <09260947>
<2C> <092A094B>
<2D> <0938094D>
<2E> <091F>
<2F> <0905>
<30> <002C>
<31> <0939>
<32> <0028>
<33> <0029>
<34> <002D>
<35> <0927094D>
<36> <091D>
<37> <0932094D>
<38> <0924094D>
<39> <09300941>
<3A> <0915094D0937>
<3B> <092D>
<3C> <0924>
<3D> <0915094D0924>
<3E> <0923094D>
<3F> <092C>
<40> <0926094D0926>
<41> <092A094D0924>
<42> <0936>
<43> <2013>
<44> <096A>
endbfchar
endcmap
CMapName currentdict /CMap defineresource pop
end
end
endstream
endobj

For PDF 2:

/CIDInit/ProcSet findresource begin
12 dict begin
begincmap
/CIDSystemInfo<<
/Registry (Adobe)
/Ordering (UCS)
/Supplement 0
>> def
/CMapName/Adobe-Identity-UCS def
/CMapType 2 def
1 begincodespacerange
<00> <FF>
endcodespacerange
100 beginbfchar
<01> <0020>
<02> <0938>
<03> <002E>
<04> <092B>
<05> <092B>
<06> <0916>
<07> <0915>
<08> <0915>
<09> <096F>
<0A> <096C>
<0B> <0967>
<0C> <002F>
<0D> <0968>
<0E> <0966>
<0F> <096D>
<10> <0928>
<11> <0928>
<12> <0932>
<13> <0917>
<14> <0924>
<15> <0928094D>
<16> <092F>
<17> <092F093E>
<18> <0930>
<19> <0930>
<1A> <0028>
<1B> <0918>
<1C> <0918094B>
<1D> <0937093F>
<1E> <0926093F>
<1F> <0915>
<20> <002D>
<21> <0029>
<22> <0930>
<23> <092A>
<24> <0915>
<25> <091A094D>
<26> <0906>
<27> <09320947>
<28> <0932094D>
<29> <092A0941>
<2A> <0935094D>
<2B> <0935>
<2C> <09300941>
<2D> <09320940>
<2E> <092E>
<2F> <0926>
<30> <0905>
<31> <0930>
<32> <0909>
<33> <0938>
<34> <0938>
<35> <091D>
<36> <0924094D>
<37> <091A>
<38> <0939>
<39> <0937094D>
<3A> <092A094D0930>
<3B> <096E>
<3C> <091C>
<3D> <0924>
<3E> <0921>
<3F> <0926094D092F>
<40> <092C094D>
<41> <091F>
<42> <092E094D>
<43> <090F>
<44> <0969>
<45> <0927>
<46> <0930>
<47> <002C>
<48> <092D093F>
<49> <0936>
<4A> <092C>
<4B> <003F>
<4C> <0933>
<4D> <0917094D0930>
<4E> <0936094D>
<4F> <0927094D>
<50> <0915>
<51> <096A>
<52> <0920093F>
<53> <0923094D>
<54> <091C>
<55> <092A>
<56> <0930094D0926094B>
<57> <096B>
<58> <091C>
<59> <0908>
<5A> <0910>
<5B> <0922>
<5C> <0930>
<5D> <09150940>
<5E> <0936>
<5F> <0927094B0902>
<60> <0917094D>
<61> <09350948>
<62> <0915>
<63> <091C094D>
<64> <092C094D0930093F>
endbfchar
13 beginbfchar
<65> <0905>
<66> <0926>
<67> <0907>
<68> <0915094D>
<69> <091F>
<6A> <09150949>
<6B> <0916094D>
<6C> <0924093E0903>
<6D> <0924094D0924>
<6E> <0930094D09250940>
<6F> <0027>
<70> <2013>
<71> <0939>
endbfchar
endcmap
CMapName currentdict /CMap defineresource pop
end
end
endstream
endobj
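
The two tables make the core problem visible: code `<01>` maps to U+096D in the first PDF but to U+0020 in the second. For experimenting with such dumps, here is a small standard-library sketch that parses the `beginbfchar` entries into a code → Unicode table and decodes a run of content-stream byte codes with it (`cmap1.txt` is a placeholder for a CMap extracted as above):

```python
# Sketch: turn the bfchar entries of a dumped ToUnicode CMap into a
# {font code: Unicode text} table and use it to decode content-stream bytes.
import re

def parse_bfchar(cmap_text: str) -> dict[int, str]:
    mapping = {}
    # only look inside beginbfchar ... endbfchar blocks, so the
    # codespacerange line <00> <FF> is not picked up by mistake
    for block in re.findall(r"beginbfchar(.*?)endbfchar", cmap_text, re.S):
        for src, dst in re.findall(r"<([0-9A-Fa-f]+)>\s*<([0-9A-Fa-f]+)>", block):
            # the destination hex digits are a sequence of UTF-16BE code units
            mapping[int(src, 16)] = bytes.fromhex(dst).decode("utf-16-be")
    return mapping

def decode_bytes(codes: bytes, table: dict[int, str]) -> str:
    # single-byte codes only, matching the <00> <FF> codespacerange above
    return "".join(table.get(b, "\ufffd") for b in codes)

with open("cmap1.txt", encoding="latin-1") as fh:  # placeholder: CMap dumped for PDF 1
    table = parse_bfchar(fh.read())

print(table[0x05])                      # 'जा' (U+091C U+093E) in the first CMap above
print(decode_bytes(b"\x03\x05", table)) # 'गजा'
```

Each font in each PDF needs its own table like this; as the comments below point out, with subset-embedded fonts there is no single global table that could replace them.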

I've also seen questions/answers such as:

Extract toUnicode map from One PDF and use in another

PDF text extraction returns wrong characters due to ToUnicode map

Also, the byte stream content in the PDFs is something like this (judging by the cmap/glyf/head/hmtx table names, it is an embedded font program):

stream
true^@^L^@<80>^@^C^@@cmapFgVB^@^@^@Ì^@^@^A^Rcvt <8c>C<84>^G^@^@^Aà^@^@^A$fpgm<98>\Ü¢^@^@^C^D^@^@^@dglyfÍ<92>3²^@^@^Ch^@^@'<84>head©Ò('^@^@*ì^@^@^@6hheaCþ
[… remainder of the embedded font program (binary data) truncated …]

  • I just wonder what would be the advantage of such a global map? Because the pdfs in question would have to be changed (internally) a lot. – mkl Sep 16 '19 at 13:08
  • @mkl : I just think that if the byte streams are changed using a global `glyph` to Unicode mapping, something like `(!) --> <01> --> <0020>`, then I can change the `byte encoded words and sentences` and get a correct pdf. It would be way faster than OCR for getting the correct text. – aspiring1 Sep 16 '19 at 19:13
  • But to successfully replace the current encodings and mappings with your global one, you need correct and sufficient information about which glyphs map to which Unicode code to start with. If you have that information, you could get a *correct* pdf much more easily, simply by repairing the existing **ToUnicode** maps based on that information. No need to dabble with the content streams. – mkl Sep 16 '19 at 20:57
  • @mkl: The problem is that the mapping changes for every PDF; the only `glyphs` mapped are the ones that appear in that particular document, and the codes loosely follow the order in which the glyphs appear, starting from the top. – aspiring1 Sep 16 '19 at 21:08
  • @mkl : Also, is there a way to get the text from the pdf drawing instructions, as you mentioned [here](https://stackoverflow.com/a/28694708/8030107)? – aspiring1 Sep 17 '19 at 05:34
  • I'm not sure what you refer to in that answer of mine; in the answer itself I pointed towards OCR, and in a comment I only talked about updating the **ToUnicode** mappings after the OP indicated he would manually determine the correct mapping. – mkl Sep 17 '19 at 07:42
  • @mkl : You had written that _The reason is that PDF as a format does not contain the text as such. It contains pointers (direct ones or via mappings) to glyph drawing instructions in embedded font programs. Using these pointers the PDF is drawn as you expect._ Can we draw the glyphs in text files using those pointers? I don't know whether this is possible, but I was just thinking about it. – aspiring1 Sep 17 '19 at 07:47
  • *"Can we draw the glyphs in text files using those pointers. - I don't know whether this is possible, but was just thinking about it."* - Well, you can get the glyph *drawing instructions*. I.e. something like "fill polygon with corners at the following coordinates". But recognizing a character from what is drawn by these instructions is a flavor of OCR. What you can consider doing, though, is going through all the glyphs in a font in a PDF, drawing it at a high enough resolution, let an OCR program (or a human) recognize that glyph, and construct a **ToUnicode** map from the recognized chars. – mkl Sep 17 '19 at 08:18
  • @mkl : Can you explain more on this? I didn't get the part about what to do next after getting the **glyph** to **Unicode** mapping. – aspiring1 Sep 17 '19 at 08:45
  • After you have determined (using an OCR program or a human) Unicode codes for all the individual glyphs from your font, you can build a **ToUnicode** map based on that information and replace the existing map for that font with it. Having done that for all the fonts, the PDF should allow proper text extraction. – mkl Sep 17 '19 at 08:52
  • @mkl : Currently, as you can see, my **ToUnicode** map is something like this: `<01> <0020> <02> <0938> <03> <002E> <04> <092B> <05> <092B> <06> <0916> <07> <0915> <08> <0915>`, as also shown above. How can I directly map the **glyph to unicode** in the _pdf document_? If this can be done, please make an answer out of it. Thanks – aspiring1 Sep 17 '19 at 08:56
  • *how can I directly map the **glyph to unicode** in the pdf document* - what do you mean by that? The **ToUnicode** map is part of the PDF, so that is as direct as a mapping can be, can it not? – mkl Sep 17 '19 at 09:34
  • @mkl : But for each PDF, as shown above, the mapping changes: in one it's `<01> <096D>` and in another `<01> <0020>`. – aspiring1 Sep 17 '19 at 09:44
  • Even different fonts in the same PDF can have different **ToUnicode** maps. – mkl Sep 17 '19 at 09:55
  • @mkl : So, is there any solution? – aspiring1 Sep 17 '19 at 10:02
  • Different **ToUnicode** maps in a PDF file aren't a problem, so they require no solution. Defective **ToUnicode** maps in a PDF file *are* a problem, which one can try and solve as outlined above, i.e. recognize the glyphs of each font in a PDF one by one and rebuild the **ToUnicode** maps for each of them. – mkl Sep 17 '19 at 10:40
  • @mkl : I was thinking of using **cmaps** for correcting many PDFs, on the order of _1000s_, instead of using `tesseract`, which is rather slow, but I guess the only feasible solution is `ocr`. – aspiring1 Sep 17 '19 at 10:43
  • Even for the same font (according to the font name) in different PDF font objects there may be different **ToUnicode** maps. On one hand a different **Encoding** might be used, on the other hand, if font subsets are embedded, the creation of the subsets can shift glyph ids. – mkl Sep 17 '19 at 10:45
  • @mkl : In my case the problem is of font subsets being embedded in each pdf, instead of the total font map. – aspiring1 Sep 17 '19 at 11:15
  • In that case forget the global map. Instead fix each font separately. – mkl Sep 18 '19 at 09:56
  • @mkl : But the **ToUnicode** map in my case has just the characters that are used in the PDF; the **other characters of the font** aren't mapped if they aren't present in the PDF. – aspiring1 Sep 18 '19 at 10:15
  • And that is completely normal. That's what font subsets are all about, including only what is needed in a PDF. – mkl Sep 18 '19 at 10:32
  • @mkl : So it seems I can only go with **OCR**, but **OCR** is really slow, at least with `tesseract`, even with `multiprocessing`. – aspiring1 Sep 18 '19 at 10:55
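
Summing up the approach mkl outlines above (recognize each glyph of each embedded font once, via OCR or a human, then rebuild that font's map): once a corrected code → Unicode table exists for a font, it can be serialized back into the CMap syntax shown in the question and written over the broken map. A rough sketch with pikepdf; the `corrected` values and file names are placeholders, not real data:

```python
# Sketch: rebuild a font's /ToUnicode CMap from a corrected code -> Unicode
# table and write it back into the PDF (pikepdf assumed; BMP characters only).
import pikepdf

corrected = {0x01: "\u096D", 0x02: "\u0967", 0x05: "\u091C\u093E"}  # placeholder values

def build_tounicode(table: dict[int, str]) -> bytes:
    # serialize the table back into the bfchar syntax shown in the question
    entries = "\n".join(
        "<{:02X}> <{}>".format(code, "".join(f"{ord(ch):04X}" for ch in text))
        for code, text in sorted(table.items())
    )
    cmap = (
        "/CIDInit /ProcSet findresource begin\n12 dict begin\nbegincmap\n"
        "/CIDSystemInfo << /Registry (Adobe) /Ordering (UCS) /Supplement 0 >> def\n"
        "/CMapName /Adobe-Identity-UCS def\n/CMapType 2 def\n"
        "1 begincodespacerange\n<00> <FF>\nendcodespacerange\n"
        f"{len(table)} beginbfchar\n{entries}\nendbfchar\n"
        "endcmap\nCMapName currentdict /CMap defineresource pop\nend\nend"
    )
    return cmap.encode("ascii")

with pikepdf.open("input.pdf") as pdf:               # placeholder file names
    for obj in pdf.objects:
        if isinstance(obj, pikepdf.Dictionary) and obj.get("/Type") == pikepdf.Name.Font:
            # a real script would pick the table built for this particular font
            obj.ToUnicode = pikepdf.Stream(pdf, build_tounicode(corrected))
    pdf.save("fixed.pdf")
```

Note that a `bfchar` block may contain at most 100 entries, which is why the second CMap above is split into blocks of 100 and 13; a larger corrected table would have to be split the same way.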

0 Answers