
I'm new to NeRF. I'm trying to do view synthesis using a 2D face image dataset like FFHQ.

I extracted the camera pose below from my trained model to get the UV position map:

3.815960444873149338e-01 2.011289213814117585e-02 -2.146695841125471627e-01 1.331593756459746203e+02
4.556725709716180628e-02 4.190045369798199304e-01 1.202577357700833210e-01 -1.186529566109642815e+02
2.107396114968792533e-01 -1.270187761779554281e-01 3.627094520218327456e-01 1.925994034523564835e+01

Now I'm wondering: is this a camera-to-world matrix (c2w) or a world-to-camera matrix (w2c)?

I know that c2w camera parameters are needed to train a NeRF model. (I also know there are several frameworks, but I want to go through it step by step.)
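In case it helps, this is the check I would run to test the other convention: invert the 3x4 pose. This is just a sketch using NumPy; whether the result is actually c2w depends on what PRNet outputs, which is exactly what I'm asking. Note the rotation block of my matrix is not orthonormal (the rows are not unit length, so it seems to include a scale), which is why I use a full matrix inverse instead of the transpose shortcut:

```python
import numpy as np

# The 3x4 [R|t] pose dumped from my trained model (values from the question)
pose = np.array([
    [3.815960444873149338e-01,  2.011289213814117585e-02, -2.146695841125471627e-01,  1.331593756459746203e+02],
    [4.556725709716180628e-02,  4.190045369798199304e-01,  1.202577357700833210e-01, -1.186529566109642815e+02],
    [2.107396114968792533e-01, -1.270187761779554281e-01,  3.627094520218327456e-01,  1.925994034523564835e+01],
])

def invert_pose(pose_3x4):
    """Invert a 3x4 pose matrix.

    If the input is w2c, the output is c2w (and vice versa).
    A full inverse is used because the rotation block here is not
    orthonormal, so R.T would not be a valid inverse rotation.
    """
    m = np.vstack([pose_3x4, [0.0, 0.0, 0.0, 1.0]])  # lift to homogeneous 4x4
    return np.linalg.inv(m)[:3, :]                   # drop the [0 0 0 1] row again

c2w_candidate = invert_pose(pose)
print(c2w_candidate)
```

If training only converges with the inverted matrix, that would suggest the extracted pose was w2c to begin with.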

PRNet's official GitHub: https://github.com/YadiraF/PRNet

I assumed they were c2w parameters and tried training on several images with different camera poses,

but it doesn't work for me.

My environment is

  • OS: Ubuntu
  • GPU: NVIDIA RTX A5000
