The camera chip converts a given wavelength of light into a signal by overlaying colored filters (red, green, and blue) onto subpixel sensors that are sensitive to a broad range of wavelengths. As such, the camera isn’t actually sensing the wavelength; it’s sensing the relative strength of the light in three broad, overlapping bands centered on those filter colors. As described in this answer, you can approximate the peak wavelength of a given RGB color by converting it to HSV (hue/saturation/value) and then interpolating from violet to red wavelengths by the hue component. You’ll find this has limitations, though: fuchsia, for instance (between red and violet), has no single wavelength associated with it, as it’s the color we perceive when seeing both reddish and bluish light at the same time.
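A rough sketch of that hue-to-wavelength interpolation might look like the following. The endpoint wavelengths (700 nm for red, 400 nm for violet) and the 270° cutoff for the non-spectral magenta range are my own assumed values, not part of the original answer:

```python
import colorsys

def rgb_to_wavelength(r, g, b):
    """Approximate the dominant wavelength (nm) of an RGB color.

    Maps hue 0 (red, assumed ~700 nm) linearly down to hue 270 degrees
    (violet, assumed ~400 nm). Hues past 270 degrees fall in the
    magenta/fuchsia range, which has no single spectral wavelength,
    so we return None there.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if h > 0.75:  # hue beyond 270 degrees: non-spectral color
        return None
    return 700 - (h / 0.75) * (700 - 400)
```

For example, pure red `(255, 0, 0)` has hue 0 and maps to 700 nm, while pure magenta `(255, 0, 255)` sits past the 270° cutoff and yields `None`, matching the fuchsia caveat above.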