You seem to be doing all the work already for converting an image into base64 and back:
import 'dart:convert';
import 'dart:typed_data';

List<int> imageBytes = pickedImage.readAsBytesSync();
String imageB64 = base64Encode(imageBytes);
Uint8List decoded = base64Decode(imageB64);
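As a quick (entirely optional) sanity check, you can verify that the round trip preserves the bytes; listEquals here comes from Flutter's foundation library:
import 'package:flutter/foundation.dart' show listEquals;

// The decoded bytes should be identical to the originals
assert(decoded.length == imageBytes.length);
assert(listEquals(decoded, imageBytes));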
As far as using it for your FirebaseVisionImage, I'm not sure how much I can help, as I have no experience with that class (I'm assuming you're using the firebase_ml_vision package). However, looking at the source for FirebaseVisionImage, there is a factory constructor fromBytes as well as fromFile. The fromBytes constructor is a bit more involved to use because it needs image metadata, but if you can get it to work, it would probably be the more appropriate constructor for your needs:
// Metadata values based on an RGBA-encoded 1920x1080 image.
// You will have to change these values to fit your specific images.
final planeMetadata = FirebaseVisionImagePlaneMetadata(
  width: 1920,
  height: 1080,
  bytesPerRow: 1920 * 4,
);
final metadata = FirebaseVisionImageMetadata(
  size: Size(1920, 1080),
  // planeData takes a list of plane metadata (one entry per image plane)
  planeData: [planeMetadata],
  // From https://developer.apple.com/documentation/corevideo/1563591-pixel_format_identifiers?language=objc
  // kCVPixelFormatType_32RGBA
  rawFormat: 'RGBA',
);
final visionImage = FirebaseVisionImage.fromBytes(decoded, metadata);
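If it helps, here's a rough sketch of what running a detector on that image could look like. I don't know which detector you're after, so I'm using the text recognizer as an example; the other detectors follow the same processImage pattern:
// Sketch only: must run inside an async function.
// Swap in whichever detector you're actually using (labeler, barcode, etc.).
final TextRecognizer recognizer = FirebaseVision.instance.textRecognizer();
final VisionText visionText = await recognizer.processImage(visionImage);
for (TextBlock block in visionText.blocks) {
  print(block.text);
}
recognizer.close();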
Alternatively, you could just save the bytes to a temporary file and use that:
// Assuming the source image is a PNG image
File imgFile = File('tempimage.png');
imgFile.writeAsBytesSync(decoded);
final visionImage = FirebaseVisionImage.fromFile(imgFile);
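One caveat with that: a bare relative path like 'tempimage.png' may not be writable on an actual device, so you'd likely want to write into the app's temporary directory instead. A sketch, assuming you have the path_provider package as a dependency:
import 'dart:io';
import 'package:path_provider/path_provider.dart';

// Sketch only: must run inside an async function.
// Write the decoded bytes into the app's temp directory rather than a
// relative path, which may not be writable on a real device.
final Directory tempDir = await getTemporaryDirectory();
final File imgFile = File('${tempDir.path}/tempimage.png');
imgFile.writeAsBytesSync(decoded);
final visionImage = FirebaseVisionImage.fromFile(imgFile);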