Firstly, sorry for the long title.
I've set up an API Gateway to act as an S3 proxy so I can upload files by sending a PUT request to an API URL. The API works fine (or at least I think it does), but it seems that only text files are uploaded correctly.
For uploaded text files (e.g. Content-Type: text/plain), the local file sizes are identical to the sizes of the objects that end up in the S3 bucket. BUT that is not the case for binary files (e.g. Content-Type: application/pdf): the objects in the S3 bucket are bigger than the local files, and when I download them from S3 they are corrupted and cannot be opened.
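To show what I mean by "corrupted", this is roughly how I compare the two copies — a std-only sketch (the demo writes a throwaway file with PDF-like bytes; in practice I point it at the local original and at the object downloaded from S3 and compare the two printed lines):

```rust
use std::fs::File;
use std::io::{self, Read};
use std::path::Path;

/// Returns the file's length in bytes and a hex dump of its first 16 bytes.
fn file_fingerprint(path: &Path) -> io::Result<(u64, String)> {
    let mut file = File::open(path)?;
    let len = file.metadata()?.len();
    let mut head = [0u8; 16];
    // A single read is enough for a sketch (it may return fewer than 16 bytes).
    let read = file.read(&mut head)?;
    let hex = head[..read]
        .iter()
        .map(|b| format!("{:02x}", b))
        .collect::<Vec<_>>()
        .join(" ");
    Ok((len, hex))
}

fn main() -> io::Result<()> {
    // Demo on a throwaway file; the path is just for illustration.
    let path = std::env::temp_dir().join("fingerprint_demo.bin");
    std::fs::write(&path, b"%PDF-\x00\xff")?;
    let (len, head) = file_fingerprint(&path)?;
    println!("{} bytes, starts {}", len, head); // 7 bytes, starts 25 50 44 46 2d 00 ff
    Ok(())
}
```

For text files both lines match between the local file and the S3 copy; for binary files the downloaded copy diverges as soon as the first non-ASCII byte appears.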
Here is the Rust code that sends the request; I'm using hyper's HTTP client:
// hyper 0.10-era client, using the hyper, hyper-openssl, mime and mime_guess crates.
match File::open(file.as_path()) {
    Err(_) => Err("Failed to open file".to_owned()),
    Ok(mut opened_file) => {
        let file_mime = mime_guess::guess_mime_type(file.as_path());
        let connector = HttpsConnector::new(OpensslClient::default());
        let url_str = format!("https://my.api.com/upload/{}",
                              file.file_name().unwrap().to_str().unwrap());
        let mut client =
            Request::with_connector(Method::Put, Url::parse(&url_str).unwrap(), &connector)
                .unwrap();
        client.headers_mut().set(ContentType(file_mime.clone()));
        // client.headers_mut().set(ContentLength(opened_file.metadata().unwrap().len()));
        let file_mime_str = file_mime.to_string();
        let mut buffer: [u8; 4096] = [0; 4096];
        let mut uploaded: usize = 0;
        let request = match file_mime {
            Mime(TopLevel::Text, _, _) |
            Mime(TopLevel::Application, SubLevel::Javascript, _) => {
                let mut request = client.start().unwrap();
                println!("Uploading text ...");
                while let Ok(read_count) = opened_file.read(&mut buffer) {
                    if read_count > 0 {
                        println!("Uploading {} bytes", read_count);
                        request.write_all(&buffer[0..read_count]).unwrap();
                        uploaded += read_count;
                    } else {
                        request.flush().unwrap();
                        println!("File mime: {}", file_mime_str);
                        println!("File size: {}, Total uploaded: {}",
                                 opened_file.metadata().unwrap().len(),
                                 uploaded);
                        break;
                    }
                }
                request
            }
            _ => {
                // client.headers_mut()
                //     .set_raw("Content-Encoding", vec![b"base64".to_vec()]);
                let mut request = client.start().unwrap();
                // Leftover from the base64 attempt (rustc-serialize's MIME config).
                let mut config = MIME;
                config.line_length = None;
                println!("Uploading binary ...");
                while let Ok(read_count) = opened_file.read(&mut buffer) {
                    if read_count > 0 {
                        println!("Uploading {} bytes", read_count);
                        request.write_all(&buffer[0..read_count]).unwrap();
                        // let base64_str = buffer[0..read_count].to_base64(STANDARD);
                        // request.write_all(base64_str.into_bytes().as_slice()).unwrap();
                        uploaded += read_count;
                    } else {
                        request.flush().unwrap();
                        println!("File mime: {}", file_mime_str);
                        println!("File size: {}, Total uploaded: {}",
                                 opened_file.metadata().unwrap().len(),
                                 uploaded);
                        break;
                    }
                }
                request
            }
        };
        match request.send() {
            Err(err) => Err(format!("{}", err)),
            Ok(mut response) => {
                let mut rep_str = String::new();
                response.read_to_string(&mut rep_str).unwrap();
                // Returning the body as Err for now just to surface it while debugging.
                Err(format!("{}", rep_str))
            }
        }
    }
}
As you can see from the commented-out code, I've also tried setting Content-Encoding: base64 and base64-encoding the bytes read from the file before uploading. But base64 does not seem to be a Content-Encoding value that S3 accepts: whenever I set that header, the upload fails entirely with 500 Internal Server Error, and no object (even one of the wrong size) appears in the bucket. Text files upload perfectly either way.
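For clarity, this is what the commented-out `to_base64(STANDARD)` call is doing — a std-only sketch of standard (RFC 4648) base64 with `=` padding, which matches rustc-serialize's STANDARD config (no line wrapping), not the library itself:

```rust
// Standard (RFC 4648) base64 alphabet, with '=' padding.
const TABLE: &[u8; 64] =
    b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

fn base64_encode(input: &[u8]) -> String {
    let mut out = String::with_capacity((input.len() + 2) / 3 * 4);
    for chunk in input.chunks(3) {
        // Pack up to 3 bytes into a 24-bit group, zero-padded on the right.
        let n = ((chunk[0] as u32) << 16)
            | ((*chunk.get(1).unwrap_or(&0) as u32) << 8)
            | (*chunk.get(2).unwrap_or(&0) as u32);
        out.push(TABLE[(n >> 18 & 63) as usize] as char);
        out.push(TABLE[(n >> 12 & 63) as usize] as char);
        // Missing input bytes become '=' padding.
        out.push(if chunk.len() > 1 { TABLE[(n >> 6 & 63) as usize] as char } else { '=' });
        out.push(if chunk.len() > 2 { TABLE[(n & 63) as usize] as char } else { '=' });
    }
    out
}

fn main() {
    println!("{}", base64_encode(b"Man")); // prints "TWFu"
}
```

One thing I noticed while writing this: 4096 is not a multiple of 3, so encoding each read chunk separately (as my commented-out loop does) would insert `=` padding in the middle of the stream — that alone would corrupt the payload even if the header were accepted. The encoding would have to cover the whole stream (or chunks whose length is divisible by 3), and any Content-Length header would have to reflect the encoded size.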
For reference: