7

"ResourceLoader" with AWS S3 works fine with these properties:

cloud:
  aws:
    s3:
      endpoint: s3.amazonaws.com     # custom endpoint support added in Spring Cloud AWS 2.3
    credentials:
      accessKey: XXXXXX
      secretKey: XXXXXX
    region:
      static: us-east-1
    stack:
      auto: false

However, when I bring up a LocalStack container locally and try to use it with these properties (as per this release blog post: https://spring.io/blog/2021/03/17/spring-cloud-aws-2-3-is-now-available):

cloud:
  aws:
    s3:
      endpoint: http://localhost:4566
    credentials:
      accessKey: test
      secretKey: test
    region:
      static: us-east-1
    stack:
      auto: false

I get this exception:

17:12:12.130 [reactor-http-nio-2] ERROR org.springframework.boot.autoconfigure.web.reactive.error.AbstractErrorWebExceptionHandler - [23efd000-1] 500 Server Error for HTTP GET "/getresource/test"
com.amazonaws.SdkClientException: Unable to execute HTTP request: mybucket.localhost
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1207) ~[aws-java-sdk-core-1.11.951.jar:?]
    Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
    |_ checkpoint ⇢ org.springframework.boot.actuate.metrics.web.reactive.server.MetricsWebFilter [DefaultWebFilterChain]
    |_ checkpoint ⇢ HTTP GET "/getresource/test" [ExceptionHandlingWebHandler]
Stack trace:
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1207) ~[aws-java-sdk-core-1.11.951.jar:?]

Caused by: java.net.UnknownHostException: mybucket.localhost
    at java.net.InetAddress$CachedAddresses.get(InetAddress.java:797) ~[?:?]

Otherwise, I can view my LocalStack bucket files fine in an S3 browser.

Here is the Docker Compose config for my LocalStack:

version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - AWS_ACCESS_KEY_ID=test
      - AWS_SECRET_ACCESS_KEY=test
      - EDGE_PORT=4566
      - SERVICES=lambda,s3
    ports:
      - '4566-4583:4566-4583'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"

Here is how I am reading a text file:

import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import org.apache.commons.io.IOUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.io.Resource;
import org.springframework.core.io.ResourceLoader;

public class ResourceTransferManager {

    @Autowired
    ResourceLoader resourceLoader;

    public void resourceLoadingMethod() throws IOException {
        Resource resource = resourceLoader.getResource("s3://mybucket/index.txt");
        try (InputStream inputStream = resource.getInputStream()) {
            System.out.println("File content: " + IOUtils.toString(inputStream, StandardCharsets.UTF_8));
        }
    }
}

  
ravikant
  • It starts working, though, when this is added to the /etc/hosts file: 127.0.0.1 mybucket.localhost – ravikant Jun 21 '21 at 10:06
  • But this is not a feasible solution. If this is happening due to a path-style-access issue, is there an application.yml property that can be used to enable it? – ravikant Jun 21 '21 at 10:12
  • In the YAML used for Docker, you can create a network alias for your container, like: <bucketName>.s3.localhost.localstack.cloud – Paras Patidar Jun 21 '22 at 04:47
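(As an illustration of that last comment: a minimal Compose sketch of such a network alias, assuming the bucket is named mybucket; the alias value is hypothetical and must match your actual bucket name. Note that a network alias only affects DNS resolution for other containers on the same Compose network, not for the host.)

services:
  localstack:
    image: localstack/localstack:latest
    networks:
      default:
        aliases:
          # Hypothetical alias: lets other containers on this network resolve
          # mybucket.s3.localhost.localstack.cloud to the LocalStack container.
          - mybucket.s3.localhost.localstack.cloud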

1 Answer

15

By default the S3 client uses virtual-hosted-style addressing, i.e. it builds a URL with the bucket name as a subdomain (hence mybucket.localhost), and that hostname cannot be resolved. There are a couple of ways to address this issue (see also the path-style sketch after this list):

  1. With LocalStack, do not use the endpoint http://localhost:4566; use the standard-format endpoint instead, i.e. http://s3.localhost.localstack.cloud:4566. This hostname actually resolves via DNS to the localhost IP, so virtual-hosted-style requests work fine. (The only caveat is that it resolves via public DNS, so it either needs an internet connection or you will need to add hosts entries prefixed with the bucket name, for example 127.0.0.1 <yourexpectedbucketName>.s3.localhost.localstack.cloud.) Alternatively, if you are using Docker, instead of hosts entries you can create a network alias for your LocalStack container, like: <yourexpectedbucketName>.s3.localhost.localstack.cloud

  2. A better way, extending the first approach: instead of creating an alias for each of your buckets (which may not always be feasible), you can spin up a local DNS container and use a wildcard DNS config there. See the simplified sample at https://gist.github.com/paraspatidar/c29e4adb172a5afc92852a57e621323d (original reference: https://gist.github.com/NAR8789/92da076d0c35b434107fb4f4f198fd12).
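Since the root cause is virtual-hosted-style addressing, a related illustration (not part of the original answer): force path-style access by defining your own AmazonS3 bean. As a comment below notes, Spring Cloud AWS does not expose a ForcePathStyleAccess property, so this is only a minimal sketch, assuming the AWS SDK for Java v1 that Spring Cloud AWS 2.3 uses; the endpoint and credential values are illustrative.

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LocalstackS3Config {

    // Sketch only: build a client that uses path-style URLs
    // (http://localhost:4566/mybucket/...) instead of virtual-hosted-style
    // ones (http://mybucket.localhost:4566/...).
    @Bean
    public AmazonS3 amazonS3() {
        return AmazonS3ClientBuilder.standard()
                .withEndpointConfiguration(
                        new AwsClientBuilder.EndpointConfiguration("http://localhost:4566", "us-east-1"))
                .withCredentials(
                        new AWSStaticCredentialsProvider(new BasicAWSCredentials("test", "test")))
                .withPathStyleAccessEnabled(true)
                .build();
    }
}

Whether Spring Cloud AWS's resource loader picks up a user-defined client like this depends on your auto-configuration setup, so treat it as a sketch rather than a drop-in fix.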

Paras Patidar
  • The first option should work, I think. However, the property ForcePathStyleAccess is not exposed in spring-cloud-aws. – ravikant Jan 06 '22 at 16:49
  • The first option works for me! I changed the application.properties: aws.dynamodb.endpoint=http://s3.localhost.localstack.cloud:4566 aws.s3.endpoint=http://s3.localhost.localstack.cloud:4566 – Inael Rodrigues Sep 23 '22 at 12:17