I have a shapefile and I need to read it from my Java code. I used the code below to read the shapefile.

import java.io.File;
import java.util.HashMap;
import java.util.Map;

import org.geotools.data.DataStore;
import org.geotools.data.DataStoreFinder;
import org.geotools.data.simple.SimpleFeatureCollection;
import org.geotools.data.simple.SimpleFeatureSource;
import org.geotools.feature.FeatureIterator;
import org.geotools.util.URLs;
import org.opengis.feature.simple.SimpleFeature;
import org.opengis.feature.simple.SimpleFeatureType;

// JTS geometry classes: com.vividsolutions.jts.geom in GeoTools 19 and earlier,
// org.locationtech.jts.geom from GeoTools 20 onwards
import com.vividsolutions.jts.geom.LineString;
import com.vividsolutions.jts.geom.MultiLineString;
import com.vividsolutions.jts.geom.MultiPolygon;
import com.vividsolutions.jts.geom.Polygon;

public class App {
    public static void main(String[] args) {
        File file = new File("C:\\Test\\sample.shp");
        Map<String, Object> map = new HashMap<>();
        try {
            map.put("url", URLs.fileToUrl(file));
            DataStore dataStore = DataStoreFinder.getDataStore(map);
            String typeName = dataStore.getTypeNames()[0];
            SimpleFeatureSource source = dataStore.getFeatureSource(typeName);
            SimpleFeatureCollection collection = source.getFeatures();

            try (FeatureIterator<SimpleFeature> features = collection.features()) {
                while (features.hasNext()) {
                    SimpleFeature feature = features.next();
                    SimpleFeatureType schema = feature.getFeatureType();
                    Class<?> geomType = schema.getGeometryDescriptor().getType().getBinding();

                    String type = "";
                    if (Polygon.class.isAssignableFrom(geomType)
                            || MultiPolygon.class.isAssignableFrom(geomType)) {
                        MultiPolygon geom = (MultiPolygon) feature.getDefaultGeometry();
                        type = "Polygon";
                        if (geom.getNumGeometries() > 1) {
                            type = "MultiPolygon";
                        }
                    } else if (LineString.class.isAssignableFrom(geomType)
                            || MultiLineString.class.isAssignableFrom(geomType)) {
                        // line geometries: not handled yet
                    } else {
                        // point geometries: not handled yet
                    }
                    System.out.println(feature.getDefaultGeometryProperty().getValue().toString());
                }
            }
            dataStore.dispose();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

I got the desired output. But my requirement is to write an AWS Lambda function that reads the shapefile. For this I created a Lambda Java project for an S3 event and wrote the same code inside handleRequest. I uploaded the project as a Lambda function and added a trigger, so that when I upload a .shp file to an S3 bucket the Lambda function is invoked automatically. But I am getting an error like below:

java.lang.RuntimeException: java.io.FileNotFoundException: /sample.shp (No such file or directory)
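
For context, my handler skeleton looks roughly like this (a simplified sketch; the class name is illustrative, and the bucket/key extraction follows the standard aws-lambda-java-events S3Event API):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;

public class ShapefileHandler implements RequestHandler<S3Event, String> {
    @Override
    public String handleRequest(S3Event event, Context context) {
        // bucket and key of the .shp object that triggered this invocation
        String bucket = event.getRecords().get(0).getS3().getBucket().getName();
        String key = event.getRecords().get(0).getS3().getObject().getKey();

        // ... the same shapefile-reading code as above goes here ...
        return "done";
    }
}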

I have the sample.shp file inside my S3 bucket. I went through the link below: How to write an S3 object to a file?

I am getting the same error. I tried changing my code as below:

S3Object object = s3.getObject(new GetObjectRequest(bucket, key));
InputStream objectData = object.getObjectContent();
map.put("url", objectData);

instead of

File file = new File("C:\\Test\\sample.shp");
map.put("url", URLs.fileToUrl(file));

:-( Now I am getting an error like below:

java.lang.NullPointerException

I also tried the code below

DataStore dataStore = DataStoreFinder.getDataStore(objectData);

instead of

DataStore dataStore = DataStoreFinder.getDataStore(map);

The error was like below:

java.lang.ClassCastException: com.amazonaws.services.s3.model.S3ObjectInputStream cannot be cast to java.util.Map

I also tried adding the key directly to the map, and also passing it as a DataStore object. Everything went wrong. :-(

Is there anyone who can help me? It would be very helpful if someone could help me with this...

ShaiNe Ram
  • Are you writing files in your Lambda code? You can't write to the / folder; you must use /tmp/ for temporary file writes – TLPNull Jan 11 '18 at 13:53
  • Thanks for your reply. No, I am writing my files to a Kinesis stream using this Lambda function; before putRecord to Kinesis I need to read the .shp file. As a newbie to AWS, where should I change my code? – ShaiNe Ram Jan 11 '18 at 14:07
  • @TLPNull As you mentioned, I tried with the /tmp/ folder. I created a tmp folder inside my bucket and uploaded the shapefile to that /tmp folder, and I am getting the same error as before: "java.lang.RuntimeException: java.io.FileNotFoundException: /tmp/sample.shp (No such file or directory)". I am not sure whether I did it right or not? – ShaiNe Ram Jan 11 '18 at 14:27

1 Answer

The DataStoreFinder.getDataStore method in GeoTools requires you to provide a map containing a key/value pair with the key "url". The value associated with that "url" key needs to be a file URL like "file://host/path/my.shp".

You're trying to insert a Java InputStream into the map. That won't work, because it's not a file URL.

The GeoTools library does not accept http/https URLs (see the GeoTools code here and here), so you need a file:// URL. That means you will need to download the file from S3 to the local Lambda filesystem (under /tmp, the only writable location) and then provide a file:// URL pointing to that local file. Here's Java code that should work:

// download the shapefile from S3 to the local Lambda filesystem
File localShp = new File("/tmp/download.shp");
s3.getObject(new GetObjectRequest(bucket, key), localShp);

// now store a file:// URL in the map
map.put("url", localShp.toURI().toURL().toString());

If the GeoTools library had accepted real URLs (not just file:// URLs), then you could have avoided the download and simply created a time-limited, pre-signed URL for the S3 object and put that URL into the map.

Here's an example of how to do that:

// get current time and add one hour
java.util.Date expiration = new java.util.Date();
long msec = expiration.getTime();
msec += 1000 * 60 * 60;
expiration.setTime(msec);

// request pre-signed URL that will allow bearer to GET the object
GeneratePresignedUrlRequest gpur = new GeneratePresignedUrlRequest(bucket, key);
gpur.setMethod(HttpMethod.GET);
gpur.setExpiration(expiration);

// get URL that will expire in one hour
URL url = s3.generatePresignedUrl(gpur);
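
(If GeoTools accepted such URLs, you would then simply put url.toString() into the params map as the "url" value instead of downloading the file.)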
jarmod
  • That problem is solved, but now I am again getting java.lang.NullPointerException. It means the file is not picked up by org.geotools.data.DataAccessFinder.getDataStore. The errors are like below: "WARNING: Problem asking Shapefile if it can process request: java.lang.NullPointerException" "java.lang.NullPointerException" "at org.geotools.data.shapefile.ShapefileDataStoreFactory.canProcess(ShapefileDataStoreFactory.java:245)" – ShaiNe Ram Jan 12 '18 at 07:08
  • Looks like GeoTools may not support all forms of URLs, only file://, so I have updated the response. – jarmod Jan 12 '18 at 14:04
  • This is actually what I did with my local file; it was working perfectly. But I need to read this shapefile from AWS Lambda using Java :-( – ShaiNe Ram Jan 12 '18 at 14:47
  • Yes, and that's why my updated code shows you how to download the file from S3 to the local file system in Lambda (download it to /tmp/). Once you have the file locally, the process is the same as it is outside of Lambda. – jarmod Jan 12 '18 at 14:56
  • Hi, I have one more query regarding this question. As you mentioned, I wrote my code and it works fine. But I am getting false for while (features.hasNext()). In my local Java code it returns true and the shapefile is read, but in the AWS Lambda function it returns false and I am still stuck here... Can you help me with this? – ShaiNe Ram Jan 24 '18 at 07:28
  • Debug the datastore, typename, source, and collection at each step of the process and compare to what you see when run outside of Lambda. If you're using the exact same SHP file, then I'd assume you would see the same collection of features. – jarmod Jan 24 '18 at 12:15
  • Yeah, while I am uploading only the .shp file to my S3 bucket it goes into the while loop and returns the geometry coordinates. But I need to get the ID information too; it will be in the .shx and .dbf files, right? So I am uploading .shp, .shx, and .dbf at the same time, and when I upload these files together feature.hasNext() becomes false. If I upload only the .shp it returns true. Do you have any idea how to solve this? I will share my code below as an answer to my question – ShaiNe Ram Jan 24 '18 at 12:25