
I have nearly 70,000 entries in my Cassandra DB and want to fetch and process them in less than 1 second. I followed the DataStax documentation and came up with the following solution, but I still feel there may be some mistakes. I would highly appreciate any advice or tips on optimizing my code further.

    public Map<String, String> loadObject(ArrayList<Integer> tradingAccountList) {

        com.datastax.driver.core.Session session;
        Map<String, String> orderListMap = new HashMap<>();
        List<ResultSetFuture> futures = new ArrayList<>();

        try {
            session = jdbcUtils.getCassandraSession();
            PreparedStatement statement = session.prepare(
                    "SELECT * FROM omsks_v1.ordersStringV1 WHERE tradacntid = ?");

            // Fire off all queries asynchronously before waiting on any result
            for (Integer tradingAccount : tradingAccountList) {
                futures.add(session.executeAsync(statement.bind(tradingAccount).setFetchSize(50000)));
            }

            // Consume results as each query completes, not in submission order
            for (ListenableFuture<ResultSet> future : Futures.inCompletionOrder(futures)) {
                for (Row row : future.get()) {
                    orderListMap.put(row.getString("cliordid"), row.getString("ordermsg"));
                }
            }
        } catch (Exception e) {
            // Swallowing exceptions silently hides failures; at minimum, log them
            e.printStackTrace();
        }
        return orderListMap;
    }
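The fan-out/collect pattern used above (issue every query asynchronously first, then merge the results into one map) can be sketched with only the JDK, using `CompletableFuture` in place of the driver's `ResultSetFuture`. The `lookup` method here is a hypothetical stand-in for `session.executeAsync`, and the thread-pool size is an assumption:

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.stream.*;

public class FanOutDemo {

    // Stand-in for session.executeAsync: pretend each account id
    // resolves to a single (cliordid, ordermsg) pair.
    static CompletableFuture<Map.Entry<String, String>> lookup(int account, Executor pool) {
        return CompletableFuture.supplyAsync(
                () -> Map.entry("ord-" + account, "msg-" + account), pool);
    }

    public static Map<String, String> loadObject(List<Integer> accounts) {
        ExecutorService pool = Executors.newFixedThreadPool(8); // assumed pool size
        try {
            // Fan out: submit every request before waiting on any of them
            List<CompletableFuture<Map.Entry<String, String>>> futures = accounts.stream()
                    .map(a -> lookup(a, pool))
                    .collect(Collectors.toList());

            // Collect: join each future and merge the rows into one map
            return futures.stream()
                    .map(CompletableFuture::join)
                    .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        Map<String, String> m = loadObject(List.of(1, 2, 3));
        System.out.println(m.size());
        System.out.println(m.get("ord-2"));
    }
}
```

The key point either way is that total wall-clock time is bounded by the slowest single query rather than the sum of all of them, since nothing blocks until every request is already in flight.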