
I am working on reading .BAI2 files and processing transaction records using Java. I have been exploring various options, such as reading and parsing the .BAI2 file using plain Java file I/O and using Spring Batch, but I am finding the .BAI2 file structure fairly complex and have not been able to get it to work correctly. I just wanted to ask for opinions/thoughts on whether there are any standard tools or ways to read .BAI2 files using Java, and whether it can be achieved using Spring Batch. Thanks in advance.

.BAI2 is an industry-standard format used by banks. Below is a truncated example:

01,021000021,CST_USER,110520,1610,1627,,,2/ 
02,CST_USER,089900137,1,110509,1610,,2/ 
03,000000370053368,USD,010,782711622,,,015,762000812,,,040,760753198,,/
88,043,760000052,,,045,760010026,,,050,760000040,,,055,760000045,,/
88,057,254419300,,,063,2000786,,,072,743172,,,073,10000,,,074,1257614,,/
88,075,0,,,100,272765847,4,,140,288225,1,,170,1932141,1,,230,270542100,1,/
88,390,3381,1,,400,293476657,478,,470,39057357,477,,530,254419300,1,/
16,165,288225,S,288225,0,0,1296942968TC,/ 
88,ORIG CO NAME= CABINET,ORIG ID=KAGIRO,DESC DATE=110509,ENTRY DESCR=G 
88,IRO CRED,ENTRY CLASS=CCD,TRACE NO=021000026942968,ENTRY DATE=110509,IND ID N 
88,O=KCAGIRO,IND NAME= CABINET 
16,175,1932141,S,123432,551095,1257614,5070689876,,/
16,249,270542100,S,270542100,0,0,1262000098XN,31Y9957018126/
88,REMARK=RETURN OF PRINCIPAL - END-OF-DAY SWEEP REPURCHASE AGREEMENT.

The 88 records are optional continuation records that can follow 03 or 16 records. The records are grouped: you can have multiple 03 records under a 02, multiple 16 records under an 03, and multiple 88 records after either.
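
For illustration, a minimal plain-Java sketch of handling the 88 continuations, folding each one into the 03 or 16 record it follows before any field parsing, might look like this (the class and method names are just placeholders):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class Bai2ContinuationFolder {

    // Reads a BAI2 file and appends each 88 continuation line to the record
    // that precedes it, so downstream parsing sees one logical record per
    // 01/02/03/16/49/98/99 line. Trailing '/' terminators are left untouched
    // here to keep the sketch short.
    public static List<String> foldContinuations(Path bai2File) throws IOException {
        List<String> logicalRecords = new ArrayList<String>();
        for (String line : Files.readAllLines(bai2File)) {
            String trimmed = line.trim();
            if (trimmed.isEmpty()) {
                continue;
            }
            if (trimmed.startsWith("88,") && !logicalRecords.isEmpty()) {
                // Continuation record: glue its content onto the previous record.
                int last = logicalRecords.size() - 1;
                logicalRecords.set(last, logicalRecords.get(last) + trimmed.substring(3));
            } else {
                logicalRecords.add(trimmed);
            }
        }
        return logicalRecords;
    }
}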


1 Answer


Spring Batch has the capability of reading complex files; the only thing is that we have to write our own readers to process them. Any file that follows a particular pattern can be read with Spring Batch.

This is a file format similar to your file:

CUST,Warren,Q,Darrow,8272 4th Street,New York,IL,76091
TRANS,1165965,2011-01-22 00:13:29,51.43
CUST,Ann,V,Gates,9247 Infinite Loop Drive,Hollywood,NE,37612
CUST,Erica,I,Jobs,8875 Farnam Street,Aurora,IL,36314
TRANS,8116369,2011-01-21 20:40:52,-14.83
TRANS,8116369,2011-01-21 15:50:17,-45.45
TRANS,8116369,2011-01-21 16:52:46,-74.6
TRANS,8116369,2011-01-22 13:51:05,48.55
TRANS,8116369,2011-01-21 16:51:59,98.53

Custom FileReader

import java.util.ArrayList;

import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.ItemStreamException;
import org.springframework.batch.item.ItemStreamReader;
import org.springframework.batch.item.ParseException;
import org.springframework.batch.item.UnexpectedInputException;

public class CustomerFileReader implements ItemStreamReader<Object> {

    private Object curItem = null;
    private ItemStreamReader<Object> delegate;

    public Object read() throws Exception {
        // Read the next CUST record, or reuse the one buffered by peek().
        if (curItem == null) {
            curItem = (Customer) delegate.read();
        }
        Customer item = (Customer) curItem;
        curItem = null;

        if (item != null) {
            item.setTransactions(new ArrayList<Transaction>());
            // Collect all TRANS records that follow, until the next CUST record.
            while (peek() instanceof Transaction) {
                item.getTransactions().add((Transaction) curItem);
                curItem = null;
            }
        }
        return item;
    }

    public Object peek() throws Exception, UnexpectedInputException, ParseException {
        // Look at the next record without consuming it.
        if (curItem == null) {
            curItem = delegate.read();
        }
        return curItem;
    }

    public void setDelegate(ItemStreamReader<Object> delegate) {
        this.delegate = delegate;
    }

    public void close() throws ItemStreamException {
        delegate.close();
    }

    public void open(ExecutionContext arg0) throws ItemStreamException {
        delegate.open(arg0);
    }

    public void update(ExecutionContext arg0) throws ItemStreamException {
        delegate.update(arg0);
    }
}
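
The reader above also assumes Customer and Transaction domain classes that are not shown. A minimal sketch (field names guessed from the tokenizer configuration below; each class would go in its own file in a real project) is enough to make the example compile:

import java.util.ArrayList;
import java.util.List;

// Minimal Customer/Transaction sketch assumed by CustomerFileReader and by the
// configuration below; names and types are illustrative only.
public class Customer {

    private String firstName;
    private String middleInitial;
    private String lastName;
    private String address;
    private String city;
    private String state;
    private String zip;
    private List<Transaction> transactions = new ArrayList<Transaction>();

    public void setFirstName(String firstName) { this.firstName = firstName; }
    public void setMiddleInitial(String middleInitial) { this.middleInitial = middleInitial; }
    public void setLastName(String lastName) { this.lastName = lastName; }
    public void setAddress(String address) { this.address = address; }
    public void setCity(String city) { this.city = city; }
    public void setState(String state) { this.state = state; }
    public void setZip(String zip) { this.zip = zip; }
    public List<Transaction> getTransactions() { return transactions; }
    public void setTransactions(List<Transaction> transactions) { this.transactions = transactions; }

    @Override
    public String toString() {
        int count = transactions == null ? 0 : transactions.size();
        return String.format("%s %s. %s has %s transactions.",
                firstName, middleInitial, lastName,
                count == 0 ? "no" : String.valueOf(count));
    }
}

class Transaction {

    private String accountNumber;
    private String transactionDate;
    private double amount;

    public void setAccountNumber(String accountNumber) { this.accountNumber = accountNumber; }
    public void setTransactionDate(String transactionDate) { this.transactionDate = transactionDate; }
    public void setAmount(double amount) { this.amount = amount; }
}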

Configuration

    <beans:bean id="customerFile"
class="org.springframework.core.io.FileSystemResyource" scope="step">
<beans:constructor-arg value="#{jobParameters[customerFile]}"/>
</beans:bean>
<beans:bean id="customerFileReader"
class="com.apress.springbatch.chapter7.CustomerFileReader">
<beans:property name="delegate" ref="trueCustomerFileReader"/>
</beans:bean>
<beans:bean id="trueCustomerFileReader"
class="org.springframework.batch.item.file.FlatFileItemReader">
<beans:property name="resource" ref="customerFile" />
<beans:property name="lineMapper">
<beans:bean class="org.springframework.batch.item.file.mapping.
PatternMatchingCompositeLineMapper">
<beans:property name="tokenizers">
<beans:map>
<beans:entry key="CUST*" value-ref="customerLineTokenizer"/>
<beans:entry key="TRANS*" value-ref="transactionLineTokenizer"/>
</beans:map>
</beans:property>
<beans:property name="fieldSetMappers">
<beans:map>
<beans:entry key="CUST*" value-ref="customerFieldSetMapper"/>
<beans:entry key="TRANS*" value-ref="transactionFieldSetMapper"/>
</beans:map>
</beans:property>
</beans:bean>
</beans:property>
</beans:bean>
<beans:bean id="customerLineTokenizer"
class="org.springframework.batch.item.file.transform.
DelimitedLineTokenizer">
<beans:property name="names" value="prefix,firstName,middleInitial,
lastName,address,city,state,zip"/>
<beans:property name="delimiter" value=","/>
</beans:bean>
<beans:bean id="transactionLineTokenizer"
class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<beans:property name="names"
value="prefix,accountNumber,transactionDate,amount"/>
<beans:property name="delimiter" value=","/>
</beans:bean>
<beans:bean id="customerFieldSetMapper"
class="org.springframework.batch.item.file.mapping.
BeanWrapperFieldSetMapper">
<beans:property name="prototypeBeanName" value="customer"/>
</beans:bean>
<beans:bean id="transactionFieldSetMapper"
class="com.apress.springbatch.chapter7.TransactionFieldSetMapper"/>
<beans:bean id="customer" class="com.apress.springbatch.chapter7.Customer"
scope="prototype"/>

Output Writer

<beans:bean id="outputFile"
class="org.springframework.core.io.FileSystemResyource" scope="step">
<beans:constructor-arg value="#{jobParameters[outputFile]}"/>
</beans:bean>
<beans:bean id="outputWriter"
class="org.springframework.batch.item.file.FlatFileItemWriter">
<beans:property name="resource" ref="outputFile" />
<beans:property name="lineAggregator">
<beans:bean class="org.springframework.batch.item.file.transform.
PassThroughLineAggregator"/>
</beans:property>
</beans:bean>

The output will be:

Warren Q. Darrow has 1 transactions.
Ann V. Gates has no transactions.
Erica I. Jobs has 5 transactions.
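
The writer uses PassThroughLineAggregator, which simply writes each item's toString() value, so output like the above assumes a Customer.toString() along the lines of the sketch shown earlier.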

CustomerFieldSetMapper

import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.validation.BindException;

public class CustomerFieldSetMapper implements FieldSetMapper<Customer> {

    // Maps one tokenized CUST line onto a Customer, using the field names
    // declared on customerLineTokenizer.
    public Customer mapFieldSet(FieldSet fieldSet) throws BindException {
        Customer customer = new Customer();
        customer.setAddress(fieldSet.readString("address"));
        customer.setCity(fieldSet.readString("city"));
        customer.setFirstName(fieldSet.readString("firstName"));
        customer.setLastName(fieldSet.readString("lastName"));
        customer.setMiddleInitial(fieldSet.readString("middleInitial"));
        customer.setState(fieldSet.readString("state"));
        customer.setZip(fieldSet.readString("zip"));
        return customer;
    }
}
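
The configuration also references a com.apress.springbatch.chapter7.TransactionFieldSetMapper that is not shown. A minimal sketch, assuming the Transaction class outlined earlier and the field names declared on transactionLineTokenizer, could look like this:

import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.validation.BindException;

// Minimal sketch of the TransactionFieldSetMapper referenced in the
// configuration above; illustrative only.
public class TransactionFieldSetMapper implements FieldSetMapper<Transaction> {

    public Transaction mapFieldSet(FieldSet fieldSet) throws BindException {
        Transaction transaction = new Transaction();
        transaction.setAccountNumber(fieldSet.readString("accountNumber"));
        transaction.setTransactionDate(fieldSet.readString("transactionDate"));
        transaction.setAmount(fieldSet.readDouble("amount"));
        return transaction;
    }
}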
Baji Shaik
  • Thanks for your reply. I have added a .BAI2 file sample above. Do you have any suggestions on how I should use Spring Batch to process the file? Any specific types of readers, line mappers, etc.? – ivish Dec 04 '14 at 14:33
  • Absolutely, we can process this file using a custom ItemReader with which we can aggregate the records into a single record. I will give you a code snippet which is used to read these kinds of complex files. I can only give an example program, but from that idea you can see how to read a .BAI2 file. If you have any questions, feel free to comment. – Baji Shaik Dec 05 '14 at 06:18
  • Hello Baji, thanks a lot for the sample code and your time. It's greatly appreciated. The example code works with a little modification. I have a few queries though: 1. Suppose the 'CUST' and 'TRANS' records have variable fields/lengths; is it possible to use different line tokenizers and mappers depending on the record length? Because in the .BAI2 format, records of the same type can have a variable number of fields. 2. In the .BAI2 file format there are seven types of records in total. Is it possible to process all seven record types using a single ItemReader as in the example you shared? – ivish Dec 05 '14 at 14:54
  • For point 1, I don't think records starting with the same token can have different FieldSetMappers, but we can solve this problem by taking care of the field count of the record in a custom FieldSetMapper (see the sketch below). For point 2, yes, we can process all record types in a single reader, as in the example I shared. I am using this code in my project without any hassle. – Baji Shaik Dec 05 '14 at 15:32
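
To illustrate the approach suggested in the last comment (branching on how many fields the tokenizer actually produced), a hypothetical mapper for BAI2 16 records might look like the sketch below; the field positions and the use of a Map instead of a real domain class are illustrative only:

import java.util.HashMap;
import java.util.Map;

import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.validation.BindException;

// Hypothetical sketch: one mapper per record prefix that inspects the number
// of tokens present, since records of the same BAI2 type can vary in length.
public class Bai2DetailFieldSetMapper implements FieldSetMapper<Map<String, String>> {

    public Map<String, String> mapFieldSet(FieldSet fieldSet) throws BindException {
        Map<String, String> record = new HashMap<String, String>();
        // Positions 1 and 2 of a 16 record are the type code and amount.
        record.put("typeCode", fieldSet.readString(1));
        record.put("amount", fieldSet.readString(2));
        // Longer variants of the record carry additional reference fields, so
        // only read them when the tokenizer actually produced them.
        if (fieldSet.getFieldCount() > 7) {
            record.put("bankReference", fieldSet.readString(7));
        }
        return record;
    }
}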