First, note that Druid ingests time-series data, so every row of your data needs a timestamp. If that's possible, read on.
Output your data to CSV or TSV; those are two of the formats supported for batch ingestion. Your data will look something like this:
2013-08-31T01:02:33Z,"someData","true","true","false","false",57,200,-143
2013-08-31T03:32:45Z,"moreData","false","true","true","false",459,129,330
...
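If you need to produce that file programmatically, here is a minimal sketch using Python's standard csv module. The field names and values are made up for illustration; the only hard requirement is that each row carries a timestamp Druid can parse (ISO 8601, as above, works).

import csv
from datetime import datetime

# Hypothetical rows -- substitute your own fields; only the timestamp is required.
rows = [
    (datetime(2013, 8, 31, 1, 2, 33), "someData", "true", 57),
    (datetime(2013, 8, 31, 3, 32, 45), "moreData", "false", 459),
]

with open("my.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for ts, label, flag, value in rows:
        # Write the timestamp first, formatted as ISO 8601 in UTC.
        writer.writerow([ts.strftime("%Y-%m-%dT%H:%M:%SZ"), label, flag, value])

Note that there is no header row; the column names are supplied by the task spec below.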
Then you can create an index task with a firehose section in which you specify the file location, format, and columns:
"firehose" : {
"type" : "local",
"baseDir" : "my/directory/",
"filter" : "my.csv",
"parser" : {
"timestampSpec" : {
"column" : "timestamp"
},
"data" : {
"type" : "csv",
"columns" : ["timestamp","data1","data2","data3",...,"datan"],
"dimensions" : ["data1","data2","data3",...,"datan"]
}
}
}
Note the special handling given to the timestamp column.
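The firehose is only one section of the complete index task. As a rough sketch, the task JSON might be assembled in Python along the following lines; everything outside the firehose (the task type, dataSource name, granularitySpec, and aggregators) is an assumption based on typical index task specs, so check the Druid indexing documentation for the exact fields your version expects.

import json

# Sketch of a full index task wrapping the firehose above. Fields outside
# "firehose" are assumptions -- verify them against the Druid docs for your version.
task = {
    "type": "index",
    "dataSource": "my_datasource",  # hypothetical datasource name
    "granularitySpec": {
        "type": "uniform",
        "gran": "DAY",
        "intervals": ["2013-08-31/2013-09-01"],  # must cover your data's timestamps
    },
    "aggregators": [
        {"type": "count", "name": "count"},  # at minimum, count rows
    ],
    "firehose": {
        "type": "local",
        "baseDir": "my/directory/",
        "filter": "my.csv",
        "parser": {
            "timestampSpec": {"column": "timestamp"},
            "data": {
                "type": "csv",
                "columns": ["timestamp", "data1", "data2", "data3"],
                "dimensions": ["data1", "data2", "data3"],
            },
        },
    },
}

with open("task.json", "w") as f:
    json.dump(task, f, indent=2)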
Now run the indexing service (the Druid docs explain how to start the cluster you'll need) and feed the task to it, as described in the Batch Ingestion Using the Indexing Service section. The data will be ingested and processed into Druid segments that you can query.
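Feeding the task to the indexing service is an HTTP POST of the task JSON. A minimal sketch with Python's urllib follows; the host, port, and endpoint path are assumptions here, so point it at wherever your indexing service is actually listening (the Druid docs give the endpoint for your version).

import urllib.request

with open("task.json", "rb") as f:
    body = f.read()

# Assumed endpoint -- replace host/port with your indexing service's address.
req = urllib.request.Request(
    "http://localhost:8090/druid/indexer/v1/task",
    data=body,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # the service responds with the submitted task's id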