
I want all my data stored in buckets (e.g. /bucket/project/odate/odate_cust.txt) to be loaded into a BigQuery table that is DAY partitioned. Do I need to load the files one by one, or can I load directly into multiple partitions?

```
bq mk --time_partitioning_type=DAY market.cust custid:string,grp:integer,odate:string
```
user3858193

1 Answer


Currently, you would need to specify the partition (using the $ decorator syntax) for each load to put it in the corresponding partition. Otherwise, BigQuery will use the UTC time of the load job to select the partition. There's an upcoming feature that will allow partitioning by your own field (I assume you have either a TIMESTAMP or DATE field in your files that you can partition by). However, they have not rolled it out yet (it's going alpha soon). You can track its progress here.
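For example, to load one file into the partition for 2017-10-04, you put the `$` decorator on the table name. A minimal sketch, assuming the table from your `bq mk` command and a hypothetical file path based on your bucket layout:

```
# Load one file into a specific day partition using the $ decorator.
bq load \
  --source_format=CSV \
  'market.cust$20171004' \
  gs://bucket/project/20171004/20171004_cust.txt \
  custid:string,grp:integer,odate:string
```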

If you are in a hurry, then there are a few workarounds (e.g. loading it all into a non-partitioned table, and then using SQL or Cloud Dataflow to partition it afterwards). Have a look here.
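A rough sketch of the SQL variant of that workaround, assuming `odate` holds YYYYMMDD strings and using a hypothetical staging table name:

```
# Load everything into a non-partitioned staging table first
# (the wildcard assumes your bucket layout).
bq load --source_format=CSV \
  market.cust_staging \
  'gs://bucket/project/*' \
  custid:string,grp:integer,odate:string

# Then write one day's rows into the matching partition;
# --replace overwrites only that partition.
bq query --use_legacy_sql=false --replace \
  --destination_table='market.cust$20171004' \
  "SELECT custid, grp, odate FROM market.cust_staging WHERE odate = '20171004'"
```

You would repeat the query once per day present in the data (or script the loop, as below).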

Finally, if your file names contain a date/day for the partition, then it would be easy enough to script something yourself that looks at each file's name, runs a load job per file, and puts the data into the corresponding partition of the table.
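A minimal bash sketch of that, assuming the layout from your question (gs://bucket/project/odate/odate_cust.txt, with odate as YYYYMMDD):

```
#!/usr/bin/env bash
# For each file, pull the 8-digit odate out of its path and load the file
# into that day's partition. The path layout and date format are assumptions.
for uri in $(gsutil ls 'gs://bucket/project/*/*_cust.txt'); do
  odate=$(echo "$uri" | sed -E 's|.*/([0-9]{8})/.*|\1|')
  bq load --source_format=CSV \
    "market.cust\$${odate}" \
    "$uri" \
    custid:string,grp:integer,odate:string
done
```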

Graham Polley
  • Thanks a lot for replying. So going forward there will be a feature to partition by another column present in the dataset. – user3858193 Oct 04 '17 at 17:25