Currently, you need to specify the partition for each load job (using the `$` decorator syntax) so that the data lands in the corresponding partition. Otherwise, BigQuery will use the UTC time of the load job to select the partition. There's an upcoming feature that will allow partitioning by your own field (I assume you have either a TIMESTAMP or DATE field in your files that you can partition by), but it hasn't been rolled out yet (it's going alpha soon). You can track its progress here.
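For example, here's a minimal sketch of a per-partition load with the Python client library. The project, dataset, table, and bucket names are placeholders; the key part is the `$YYYYMMDD` decorator appended to the table ID:

```python
from google.cloud import bigquery

client = bigquery.Client()

# The $YYYYMMDD decorator targets a single partition of the table.
# Project, dataset, table, and bucket names are placeholders.
table_id = "my_project.my_dataset.my_table$20171201"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,  # assumes a header row
)

load_job = client.load_table_from_uri(
    "gs://my_bucket/data_2017-12-01.csv",
    table_id,
    job_config=job_config,
)
load_job.result()  # block until the load job finishes
```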
If you are in a hurry, there are a few workarounds (e.g. loading it all into a non-partitioned staging table, and then using SQL or Cloud Dataflow to partition it afterwards). Have a look here.
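As a rough illustration of the SQL approach, you can write one partition at a time out of the staging table by pointing the query's destination at a partition decorator. The staging table name and its `ts` TIMESTAMP column below are assumptions:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical staging table with a TIMESTAMP column named `ts`.
query = """
    SELECT *
    FROM `my_project.my_dataset.staging`
    WHERE DATE(ts) = '2017-12-01'
"""

# Write the query result into a single partition via the $ decorator.
destination = bigquery.TableReference.from_string(
    "my_project.my_dataset.my_table$20171201"
)
job_config = bigquery.QueryJobConfig(
    destination=destination,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

client.query(query, job_config=job_config).result()
```

You'd then repeat that query once per day in the staging table.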
Finally, if your file names contain a date/day for the partition, then it would be easy enough to script something yourself that parses the date out of each file name and runs one load job per file into the corresponding partition of the table.
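A minimal sketch of that, assuming file names like `events_2017-12-01.csv` in a placeholder bucket:

```python
import re

from google.cloud import bigquery, storage

bq_client = bigquery.Client()
gcs_client = storage.Client()

# Assumes file names contain a YYYY-MM-DD date; bucket name is a placeholder.
DATE_RE = re.compile(r"(\d{4})-(\d{2})-(\d{2})")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
)

for blob in gcs_client.list_blobs("my_bucket"):
    match = DATE_RE.search(blob.name)
    if not match:
        continue  # skip files without a date in the name
    partition = "".join(match.groups())  # "2017-12-01" -> "20171201"
    job = bq_client.load_table_from_uri(
        f"gs://my_bucket/{blob.name}",
        f"my_project.my_dataset.my_table${partition}",
        job_config=job_config,
    )
    job.result()  # runs the loads one at a time for simplicity
```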