I have a pandas dataframe and want to create a BigQuery table from it. I understand that there are many posts asking this question, but all the answers I have found so far require explicitly specifying the schema for every column. For example:
from google.cloud import bigquery as bq

client = bq.Client()
dataset_ref = client.dataset('my_dataset', project='my_project')
table_ref = dataset_ref.table('my_table')

job_config = bq.LoadJobConfig(
    schema=[
        bq.SchemaField("a", bq.enums.SqlTypeNames.STRING),
        bq.SchemaField("b", bq.enums.SqlTypeNames.INT64),
        bq.SchemaField("c", bq.enums.SqlTypeNames.FLOAT64),
    ]
)

client.load_table_from_dataframe(my_df, table_ref, job_config=job_config).result()
However, sometimes I have a dataframe with many columns (100, for example), and it's really non-trivial to spell out the schema for every one of them by hand. Is there a way to do this efficiently?
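Ideally I'd like something like the sketch below, where the column types are inferred from the dataframe's dtypes instead of being listed explicitly. I'm not sure whether load_table_from_dataframe actually supports omitting the schema like this:

from google.cloud import bigquery as bq

client = bq.Client()
table_ref = client.dataset('my_dataset', project='my_project').table('my_table')

# No schema in the job config -- hoping the column types get
# inferred from my_df's dtypes (not sure whether this works).
job_config = bq.LoadJobConfig()
client.load_table_from_dataframe(my_df, table_ref, job_config=job_config).result()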
By the way, I found this post asking a similar question: Efficiently write a Pandas dataframe to Google BigQuery. But it seems that bq.Schema.from_dataframe does not exist:
AttributeError: module 'google.cloud.bigquery' has no attribute 'Schema'
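For reference, this is roughly the call I attempted based on that post; the first line is what raises the error above:

# What I tried, following the linked post -- bq.Schema apparently
# is not an attribute of the google.cloud.bigquery module.
schema = bq.Schema.from_dataframe(my_df)
job_config = bq.LoadJobConfig(schema=schema)
client.load_table_from_dataframe(my_df, table_ref, job_config=job_config).result()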