
I'm SUPER new to Snowflake and Snowpark, but I do have respectable SQL and Python experience. I'm trying to use Snowpark to do my data prep and eventually use it in a data science model. However, I cannot write to the database I'm pulling from -- I need to create all tables in a second DB.

I've created code blocks to represent both input and output DBs in their own sessions, but I'm not sure that's helpful, since I have to be in the first session in order to even get the data.

I use code similar to the following to create a new table while in the session for the "input" DB:

from snowflake.snowpark.functions import col

# Read the source table while connected to the "input" DB
my_table = session.table("<SCHEMA>.<TABLE_NAME>")
my_table.to_pandas()  # pull into pandas for a quick look at the data

# Select and rename the columns I need
table_info = my_table.select(col("<col_name1>"),
                             col("<col_name2>"),
                             col("<col_name3>").alias("<new_name>"),
                             col("<col_name4>"),
                             col("<col_name5>")
                            )


table_info.write.mode('overwrite').save_as_table('MAINTABLE')

I need to save the table MAINTABLE to a secondary database, different from the one the data was pulled from. How do I do this?

1 Answer


It is possible to provide a fully qualified name:

table_info.write.mode('overwrite').save_as_table('DATABASE_NAME.SCHEMA_NAME.MAINTABLE')

DataFrameWriter.save_as_table

Parameters:

table_name – A string or list of strings that specify the table name or fully-qualified object identifier (database name, schema name, and table name).
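
As a minimal end-to-end sketch (the database, schema, and column names below are placeholders, not taken from the original post), a single session can read from one database and write the result into another by fully qualifying both object names:

from snowflake.snowpark import Session
from snowflake.snowpark.functions import col

# Placeholder connection parameters -- substitute your own account details
session = Session.builder.configs({
    "account": "<ACCOUNT>",
    "user": "<USER>",
    "password": "<PASSWORD>",
    "warehouse": "<WAREHOUSE>",
}).create()

# Read from the "input" database using a fully qualified name
my_table = session.table("INPUT_DB.SOURCE_SCHEMA.TABLE_NAME")

table_info = my_table.select(col("COL1"),
                             col("COL2").alias("NEW_NAME"))

# Write to a different database, again fully qualified
table_info.write.mode("overwrite").save_as_table("OUTPUT_DB.TARGET_SCHEMA.MAINTABLE")

The read and the write can happen in the same session; the session's current database and schema only act as defaults for names that are not fully qualified, so separate sessions per database are not needed (assuming the role has the required privileges on both databases).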

Lukasz Szozda