I have been trying to deploy a Cloud Function with a privately packaged dependency (pyodbc), as I couldn't get it working through requirements.txt. Please note, I don't want to use Docker here. So the files I have built are the following:
1. main.py
2. process.py (this one uses pyodbc to connect to Teradata)
3. libs (folder)
3.1 pyodbc-4.0.30.dist-info (package metadata)
3.2 pyodbc (the compiled Python extension module)
3.3 __init__.py (this is to make the folder a package)
4. requirements.txt
I also updated the process.py file to import the pyodbc module as below:
import libs.pyodbc
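For reference, process.py currently looks roughly like this; the driver name, connection details, and table are placeholders, not my real values:

    import libs.pyodbc

    def process(df):
        # Placeholder Teradata ODBC connection string; the real driver
        # name, host, and credentials differ in my actual code.
        conn = libs.pyodbc.connect(
            'DRIVER={Teradata};DBCNAME=my-teradata-host;UID=user;PWD=secret'
        )
        cursor = conn.cursor()
        # Load the dataframe rows into the target table (column count
        # and placeholders are simplified here).
        cursor.executemany('INSERT INTO my_table VALUES (?, ?)', df.values.tolist())
        conn.commit()
        conn.close()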
Please note: I used the GCP docs to install the pyodbc package and put it in libs: https://cloud.google.com/functions/docs/writing/specifying-dependencies-python
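Concretely, the vendoring step was roughly the following, run from the function's root directory (the pinned version matches the dist-info folder listed above):

    pip install -t libs/ pyodbc==4.0.30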
On top of this, I am also listing pyodbc in requirements.txt so it gets installed by default.
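For completeness, the relevant line in my requirements.txt is just the pinned package (version taken from the dist-info folder above; any other entries omitted here):

    pyodbc==4.0.30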
But I am still getting a module error, as below.
Error message: Code in file main.py can't be loaded.
Did you list all required modules in requirements.txt?
Detailed stack trace: Traceback (most recent call last):
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 305, in check_or_load_user_function
    _function_handler.load_user_function()
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 184, in load_user_function
    spec.loader.exec_module(main)
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/user_code/main.py", line 9, in <module>
    from process import process
  File "/user_code/process.py", line 6, in <module>
    import libs.pyodbc
ModuleNotFoundError: No module named 'libs.pyodbc'
Any leads or help from here would be really appreciated. All I am trying to achieve is: read CSV files from a GCP bucket, process them through a dataframe that loads into Teradata, and generate an output file back into another GCP bucket, all with Cloud Functions only, as sketched below.
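For context, here is a minimal sketch of the flow I am aiming for in main.py; the bucket name, object paths, and entry-point name are placeholders:

    import io
    import pandas as pd
    from google.cloud import storage
    from process import process  # the pyodbc/Teradata step sketched above

    def handle_csv(event, context):
        # Entry point for a background function on a GCS finalize trigger.
        client = storage.Client()

        # Read the uploaded CSV from the source bucket into a dataframe.
        blob = client.bucket(event['bucket']).blob(event['name'])
        df = pd.read_csv(io.StringIO(blob.download_as_string().decode('utf-8')))

        # Load the dataframe into Teradata via the vendored pyodbc.
        process(df)

        # Write the output file back into another bucket (placeholder name).
        out = client.bucket('my-output-bucket').blob('processed/' + event['name'])
        out.upload_from_string(df.to_csv(index=False))

Thank you.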