Since you want to trigger the function for each record, a queue storage trigger is a good option. There are two areas where you can improve your approach.
Azure Function part
You can control queue processing through settings in host.json. The queues.batchSize setting determines how many queue messages are retrieved and processed in parallel at a time:
"queues": {
// The maximum interval in milliseconds between
// queue polls. The default is 1 minute.
"maxPollingInterval": 2000,
// The visibility timeout that will be applied to messages that fail processing
// (i.e. the time interval between retries). The default is zero.
"visibilityTimeout" : "00:00:30",
// The number of queue messages to retrieve and process in
// parallel (per job function). The default is 16 and the maximum is 32.
"batchSize": 16,
// The number of times to try processing a message before
// moving it to the poison queue. The default is 5.
"maxDequeueCount": 5,
// The threshold at which a new batch of messages will be fetched.
// The default is batchSize/2. Increasing this value will increase the
// level of concurrency and therefore throughput. New batches of messages
// will be pulled until the number of messages being processed is greater
// than this threshold. When the number dips below this threshold, new
// batches will be fetched.
"newBatchThreshold": 8
}
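Note that the snippet above uses the v1 host.json layout. On the Functions v2+ runtime the queues section nests under extensions; a minimal sketch of the same settings (a sketch, not a complete config):

{
    "version": "2.0",
    "extensions": {
        "queues": {
            "batchSize": 16,
            "newBatchThreshold": 8,
            "maxDequeueCount": 5
        }
    }
}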
Reference:
host.json settings for queues
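For reference, here is a minimal sketch of the triggered function itself, assuming the Python v1 programming model with a queueTrigger binding named msg in function.json (the binding name and queue name are placeholders, not from your setup):

import logging

import azure.functions as func

def main(msg: func.QueueMessage) -> None:
    # The runtime calls this once per queue message; up to
    # batchSize + newBatchThreshold messages are processed
    # concurrently per instance.
    record = msg.get_body().decode("utf-8")
    logging.info("Processing record: %s", record)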
Your code part
Enqueueing is network I/O, so you can use multi-threaded code to insert the records into the queue in parallel instead of one at a time, for example:
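A minimal sketch using a thread pool with the azure-storage-queue v12 SDK; the connection-string environment variable, queue name, and records list below are placeholder assumptions:

import os
from concurrent.futures import ThreadPoolExecutor

from azure.storage.queue import QueueClient

# Placeholder connection details -- substitute your own.
queue = QueueClient.from_connection_string(
    conn_str=os.environ["AZURE_STORAGE_CONNECTION_STRING"],
    queue_name="records",
)

records = ["record-1", "record-2", "record-3"]  # your records here

# Each send_message call is an independent HTTP round-trip, so a
# modest thread pool overlaps the requests.
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(queue.send_message, record) for record in records]
    for future in futures:
        future.result()  # re-raises any per-message error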
Reference:
threading
How to use threading in Python