
I have a database table from which I am trying to get 5+ million rows of two columns.

The following piece of code in Python works perfectly and quickly (in about 3 minutes for the full 5+ million rows, retrieved via query and written to CSV):

import pandas as pd
import teradatasql

hostname = "myhostname.domain.com"
username = "myusername"
password = "mypassword"

with teradatasql.connect(host=hostname, user=username, password=password, encryptdata=True) as conn:
    df = pd.read_sql("SELECT COL1, COL2 FROM MY_TABLE", conn)

df.to_csv(mypath, sep = '\t', index = False)

The following piece of code in R with the teradatasql package works for small, explicitly supplied row counts. But when n is large enough (not even that large, actually), or when I ask it to retrieve the full 5+ million row data set, it takes an extraordinary amount of time or almost never returns.

Any idea what is going on?

library(teradatasql)

dbconn <- DBI::dbConnect(
  teradatasql::TeradataDriver(),
  host = 'myhostname.domain.com',
  user = 'myusername', password = 'mypassword'
  )

dbExecute(dbconn, "SELECT COL1, COL2 FROM MY_TABLE")
[1] 5348946

system.time(dbGetQuery(dbconn, "SELECT COL1, COL2 FROM MY_TABLE", n = 10))
   user  system elapsed 
  0.084   0.016   1.496 

system.time(dbGetQuery(dbconn, "SELECT COL1, COL2 FROM MY_TABLE", n = 100))
   user  system elapsed 
  0.104   0.024   1.548 

system.time(dbGetQuery(dbconn, "SELECT COL1, COL2 FROM MY_TABLE", n = 1000))
   user  system elapsed 
  0.488   0.036   1.826 

system.time(dbGetQuery(dbconn, "SELECT COL1, COL2 FROM MY_TABLE", n = 10000))
   user  system elapsed 
  7.484   0.100   9.413 

system.time(dbGetQuery(dbconn, "SELECT COL1, COL2 FROM MY_TABLE", n = 100000))
   user  system elapsed 
767.824   4.648 782.518 

system.time(dbGetQuery(dbconn, "SELECT COL1, COL2 FROM MY_TABLE", n = 5348946))
 < DOES NOT RETURN IN HOURS >

Here is some version information for reference:

> packageVersion('teradatasql')
[1] ‘17.0.0.2’
> version
               _                           
platform       x86_64-pc-linux-gnu         
arch           x86_64                      
os             linux-gnu                   
system         x86_64, linux-gnu           
status                                     
major          3                           
minor          6.1                         
year           2019                        
month          07                          
day            05                          
svn rev        76782                       
language       R                           
version.string R version 3.6.1 (2019-07-05)
nickname       Action of the Toes          

1 Answer


It is very slow for the teradatasql driver to construct a large data.frame in memory from the fetched result set rows.

To get good fetch performance, you want to limit the number of rows at a time that you fetch from the result set.

res <- DBI::dbSendQuery (con, "select * from mytable")
repeat {
  df <- DBI::dbFetch (res, n = 100)   # fetch a limited number of rows per call
  if (nrow (df) == 0) { break }
  # process df here (e.g., accumulate it or append it to a file)
}
DBI::dbClearResult (res)

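For example, a rough sketch along these lines (untested; it reuses the question's dbconn, query, and mypath, and uses 100 rows per fetch based on the timings below) would stream the whole result set to a tab-separated file without building one giant data.frame:

res <- DBI::dbSendQuery(dbconn, "SELECT COL1, COL2 FROM MY_TABLE")
first <- TRUE
repeat {
  chunk <- DBI::dbFetch(res, n = 100)              # small batches keep per-fetch cost low
  if (nrow(chunk) == 0) { break }
  write.table(chunk, mypath, sep = "\t", quote = FALSE, row.names = FALSE,
              col.names = first, append = !first)  # header only on the first batch
  first <- FALSE
}
DBI::dbClearResult(res)
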
Here are the results of some informal performance testing of fetching rows from a two-column table having an integer column and a varchar(100) column. The best performance occurs with fetching 100 rows at a time.

Fetched 100000 total rows (10 rows at a time) in 28.6985738277435 seconds, throughput = 3484.49371039225 rows/sec
Fetched 100000 total rows (50 rows at a time) in 23.4930009841919 seconds, throughput = 4256.58689016736 rows/sec
Fetched 100000 total rows (100 rows at a time) in 22.7485280036926 seconds, throughput = 4395.8888233897 rows/sec
Fetched 100000 total rows (500 rows at a time) in 24.1652879714966 seconds, throughput = 4138.16711466265 rows/sec
Fetched 100000 total rows (1000 rows at a time) in 25.222993850708 seconds, throughput = 3964.63641833672 rows/sec
Fetched 100000 total rows (2000 rows at a time) in 27.1710178852081 seconds, throughput = 3680.3921156903 rows/sec
Fetched 100000 total rows (5000 rows at a time) in 34.9067471027374 seconds, throughput = 2864.77567519197 rows/sec
Fetched 100000 total rows (10000 rows at a time) in 45.7679090499878 seconds, throughput = 2184.9370459721 rows/sec
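
For reference, the numbers above came from a loop of roughly this shape (a sketch, not the exact test harness; the connection name, table name, and the 100000-row cap are illustrative):

for (batch in c(10, 50, 100, 500, 1000, 2000, 5000, 10000)) {
  res <- DBI::dbSendQuery(con, "select * from mytable")
  start <- Sys.time()
  total <- 0
  repeat {
    chunk <- DBI::dbFetch(res, n = batch)          # fetch one batch of rows
    total <- total + nrow(chunk)
    if (nrow(chunk) == 0 || total >= 100000) { break }
  }
  elapsed <- as.numeric(difftime(Sys.time(), start, units = "secs"))
  DBI::dbClearResult(res)
  cat(sprintf("Fetched %d total rows (%d rows at a time) in %f seconds, throughput = %f rows/sec\n",
              total, batch, elapsed, total / elapsed))
}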
Tom Nolan
    Wow. This is the info I was looking for. Thank you. It is sad that the implementation has N^2 time. Never ran into this type of issue with many other database drivers. – Gopala Jul 18 '20 at 21:27