I have a directory containing 12 CSV files, which I read with TensorFlow using the following code:
import tensorflow as tf
a = [0, 2, 3, 4, 5, 19, 23, 32, 39, 40, 42, 50, 51, 53, 56, 65, 66, 67, 68, 69]
data = tf.data.experimental.make_csv_dataset(
    "./raw/*",
    batch_size=2000,
    select_columns=a,
    label_name="Cancelled",
    num_epochs=30,
    num_parallel_reads=2)
How can I split this dataset into training and testing datasets?
I am quite new to TensorFlow and have no idea how to work with prefetched, batched datasets like the one this returns.
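The closest I have come is something like the sketch below, based on Dataset.take and Dataset.skip. The batch counts (100 total, 80 for training) are made-up numbers just to illustrate the idea, and I am not sure whether my handling of shuffle/repeat is right, since my original call used num_epochs=30 and the default shuffling:

import tensorflow as tf

a = [0, 2, 3, 4, 5, 19, 23, 32, 39, 40, 42, 50, 51, 53, 56, 65, 66, 67, 68, 69]

# Read once, without shuffling, so that a take/skip split stays stable
# across iterations; repeat and shuffle afterwards, per split.
data = tf.data.experimental.make_csv_dataset(
    "./raw/*",
    batch_size=2000,
    select_columns=a,
    label_name="Cancelled",
    num_epochs=1,
    shuffle=False,
    num_parallel_reads=2)

# Assumed batch counts for illustration only; the real number would
# have to come from the size of my data.
n_train_batches = 80

# First 80 batches for training (shuffled and repeated for 30 epochs),
# remaining batches held out for testing.
train_ds = data.take(n_train_batches).shuffle(10).repeat(30).prefetch(tf.data.AUTOTUNE)
test_ds = data.skip(n_train_batches).prefetch(tf.data.AUTOTUNE)

I have also seen the suggestion to simply move some of the 12 files into a separate directory and build one dataset per split, which might be simpler, but I would prefer to do the split in code if that is reasonable.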