I have data stored in Parquet format, and I want to generate delimited text files from Spark with a limit of 100 rows per file. Is it possible to handle this from Spark notebooks? I am building an ADF pipeline that triggers this notebook, and the expected output is a text file in a format like the one below. Please suggest possible ways to do this.
5431732167 899 1011381 1 teststring
5431732163 899 912 teststring
5431932119 899 108808 40 teststring
5432032116 899 1082223 40 teststring
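Something along these lines is what I had in mind, but I am not sure it is the right approach. The paths, delimiter, and storage account names below are just placeholders for my actual setup:

```python
from pyspark.sql import SparkSession

# In a Spark notebook this session usually already exists as `spark`
spark = SparkSession.builder.getOrCreate()

# Read the source Parquet data (placeholder path)
df = spark.read.parquet("abfss://container@storageaccount.dfs.core.windows.net/input/")

# Write space-delimited text, capping each output file at 100 records.
# maxRecordsPerFile splits each partition's output into files of at most
# 100 rows; it does not repartition the data itself.
(df.write
   .option("maxRecordsPerFile", 100)
   .option("sep", " ")
   .option("header", "false")
   .mode("overwrite")
   .csv("abfss://container@storageaccount.dfs.core.windows.net/output/"))
```

Is this the recommended way, or is there a better option for controlling the number of rows per file?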
I also need to process a batch of these text files and load them into a database; please suggest options for doing this as well.
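For the load step, my rough idea is a JDBC write from Spark like the sketch below, assuming a JDBC-reachable target such as Azure SQL Database; the connection details and table name are placeholders, and I am open to other options such as an ADF Copy activity instead:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the batch of space-delimited text files produced earlier (placeholder path)
batch_df = (spark.read
            .option("sep", " ")
            .option("header", "false")
            .option("inferSchema", "true")
            .csv("abfss://container@storageaccount.dfs.core.windows.net/output/"))

# Append the batch into a target table over JDBC (placeholder connection details)
(batch_df.write
    .format("jdbc")
    .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;database=mydb")
    .option("dbtable", "dbo.target_table")
    .option("user", "<user>")
    .option("password", "<password>")
    .mode("append")
    .save())
```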
Thanks in advance.
Thanks, Manoj.