I have an Iceberg table in S3 with 2 Parquet files storing 4 rows in total. I tried the following command:
import org.apache.iceberg.hadoop.HadoopTables
import org.apache.iceberg.spark.actions.SparkActions

val tables = new HadoopTables(conf) // conf is my Hadoop Configuration
val table = tables.load("s3://iceberg-tests-storage/data/db/test5")
SparkActions.get(spark).rewriteDataFiles(table).option("target-file-size-bytes", "52428800").execute()
but nothing changed. What am I doing wrong?
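For reference, this is roughly how I'm checking whether the action did anything. It's a minimal sketch that assumes the same `spark` session and `table` as above, and uses the rewrittenDataFilesCount/addedDataFilesCount accessors that the action's Result exposes:

import org.apache.iceberg.spark.actions.SparkActions

// Run the compaction and keep the result summary returned by execute().
val result = SparkActions.get(spark)
  .rewriteDataFiles(table)
  .option("target-file-size-bytes", "52428800") // 50 MB target
  .execute()

// The result reports how many data files were rewritten and how many new files were added.
println(s"rewritten=${result.rewrittenDataFilesCount()}, added=${result.addedDataFilesCount()}")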