Adding a new "destination" DC to your existing cluster, alongside the "source" DC, is a very common technique for migrating to a new DC. The high-level steps (see the command sketch after the list):
- Add the new "destination" DC
- Change the replication factor settings accordingly, so the keyspaces are replicated to the new DC
- Run `nodetool rebuild` on each node in the new DC to stream the data from the "source" DC to the "destination" DC
- Run `nodetool repair` on the new DC
- Update your application clients to connect to the new DC once it's ready to serve (all data streamed + repaired)
- Decommission the "old" (source) DC
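To make the steps concrete, here is a minimal sketch of the commands involved, assuming a keyspace named `my_keyspace`, a source DC named `DC1`, and a destination DC named `DC2` (all hypothetical names; adjust the replication factors and DC names to your topology):

```
# Include the new DC in the replication settings (run once via cqlsh)
cqlsh -e "ALTER KEYSPACE my_keyspace WITH replication = \
  {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};"

# On EACH node in the new "destination" DC: stream the existing data
# from the source DC
nodetool rebuild -- DC1

# On EACH node in the new DC: repair (primary ranges) to make sure
# the new DC is fully in sync
nodetool repair -pr

# After clients have moved to DC2: drop DC1 from replication, then
# decommission its nodes one by one
cqlsh -e "ALTER KEYSPACE my_keyspace WITH replication = \
  {'class': 'NetworkTopologyStrategy', 'DC2': 3};"
nodetool decommission   # run on each node in the old DC
```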
For the gory details see here:
If you prefer to go the full-scan route (CQL reads on the source and CQL writes on the destination, with some ability for data manipulation and savepoints to resume from), then the Scylla Spark Migrator is a good option:
https://github.com/scylladb/scylla-code-samples/tree/master/spark-scylla-migrator-demo
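As a rough illustration, the migrator is driven by a YAML config describing the source, the target, and where to store savepoints. The exact keys depend on the migrator version, so treat this as a sketch (hosts, keyspace/table names, and paths below are placeholders; check the repo above for the authoritative schema):

```
# config.yaml (illustrative)
source:
  type: cassandra          # or scylla
  host: source-node-1
  port: 9042
  keyspace: my_keyspace
  table: my_table
  splitCount: 256          # parallelism of the full scan

target:
  type: scylla
  host: target-node-1
  port: 9042
  keyspace: my_keyspace
  table: my_table

savepoints:
  path: /tmp/savepoints    # token ranges already copied are recorded here,
  intervalSeconds: 300     # so an interrupted run can resume where it left off
```

The job itself is submitted to Spark in the usual way, along the lines of `spark-submit --conf spark.scylla.config=config.yaml ...` against the migrator assembly jar (again, see the repo for the exact invocation for your version).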
You can also use the Scylla Spark Migrator to migrate Parquet files:
https://www.scylladb.com/2020/06/10/migrate-parquet-files-with-the-scylla-migrator/
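For Parquet, essentially only the source section of the config changes: per the blog post above, the source type becomes parquet with a filesystem or S3 path. A hedged sketch (paths and names are placeholders, and S3 access will need credentials configured as the blog post describes):

```
source:
  type: parquet
  path: s3a://my-bucket/my-data/   # or a local/HDFS path

target:
  type: scylla
  host: target-node-1
  port: 9042
  keyspace: my_keyspace
  table: my_table
```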
Remember not to migrate materialized views (MVs); you can always re-create them from the base tables after the migration.
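Re-creating an MV on the destination is a single CQL statement against the already-migrated base table. For example, with a hypothetical `users` table keyed by `user_id` and a view keyed by `email`:

```
-- Recreate the view from the migrated base table; the cluster will
-- (re)build it in the background from the existing base-table data.
CREATE MATERIALIZED VIEW my_keyspace.users_by_email AS
  SELECT * FROM my_keyspace.users
  WHERE email IS NOT NULL AND user_id IS NOT NULL
  PRIMARY KEY (email, user_id);
```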