I have an AWS Glue crawler that was working fine, inferring the schema of a Parquet file in S3, but when I ran it again I got a CrawlerRunningException. I checked its status and it has been stuck in STOPPING for a reason I don't understand. I can't even delete the crawler to recreate it, at least not from the AWS Console.
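For reference, here is roughly what I'm seeing when I check and try to delete it outside the console, as a minimal boto3 sketch (the crawler name and region are placeholders, not my real values):

```python
import boto3
from botocore.exceptions import ClientError

glue = boto3.client("glue", region_name="us-east-1")  # placeholder region
crawler_name = "my-parquet-crawler"  # placeholder name

# Check the crawler's current state (READY, RUNNING, or STOPPING)
state = glue.get_crawler(Name=crawler_name)["Crawler"]["State"]
print(f"Crawler state: {state}")  # stays STOPPING for me

# Deleting while the crawler is not READY fails with CrawlerRunningException
try:
    glue.delete_crawler(Name=crawler_name)
except ClientError as err:
    print(err.response["Error"]["Code"], err.response["Error"]["Message"])
```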
Does anybody know why this happens and how to proceed?
Thanks.