I am working with a Git repo that is very large (>10 GB). The repo contains many large binary files (>100 MB each), with many versions of each. The reasons for this are beyond the scope of this question.
Currently, it is no longer possible to clone the repo at all: the server runs out of memory (it has 12 GB) and returns a failure. I would paste the error here, but it takes well over an hour to reach the point of failure.
Are there any methods by which I can make a clone succeed? Even one that grabs only a partial copy of the repo? Or a way to clone in bite-sized chunks that won't make the server choke? A rough sketch of the kind of approach I have in mind is below.
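For example, something like the following is what I mean by "bite-sized chunks": a shallow clone of just the tip, then deepening the history in small steps. The URL is a placeholder, and I have not verified that this actually keeps the server's memory use down:

```bash
# Shallow clone only the most recent commit of the default branch
# (assumes the server permits shallow clones)
git clone --depth 1 https://example.com/big-repo.git
cd big-repo

# Pull in older history incrementally instead of all at once
git fetch --deepen=50
git fetch --deepen=50
# ...repeat as needed, or fetch the rest of the history in one go:
git fetch --unshallow
```

Would something along these lines reduce the per-request load on the server, or does it still have to build one huge pack at some point?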