You want to save using a completely separate Core Data stack: create a new persistent store coordinator and a managed object context with private-queue concurrency, both attached to the same store file. This takes advantage of SQLite's write-ahead logging (WAL), which is on by default since iOS 7, and lets you write without blocking the contexts that are reading.
You also want to import in batches, with each batch inside its own autorelease pool, saving and resetting the MOC after each batch.
The example below ignores errors for clarity, since it is about the import pattern rather than error handling. Do not ignore errors in real code.
    - (void)importFromURL:(NSURL *)url batchSize:(NSUInteger)batchSize {
        // Open the URL so it can be used to read the text file.

        // Create a new PSC, attached to the same store(s) as the PSC used by the main context.
        NSManagedObjectModel *model = ...; // same MOM used by your main context
        NSPersistentStoreCoordinator *psc =
            [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];
        [psc addPersistentStoreWithType:NSSQLiteStoreType
                          configuration:nil
                                    URL:storeURL // same store file used by your main context
                                options:nil
                                  error:NULL]; // don't ignore the error

        // A private-queue MOC. (In real code, access it via -performBlock: /
        // -performBlockAndWait:; the direct calls below are for brevity.)
        NSManagedObjectContext *moc =
            [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
        moc.persistentStoreCoordinator = psc;
        moc.undoManager = nil;

        // Load all objects from the file, in batches.
        NSUInteger total = 0;
        Record *record = nil;
        do {
            NSUInteger count = 0;
            @autoreleasepool {
                while (count < batchSize && (record = [self fetchNextRecord]) != nil) {
                    [self createManagedObjectFromRecord:record inManagedObjectContext:moc];
                    ++count;
                }
                [moc save:NULL]; // don't ignore the error
                [moc reset];
            }
            total += count;
        } while (record != nil);

        // Post a notification indicating that the import has finished.
        // This allows your main context to refetch from the store.
        [[NSNotificationCenter defaultCenter] postNotificationName:@"DidImport"
                                                            object:self
                                                          userInfo:@{ @"url": url, @"total": @(total) }];
    }
Note that this assumes you have no relationships, and you are just loading straight objects.
If you do have relationships, you should load the objects first, then establish the relationships afterwards.
If your dataset is more like a bunch of small clusters of related objects, then you should load each small cluster as a batch, loading the objects, then connecting the relationships, then saving and moving to the next batch.
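The two-pass approach for a cluster can be sketched like this. It assumes `createManagedObjectFromRecord:inManagedObjectContext:` returns the inserted object, and the `uniqueKey` and `managerKey` accessors on `Record`, plus the `manager` relationship, are hypothetical names for illustration:

    // First pass: create all objects in the cluster without relationships.
    NSMutableDictionary<NSString *, NSManagedObjectID *> *idsByKey = [NSMutableDictionary dictionary];
    for (Record *record in cluster) {
        NSManagedObject *object = [self createManagedObjectFromRecord:record
                                               inManagedObjectContext:moc];
        idsByKey[record.uniqueKey] = object.objectID; // hypothetical unique key
    }

    // Second pass: connect relationships using the IDs recorded above.
    for (Record *record in cluster) {
        if (record.managerKey == nil) continue; // hypothetical foreign key
        NSManagedObject *object  = [moc objectWithID:idsByKey[record.uniqueKey]];
        NSManagedObject *manager = [moc objectWithID:idsByKey[record.managerKey]];
        [object setValue:manager forKey:@"manager"]; // hypothetical relationship
    }

    [moc save:NULL]; // don't ignore the error
    [moc reset];

Keying the dictionary by your own unique identifier, rather than holding the managed objects directly, keeps the second pass working even if you later move the passes into separate autorelease pools.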
So, what does that example code do?
It creates a completely independent core data stack, which allows better concurrency during the import.
It loads objects in small batches. This prevents memory allocations from getting out of hand, and keeps update sizes manageable. This will help if you have the other context observing changes and automatically merging them in. Small batches will have less impact on the other threads doing the merge.
Saving after each batch commits the data to the store. Resetting the context frees any memory associated with it and lets the next batch start fresh. This helps limit memory growth.
The autorelease pool ensures that any auto-released objects are freed at the end of every batch, also helping reduce memory footprint.
The notification at the end lets other code know it should refetch from the Core Data store file.
For very large imports, this may be much more efficient than observing the saves of the context and merging each time.
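On the receiving side, the observer might look like this. The @"DidImport" name matches the example above; `mainContext` and the @"Employee" entity are hypothetical stand-ins for your own main-queue context and model:

    [[NSNotificationCenter defaultCenter] addObserverForName:@"DidImport"
                                                      object:nil
                                                       queue:[NSOperationQueue mainQueue]
                                                  usingBlock:^(NSNotification *note) {
        // The importer saved through a separate PSC, so the main context has no
        // in-memory knowledge of the new data; refetch from the store.
        [mainContext performBlock:^{
            NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Employee"];
            NSArray *results = [mainContext executeFetchRequest:request error:NULL]; // don't ignore the error
            // Update your UI with the refetched results...
        }];
    }];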
This is obviously not working code, but it shows you how you should proceed. If you follow this template, you should be able to import quickly and efficiently.