3

I've managed to delete all entities stored using Core Data (following this answer).
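Roughly the kind of delete I'm doing, for reference (sketched here in Swift as a minimal example; the entity name "CachedItem" and helper name are placeholders, not my actual model):

import CoreData

// Fetch every object of the entity and delete it, then save.
// "CachedItem" is a placeholder entity name.
func deleteAllCachedItems(in context: NSManagedObjectContext) throws {
    let request = NSFetchRequest<NSManagedObject>(entityName: "CachedItem")
    request.includesPropertyValues = false  // only object IDs are needed for deletion

    for object in try context.fetch(request) {
        context.delete(object)
    }
    try context.save()
}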

The problem is, I've noticed the primary key is still counting upwards. Is there a way (without manually writing a SQL query) to reset the Z_MAX value for the entity? Screenshot below to clarify what I mean.

The value itself isn't an issue, but I'm just concerned that at some point in the future the maximum integer may be reached, and I don't want that to happen. My application syncs data with a web service and caches it using Core Data, so the primary key could potentially increase by hundreds or thousands at a time. Deleting the entire SQLite DB isn't an option as I need to retain some of the information for other entities.

I've seen the 'reset' method, but surely that will reset the entire SQLite DB? How can I reset the primary key for just this one set of entities? There are no relationships to other entities involving the primary key I want to reset.

Screenshot

Jamie Chapman

4 Answers

8

I'm just concerned that at some point in the future the maximum integer may be reached and I don't want this to happen.

Really? What type is your primary key? Because if it's anything other than an Int16, you really don't need to care about that. A signed 32-bit integer gives you 2,147,483,647 values. A signed 64-bit integer gives you 9,223,372,036,854,775,807 values.

If you think you're going to use all those up, you probably have more important things to worry about than having an integer overflow.

More importantly, if you're using Core Data you shouldn't need to care about or really use primary keys. Core Data isn't a database - when using Core Data you are meant to use relationships and not really care about primary keys. Core Data has no real need for them.
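To illustrate the point (a sketch only; the Author/Book entities and their books/author inverse relationship are hypothetical, not from the question), relating objects in Core Data is just assigning a relationship, with no key management in sight:

import CoreData

// Hypothetical model: Author <-->> Book, defined in the .xcdatamodeld with
// "books" and "author" as inverse relationships. These stubs mirror what
// Xcode would generate for such a model.
class Author: NSManagedObject {
    @NSManaged var name: String?
    @NSManaged var books: Set<Book>?
}

class Book: NSManagedObject {
    @NSManaged var title: String?
    @NSManaged var author: Author?
}

// Relating objects is just assigning the relationship; Core Data tracks
// object identity internally, so no primary or foreign keys appear in code.
func addBook(titled title: String, to author: Author,
             in context: NSManagedObjectContext) throws {
    let book = Book(context: context)
    book.title = title
    book.author = author   // Core Data also updates the inverse "books" set
    try context.save()
}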

lxt
8

Core Data uses 64-bit integer primary keys. Unless I/O systems get many orders of magnitude faster (which, unlike CPUs, they have not in recent years), you could save rows as fast as possible for millions of years before running out.
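Back-of-envelope (my arithmetic, not from the answer above): even at 100,000 saves per second, 9,223,372,036,854,775,807 keys ÷ 100,000 per second ≈ 9.2 × 10^13 seconds, which is roughly 2.9 million years.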

Please file a bug with bugreport.apple.com when you run out.

Ben
2

From the SQLite FAQ:

If the largest possible integer key, 9223372036854775807, is already in use, then an unused key value is chosen at random.

9,223,372,036,854,775,807 / (1024^4) = 8,388,608 tera-rows. I suspect you will run into other limits first. :) http://www.sqlite.org/limits.html reviews the more practical limits you'll run into.

Asking sqlite3 about a handy Core Data store yields:

sqlite> .schema zbookmark
CREATE TABLE ZBOOKMARK ( Z_PK INTEGER PRIMARY KEY, ...

Note the lack of AUTOINCREMENT, which in SQLite means a key is never reused. Its absence means Core Data can reuse old keys, so you're pretty safe even if you manage to add (and remove most of) that many rows over time.

If you really do want to reset it, poking around in Apple's Z_ tables is really the only way. [This is not to say that this is a thing you should in fact do. It is not (at least in any code you want to ship), even if it seems to work.]

rgeorge
    You ***are not*** meant to edit a Core Data SQLite (or any) store directly. The implementation details are not public and are subject to change at any time. This is a bad idea. – Joshua Nozzi Jan 11 '11 at 23:53
  • er, sorry. Yes, it should go without saying that violating Core Data's abstraction barrier is entirely at one's own risk. You pays youse money and you takes youse chances. – rgeorge Jan 12 '11 at 01:44
1

Besides the fact that directly/manually editing a Core Data store is a horrendously stupid idea, the correct answer is:

Delete the database and re-create it.

Of course, you're going to lose all your data doing that, but if you're that concerned about this little number, then that's ok, right?

Oh, and Core Data will make sure you don't have primary key collisions.
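If you go that route, here's a sketch of what "delete the database and re-create it" can look like (assuming an NSPersistentContainer setup, which postdates this answer; the function and variable names are placeholders):

import CoreData

// Destroy the SQLite store file (and its -wal/-shm companions), then add a
// fresh, empty store at the same URL. All data and key counters start over.
func resetStore(of container: NSPersistentContainer) throws {
    guard let storeURL = container.persistentStoreDescriptions.first?.url else { return }
    let coordinator = container.persistentStoreCoordinator

    try coordinator.destroyPersistentStore(at: storeURL,
                                           ofType: NSSQLiteStoreType,
                                           options: nil)
    _ = try coordinator.addPersistentStore(ofType: NSSQLiteStoreType,
                                           configurationName: nil,
                                           at: storeURL,
                                           options: nil)
}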

Dave DeLong