I'm trying to understand what is wrong with the following code.
Imagine the following class, "Foo":
@protocol FooDelegate <NSObject>
- (void)hereTakeThisFooBarDic:(NSDictionary *)fooBarDic;
@end
@interface Foo : NSObject <BarDelegate>
@property (nonatomic, strong) Bar *bar;
@property (nonatomic, weak) id <FooDelegate> fooDelegate;
- (void)getFooBarDicForNum:(int)fooBarNum;
@end
@implementation Foo
static Foo *foo = nil;
- (id)init {
    if (!foo) {
        foo = [super init];
        self.bar = [[Bar alloc] init];
    }
    return foo;
}
- (void)getFooBarDicForNum:(int)fooBarNum {
    self.bar.fooDelegate = self;
    [self.bar getFooBarDicFromIntarwebsNumber:fooBarNum];
}
// We get this callback from self.bar after a few ms
- (void)callbackWithFooBarDicFromIntarwebs:(NSDictionary *)fooBarDic {
    [self.fooDelegate hereTakeThisFooBarDic:fooBarDic];
}
@end
We call Foo from somewhere in code like this:
for (int i = 0; i < 10; i++) {
    Foo *foo = [[Foo alloc] init];
    [foo getFooBarDicForNum:i];
}
Then we get the callbacks later in a hereTakeThisFooBarDic: method.
But the problem is that we are seeing unbounded memory growth. Foo's init method seems intended to act like a singleton, yet every time we call it, more memory is allocated. It does not register as a memory leak, though. Looking at this code, it does not seem like the right way to implement a singleton.
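For comparison, my understanding is that the conventional Objective-C singleton exposes a class method that guards the one-time creation, rather than hijacking init. A minimal sketch of that pattern (this is not from the code above, just the shape I would have expected) looks like:

    // Sketch of the conventional singleton pattern, for contrast.
    // +sharedFoo creates the single instance exactly once; init stays
    // a normal initializer that returns self, not a shared static.
    + (instancetype)sharedFoo {
        static Foo *sharedFoo = nil;
        static dispatch_once_t onceToken;
        dispatch_once(&onceToken, ^{
            sharedFoo = [[self alloc] init];
        });
        return sharedFoo;
    }

    - (instancetype)init {
        self = [super init];
        if (self) {
            _bar = [[Bar alloc] init];
        }
        return self;
    }

With that shape, callers would write [Foo sharedFoo] instead of [[Foo alloc] init], so alloc only ever runs once; in the code above, every loop iteration allocs a fresh Foo even though init hands back the static one.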
I'd like to know what the authors of this code did wrong.