I would try to do it in the following way:
import pandas as pd

df = pd.DataFrame()
chunksize = 10**5

# read only the two relevant columns and de-duplicate as we go
for t in pd.read_csv(filename, usecols=['A', 'B'], chunksize=chunksize):
    df = pd.concat([df, t.drop_duplicates()], ignore_index=True).drop_duplicates()
print(df.groupby(['A'])['B'].nunique())
Or, if you need a dictionary:
print(df.groupby(['A'])['B'].nunique().to_dict())
PS I'm afraid you can't calculate this in separate chunks, because the same rows may appear as duplicates in different chunks. So the best idea I currently have is to collect all your data while dropping duplicates at each step; this should help reduce the amount of data somewhat.
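If memory is still tight, a slightly leaner variant (just a sketch, assuming the values in A and B are hashable and the set of distinct (A, B) pairs fits in memory) is to stream the chunks and keep only the unique pairs in a plain Python set, then count distinct B values per A at the end:

import collections
import pandas as pd

unique_pairs = set()
for t in pd.read_csv(filename, usecols=['A', 'B'], chunksize=10**5):
    # keep only the distinct (A, B) combinations seen so far
    unique_pairs.update(map(tuple, t.drop_duplicates().itertuples(index=False)))

# count distinct B values per A
b_values = collections.defaultdict(set)
for a, b in unique_pairs:
    b_values[a].add(b)
result = {a: len(bs) for a, bs in b_values.items()}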
PPS If your resulting de-duplicated DataFrame doesn't fit into memory, then I would recommend having a look at the Apache Spark SQL project, where you can process your DataFrames on a cluster in a distributed manner.
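For illustration, a minimal PySpark sketch (assuming PySpark is installed, a local SparkSession, and a CSV with a header row; the column names 'A' and 'B' are carried over from above) might look like this:

from pyspark.sql import SparkSession
from pyspark.sql.functions import countDistinct

spark = SparkSession.builder.appName('nunique_per_A').getOrCreate()

sdf = spark.read.csv(filename, header=True)
# distinct (A, B) pairs first, then count distinct B per A
result = (sdf.select('A', 'B')
             .distinct()
             .groupBy('A')
             .agg(countDistinct('B').alias('B_nunique')))
result.show()

The de-duplication and aggregation then happen on the executors, so the full data never has to fit in a single machine's memory.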