
I need updates (ClientDataSet.ApplyUpdates) to be applied in a specified order:

  - Delete first
  - Modify second
  - Insert third

By default they are applied in the order in which the changes were made.

Vlada
  • AFAIK there is no way to handle this, and I don't have a clue what the reason for this is – Sir Rufo Mar 14 '13 at 08:04
  • I use cached updates. You may have some constraint in the database (for example, overlapping validity periods, or a unique index), and if the user inserts a record and then edits or deletes another record, the final result set can be correct but still cannot be applied. For example: I have a record in the database with a unique KEY=1, VAL=A. I insert a record with KEY=1, VAL=B and delete the record with KEY=1, VAL=A. This fails in the default order but succeeds in the order DELETE, MODIFY, INSERT (see the sketch below these comments). – Vlada Mar 14 '13 at 12:02
  • Yes, this will fail because you do it in the wrong order, and this leads to wrong data (having 2 rows with an identical primary key) - maybe just for a second, but wrong is wrong, no matter how long – Sir Rufo Mar 14 '13 at 16:07
  • Yes, I know. But the data are modified by a user - that's why I need to reorder the operations to delete, modify, insert to solve it. – Vlada Mar 14 '13 at 19:59
  • I'm quite sure that all data operations are controlled by your application. So it's up to you to throw an exception or simply update the record instead of append/delete – Sir Rufo Mar 14 '13 at 20:30
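
To make the scenario from the comments concrete, here is a minimal sketch (the ClientDataSet variable cds and the KEY/VAL fields are made up; the table has a unique index on KEY and already holds the row KEY=1, VAL='A'):

// User actions in the ClientDataSet, in this order:
cds.AppendRecord([1, 'B']);   // 1) insert a new row KEY=1, VAL='B'
cds.Locate('VAL', 'A', []);   // 2) locate the original row KEY=1, VAL='A' ...
cds.Delete;                   //    ... and delete it

// ApplyUpdates replays the delta in the same order, so the INSERT hits the
// unique index before the DELETE has removed the old row and fails, although
// applying the DELETE first and the INSERT second would have succeeded.
cds.ApplyUpdates(0);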

1 Answer


See KTDataComponents to get an idea of how this can be accomplished.

Besides that, I sometimes form the required update order by switching to a specific index in the OnUpdateData event handler on the provider side. There are a couple of tricks that have to be used in order not to break delta handling, but it is quite doable.
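
As a rough illustration (the data module name, the handler name and the UPDATE_ORDER field below are made up, and the field is assumed to have been filled so that deletes sort before edits and edits before inserts), the handler itself can be as small as this:

procedure TDM.ProviderUpdateData(Sender: TObject; DataSet: TCustomClientDataSet);
begin
  // DataSet is the delta; switching it to an index on the ordering field
  // makes the updates get applied in DELETE / MODIFY / INSERT order.
  // The cast is only there to reach IndexFieldNames.
  TClientDataSet(DataSet).IndexFieldNames := 'UPDATE_ORDER';
end;

The LogUpdateRecord procedure below is from my own custom resolver; it temporarily resets the delta index while logging, so that the reconciliation mechanism still sees the correct original record numbers: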

procedure TLogResolver.LogUpdateRecord(Tree: TUpdateTree);
var
  I: Integer;
  CurVal: Variant;
  fld: TField;
  SaveRecNo: Integer;
  OldIndexName: WideString;
  OldIndexFieldNames: WideString;
begin
  if(TClientDataSet(Tree.Delta).IndexName<>'')or(TClientDataSet(Tree.Delta).IndexFieldNames<>'') then begin
    OldIndexName:=TClientDataSet(Tree.Delta).IndexName;
    OldIndexFieldNames:=TClientDataSet(Tree.Delta).IndexFieldNames;
    SaveRecNo:=Tree.Delta.RecNo;
    Tree.Delta.Edit;                       // vavan
    for I := 0 to Tree.Delta.FieldCount - 1 do begin
      fld:=Tree.Delta.Fields[I];
      { Blobs, Bytes and VarBytes are not included in result packet }
      if {(fld.IsBlob) or} (fld.DataType in [ftBytes, ftVarBytes]) then // vavan allowed blobs
        continue;
      CurVal := fld.NewValue;
      if not VarIsClear(CurVal) then begin
        fld.Value:=CurVal;  { TODO -ovavan -cSIC! : edit delta }
      end;
    end;
    Tree.Delta.Post; // vavan
    TClientDataSet(Tree.Delta).IndexName:=''; { TODO -ovavan -cSIC! : reset delta index to get correct original recno in order not to break reconciliation mechanism }
  end;
  try
    Tree.ErrorDS.IndexFieldNames:='ERROR_RECORDNO';
//    Tree.InitErrorPacket(nil, rrApply);
    TreeInitErrorPacket(Tree,nil, rrApply);
    for I := 0 to TVPacketDataSet(Tree.Delta).NewValueFields.Count - 1 do
    begin
      fld:=TVPacketDataSet(Tree.Delta).NewValueFields[i];
      { Blobs, Bytes and VarBytes are not included in result packet }
      if {(Tree.Delta.Fields[I].IsBlob) or} // vavan allowed blobs
         (fld.DataType in [ftBytes, ftVarBytes]) then
        continue;
      CurVal := fld.NewValue;
      if not VarIsClear(CurVal) then begin { TODO 2 -ovavan -ccheck! : get rid of this check since we only process modified fields? }
        Tree.ErrorDS.Fields[6+fld.Index].Value := CurVal; { TODO -ovavan -ccheck : ErrorDS field structure is identical to delta but with 6 extra leading fields }
      end;
    end;
    Tree.ErrorDS.Post;
  finally
    if(OldIndexName<>'')then begin
      TClientDataSet(Tree.Delta).IndexName:=OldIndexName;
      Tree.Delta.RecNo:=SaveRecNo;
    end else
    if(OldIndexFieldNames<>'')then begin
      TClientDataSet(Tree.Delta).IndexFieldNames:=OldIndexFieldNames;
      Tree.Delta.RecNo:=SaveRecNo;
    end;
  end;
end;
  • Can you be more specific, please? If I change the index in OnUpdateData then it is in the correct order, but the behaviour is wrong - I think it is because the delta dataset contains more than one row for edits (the original row is first, followed by the changed fields in the second row). If I change the order I get in a mess. – Vlada Mar 14 '13 at 11:27
  • Depending on your requirements you can create an extra field used solely for update ordering. Then, either in OnUpdateData (or on the CDS side), you can fill it according to the required order, keeping old/new records "paired" in the case of updates (see the sketch after this comment thread). – Vladimir Ulchenko Mar 14 '13 at 13:18
  • If I preserve the order of the "paired" records it works fine the first time. But if the first ApplyUpdates fails and the user modifies the data again, the next ApplyUpdates goes wrong - something is wrong with the data, probably due to the changed dataset row order. I can't find what's wrong :( – Vlada Mar 14 '13 at 20:02
  • Right, whenever new values get modified to be propagated back to the CDS, or an error happens while applying updates, one needs to temporarily reset the delta index in order not to break the reconciliation logic. I personally do that in LogUpdateRecord/LogUpdateError in my own custom resolver. – Vladimir Ulchenko Mar 15 '13 at 06:42
  • Is there some nice way to reset the delta index to the original order? If I simply clear the index property it doesn't help. So now I use two fields, one for the original order and a second for my order, but it is not so elegant. – Vlada Mar 15 '13 at 13:17
  • Hmm, that's exactly what I'm doing, but I must confess that I'm using my own midas version and don't know whether there is some critical difference in this regard. Anyway, I edited my answer and added the snippet from my code above. – Vladimir Ulchenko Mar 15 '13 at 13:29
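
For completeness, a rough sketch of the "extra ordering field" idea from the comments above, as a fuller version of the OnUpdateData handler sketched in the answer. The UPDATE_ORDER and ORIG_ORDER fields are made up; they would exist in the provider's dataset purely for ordering, with empty ProviderFlags so they never end up in the generated SQL. Editing the delta like this is exactly the kind of trick that may need the index/recno care shown in LogUpdateRecord, so treat it as a starting point rather than a drop-in solution:

procedure TDM.ProviderUpdateData(Sender: TObject; DataSet: TCustomClientDataSet);
var
  Rank: Integer;
begin
  DataSet.First;
  while not DataSet.Eof do
  begin
    // Deletes first, edits second, inserts last; the "original values" row and
    // the "changed fields" row of an edit get the same rank so the pair stays together.
    case DataSet.UpdateStatus of
      usDeleted:  Rank := 1;
      usInserted: Rank := 3;
    else
      Rank := 2; // usModified and the unmodified first row of an edit pair
    end;
    DataSet.Edit;
    DataSet.FieldByName('UPDATE_ORDER').AsInteger := Rank;
    DataSet.FieldByName('ORIG_ORDER').AsInteger := DataSet.RecNo;
    DataSet.Post;
    DataSet.Next;
  end;
  // The secondary key preserves the original change order inside each group,
  // which also keeps the two rows of an edit pair adjacent.
  TClientDataSet(DataSet).IndexFieldNames := 'UPDATE_ORDER;ORIG_ORDER';
end;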