I know it's a bit late, but I have found a workaround.
I ran into the same problem while trying to read a schema into a DataSet that has nested relations. The error you get in that case is:
'The same table '{0}' cannot be the child table in two nested relations'
I will share what I have learned.
The dataset operates in two modes, though you CANNOT tell this from the outside:
- (a) I'm a strict/manually created dataset, don't like nested relations
- (b) I'm a container for a serialized object, everything goes.
The dataset you have created is currently an 'a', we want to make it a 'b'.
Which mode it operates in is decided when a DataSet is loaded from XML, among some other considerations.
I spent feverish hours reading the source code of the DataSet to figure out a way to fool it, and I found that MS could fix the problem with just the addition of a property on the dataset and a few additional checks. Check out the source code for DataRelation: http://referencesource.microsoft.com/#System.Data/System/Data/DataRelation.cs,d2d504fafd36cd26,references. The only method we need to fool is 'ValidateMultipleNestedRelations'.
The trick is to fool the dataset into thinking it built all the relationships itself. The only way I found to do that is to actually make the dataset create them, by using serialization.
(We are using this solution in the part of our system where we create output with a DataSet-oriented 3rd party product.)
In outline, what you want to do is:
- Create your dataset in code, including relationships. Try to mimic the MS naming convention (though I'm not sure that's required).
- Serialize your dataset (best with no rows in it).
- Make the serialized dataset look like MS serialized it (I'll expand on this below).
- Read the modified dataset into a new instance.
- Now you can import your rows; MS does not check the relationships, and things should work.
Some experimentation taught me that in this situation, less is more.
If a DataSet reads a schema, and finds NO relationships or Key-Columns, it will operate in mode 'b' otherwise it will work in mode 'a'.
It COULD be possible to still get a 'b'-mode dataset with SOME relationships or Key-Columns present, but that was not relevant for our problem.
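For orientation, the relationship constraints that the cleanup code below strips appear in the serialized schema as `xs:unique` elements. They typically look something like this (table, column, and constraint names here are illustrative, not from the original):

```xml
<xs:unique name="Constraint1" msdata:PrimaryKey="true">
  <xs:selector xpath=".//Parent" />
  <xs:field xpath="Id" />
</xs:unique>
```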
So, here we go, this code assumes you have an extension method 'Serialize' that knows how to handle a dataset.
Assume sourceDataSet is the DataSet with the schema only.
Target will be the actually usable dataset:
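(The 'Serialize' extension is not shown in the original; a minimal sketch of what it could look like, assuming it simply writes the DataSet including its schema to an XML string, is:

```csharp
using System.Data;
using System.IO;

public static class DataSetExtensions
{
    // Assumed implementation: write the DataSet, including its schema,
    // to an XML string so the schema text can be post-processed.
    public static string Serialize(this DataSet dataSet)
    {
        using (var writer = new StringWriter())
        {
            dataSet.WriteXml(writer, XmlWriteMode.WriteSchema);
            return writer.ToString();
        }
    }
}
```

Any equivalent way of producing the schema XML as a string should work just as well.)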
var sourceDataSet = new DataSet();
// todo: create the structure of your dataset (tables, columns, relations) here.
var source = sourceDataSet.Serialize();
var endTagKeyColumn = " msdata:AutoIncrement=\"true\" type=\"xs:int\" msdata:AllowDBNull=\"false\" use=\"prohibited\" /";
var endTagKeyColumnLength = endTagKeyColumn.Length - 1;
var startTagConstraint = "<xs:unique ";
var endTagConstraint = "</xs:unique>";
var endTagConstraintLength = endTagConstraint.Length - 1;
var cleanedUp = new StringBuilder();
var subStringStart = 0;
var subStringEnd = source.IndexOf(endTagKeyColumn);
while (subStringEnd > 0)
{
    // throw away unused key columns.
    while (source[subStringEnd] != '<') subStringEnd--;
    if (subStringEnd - subStringStart > 5)
    {
        cleanedUp.Append(source.Substring(subStringStart, subStringEnd - subStringStart));
    }
    subStringStart = source.IndexOf('>', subStringEnd + endTagKeyColumnLength) + 1;
    subStringEnd = source.IndexOf(endTagKeyColumn, subStringStart);
}
subStringEnd = source.IndexOf(startTagConstraint, subStringStart);
while (subStringEnd > 0)
{
    // throw away relationships.
    if (subStringEnd - subStringStart > 5)
    {
        cleanedUp.Append(source.Substring(subStringStart, subStringEnd - subStringStart));
    }
    subStringStart = source.IndexOf(endTagConstraint, subStringEnd) + endTagConstraintLength;
    subStringEnd = source.IndexOf(startTagConstraint, subStringStart);
}
cleanedUp.Append(source.Substring(subStringStart + 1));
var target = new DataSet();
using (var reader = new StringReader(cleanedUp.ToString()))
{
    target.EnforceConstraints = false;
    target.ReadXml(reader, XmlReadMode.Auto);
}
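Once target is loaded, you can import your rows. A minimal sketch of that last step (assuming a separately populated DataSet with the same table layout; 'populatedSource' is my name, not from the original):

```csharp
// Copy rows from a populated DataSet into the relation-free target.
// 'populatedSource' is a hypothetical DataSet sharing the same table layout.
foreach (DataTable table in populatedSource.Tables)
{
    var targetTable = target.Tables[table.TableName];
    foreach (DataRow row in table.Rows)
    {
        targetTable.ImportRow(row);
    }
}
```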
Note: as I said at the start, I had to fix this problem when loading the dataset; even though you are saving the dataset, the workaround is the same.