
I'm using a Python script to extract and post-process results from an Abaqus FE model, but I'm experiencing inconsistent behaviour when working with data from the odb file. An example is given below.

odbObj = session.openOdb(name=JobName+'.odb', readOnly=True)
step = odbObj.steps['LC5']
# note: 'set' shadows the Python builtin of the same name; kept to match the calls below
set = odbObj.rootAssembly.instances['DETAILEDTOPPLATE-1#TOPPLATE-1'].nodeSets['FD3_N1A']

>>> print step.frames[1].fieldOutputs['S'].getSubset(region=set,position=
   ELEMENT_NODAL,elementType='S8R').bulkDataBlocks[0].data
[[  1.29479978e-42  -2.41047720e+07   0.00000000e+00   3.10530625e+05]
 [ -1.08975990e+07  -2.39987960e+07   0.00000000e+00   3.74051719e+05]
 [ -1.10543630e+07  -2.40516500e+07   0.00000000e+00   3.66518000e+05]
 [ -1.10951790e+07  -2.41662480e+07   0.00000000e+00   3.20761438e+05]]
>>> print step.frames[1].fieldOutputs['S'].getSubset(region=set,position=
   ELEMENT_NODAL,elementType='S8R').bulkDataBlocks[0].data
[[  4.87651866e-43  -2.41047720e+07   0.00000000e+00   3.10530625e+05]
 [ -1.08975990e+07  -2.39987960e+07   0.00000000e+00   3.74051719e+05]
 [ -1.10543630e+07  -2.40516500e+07   0.00000000e+00   3.66518000e+05]
 [ -1.10951790e+07  -2.41662480e+07   0.00000000e+00   3.20761438e+05]]
>>> print step.frames[1].fieldOutputs['S'].getSubset(region=set,position=
   ELEMENT_NODAL,elementType='S8R').bulkDataBlocks[0].data
[[  5.60519386e-45   5.60519386e-45   2.38220739e-44   1.92838405e+31]
 [  5.42138869e-11   1.77519978e+28   1.25672711e-14   3.72739562e+05]
 [ -1.10543630e+07  -2.40516500e+07   0.00000000e+00   3.66518000e+05]
 [ -1.10951790e+07  -2.41662480e+07   0.00000000e+00   3.20761438e+05]]
>>> print step.frames[1].fieldOutputs['S'].getSubset(region=set,position=
ELEMENT_NODAL,elementType='S8R').bulkDataBlocks[0].data
[[  2.24207754e-44   5.60519386e-45   0.00000000e+00   3.10530625e+05]
 [ -1.08975990e+07  -2.39987960e+07   0.00000000e+00   3.74051719e+05]
 [ -1.10543630e+07  -2.40516500e+07   0.00000000e+00   3.66518000e+05]
 [ -1.10951790e+07  -2.41662480e+07   0.00000000e+00   3.20761438e+05]]

As seen above, the arrays are not consistent even though the call is exactly the same, so the data should be identical. I can understand and accept that the really small numbers vary, but numbers of all magnitudes change.

I hope someone can help solve this problem or give a work around.

Thanks in advance.

Additional information based on comments.

Two workarounds have been suggested (the examples below use a different data set than the one above). Method 1 solves the problem.

1) `tmp = x.bulkDataBlocks`, which does the job:

tmp=step.frames[1].fieldOutputs['S'].getSubset(
   region=set,position=ELEMENT_NODAL,elementType='S8R').bulkDataBlocks
print tmp[0].data
[[-20119512.     -7074813.5           0.     -2039073.375]
 [-20130472.     -7037518.            0.     -1930314.125]
 [-20122654.     -6948099.            0.     -2073283.625]
 [-20107980.     -6968545.5           0.     -1941211.375]]
print tmp[0].data
[[-20119512.     -7074813.5           0.     -2039073.375]
 [-20130472.     -7037518.            0.     -1930314.125]
 [-20122654.     -6948099.            0.     -2073283.625]
 [-20107980.     -6968545.5           0.     -1941211.375]]
print tmp[0].data
[[-20119512.     -7074813.5           0.     -2039073.375]
 [-20130472.     -7037518.            0.     -1930314.125]
 [-20122654.     -6948099.            0.     -2073283.625]
 [-20107980.     -6968545.5           0.     -1941211.375]]

2) `tmp = np.copy(x.bulkDataBlocks)`, which produces even more inconsistency:

tmp=np.copy(step.frames[1].fieldOutputs['S'].getSubset(
   region=set,position=ELEMENT_NODAL,elementType='S8R').bulkDataBlocks)
print tmp[0].data
[[  2.24207754e-44   5.60519386e-45   0.00000000e+00  -1.78478850e+06]
 [ -1.63939740e+07  -7.07835200e+06   0.00000000e+00  -1.76956088e+06]
 [ -1.63960690e+07  -7.07548150e+06   0.00000000e+00  -1.79225850e+06]
 [ -1.63969780e+07  -7.07681000e+06   0.00000000e+00  -1.79695375e+06]]
print tmp[0].data
[[  1.68155816e-44   5.60519386e-45   0.00000000e+00  -1.78478850e+06]
 [ -1.63939740e+07  -7.07835200e+06   0.00000000e+00  -1.76956088e+06]
 [ -1.63960690e+07  -7.07548150e+06   0.00000000e+00  -1.79225850e+06]
 [ -1.63969780e+07  -7.07681000e+06   0.00000000e+00  -1.79695375e+06]]
print tmp[0].data
[[  5.60519386e-45   5.60519386e-45   0.00000000e+00   0.00000000e+00]
 [  0.00000000e+00   0.00000000e+00   0.00000000e+00   0.00000000e+00]
 [  0.00000000e+00   0.00000000e+00   0.00000000e+00   0.00000000e+00]
 [  0.00000000e+00   0.00000000e+00   0.00000000e+00   0.00000000e+00]]
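The difference between the two workarounds can be illustrated with plain numpy, outside Abaqus: `np.copy` applied to a sequence of objects copies the object references, not the floats inside each `.data` array. (`FakeBlock` below is a hypothetical stand-in, not part of the Abaqus API.)

```python
import numpy as np

class FakeBlock(object):
    """Hypothetical stand-in for an Abaqus FieldBulkData object."""
    def __init__(self, data):
        self.data = data

blocks = [FakeBlock(np.array([1.0, 2.0, 3.0, 4.0]))]

# np.copy on the sequence of objects gives an object array that still
# holds references to the same blocks; the floats are not copied.
copied = np.copy(blocks)
copied[0].data[0] = -99.0          # mutates the original block's data

# Copying the .data array itself duplicates the floats.
safe = np.copy(blocks[0].data)
safe[0] = 0.0                      # original block is unaffected
```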
Mads B
  • what do you get if you look at `fieldobject.values[i].data`? – agentp Mar 05 '18 at 21:58
  • From values[i].data I get the correct and consistent data, so this seems to be the solution. Thank you! Can you explain the difference between the data in bulkDataBlocks and in values? A problem with values compared to bulkDataBlocks, is with values I cannot tell if the output is on the positive or negative side of the shell elements where I extract data from... – Mads B Mar 06 '18 at 09:56
  • I've never tried `bulkDataBlocks`. Will try it on my own code when I get a chance. – agentp Mar 06 '18 at 11:28
  • I've gotten inconsistent results indexing `bulkDataBlocks` directly as well. My workaround was to set `foo = x.bulkDataBlocks`, then `print foo[0].data`. This seems to indicate that the `__getitem__` method of `bulkDataBlocks` is faulty somehow. – Daniel F Mar 06 '18 at 13:00
  • The workaround with `foo = x.bulkDataBlocks` did the job and I now have consistent data. Thank you so much Daniel F! – Mads B Mar 07 '18 at 12:57
  • There are also other bugs when accessing data with the bulkDataBlocks method. Always copy the returned numpy array (np.copy(...)) or you will soon run into other weird bugs... – max9111 Mar 08 '18 at 12:45
  • I actually experience even more inconsistency when using np.copy. See the example in the edited question above (too many characters to show here) – Mads B Mar 10 '18 at 08:48
  • Oh, that's clear. You np.copy an Abaqus object instead of the data in it. I don't know why this is even possible... Try `tmp = step.frames[1].fieldOutputs['S'].getSubset(region=set, position=ELEMENT_NODAL, elementType='S8R').bulkDataBlocks`, then `tmp_data = np.copy(tmp[0].data)` to get the array. – max9111 Mar 12 '18 at 08:33
  • From the [`C++` docs](http://abaqus.software.polimi.it/v6.14/books/ker/default.htm?startat=pt02ch61pyo05.html#ker-fieldbulkdata-cpp) the `data` attribute of a `fieldBulkData` object is a pointer. Judging by the strange behavior, this is likely a *relative* pointer that's not correctly handled by `python`'s native or `numpy`'s modified `__getitem__` methods, but is correctly handled by `abaqus`'s `sequence` object's `__getitem__` method. But you can't use that method until you have a `sequence`; if you try to index `bulkDataBlocks` directly you use `python`'s `__getitem__`. Maybe? – Daniel F Mar 12 '18 at 08:49
  • Are the values the same if you check them using the CAE/Viewer? – Matt P Mar 13 '18 at 18:53
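Putting the comment suggestions together, a sketch of the pattern that gives consistent data (this only runs inside the Abaqus Python kernel, and assumes the `step`, `set`, and `ELEMENT_NODAL` names from the question are in scope):

```python
import numpy as np

# Hold a reference to the bulkDataBlocks sequence first (Daniel F's
# workaround), then copy the .data array itself rather than the
# sequence of objects (max9111's point).
tmp = step.frames[1].fieldOutputs['S'].getSubset(
    region=set, position=ELEMENT_NODAL, elementType='S8R').bulkDataBlocks
stress = np.copy(tmp[0].data)   # a stable numpy array of the stress values
```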

1 Answer


It's possible you're looking at extrapolation errors (which could be caused by issues with your mesh or analysis setup). Note that ELEMENT_NODAL data is generally extrapolated from the integration points used in the analysis each time the data is requested, unless it has been explicitly stored as a field output for the job.

See the Abaqus Scripting User's Guide (section 10.10.8) for more info:

If the requested field values are not found in the output database at the specified odb_Enum::ELEMENT_NODAL or odb_Enum::CENTROID positions, they are extrapolated from the field data at the odb_Enum::INTEGRATION_POINT position.
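To check whether extrapolation is involved, the same subset can be requested directly at the integration points, where the values are actually stored (a sketch for the Abaqus Python kernel, assuming the names from the question and `from abaqusConstants import *`):

```python
# Integration-point values are read as stored, with no extrapolation
# to the nodes, so they should be repeatable.
ip_field = step.frames[1].fieldOutputs['S'].getSubset(
    region=set, position=INTEGRATION_POINT, elementType='S8R')
```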

MrCMedlin
  • could be. You'd think the extrapolation would be the same if you repeat it though. – agentp Mar 05 '18 at 21:59
  • It might be the problem. I did a small test requesting data only from the integration points, and there it doesn't seem to be a problem. Very small numbers in the range 1e-6 to 1e-50 still change though, but that's OK. As agentp mentions, you'd think the extrapolation would be the same when repeating the call. I think I'll dig a bit deeper into the problem and return with any useful information. – Mads B Mar 06 '18 at 08:10