
I have two DataTables in PowerShell:

$DT1, $DT2

Both tables contain the same columns: (Product) - (Row) - (Stock)

Now I want to combine them into a new DataTable that holds each table's columns side by side:

$DT3:

DT1.Product - DT1.Row - DT1.Stock - DT2.Product - DT2.Row - DT2.Stock

I can do this with a ForEach-Object loop, but each table contains 15,000 rows and it takes forever. For each row from the second table I first need to check whether the Product and Row combination already exists; if not, I create a new row.

Is there a faster way to combine these two tables?

$DT1 | ForEach-Object {
    $Row = $DT3.NewRow()
    $Row.DT1Product = $_.Product
    $Row.DT1Row     = $_.Row
    $Row.DT1Stock   = $_.Stock
    $DT3.Rows.Add($Row)
}
$DT2 | ForEach-Object {
    # Note: $($_.Row) is required here; writing '$_.Row' inside a double-quoted
    # string expands $_ as a whole and appends ".Row" as literal text.
    $dbSearch = $DT3.Select("DT1Row = '$($_.Row)' and DT1Product = '$($_.Product)'")
    if ($dbSearch) {
        $dbSearch[0].DT2Product = $_.Product
        $dbSearch[0].DT2Row     = $_.Row
        $dbSearch[0].DT2Stock   = $_.Stock
    }
    else {
        $Row = $DT3.NewRow()
        $Row.DT2Product = $_.Product
        $Row.DT2Row     = $_.Row
        $Row.DT2Stock   = $_.Stock
        $DT3.Rows.Add($Row)
    }
}
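For reference, the per-row `$DT3.Select()` call is the likely bottleneck: it scans the whole table once for every one of the 15,000 rows in `$DT2`, making the merge roughly quadratic. A common alternative (a sketch only, assuming the column names above and that the values do not contain the `|` delimiter) is to index `$DT3` once in a hashtable, so each lookup is constant time:

```powershell
# Build a hashtable index over $DT3, keyed on Product + Row (single pass)
$index = @{}
foreach ($r in $DT3.Rows) {
    $index["$($r.DT1Product)|$($r.DT1Row)"] = $r
}

foreach ($r2 in $DT2.Rows) {
    $key = "$($r2.Product)|$($r2.Row)"
    if ($index.ContainsKey($key)) {
        # Matching row exists: fill in the DT2 columns on the existing row
        $row = $index[$key]
        $row.DT2Product = $r2.Product
        $row.DT2Row     = $r2.Row
        $row.DT2Stock   = $r2.Stock
    }
    else {
        # No match: add a new row with only the DT2 columns populated
        $row = $DT3.NewRow()
        $row.DT2Product = $r2.Product
        $row.DT2Row     = $r2.Row
        $row.DT2Stock   = $r2.Stock
        $DT3.Rows.Add($row)
    }
}
```

Using plain `foreach` statements instead of pipeline `ForEach-Object` also avoids per-item pipeline overhead, which matters at this row count.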
Proxx

1 Answer


I find that with large databases, staging data changes in small, manageable chunks is a great way to achieve a reliable outcome. From experience, there are a few methods I might use:

1) Export the table's unique field as a reference point for an input file, then administer the changes via a find/select method, say 2,000 records at a time; or

2) Export the entire table(s) into a file; Administer the changes to the exported file; Import the update to the database.

The point I guess I'm trying to make here is, don't limit yourself to the technology you are using to achieve your results. Combine the technology with a deployment process and you open your options up to more effective ways of achieving the desired outcome. Just a thought.
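As a rough illustration of option 2, the export/modify/import round trip might look like the following (a sketch only; the file names are placeholders, and the column names are the ones assumed from the question; `Export-Csv` and `Import-Csv` are standard PowerShell cmdlets):

```powershell
# Export both DataTables to CSV (DataRow objects expose columns as properties)
$DT1 | Select-Object Product, Row, Stock | Export-Csv 'dt1.csv' -NoTypeInformation
$DT2 | Select-Object Product, Row, Stock | Export-Csv 'dt2.csv' -NoTypeInformation

# ... administer the changes to the exported files here,
#     e.g. merge them with an external tool or a separate script ...

# Re-import the merged result for loading back into the database
$merged = Import-Csv 'dt3.csv'
```

Working on flat files also makes it easy to verify the merged output before committing it back.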