Background: I'm working through different techniques for retrieving remote directory information over FTP from within PowerShell.
The script below seems to be a popular way of retrieving remote directory details using the 'off-the-shelf' FTP ListDirectoryDetails method that .NET exposes to PowerShell (link1, link2, link3), and it works well.
$server = "ftp://servername"
$ftp = [System.Net.FtpWebRequest] [System.Net.WebRequest]::Create($server)
$ftp.Method = [System.Net.WebRequestMethods+Ftp]::ListDirectoryDetails

$response = $ftp.GetResponse()
$stream = $response.GetResponseStream()

$buffer = New-Object System.Byte[] 1024
$encoding = New-Object System.Text.AsciiEncoding
$outputBuffer = ""
$foundMore = $false

## Read all the data available from the stream, writing it to the
## output buffer when done.
do
{
    ## Allow data to buffer for a bit
    Start-Sleep -m 1000

    ## Read what data is available
    $foundMore = $false
    $stream.ReadTimeout = 1000

    do
    {
        try
        {
            $read = $stream.Read($buffer, 0, 1024)
            if ($read -gt 0)
            {
                $foundMore = $true
                $outputBuffer += $encoding.GetString($buffer, 0, $read)
            }
        } catch { $foundMore = $false; $read = 0 }
    } while ($read -gt 0)
} while ($foundMore)

$stream.Close()
$response.Close()

$outputBuffer
As the outer do loop starts, a delay is imposed (Start-Sleep -m 1000). When multiple folders are being retrieved, that delay is incurred once per folder, so the total wait grows with the number of folders, and the shorter the delay, the faster the overall download. I dropped it to 100 ms, a ten-fold improvement, and it worked fine. But not being a guru in these things, I don't really understand why the delay is needed, nor the consequences of making it too short.
Can someone explain the mechanism we're dealing with here?
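For reference, a simpler variant that seems to avoid the sleep entirely would wrap the response stream in a StreamReader and block on ReadToEnd until the server closes the data connection. A minimal sketch of that idea (using the same placeholder server name as above, and untested against a real server) would look like this:

$server = "ftp://servername"   # same placeholder server as the script above
$ftp = [System.Net.FtpWebRequest] [System.Net.WebRequest]::Create($server)
$ftp.Method = [System.Net.WebRequestMethods+Ftp]::ListDirectoryDetails

$response = $ftp.GetResponse()
$reader = New-Object System.IO.StreamReader($response.GetResponseStream())

## ReadToEnd blocks until the server closes the data connection,
## so no sleep/poll loop or ReadTimeout handling is needed here.
$listing = $reader.ReadToEnd()

$reader.Close()
$response.Close()

$listing

Whether that blocking read is subject to the same timing problem the sleep appears to guard against is part of what I'm trying to understand.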