I don't have a decent text editor on this server, but I need to see what's causing an error on line 10 of a certain file. I do have PowerShell though...
- What about `(get-content myfile.txt)[9]`? – CB. Feb 07 '13 at 19:47
- Yes, the problem is that with big files this can be really slow, because the whole file is read before returning the [index]. – CB. Feb 07 '13 at 20:10
- I tried `(get-content myfile.txt)[9]` in Windows PowerShell. – Steve Staple Nov 11 '15 at 10:46
7 Answers
It's as easy as using select:
Get-Content file.txt | Select-Object -Index (line - 1)
E.g., to get line 5:
Get-Content file.txt | Select-Object -Index 4
Or you can use:
(Get-Content file.txt)[4]

- Wouldn't this retrieve the whole content into memory first, which is bad? – Stacker Oct 05 '18 at 10:19
- See C.B.'s answer for performance stats (time, not memory). If you're working with large files, then this is inefficient. If you're working with just a few small files, then performance isn't that important; sometimes clean code is more important. This answer is 5 years old - things have changed in PowerShell as well. – Frode F. Oct 05 '18 at 14:24
- @Stacker [@Lance's answer](https://stackoverflow.com/a/14760194/2518317) is the correct answer (so long as the line you need is toward the start of the file). Basically equivalent to *nix `head -n`. – Hashbrown Jun 22 '23 at 18:47
This will show the 10th line of myfile.txt:
get-content myfile.txt | select -first 1 -skip 9
Both `-first` and `-skip` are optional parameters, and `-context` or `-last` may be useful in similar situations.
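For example (a small sketch against the same hypothetical myfile.txt; `-skip` and `-first` combine, and `-last` counts from the end of the file):
get-content myfile.txt | select -last 1           # last line of the file
get-content myfile.txt | select -skip 7 -first 5  # lines 8 through 12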

- This will work well for small files. Unless something has changed, `Get-Content` will read the entire file into memory. That does not always work well with large files. – lit Oct 23 '17 at 23:47
You can use the `-TotalCount` parameter of the `Get-Content` cmdlet to read the first `n` lines, then use `Select-Object` to return only the `n`th line:
Get-Content file.txt -TotalCount 9 | Select-Object -Last 1;
Per the comment from @C.B. this should improve performance by only reading up to and including the `n`th line, rather than the entire file. Note that you can use the aliases `-First` or `-Head` in place of `-TotalCount`.
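For example, the same pattern using the alias (a minimal sketch, assuming PowerShell 3.0 or later, where the aliases exist):
Get-Content file.txt -First 10 | Select-Object -Last 1   # 10th line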

Just for fun, here are some tests:
# Added this for @Graimer's request ;) (not the same computer, but one with a slightly
# more performant HD...)
> measure-command { Get-Content ita\ita.txt -TotalCount 260000 | Select-Object -Last 1 }
Days : 0
Hours : 0
Minutes : 0
Seconds : 28
Milliseconds : 893
Ticks : 288932649
TotalDays : 0,000334412788194444
TotalHours : 0,00802590691666667
TotalMinutes : 0,481554415
TotalSeconds : 28,8932649
TotalMilliseconds : 28893,2649
> measure-command { (gc "c:\ps\ita\ita.txt")[260000] }
Days : 0
Hours : 0
Minutes : 0
Seconds : 9
Milliseconds : 257
Ticks : 92572893
TotalDays : 0,000107144552083333
TotalHours : 0,00257146925
TotalMinutes : 0,154288155
TotalSeconds : 9,2572893
TotalMilliseconds : 9257,2893
> measure-command { ([System.IO.File]::ReadAllLines("c:\ps\ita\ita.txt"))[260000] }
Days : 0
Hours : 0
Minutes : 0
Seconds : 0
Milliseconds : 234
Ticks : 2348059
TotalDays : 2,71766087962963E-06
TotalHours : 6,52238611111111E-05
TotalMinutes : 0,00391343166666667
TotalSeconds : 0,2348059
TotalMilliseconds : 234,8059
> measure-command {get-content .\ita\ita.txt | select -index 260000}
Days : 0
Hours : 0
Minutes : 0
Seconds : 36
Milliseconds : 591
Ticks : 365912596
TotalDays : 0,000423509949074074
TotalHours : 0,0101642387777778
TotalMinutes : 0,609854326666667
TotalSeconds : 36,5912596
TotalMilliseconds : 36591,2596
The winner is: `([System.IO.File]::ReadAllLines( path ))[index]`
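One caveat worth adding when calling the .NET method directly (a hedged note, not part of the benchmark above): static `[System.IO.File]` calls resolve relative paths against the process working directory rather than PowerShell's current location, so it is safest to pass an absolute path, for example:
([System.IO.File]::ReadAllLines((Convert-Path .\ita\ita.txt)))[260000]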
- How about @Bacon's answer, since you already have a sample file? :-) – Frode F. Feb 08 '13 at 09:30
- @Graimer Added :). All these tests are intended for seeking a big index in a big file; I think that for small index values the results may vary. Each test was done in a new PowerShell session to avoid HD pre-caching effects. – CB. Feb 08 '13 at 09:58
- I'm really surprised that `ReadAllLines()` is not only faster, but so much faster than the two uses of `Get-Content`. As the name suggests, it's reading the entire file, too. Anyway, I posted another approach, if you want to try that one, too. Also, whenever I use `Measure-Command` to benchmark code I usually run it like this `1..10 | % { Measure-Command { ... } } | Measure-Object TotalMilliseconds -Average -Min -Max -Sum;` so I can get a more accurate number from multiple test runs. – Lance U. Matthews Feb 08 '13 at 16:22
- I get this error: ```Exception calling "ReadAllLines" with "1" argument(s): "Array dimensions exceeded supported range." At line:1 char:1``` – Nate Anderson Jan 23 '21 at 02:36
Here's a function that uses .NET's `System.IO` classes directly:
function GetLineAt([String] $path, [Int32] $index)
{
    [System.IO.FileMode] $mode = [System.IO.FileMode]::Open;
    [System.IO.FileAccess] $access = [System.IO.FileAccess]::Read;
    [System.IO.FileShare] $share = [System.IO.FileShare]::Read;
    [Int32] $bufferSize = 16 * 1024;
    [System.IO.FileOptions] $options = [System.IO.FileOptions]::SequentialScan;
    [System.Text.Encoding] $defaultEncoding = [System.Text.Encoding]::UTF8;

    # FileStream(String, FileMode, FileAccess, FileShare, Int32, FileOptions) constructor
    # http://msdn.microsoft.com/library/d0y914c5.aspx
    [System.IO.FileStream] $stream = New-Object `
        -TypeName 'System.IO.FileStream' `
        -ArgumentList ($path, $mode, $access, $share, $bufferSize, $options);

    # StreamReader(Stream, Encoding, Boolean, Int32) constructor
    # http://msdn.microsoft.com/library/ms143458.aspx
    [System.IO.StreamReader] $reader = New-Object `
        -TypeName 'System.IO.StreamReader' `
        -ArgumentList ($stream, $defaultEncoding, $true, $bufferSize);

    [String] $line = $null;
    [Int32] $currentIndex = 0;

    try
    {
        while (($line = $reader.ReadLine()) -ne $null)
        {
            if ($currentIndex++ -eq $index)
            {
                return $line;
            }
        }
    }
    finally
    {
        # Closing $reader also closes the underlying $stream
        $reader.Close();
    }

    # There are fewer than ($index + 1) lines in the file
    return $null;
}
GetLineAt 'file.txt' 9;
Tweaking the `$bufferSize` variable might affect performance. A more concise version that uses default buffer sizes and doesn't provide optimization hints could look like this:
function GetLineAt([String] $path, [Int32] $index)
{
    # StreamReader(String, Boolean) constructor
    # http://msdn.microsoft.com/library/9y86s1a9.aspx
    [System.IO.StreamReader] $reader = New-Object `
        -TypeName 'System.IO.StreamReader' `
        -ArgumentList ($path, $true);

    [String] $line = $null;
    [Int32] $currentIndex = 0;

    try
    {
        while (($line = $reader.ReadLine()) -ne $null)
        {
            if ($currentIndex++ -eq $index)
            {
                return $line;
            }
        }
    }
    finally
    {
        $reader.Close();
    }

    # There are fewer than ($index + 1) lines in the file
    return $null;
}
GetLineAt 'file.txt' 9;

- Overengineering: See BACON's solution on SO for a quick way to read a text file. :) – northben Feb 08 '13 at 16:30
- I stumbled on this question while looking for how to do this for a **large** file - exactly what I needed. – Tao Oct 22 '13 at 13:08
- @Tao Thank you. Glad _someone_ found this useful. Sometimes the built-in PowerShell cmdlets don't give you the control or efficiency you need, especially, like you said, when working with large files. – Lance U. Matthews Oct 22 '13 at 15:50
- +1 for Northben's (funny) explanation of overengineering. +1 for Bacon for his effort. – prabhakaran Mar 26 '14 at 12:27
To reduce memory consumption and to speed up the search, you may use the `-ReadCount` option of the `Get-Content` cmdlet (https://technet.microsoft.com/ru-ru/library/hh849787.aspx). This may save hours when you're working with large files.
Here is an example:
$n = 60699010
$src = 'hugefile.csv'
$batch = 100
$timer = [Diagnostics.Stopwatch]::StartNew()
$count = 0
Get-Content $src -ReadCount $batch -TotalCount $n | % {
    $count += $_.Length
    if ($count -ge $n) {
        $_[($n - $count + $_.Length - 1)]
    }
}
$timer.Stop()
$timer.Elapsed
This prints the `$n`'th line and the elapsed time.
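To see why the index expression finds the right line (a small worked example with hypothetical numbers): suppose `$n = 250` and `$batch = 100`. After the third batch of lines arrives, `$count` is 300, so the condition fires; the index evaluates to `250 - 300 + 100 - 1 = 49`, i.e. the 50th line of that 100-line batch, which is line 250 of the file.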

I know that this is an old question, but despite being one of the most viewed ones for this topic, none of the answers are completely satisfactory to me. `Get-Content` is easy to use, but it shows its limitations when it comes to really large text files (e.g. >5 GB).
I figured out a solution which does not require the entire file to be loaded into main memory and that is way faster than `Get-Content` (almost as fast as `sed` on Linux), like this:
[Linq.Enumerable]::ElementAt([System.IO.File]::ReadLines("<path_to_file>"), <index>)
This takes about 4 seconds on my machine to find a line in the middle of a ~4.5 GB file, whereas `(Get-Content -Path <path_to_file> -TotalCount <index>)[-1]` takes about 35 seconds.
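If this comes up often, the one-liner can be wrapped in a small helper (a sketch with a hypothetical function name; `Convert-Path` is used because the .NET call resolves relative paths against the process working directory, not PowerShell's current location):
function Get-LineAt([string] $Path, [int] $Index) {
    # File.ReadLines streams lazily, so only the first ($Index + 1) lines are read
    $fullPath = Convert-Path -LiteralPath $Path
    [Linq.Enumerable]::ElementAt([System.IO.File]::ReadLines($fullPath), $Index)
}

Get-LineAt file.txt 9   # zero-based index, i.e. the 10th line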
