Depends on your approach. But if the files are really that big, you probably don't want Excel to load the entire file. So, you'll probably open the file and read it line by line without knowing upfront how big it is or how many rows it has. In that case it's probably easiest to just keep two lines at a time in two separate string variables, rolling them forward as you read. As soon as you hit the last row you can exit the loop - as shown above in your code - and output not only the last row (as is already done in your code) but also the content of the second last row in that file.
Public Sub GetSecondLastRow()

    Dim strSecondLastLine As String
    Dim strFileToImport As String
    Dim strLastLine As String
    Dim intPointer As Integer

    strFileToImport = ThisWorkbook.Path & IIf(InStr(1, ThisWorkbook.Path, "\") > 0, "\", "/") & "MyTextFile.txt"
    intPointer = FreeFile()

    Open strFileToImport For Input Access Read Lock Read As #intPointer
    Do Until EOF(intPointer)
        strSecondLastLine = strLastLine  'roll the buffer: the previous line becomes the second last
        Line Input #intPointer, strLastLine
    Loop
    Close #intPointer

    Debug.Print "Content of the second last row:"
    Debug.Print "---------------------------------------"
    Debug.Print strSecondLastLine
    Debug.Print "---------------------------------------"
    Debug.Print "Content of the last row:"
    Debug.Print "---------------------------------------"
    Debug.Print strLastLine

End Sub
The alternative would be to first query the file for its row count and then fetch the second last record using ADO. But I doubt that would be faster. The problem with ADO is that you get a huge recordset back containing the entire text file. This is due to the fact that you have no WHERE restriction in the clause SELECT * FROM MyTextFile.txt. So the entire text file goes into memory before you can do anything with it. Then - of course - you can check the RecordCount and walk through all records with a forward-only cursor until you hit the second last row. Unfortunately, ADO does not support row_number() over (order by @@ROWCOUNT). Otherwise, you could first get the row count with SELECT count(1) FROM MyTextFile.txt and then afterwards fetch only the applicable row.
So, in any case, I am almost certain (without having tested it) that ADO will perform below par, and the first solution is the way to go if the text files are as big as you say. If you still prefer ADO, then this is the code for that (based on the following SO question / answer: Copying text from .txt file in Excel using ADO ignores first row).
Sub ImportTextFile()
    'Gets the second last row of a text file into Excel using ADO:
    'first query the row count, then walk the recordset to that row.

    Dim strFilePath As String, strFilename As String, strFullPath As String
    Dim oConn As ADODB.Connection
    Dim oRS As ADODB.Recordset
    Dim oFSObj As Object

    'Get a text file name
    strFullPath = Application.GetOpenFilename("Text Files (*.txt),*.txt", , "Please select text file...")
    If strFullPath = "False" Then Exit Sub 'User pressed Cancel on the open file dialog

    'This gives us a full path name e.g. C:\temp\folder\file.txt
    'We need to split this into path and file name
    Set oFSObj = CreateObject("SCRIPTING.FILESYSTEMOBJECT")
    strFilePath = oFSObj.GetFile(strFullPath).ParentFolder.Path
    strFilename = oFSObj.GetFile(strFullPath).Name

    'Open an ADO connection to the folder specified
    Set oConn = New ADODB.Connection
    oConn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
               "Data Source=" & strFilePath & ";" & _
               "Extended Properties=""text;HDR=No;FMT=Delimited"""

    'First query: get the row count and write it to A1
    Set oRS = New ADODB.Recordset
    oRS.Open "SELECT count(1) FROM [" & strFilename & "]", oConn, 3, 1, 1
    Range("A1").CopyFromRecordset oRS
    oRS.Close

    'Second query: open the text file itself and walk to the second last row
    Set oRS = New ADODB.Recordset
    oRS.Open "SELECT * FROM [" & strFilename & "]", oConn, 3, 1, 1
    While Not oRS.EOF And Not oRS.BOF
        If oRS.AbsolutePosition = Range("A1").Value2 - 1 Then 'second last row (the last row would be = Value2)
            Range("A2").Value = oRS.Fields(0).Value
        End If
        oRS.MoveNext
    Wend

    oRS.Close
    oConn.Close
End Sub