The problem indeed was the size of the xml file. However, I may have a solution, which involves multiple steps: slice the large xml file into smaller pieces (e.g., 10 files of roughly 100 Placemark elements each) and have R iterate through each of the subsetted xml files.
STEP 1: XSL SCRIPTING
To slice the file, you can run XSL transformation stylesheets (xsl files) that select Placemark elements at successive positions.
FIRST 100 Placemarks
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
  <xsl:template match="/">
    <xsl:element name="kml">
      <xsl:for-each select="(//Placemark)[100 >= position()]">
        <xsl:copy-of select="."/>
      </xsl:for-each>
    </xsl:element>
  </xsl:template>
</xsl:stylesheet>
NEXT 101 - 200 Placemarks
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
  <xsl:template match="/">
    <xsl:element name="kml">
      <xsl:for-each select="(//Placemark)[(200 >= position()) and (position() >= 101)]">
        <xsl:copy-of select="."/>
      </xsl:for-each>
    </xsl:element>
  </xsl:template>
</xsl:stylesheet>
...And so on, up to the 1016th Placemark (10 stylesheets in total).
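If you would rather not type all ten stylesheets by hand, a short R script can generate them from a template. This is an untested sketch of my own; the slice_100.xsl ... slice_1016.xsl file names are just illustrative.

# Generate the ten slicing stylesheets from a single template (sketch)
template <- '<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
  <xsl:template match="/">
    <xsl:element name="kml">
      <xsl:for-each select="(//Placemark)[(%d >= position()) and (position() >= %d)]">
        <xsl:copy-of select="."/>
      </xsl:for-each>
    </xsl:element>
  </xsl:template>
</xsl:stylesheet>'

upper <- c(seq(100, 900, by = 100), 1016)   # upper position of each slice
lower <- seq(1, 901, by = 100)              # lower position of each slice

for (i in seq_along(upper)) {
  writeLines(sprintf(template, upper[i], lower[i]),
             sprintf("slice_%d.xsl", upper[i]))
}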
STEP 2: XML TRANSFORMATION
Next, use R, Python, PHP, VBA, or some other language to transform the large xml with the 10 separate xsl files above. R has an xslt package that can do this (I used Python's lxml module). In your code, after each transformation, save each smaller xml file separately (~1.5 MB each): PrimaryCatchments_100.xml, PrimaryCatchments_200.xml, etc. Each file will look like its larger 18 MB parent but contain only 100 Placemarks.
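If you want to stay entirely in R, something along these lines should work with the xml2 and xslt packages. This is an untested sketch: PrimaryCatchments.xml stands in for whatever your large source file is called, and slice_*.xsl follows the stylesheet naming from the sketch above.

library(xml2)
library(xslt)                                      # CRAN package for XSLT 1.0 transforms

big    <- read_xml("PrimaryCatchments.xml")        # assumed name of the 18 MB source file
slices <- c(seq(100, 900, by = 100), 1016)

for (n in slices) {
  style   <- read_xml(sprintf("slice_%d.xsl", n))  # stylesheet for this slice
  smaller <- xml_xslt(big, style)                  # apply the transform
  write_xml(smaller, sprintf("PrimaryCatchments_%d.xml", n))
}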
STEP 3: R XML PARSING
Then, have R parse and extract each smaller xml file (consider a loop or an lapply over the file names; a sketch follows the example below).
library(XML)

# parse one sliced file
doc <- xmlTreeParse("PrimaryCatchments_100.xml", options = OLDSAX)
top <- xmlRoot(doc)
# pull the value of every child node of each Placemark
extract <- xmlSApply(top, function(x) xmlSApply(x, xmlValue))
# reshape into a character data frame and write it out as csv
extract_df <- data.frame(t(extract), row.names = NULL)
my.df <- data.frame(lapply(extract_df, as.character), stringsAsFactors = FALSE)
write.csv(my.df, "Extract_100.csv", row.names = FALSE)
...
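Rather than repeating that block ten times, the same steps can sit inside an lapply over the slice numbers. Untested sketch, reusing the file-name pattern above:

# Loop version: parse each sliced file and write its own csv extract
library(XML)

slices <- c(seq(100, 900, by = 100), 1016)

invisible(lapply(slices, function(n) {
  doc <- xmlTreeParse(sprintf("PrimaryCatchments_%d.xml", n), options = OLDSAX)
  top <- xmlRoot(doc)
  extract <- xmlSApply(top, function(x) xmlSApply(x, xmlValue))
  extract_df <- data.frame(t(extract), row.names = NULL)
  my.df <- data.frame(lapply(extract_df, as.character), stringsAsFactors = FALSE)
  write.csv(my.df, sprintf("Extract_%d.csv", n), row.names = FALSE)
}))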
STEP 4: R CSV APPENDING
Finally, use R to append all 10 csv extract files together. Note that the maximum character count of an Excel cell is 32,767, and some coordinate strings ran past 54,000 characters, so they spill into extra cells when the csv is opened in a spreadsheet such as Excel.
extract100df<-read.csv("Extract_100.csv")
extract200df<-read.csv("Extract_200.csv")
extract300df<-read.csv("Extract_300.csv")
extract400df<-read.csv("Extract_400.csv")
extract500df<-read.csv("Extract_500.csv")
extract600df<-read.csv("Extract_600.csv")
extract700df<-read.csv("Extract_700.csv")
extract800df<-read.csv("Extract_800.csv")
extract900df<-read.csv("Extract_900.csv")
extract1016df<-read.csv("Extract_1016.csv")
finaldf <- rbind(extract100df, extract200df, extract300df, extract400df, extract500df,
extract600df, extract700df, extract800df, extract900df, extract1016df)
write.csv(finaldf, "FinalExtractData.csv", row.names=FALSE)
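The reading and appending can also be collapsed into a couple of lines with lapply and do.call, if you prefer (sketch):

# Read every extract csv and bind the rows in one pass
slices  <- c(seq(100, 900, by = 100), 1016)
finaldf <- do.call(rbind, lapply(sprintf("Extract_%d.csv", slices), read.csv))
write.csv(finaldf, "FinalExtractData.csv", row.names = FALSE)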