Thanks in advance for the help. Essentially, I was experimenting with pulling data off websites when I ran across this one: http://lib.stat.cmu.edu/datasets/sleep. I proceeded in the following fashion:
(A) Get a sense of the data (in R): I essentially typed the following:
readLines("http://lib.stat.cmu.edu/datasets/sleep", n=100)
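To make it easier to see where the descriptive header ends, I printed each line alongside its index, roughly like this:
first_lines <- readLines("http://lib.stat.cmu.edu/datasets/sleep", n = 60)
# print each line with its number so the start of the data block is easy to spot
cat(paste0(seq_along(first_lines), ": ", first_lines), sep = "\n")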
(B) I noticed that the data I actually want starts on the 51st line, so I wrote this code:
sleep_table <- read.table("http://lib.stat.cmu.edu/datasets/sleep", header=FALSE, skip=50)
(C) I got the following error:
Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings, :
line 1 did not have 14 elements
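If it matters, my guess is that the multi-word species names are being split into a varying number of fields, so the rows don't all end up with 14 elements; something like count.fields should show whether that's the case (I haven't verified this):
# count whitespace-separated fields on each line after the header block;
# uneven counts would explain the "did not have 14 elements" error
field_counts <- count.fields("http://lib.stat.cmu.edu/datasets/sleep", skip = 50)
table(field_counts)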
I got the above approach from another Stack Overflow question (import dat file into R). However, that question deals with a local .dat file, while mine concerns data sitting at a particular URL. What I'd like to know is: how do I get the data from line 51 down (as counted by readLines) into a data frame with no headers? (I'll add those in later with colnames(sleep_table) <- c("etc.", "etc2", "etc3"...).)
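In case it helps, this is roughly the kind of thing I had in mind, though I suspect it will still choke on rows with extra fields (fill = TRUE pads short rows, but I don't think it copes with rows longer than read.table expects):
# drop the 50 header lines and feed the remainder back to read.table as text
raw <- readLines("http://lib.stat.cmu.edu/datasets/sleep")
sleep_table <- read.table(text = raw[51:length(raw)], header = FALSE, fill = TRUE)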