Here is something quick to give you a start. Given your few time points, I would use a linear model. I assume that absolute differences of OM are sensible here, i.e., that samples are normalized in some meaningful way. You might need to work with relative values instead (and could possibly even need a GLMM in that case?).
library(data.table)
DT <- fread("Untitled spreadsheet.csv")
setnames(DT, make.names(names(DT)))  # make column names syntactically valid
DT[, DiffOM := OM.at.collection..g. - Original.OM..T0...g.]
DT <- DT[-1]  # drop the first row

library(ggplot2)
p <- ggplot(DT, aes(x = Day.of.collection, y = DiffOM, color = Plot)) +
  geom_point() +
  facet_wrap(~ Sample.type, ncol = 1)
print(p)

Some people advise fitting random effects only when a larger number of groups is available, but I generally also trust models with few groups if the resulting fit seems reasonable. Of course, you shouldn't put too much trust in the variance estimate of the random effect in such a case. Alternatively, you could treat Plot as a fixed effect, but your model would then need two more parameters. However, we are usually not that interested in plot differences and prefer to concentrate on treatment effects. YMMV.
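If you do go the fixed-effect route, it is a one-liner with `lm()` (a sketch; as noted, `Plot` then spends two extra parameters on the plot contrasts):

```r
# treat Plot as a fixed effect instead of a random intercept
fit_fe <- lm(DiffOM ~ Day.of.collection * Sample.type + Plot, data = DT)
summary(fit_fe)
```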
library(lmerTest)
fit1 <- lmer(DiffOM ~ Day.of.collection * Sample.type + (1 | Plot), data = DT)
fit2 <- lmer(DiffOM ~ Day.of.collection * Sample.type + (Day.of.collection | Plot), data = DT)
lme4:::anova.merMod(fit1, fit2)  # likelihood-ratio comparison using lme4's anova method
# the random slope doesn't really improve the model
fit3 <- lmer(DiffOM ~ Day.of.collection * Sample.type + (Sample.type | Plot), data = DT)
lme4:::anova.merMod(fit1, fit3)
# including Sample.type as a random slope doesn't either
summary(fit1)
# apparently the interactions are far from significant
fit1a <- lmer(DiffOM ~ Day.of.collection + Sample.type + (1 | Plot), data = DT)
lme4:::anova.merMod(fit1, fit1a)
plot(fit1a)
# residuals seem more or less okay, with the possible exception of samples showing little degradation
# you could try a variance structure as implemented in package nlme
anova(fit1a)
#Analysis of Variance Table of type III with Satterthwaite
#approximation for degrees of freedom
#                  Sum Sq Mean Sq NumDF DenDF F.value    Pr(>F)
#Day.of.collection 3909.4  3909.4     1   102 222.145 < 2.2e-16 ***
#Sample.type        452.4   226.2     2   102  12.853 1.051e-05 ***
#---
#Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
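A sketch of the variance-structure idea mentioned above, assuming package nlme is acceptable (here `varIdent` allows a separate residual variance per sample type; `varPower` on the fitted values would be another option):

```r
library(nlme)
# refit the reduced model, letting residual variance differ by Sample.type
fitv <- lme(DiffOM ~ Day.of.collection + Sample.type,
            random = ~ 1 | Plot,
            weights = varIdent(form = ~ 1 | Sample.type),
            data = DT)
# compare against the constant-variance fit via AIC/BIC or anova()
```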
Apparently degradation rates before your samplings differed between sample types (see the different intercepts by sample type), which would imply non-linear rates (as we would expect). A linear model of the differences implies constant absolute degradation rates.
summary(fit1a)
newdat <- expand.grid(Day.of.collection = seq(28, 84, by = 1),
                      Plot = c("A", "B", "C"),
                      Sample.type = c("X", "Y", "Z"))
newdat$pred  <- predict(fit1a, newdata = newdat)                # plot-level predictions
newdat$pred0 <- predict(fit1a, newdata = newdat, re.form = NA)  # population-level predictions
p +
  geom_line(data = newdat, aes(y = pred, size = "subjects")) +
  geom_line(data = newdat, aes(y = pred0, size = "population", color = NULL)) +
  scale_size_manual(name = "Level",
                    values = c("subjects" = 0.5, "population" = 1.5))
