I'll recommend two relatively simple measures that might help you classify a word sequence as sentence/non-sentence. Unfortunately, I don't know how well SharpNLP will handle either. More complete toolkits exist in Java, Python, and C++ (LingPipe, Stanford CoreNLP, GATE, NLTK, OpenGRM, ...)
Language-model probability: Train a language model on sentences with start and stop tokens marking the beginning and end of each sentence. Compute the probability of your target sequence under that language model. Grammatical and/or semantically sensible word sequences will score much higher than random word sequences. This approach should work with a standard n-gram model, a discriminative conditional probability model, or pretty much any other language modeling approach, but definitely start with a basic n-gram model.
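As a rough sketch of that idea, here's a bare-bones bigram model with add-one smoothing in plain Python (the toy corpus and function names are my own invention, not anything from SharpNLP or NLTK; a real setup would train on far more data and use better smoothing):

```python
import math
from collections import Counter

START, STOP = "<s>", "</s>"

def train_bigram_lm(sentences):
    """Count bigrams over sentences padded with start/stop tokens."""
    bigrams, contexts, vocab = Counter(), Counter(), {STOP}
    for sent in sentences:
        tokens = [START] + sent + [STOP]
        vocab.update(sent)
        for prev, word in zip(tokens, tokens[1:]):
            bigrams[(prev, word)] += 1
            contexts[prev] += 1
    return bigrams, contexts, vocab

def avg_logprob(tokens, bigrams, contexts, vocab):
    """Add-one smoothed log probability, normalized per transition."""
    padded = [START] + tokens + [STOP]
    V = len(vocab)  # possible next words (STOP included)
    logp = sum(
        math.log((bigrams[(prev, word)] + 1) / (contexts[prev] + V))
        for prev, word in zip(padded, padded[1:])
    )
    return logp / (len(padded) - 1)

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat saw the dog".split(),
]
bi, ctx, vocab = train_bigram_lm(corpus)
good = avg_logprob("the cat sat on the rug".split(), bi, ctx, vocab)
bad = avg_logprob("rug the on sat cat the".split(), bi, ctx, vocab)
# the sentence-like sequence gets a higher average log probability
```

The per-transition normalization matters here for the same reason it does in the parsing approach: raw sequence probability decays with length, so unnormalized scores would unfairly penalize longer sentences.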
Parse tree probability: Similarly, you can measure the inside probability of the recovered constituency structure (e.g., via a probabilistic context-free grammar parse). More grammatical sequences (i.e., those more likely to form a complete sentence) will receive higher inside probabilities. You will probably get better results if you normalize by the sequence length (the same may apply to the language-modeling approach as well).
I've seen preliminary (but unpublished) results on tweets that suggest a bimodal distribution of normalized probabilities: tweets judged more grammatical by human annotators tended to fall within the higher peak, while those judged less grammatical clustered in the lower one. But I don't know how well those results would hold up in a larger or more formal study.