Assuming that our brains do not have access to a metaphysical cloud server, meaning is represented as a configuration of neuronal connections, hormonal levels, electrical activity -- maybe even quantum fluctuations -- and the interaction between all of these, the outside world, and other brains. So there is good news: at least we know that there is one answer to your question (meaning is represented somewhere, somehow). The bad news is that most of us have no idea how this works, and those who think they do understand haven't been able to convince the others, or each other. Being one of the clueless, I can't give you the answer to your question, but I can provide a list of answers I have come across to smaller, degenerate versions of the grand problem.
If you want to represent the meaning of lexical entities (e.g., concepts, actions) you can use distributional models such as vector space models. In these models meaning usually has a geometric component: each concept is represented as a vector, and concepts are placed in a space in such a way that similar concepts are close to each other. A very common way to construct such a space is to pick a set of commonly used words (basis words) as the dimensions of the space and simply count how many times a target concept is observed together with these basis words in speech/text. Similar concepts are used in similar contexts; thus, their vectors will point in similar directions. On top of that you can apply a range of weighting, normalization, dimensionality reduction and recombination techniques (e.g., tf-idf, http://en.wikipedia.org/wiki/Pointwise_mutual_information, SVD). A related, but probabilistic -- rather than geometric -- approach is latent Dirichlet allocation and other generative/Bayesian models, which are already mentioned in another answer.
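To make the idea concrete, here is a minimal sketch of a count-based vector space: each target word is represented by how often it co-occurs with a handful of basis words, and similarity is the cosine of the angle between the resulting vectors. The tiny corpus, the basis words, and the helper names are all invented for illustration; real systems use large corpora plus the weighting tricks mentioned above.

```python
from collections import Counter
import math

# Toy corpus; in practice this would be millions of sentences.
corpus = [
    "the cat chased the mouse across the kitchen floor",
    "the dog chased the cat around the yard",
    "the senator proposed a new tax policy in parliament",
    "parliament debated the policy and the tax reform",
]

basis_words = ["chased", "kitchen", "yard", "policy", "tax", "parliament"]
targets = ["cat", "dog", "senator"]

def context_vector(target, corpus, basis_words, window=4):
    """Count how often each basis word appears near the target word."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == target:
                lo, hi = max(0, i - window), i + window + 1
                for ctx in tokens[lo:i] + tokens[i + 1:hi]:
                    if ctx in basis_words:
                        counts[ctx] += 1
    return [counts[b] for b in basis_words]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

vectors = {t: context_vector(t, corpus, basis_words) for t in targets}
print(cosine(vectors["cat"], vectors["dog"]))      # high: they occur in similar contexts
print(cosine(vectors["cat"], vectors["senator"]))  # low: very different contexts
```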
The vector space model approach is good for discriminative purposes. You can decide whether two given phrases are semantically related or not (for example, matching queries to documents, or finding similar search query pairs to help the user expand their query). But it is not very straightforward to incorporate syntax into these models, and I can't see very clearly how you could represent the meaning of a whole sentence as a vector.
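As a sketch of that discriminative use, here is query-to-document matching with tf-idf vectors and cosine similarity, using scikit-learn. The documents and the query are made up; the point is only that the document whose vector points closest to the query vector ranks first.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "how to train a neural network for image classification",
    "recipes for a quick weeknight pasta dinner",
    "tuning hyperparameters of deep neural networks",
]
query = ["training deep networks"]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)  # one tf-idf vector per document
query_vector = vectorizer.transform(query)         # embed the query in the same space

scores = cosine_similarity(query_vector, doc_vectors)[0]
for doc, score in sorted(zip(documents, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```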
Grammar formalisms could help incorporate syntax and bring structure to meaning and to the relations between concepts (e.g., head-driven phrase structure grammar). If you build two agents that share a vocabulary and a grammar and make them communicate (i.e., transfer information from one to the other) via these mechanisms, you could say they represent meaning. It is rather a philosophical question where and how the meaning is represented when one robot tells another to pick the "red circle above the black box" via a built-in or emerged grammar and vocabulary and the other one successfully picks the intended object (see this very interesting experiment on grounding vocabulary: Talking Heads).
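Here is a toy illustration of that kind of grounding (not the Talking Heads setup itself): a "speaker" emits a structured description instead of a raw string, and a "hearer" resolves it against its own world model. Every object, attribute, and function name here is invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Obj:
    name: str
    color: str
    shape: str
    x: float
    y: float

# The hearer's world model: a few objects with attributes and coordinates.
world = [
    Obj("o1", "red",   "circle", x=2.0, y=5.0),
    Obj("o2", "black", "box",    x=2.0, y=1.0),
    Obj("o3", "red",   "circle", x=8.0, y=0.5),
]

# The speaker's message: "the red circle above the black box" as structure.
message = {"shape": "circle", "color": "red",
           "relation": ("above", {"shape": "box", "color": "black"})}

def matches(obj, desc):
    return obj.color == desc["color"] and obj.shape == desc["shape"]

def ground(message, world):
    """Hearer resolves the structured description against its world model."""
    relation, landmark_desc = message["relation"]
    landmarks = [o for o in world if matches(o, landmark_desc)]
    candidates = [o for o in world if matches(o, message)]
    if relation == "above":
        return [c for c in candidates if any(c.y > l.y for l in landmarks)]
    return candidates

print(ground(message, world))  # only o1: the red circle above the black box
```

Whether the "meaning" lives in the message, in the grammar, in the hearer's world model, or only in the successful pick is exactly the philosophical question above.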
Another way to capture meaning is to use networks. For example, by representing each concept as a node in a graph and the relations between concepts as edges between the nodes, one can come up with a practical representation of meaning. ConceptNet is a project that aims to represent common sense, and it can be viewed as a semantic network of commonsense concepts. In a way, the meaning of a concept is represented by its location relative to other concepts in the network.
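A minimal sketch of such a semantic network, in the spirit of (but far smaller than) ConceptNet: concepts are nodes, labeled relations are edges, and one crude proxy for relatedness is the length of the shortest path between two concepts. The triples below are invented for illustration.

```python
from collections import deque

edges = [
    ("cat",    "IsA",       "animal"),
    ("dog",    "IsA",       "animal"),
    ("cat",    "CapableOf", "purr"),
    ("dog",    "CapableOf", "bark"),
    ("animal", "CapableOf", "breathe"),
    ("car",    "UsedFor",   "driving"),
]

# Build an undirected adjacency list, ignoring edge labels for path search.
graph = {}
for head, _relation, tail in edges:
    graph.setdefault(head, set()).add(tail)
    graph.setdefault(tail, set()).add(head)

def path_length(a, b):
    """Breadth-first search for the shortest path between two concepts."""
    frontier, seen = deque([(a, 0)]), {a}
    while frontier:
        node, dist = frontier.popleft()
        if node == b:
            return dist
        for neighbour in graph.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, dist + 1))
    return None  # no path: unrelated in this network

print(path_length("cat", "dog"))  # 2: close, via their shared "animal" node
print(path_length("cat", "car"))  # None: unrelated in this toy network
```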
Speaking of common sense, Cyc is another ambitious project that tries to capture commonsense knowledge, but it does so in a very different way than ConceptNet. Cyc uses a well-defined symbolic language to represent the attributes of objects and the relations between objects in a non-ambiguous way. By employing a very large set of rules and concepts, together with an inference engine, one can derive deductions about the world and answer questions like "Can horses be sick?" or handle requests like "Bring me a picture of a sad person."
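To show the flavor of that approach (this is not Cyc's actual language or engine), here is a toy forward-chaining inference sketch: facts and rules are written as unambiguous symbolic tuples, and new facts are derived until nothing changes. All facts, rules, and names are invented.

```python
facts = {
    ("isa",    "horse",  "mammal"),
    ("isa",    "mammal", "animal"),
    ("can_be", "animal", "sick"),
}

# Each rule: if every premise (terms starting with "?" are variables) matches,
# add the conclusion with the variables filled in.
rules = [
    # transitivity of isa: (?x isa ?y) and (?y isa ?z) => (?x isa ?z)
    ([("isa", "?x", "?y"), ("isa", "?y", "?z")], ("isa", "?x", "?z")),
    # properties are inherited along isa links
    ([("isa", "?x", "?y"), ("can_be", "?y", "?p")], ("can_be", "?x", "?p")),
]

def unify(pattern, fact, bindings):
    """Match a pattern against a fact, extending the variable bindings."""
    new = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if new.get(p, f) != f:
                return None
            new[p] = f
        elif p != f:
            return None
    return new

def infer(facts, rules):
    """Apply the rules until no new facts can be derived (forward chaining)."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            bindings_list = [{}]
            for premise in premises:
                bindings_list = [b2 for b in bindings_list for fact in facts
                                 if (b2 := unify(premise, fact, b)) is not None]
            for b in bindings_list:
                new_fact = tuple(b.get(term, term) for term in conclusion)
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True
    return facts

infer(facts, rules)
print(("can_be", "horse", "sick") in facts)  # True: "Can horses be sick?"
```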