To answer your question: yes, LDA can be used to return a list of similar words given a query word. The similarity in this case refers to co-occurrence between words: if u is similar to v, then the probability P(u|v,d) is likely to be high, i.e. for any document d, you would expect to see u if you have already seen v.
Such statistical co-occurrences would be able to put words such as 'Obama', 'president' and 'USA' in the same group (equivalence class defined by the similarity relation).
Concretely, the way to get similar words out of LDA is to use the output phi matrix (a KxV matrix, where K = #latent topics and V = #words). Each column of this matrix represents a word as a K-dimensional vector. Given a query word, take its column vector and return the words whose vectors are most similar to it (e.g. by inner product or cosine similarity).
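As a rough illustration, here is a minimal sketch of that lookup, assuming you already have the phi matrix (e.g. from gensim's LdaModel via get_topics(), which returns a K x V array) and a vocabulary list aligned with its columns; the function and variable names are just placeholders:

```python
import numpy as np

def similar_words_lda(phi, vocab, query, top_n=10):
    """Return words whose topic-space column vectors are closest to the query's.

    phi   : K x V matrix of topic-word probabilities (K topics, V words)
    vocab : list of V words, aligned with the columns of phi
    """
    idx = vocab.index(query)
    q = phi[:, idx]                                   # K-dim vector for the query word
    # cosine similarity between the query column and every column of phi
    norms = np.linalg.norm(phi, axis=0) * np.linalg.norm(q) + 1e-12
    sims = (phi.T @ q) / norms
    ranked = np.argsort(-sims)                        # indices sorted by decreasing similarity
    return [vocab[i] for i in ranked if i != idx][:top_n]
```

With a trained gensim model you could call it roughly as `similar_words_lda(lda.get_topics(), [lda.id2word[i] for i in range(len(lda.id2word))], 'obama')`.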
However, LDA is not a particularly good choice for capturing synonymy relations between terms, e.g. 'sun' and 'solar'. Word vector embeddings are a much better fit in such a scenario.
The main difference between word vectors and LDA is that the notion of similarity used in the former is more contextual. To be more precise, word vectors u and v are similar if they are both similar to their context vectors, i.e. the vectors of other words appearing in close proximity to them. Coming back to the example: in the contexts of both 'sun' and 'solar', you expect to see words such as 'star', 'planets', 'energy', 'heat', etc., which all contribute to the belief that 'sun' and 'solar' could be used synonymously.
Also, from a practical viewpoint, word vector embeddings are a much better choice because training is much faster than LDA. You can use Mikolov's C implementation, word2vec. It ships with a distance utility executable which, given a query word, prints a list of words ranked by decreasing cosine similarity to the query.
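If you prefer to stay in Python, gensim can do the same query: you can load the binary vectors produced by the C tool (or train a Word2Vec model directly) and call most_similar. A minimal sketch, assuming 'vectors.bin' is whatever file you passed to word2vec's -output (the path is just a placeholder):

```python
from gensim.models import KeyedVectors

# Load vectors produced by Mikolov's C word2vec tool (binary format).
wv = KeyedVectors.load_word2vec_format('vectors.bin', binary=True)

# Roughly the equivalent of the 'distance' utility:
# words ranked by decreasing cosine similarity to the query.
for word, sim in wv.most_similar('sun', topn=10):
    print(f'{word}\t{sim:.3f}')
```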