This is a basic problem with screen scraping. The information on an HTML page is designed for human users, not for software access, and it will change over time based on the perceived needs of human users, ignoring the needs of screen scrapers.
You haven't said what you're using Selenium for. The two main uses are (a) software testing (checking that your software displays the page correctly) and (b) scraping data from third-party web sites. The strategy for dealing with the problem is different for the two cases.
For testing, try to test as much of the functionality of your application as possible using unit tests that don't rely on looking at the HTML; only look at the HTML where you actually need to test the user interface. For those tests, you're going to have to face the fact that when the HTML changes, the tests have to change.
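For the UI tests you do keep, locating elements by stable identifiers rather than by position in the DOM helps them survive cosmetic changes. A minimal sketch using Selenium's Python bindings, where the URL and the `status` id are hypothetical:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/dashboard")  # hypothetical page
    # Locate by a stable id rather than by position in the document tree;
    # ids tend to survive layout changes better than positional paths
    status = driver.find_element(By.ID, "status")
    assert status.text == "OK"
finally:
    driver.quit()
```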
For extracting data from third-party web sites, use a published API to the data in preference to screen-scraping if you possibly can - even if you have to pay for access, it will be cheaper in the long run. Scraping the data off HTML pages is inefficient, and it leaves you completely exposed to unannounced changes to the screen appearance.
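For illustration, pulling the same data from a published JSON API is usually only a few lines; the endpoint, auth scheme, and field name below are all hypothetical:

```python
import requests

resp = requests.get(
    "https://api.example.com/v1/cities/london",  # hypothetical endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # if the API requires auth
    timeout=10,
)
resp.raise_for_status()
population = resp.json()["population"]  # hypothetical field in the response
```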
Having said that, there are ways of writing XPath expressions that make them more resilient to such changes - but only if you guess correctly which aspects of the page are likely to change and which are likely to remain stable. It's not a difference between "xpath" and "full xpath" as you suggest; rather, there are different ways of writing XPath expressions to make them resilient to changes in the HTML. Clearly, for example, `//tr[td[1]='London']/td[2]` is more likely to keep working than `//div[3]/div[1]/table[9]/tbody/tr[43]/td[2]`.
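Here's a minimal sketch of using the content-anchored expression above from Selenium's Python bindings; the URL and table contents are assumptions for illustration:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/cities")  # hypothetical page with a city table
    # Anchor on the data itself (the row whose first cell is 'London')
    # rather than on the element's position in the whole document
    cell = driver.find_element(By.XPATH, "//tr[td[1]='London']/td[2]")
    print(cell.text)
finally:
    driver.quit()
```

The first expression keeps working as long as the table still has a row for London with the value in the next cell; the positional one breaks as soon as any surrounding `div` or row is added or removed.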
But the best advice is, if you want to write an application that's resilient to change, steer clear of screen scraping entirely.