I'm not sure about the nature of Roblox, but what you describe here is called web crawling (or scraping), and there is no single language for it; most languages are suitable. What I would do first is check whether Roblox provides any usable APIs, which exist to help developers such as yourself fetch the data you need in a more user-friendly format such as JSON, which you can easily consume in any language.
If an API is not available, you can try to fetch the page as plain text with tools such as curl or a text-based web browser, in order to determine whether an HTML parser will suffice or whether the site requires something more advanced, such as a JavaScript interpreter. For the latter there are headless browsers such as PhantomJS (also usable from the command line, just like curl, with full JS support). It is preferable to limit yourself to fetching the page, parsing the HTML and extracting the data you need, rather than using a full headless-browser solution such as PhantomJS, as the latter can slow things down and is generally more complex.
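A quick way to run that check from PHP itself is to fetch the raw HTML with curl and look for a marker string near the data you want; if the marker is missing from the static HTML, the page probably builds it with JavaScript. A minimal sketch (the URL and marker in the usage comment are hypothetical):

```php
<?php
// Sketch: fetch a page with curl and check whether the data you need is
// already present in the raw HTML. If not, the page likely builds it with
// JavaScript and a headless browser would be needed instead.

function fetch_page(string $url): string {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // return body instead of printing it
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // follow redirects
    $html = curl_exec($ch);
    curl_close($ch);
    return $html === false ? '' : $html;
}

// True when a marker string (e.g. a label next to the number you want)
// appears in the static HTML.
function data_in_static_html(string $html, string $marker): bool {
    return strpos($html, $marker) !== false;
}

// Usage (hypothetical URL and marker):
// $html = fetch_page('https://www.roblox.com/games/123456');
// var_dump(data_in_static_html($html, 'Active Players'));
```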
For the sake of simplicity, since you mentioned that your final goal is a web server that serves the data, I would proceed as follows:
Install a LEMP (Linux, nginx, MySQL, PHP) or LAMP (Linux, Apache, MySQL, PHP) stack. Just install it on your Linux box using your favourite package manager.
Since the final result is a web server, you might want to use PHP, which comes out of the box with the stacks stated above:
If there is an API, it is as simple as fetching the relevant API endpoint, running a JSON/XML parser on the response and using the data.
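The API case can be sketched as follows; the endpoint URL and the `playing` field name are made up for illustration, so check Roblox's actual API documentation for the real ones:

```php
<?php
// Sketch of the API case: decode a JSON response and pull out one field.
// The "playing" field and the endpoint below are hypothetical examples.

function extract_player_count(string $json): ?int {
    $data = json_decode($json, true);          // decode into an associative array
    if (!is_array($data) || !isset($data['playing'])) {
        return null;                           // malformed response or missing field
    }
    return (int) $data['playing'];
}

// Against a real endpoint this would look like:
// $json = file_get_contents('https://example.com/api/games/123'); // hypothetical URL
// echo extract_player_count($json);
```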
But if there is no API:
First fetch the page in PHP using curl or the `file_get_contents` function, then parse the page with any of the HTML parsers available for PHP, such as Simple HTML DOM Parser.
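As a sketch of that parsing step, here is the same idea using PHP's built-in DOMDocument/DOMXPath instead of a third-party parser; the `player-count` class name is a hypothetical example of where the value might live in the markup:

```php
<?php
// Sketch: extract the text of the first element carrying a given CSS class,
// using PHP's built-in DOM extension. The class name you query for is
// whatever the real page uses ("player-count" here is hypothetical).

function extract_by_class(string $html, string $class): ?string {
    $doc = new DOMDocument();
    @$doc->loadHTML($html);                    // @: real-world HTML is rarely valid
    $xpath = new DOMXPath($doc);
    $query = "//*[contains(concat(' ', normalize-space(@class), ' '), ' $class ')]";
    $nodes = $xpath->query($query);
    if ($nodes === false || $nodes->length === 0) {
        return null;
    }
    return trim($nodes->item(0)->textContent);
}

// After fetching the page with curl or file_get_contents:
// echo extract_by_class($html, 'player-count');
```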
The above steps apply when you don't need the complexities of a full-blown browser; if you do, you should find comfort in PhantomJS, either standalone (driving it with JavaScript) to fetch your data, or through one of the PHP bridges to PhantomJS that a quick Google search will turn up. The approach is the same either way: fetch the page and parse its HTML to get the required data.
- Since you already run a web server (LEMP/LAMP), you are in fact already able to present a web page to your devices online. So simply do the scraping step above, save the results to the database (MySQL) and generate a page matching your needs. Note that PHP runs only when a user loads the page on which it resides, so if you need periodic checks, use cron jobs to schedule your PHP scripts to re-run at certain times.
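The save-to-MySQL step could look like the sketch below, using PDO with prepared statements; the DSN, credentials and `stats` table are hypothetical. A cron entry such as `*/10 * * * * php /path/to/scrape.php` would re-run the scraper every 10 minutes:

```php
<?php
// Sketch: store each scraped value in the database so a separate page can
// later read and display it. The `stats` table and connection details are
// hypothetical; adapt them to your own schema.

function store_stat(PDO $pdo, string $name, int $value): void {
    $stmt = $pdo->prepare(
        'INSERT INTO stats (name, value, fetched_at) VALUES (?, ?, CURRENT_TIMESTAMP)'
    );
    $stmt->execute([$name, $value]);
}

// $pdo = new PDO('mysql:host=localhost;dbname=scraper', 'user', 'pass');
// store_stat($pdo, 'player_count', 42);
```

Prepared statements are worth the small extra effort here: they keep any scraped text from being interpreted as SQL.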
Note 1: the steps above are very general, since you did not specify your background in this field. They simply describe how web crawling works in general.
Note 2: if you wish to make your service accessible outside of your network, configure your web server (LEMP/LAMP) to listen on port 80 (usually the default) and then provide your users with your external IP address.
If your IP changes dynamically, you can use free dynamic-DNS services such as No-IP, or maybe this.
There are other, more complex solutions, such as renting a domain name.