The root problem here (at least on my machine; it may differ with your version) is that the site returns gzipped data, and neither PHP nor curl decompresses it before it reaches the DOM parser. If you are on PHP 5.4 or later, you can fetch the page with file_get_contents and decompress it yourself with gzdecode.
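A minimal sketch of that on PHP 5.4+ (same URL as in the question; gzdecode returns false if the response turns out not to be gzipped):

<?php
include("simple_html_dom.php");

// PHP 5.4+: gzdecode understands the full gzip wrapper (header + trailer),
// so no manual byte-stripping is needed
$raw  = file_get_contents("http://www.tsetmc.com/loader.aspx?ParTree=151311&i=49776615757150035");
$data = gzdecode($raw);

$html = str_get_html($data);
echo $html->root->innertext();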
On older PHP versions, this code will work:
<?php
include("simple_html_dom.php");

// download the page (raw, gzip-compressed bytes)
$data = file_get_contents("http://www.tsetmc.com/loader.aspx?ParTree=151311&i=49776615757150035");

// decompress it: a bit hacky -- strip the 10-byte gzip header and the
// 8-byte trailer (CRC32 + size), then inflate the raw deflate stream
$data = gzinflate(substr($data, 10, -8));

// parse and use
$html = str_get_html($data);
echo $html->root->innertext();
Note that this hack will not work on most sites. What seems to be happening is that curl doesn't announce that it accepts gzip (it sends no Accept-Encoding header), but the web server on that domain ignores that and gzips the response anyway. Neither curl nor PHP then checks the Content-Encoding header on the response, so the compressed bytes are passed straight through without an error and without being gunzipped. There are bugs on both the server and the client side here.
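If you are fetching with curl anyway, you can also tell libcurl to announce and decode compression itself; a minimal sketch (this assumes libcurl was built with zlib support and that the server does label the response with Content-Encoding: gzip):

<?php
$ch = curl_init("http://www.tsetmc.com/loader.aspx?ParTree=151311&i=49776615757150035");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// "" = send an Accept-Encoding header for every encoding libcurl supports
// and transparently decompress the response before handing it back
curl_setopt($ch, CURLOPT_ENCODING, "");
$data = curl_exec($ch);
curl_close($ch);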
For a more robust solution, you can use curl to fetch the headers and inspect the Content-Encoding yourself to decide whether the body needs decompressing, as sketched below. Or you can just use this hack for this site and the normal method for others to keep things simple.
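A sketch of that header-based check (splitting headers from body via CURLINFO_HEADER_SIZE and testing with stripos is just one way to do it):

<?php
$ch = curl_init("http://www.tsetmc.com/loader.aspx?ParTree=151311&i=49776615757150035");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, true);        // include response headers in the output
$response = curl_exec($ch);
$headerSize = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
curl_close($ch);

$headers = substr($response, 0, $headerSize);
$body    = substr($response, $headerSize);

// only decompress when the server actually says the body is gzipped
if (stripos($headers, "Content-Encoding: gzip") !== false) {
    $body = function_exists('gzdecode')
        ? gzdecode($body)                      // PHP 5.4+
        : gzinflate(substr($body, 10, -8));    // older PHP: strip the gzip wrapper by hand
}

From there, $body can be handed to str_get_html() exactly as before.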
It may also help to set the character encoding on your output. Add this before you echo anything, so the text you fetched isn't mangled in the user's browser by being read with the wrong charset:
header('Content-Type: text/html; charset=utf-8');