Browsers expect one big response by default and won't render anything until they have received a minimum amount of data. This limitation is discussed here in the context of chunked transfer encoding, a way to send a response in pieces:
using-transfer-encoding-chunked-how-much-data-must-be-sent-before-browsers-s/16909228#16909228
So this limitation applies even if you cobble together a valid HTTP/1.1 chunked response series: the browser sees valid headers and accepts your data in chunks, but will still delay rendering until enough of them have arrived.
The question is: do you really need a full web browser as a client? There are simpler ways to read raw data from a TCP stream. You could write a client in Python, or use e.g. netcat or even an old telnet client. Problem solved ;-)
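A minimal sketch of such a raw TCP reader in Python (the little serve_lines server here is just a stand-in for whatever produces your data, so the sketch is self-contained):

```python
import socket
import threading

def serve_lines(srv):
    # Stand-in server: accept one connection and stream three lines.
    conn, _ = srv.accept()
    for i in range(3):
        conn.sendall(b"count %d\n" % i)
    conn.close()

# Listen on an ephemeral localhost port so the example runs anywhere.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=serve_lines, args=(srv,)).start()

# The "client": a plain TCP reader, no browser involved.
client = socket.create_connection(srv.getsockname())
received = [line.strip() for line in client.makefile()]  # read lines until EOF
client.close()
srv.close()
print(received)
```

The client side is all of four lines: connect, wrap the socket in a file object, iterate over incoming lines, close.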
Okay, say that it really needs to be a browser. Then you'll have to do much more work. One standard (W3C) way of sending out live data immediately is a mechanism called Server-Sent Events. You send the Content-Type text/event-stream, and then the data, line by line, each preceded by "data: " and each message terminated by a blank line:
import time

def clientthread(conn):
    # Headers first, then one "data:" line per message;
    # each SSE message must end with a blank line.
    conn.send(b"HTTP/1.1 200 OK\r\nContent-Type: text/event-stream\r\n\r\n")
    count = 0
    while True:
        count = count + 1
        conn.send(b"data: %d\r\n\r\n" % (count,))
        print(count)
        time.sleep(1)
    conn.close()
There are libraries that do this idiomatically, handle concurrency, etc., but I wanted to show how simple it basically is.
.. but now you need an EventSource in client-side JavaScript to make sense of this, e.g. to set some HTML element to the counter value each time a new count is received.
It doesn't stop there: you now have to serve the resulting HTML and script too, and if they aren't served from the same server, you must set various security-related (CORS) headers or your browser will ignore your scripts.
Also, things can get complex quickly. Unless this is an academic exercise, you will have to consider robustness, standards compliance, edge cases, etc. I would strongly recommend using a higher-level HTTP server implementation like HTTPServer and BaseHTTPRequestHandler, which do much of that kind of work for you.
This example (Python 3) serves both the HTML with an example EventSource (at /) and the SSE stream with the counter (at /counter):
import sys, time
from http.server import HTTPServer, BaseHTTPRequestHandler
from socketserver import ThreadingMixIn
from socket import error

html_and_js = """<html>
<head>
<meta charset="UTF-8">
<title>Counter SSE Client</title>
</head>
<body>
Count: <span id="counter">0</span>
<script>
"use strict";
var counter = document.getElementById('counter');
var event_source = new EventSource("/counter");
event_source.onmessage = function(msg) {
    counter.innerHTML = msg.data;
};
</script>
</body>
</html>
"""

class SSECounterRequestHandler(BaseHTTPRequestHandler):
    server_version = "DzulianisCounter/0.1"

    def do_html(self):
        self.send_header("Content-type", "text/html")
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()
        self.wfile.write(bytes(html_and_js, 'UTF-8'))

    def do_sse(self):
        self.counter = 0
        self.send_header("Content-type", "text/event-stream")
        self.send_header("Cache-Control", "no-cache")
        self.end_headers()
        self.running = True
        while self.running:
            try:
                self.wfile.write(bytes('data: %d\r\n\r\n' % (self.counter), 'UTF-8'))
                self.counter += 1
                time.sleep(1)
            except error:
                # Client disconnected; stop streaming.
                self.running = False

    def do_GET(self):
        self.send_response(200)
        if self.path == '/counter':
            self.do_sse()
        else:
            self.do_html()

class SSECounterServer(ThreadingMixIn, HTTPServer):
    def __init__(self, listen):
        HTTPServer.__init__(self, listen, SSECounterRequestHandler)

if __name__ == '__main__':
    if len(sys.argv) == 1:
        listen_addr = ''
        listen_port = 8888
    elif len(sys.argv) == 3:
        listen_addr = sys.argv[1]
        listen_port = int(sys.argv[2])
    else:
        print("Usage: dzulianiscounter.py [<listen_addr> <listen_port>]")
        sys.exit(-1)
    server = SSECounterServer((listen_addr, listen_port))
    server.serve_forever()
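If you want to check the stream without a browser, you can also parse the event-stream format yourself. A minimal sketch of the parsing rule (messages are separated by blank lines, payload lines start with "data:"; this deliberately ignores the "event:", "id:" and retry fields of the full format):

```python
import io

def parse_sse(stream):
    """Yield the data payload of each SSE message from a file-like object.

    Minimal sketch: handles only "data:" lines and blank-line message
    boundaries, not the other SSE field types.
    """
    data = []
    for raw in stream:
        line = raw.decode("utf-8").rstrip("\r\n")
        if line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:      # a blank line ends the message
            yield "\n".join(data)
            data = []

# Example: feed it the bytes the counter server above would emit.
sample = io.BytesIO(b"data: 0\r\n\r\ndata: 1\r\n\r\ndata: 2\r\n\r\n")
events = list(parse_sse(sample))
print(events)
```

In practice you would wrap the response socket (or a urllib response object) instead of the io.BytesIO sample.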
This is much more efficient, and has a better response time, than e.g. having the page poll some URL regularly, or *shudders* reloading the page all the time :-) At your rate (one event per second) this also keeps the HTTP connection open, avoiding connection overhead but adding some memory overhead to the OS's network stack, which could be felt if you got many simultaneous users on it.
Enjoy!