I would like to integrate an HTML5 microphone into my web application, stream the audio to a (Node.js) back-end, send that stream to the Dialogflow API, use the Google Speech API, and return audio (Text-to-Speech) to the client to play in the browser.
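For reference, my understanding is that the browser captures microphone audio via the Web Audio API as 32-bit floats, while Dialogflow/Speech expect 16-bit LINEAR16 PCM, so each chunk has to be converted before it is streamed to the server. A minimal sketch of that conversion (the helper name `floatTo16BitPCM` is my own; the kiosk project may do this differently):

```javascript
// Convert Float32 samples from the Web Audio API (range [-1, 1]) into
// 16-bit signed LINEAR16 PCM, the encoding Dialogflow/Speech expect.
// (Helper name is my own, not taken from the kiosk project.)
function floatTo16BitPCM(float32Samples) {
  const pcm = new Int16Array(float32Samples.length);
  for (let i = 0; i < float32Samples.length; i++) {
    // Clamp to [-1, 1], then scale to the signed 16-bit range.
    const s = Math.max(-1, Math.min(1, float32Samples[i]));
    pcm[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return pcm;
}

// Example: silence, full positive, full negative.
const pcm = floatTo16BitPCM(Float32Array.from([0, 1, -1]));
console.log(Array.from(pcm)); // [ 0, 32767, -32768 ]
```

The resulting `Int16Array` buffer is what would be sent over the socket to the Node.js back-end.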
I found a GitHub project that does exactly what I want: https://github.com/dialogflow/selfservicekiosk-audio-streaming
It is described in Ms. Lee Boonstra's Medium blog post (https://medium.com/google-cloud/building-your-own-conversational-voice-ai-with-dialogflow-speech-to-text-in-web-apps-part-i-b92770bd8b47). She developed this project (thank you very much, Ms. Boonstra!) and explains it very precisely.
First, I tried the demo web application that Ms. Boonstra deployed with App Engine Flex. I accessed it at https://selfservicedesk.appspot.com/ and it worked perfectly.
Next, I cloned the project and tried to run it locally, following the README.md (I skipped the "Deploy with App Engine" steps): https://github.com/dialogflow/selfservicekiosk-audio-streaming/blob/master/README.md
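For reference, these are roughly the local setup steps I ran, reconstructed from memory (the directory name, key path, and project id below are placeholders from my own setup, not the README's exact wording):

```shell
# Clone the kiosk project and install the server dependencies.
git clone https://github.com/dialogflow/selfservicekiosk-audio-streaming.git
cd selfservicekiosk-audio-streaming

# Point the Google client libraries at my service-account key.
# (Placeholder path and project id; yours will differ.)
export GOOGLE_APPLICATION_CREDENTIALS=$HOME/keys/my-service-account.json
export PROJECT_ID=my-gcp-project-id

npm install
npm start
```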
However, it didn't work: the web app didn't give me any response. I am using Windows 10 with the Windows Subsystem for Linux (Debian 10.3) and the Google Chrome browser.
This is Chrome's console.
This is the terminal output. (I didn't get any error message, which is mysterious to me.)
Could you give me any advice? Thank you in advance.