
I am trying to run a chatbot script inside a Docker container, but it fails with the following error:

Traceback (most recent call last):
  File "script.py", line 16, in <module>
    with sr.Microphone() as source:
  File "/home/datamastery/.local/lib/python3.8/site-packages/speech_recognition/__init__.py", line 86, in __init__
    device_info = audio.get_device_info_by_index(device_index) if device_index is not None else audio.get_default_input_device_info()
  File "/usr/local/lib/python3.8/dist-packages/pyaudio.py", line 949, in get_default_input_device_info
    device_index = pa.get_default_input_device()
OSError: No Default Input Device Available

Dockerfile:

FROM python:3.6-stretch

RUN pip install --upgrade pip
RUN apt-get update && apt-get install -y espeak
RUN apt-get install portaudio19-dev -y

RUN useradd -rm -d /home/datamastery -s /bin/bash -g root -G sudo -u 1001 datamastery
USER datamastery

WORKDIR /home/datamastery

COPY script.py ./script.py
COPY requirements.txt ./requirements.txt


RUN pip install -r requirements.txt

CMD ["python", "script.py"]

requirements.txt

pyttsx3==2.90
transformers==4.6.1
SpeechRecognition==3.8.1
torch==1.8.1
PyAudio==0.2.11

script.py:

# import library
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import pyttsx3
import speech_recognition as sr

engineio = pyttsx3.init()
voices = engineio.getProperty("voices")
engineio.setProperty("rate", 130)  # Here you can select the speech rate
engineio.setProperty("voice", voices[0].id)
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

r = sr.Recognizer()

with sr.Microphone() as source:
    for step in range(5):
        r.adjust_for_ambient_noise(source)
        print("Sprich...")
        audio = r.listen(source, timeout=3)
        print("Danke!")
        audio_text = r.recognize_google(audio)

        new_user_input_ids = tokenizer.encode(
            audio_text + tokenizer.eos_token, return_tensors="pt"
        )
        bot_input_ids = (
            torch.cat([chat_history_ids, new_user_input_ids], dim=-1)
            if step > 0
            else new_user_input_ids
        )
        chat_history_ids = model.generate(
            bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id
        )
        print(chat_history_ids.shape)
        print(type(chat_history_ids))
        new_text = tokenizer.decode(
            chat_history_ids[:, bot_input_ids.shape[-1] :][0], skip_special_tokens=True
        )

        print(new_text)
        # the recognize_*() methods will throw a RequestError if the API is unreachable, hence the exception handling

        try:
            # using google speech recognition
            engineio.say(new_text)
            engineio.runAndWait()
        except Exception:
            engineio.say("Sorry, did not understand you")
            engineio.runAndWait()

I tried the solution from this link: OSError: No Default Input Device Available, but after I added device_index=0 it fails with an out-of-range index error:

  File "/home/datamastery/.local/lib/python3.6/site-packages/speech_recognition/__init__.py", line 84, in __init__
    assert 0 <= device_index < count, "Device index out of range ({} devices available; device index should be between 0 and {} inclusive)".format(count, count - 1)
AssertionError: Device index out of range (0 devices available; device index should be between 0 and -1 inclusive)

Could the reason be that Ubuntu does not recognize my mic? If that is the case, do I have to install a library or set something in my docker run command?
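Edit: to rule out the container simply not seeing any sound hardware at all, here is a minimal sanity check using only the standard library (has_sound_devices is my own hypothetical helper, not part of any package) that tests whether ALSA device nodes are visible inside the container:

```python
import os

def has_sound_devices(path="/dev/snd"):
    # True only if ALSA device nodes (e.g. passed in with --device /dev/snd)
    # are visible at the given path; False otherwise.
    return os.path.isdir(path) and bool(os.listdir(path))

if __name__ == "__main__":
    # In a container started without any audio device pass-through,
    # this is expected to print False.
    print(has_sound_devices())
```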

    It's generally not expected for sound hardware to be passed through to a container. – Charles Duffy May 21 '21 at 20:29
  • What's your host OS? (If it's Linux you can potentially use a volume to pass the device through; if it's Docker for Mac or Docker for Windows there's a VM layer between the host and the container, and that makes everything harder). – Charles Duffy May 21 '21 at 20:31
  • It's going to be hard to make you audio devices available in the container. You can mount their device nodes in, but you will have to make sure they are no occupied by the host system already. – Klaus D. May 21 '21 at 20:31
  • So this means you can not use speech_recognition app in a container? Any ideas or workaround how I can use my model with docker? – Data Mastery May 21 '21 at 20:31
  • I'd argue that this can/should be treated as a duplicate of [run apps using audio in a docker container](https://stackoverflow.com/questions/28985714/run-apps-using-audio-in-a-docker-container). – Charles Duffy May 21 '21 at 20:31
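As the comments above note, audio hardware is not available to a container by default. On a Linux host, one commonly suggested workaround is to pass the host's ALSA device nodes through when starting the container. A sketch of such an invocation (the image tag chatbot is a placeholder for whatever you built with docker build, and the audio group name is an assumption that may differ on your system):

```shell
# Pass the host's ALSA device nodes into the container (Linux host only;
# this does not work through the VM layer of Docker for Mac/Windows).
docker run --device /dev/snd --group-add audio chatbot
```

The device must not be exclusively held by another process on the host, otherwise the container still cannot open it.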

0 Answers