
I started learning Python a few days ago. The first thing I made was a Python application with a query form where you can ask a question and get a voiced Text-To-Speech answer. Then I found out about a performance like Neuro-Sama and wanted to replicate it. In technical terms, Neuro-Sama works like this: it analyzes Twitch chat, sends the text as a request to ChatGPT, and once it receives a response it plays back a reply in the format "Username, Reply".
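In other words, the core loop is just "format the reply, then voice it". A minimal sketch of those two steps (the function names are illustrative; `speak` imports pyttsx3 lazily so the formatter works on its own):

```python
def format_reply(username: str, reply: str) -> str:
    """Build the spoken line in the 'Username, Reply' format."""
    return f"{username}, {reply}"

def speak(text: str) -> None:
    """Voice the line with pyttsx3 (blocks until playback ends)."""
    import pyttsx3  # local import so format_reply works without it
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()
```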

I thought this task would be easy and started writing my own code, but I ran into some problems. I'm asking interested people to help me; I'm ready to pay $10 for a reasonable response that points out my mistakes. I attach my failed code below:

import tkinter as tk
import requests
import pyttsx3
import twitchio
import os

# Create the window
root = tk.Tk()
root.title("Sample app with ChatGPT and text-to-speech")

# Create the text field and the button
text_field = tk.Entry(root, width=50)
text_field.pack()
button = tk.Button(root, text="Send")

# Create the text-to-speech engine
engine = pyttsx3.init()

# Initialise the Twitch client
bot_nickname = "pasha_tech"
bot_oauth = "YOUR_BOT_OAUTH_TOKEN" # replace "YOUR_BOT_OAUTH_TOKEN" with your bot's OAuth token
bot = twitchio.Client(bot_nickname, oauth=bot_oauth)

# Send a request to the ChatGPT API and get a response
def get_response(username, text):
    url = "https://api.openai.com/v1/engines/davinci-codex/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer API_KEY" # replace "API_KEY" with your OpenAI API key
    }
    data = {
        "prompt": f"User {username} wrote in chat: {text}\nAI replies:",
        "max_tokens": 50,
        "temperature": 0.7
    }
    response = requests.post(url, headers=headers, json=data)
    if response.status_code == 200:
        result = response.json()["choices"][0]["text"]
    else:
        result = "Error while processing the request"
    # Show the response in the window
    response_label.config(text=result)
    # Voice the response
    engine.say(result)
    engine.runAndWait()

# Handler for Twitch chat messages
async def on_message(channel, user, message):
    if user.name != bot_nickname:
        # Forward the message to ChatGPT for processing
        get_response(user.name, message)

# Connect to the Twitch IRC chat
async def connect_to_twitch_chat():
    await bot.connect()
    await bot.join(os.environ['CHANNEL_NAME']) # replace "CHANNEL_NAME" with the channel the bot joins
    print(f"Bot {bot_nickname} connected to the chat of {os.environ['CHANNEL_NAME']}")

# Bind the function to the button
button.config(command=get_response)
button.pack()

# Create a label to display the response
response_label = tk.Label(root, text="")
response_label.pack()

# Run the Twitch chat message loop forever
bot.loop.create_task(connect_to_twitch_chat())
bot.loop.run_forever()

root.mainloop()

Running this code I got the following error:

bot = twitchio.Client(bot_nickname, token=bot_token)
TypeError: __init__() got multiple values for argument 'token'

Help me understand; this is the third day I've been sitting here wondering what my problem is.

The code was originally generated by ChatGPT, so please also point out the typical bugs in its output so I know how to fix them in the future. Thanks!

  • Does this answer your question? [OpenAI API error: "Cannot specify both model and engine"](https://stackoverflow.com/questions/75176667/openai-api-error-cannot-specify-both-model-and-engine) – Rok Benko Feb 20 '23 at 09:41

1 Answer


I'm nearing the stable version of my AI VTuber project. Use the new gpt-3.5-turbo model instead. You will also need to give it an initial prompt so it takes on a persona, and a feedback loop to give it short-term memory (unlike ChatGPT, where conversation memory is automatic).
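A minimal sketch of that idea, assuming the pre-1.0 `openai` Python package (`ChatCompletion.create`) and an `OPENAI_API_KEY` environment variable; the persona text and helper names are made up:

```python
def build_messages(persona, history, username, text):
    """Assemble the chat payload: persona as the system prompt,
    recent (user, bot) turns as short-term memory, then the new
    chat message in 'username: text' form."""
    messages = [{"role": "system", "content": persona}]
    for user_msg, bot_msg in history:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": bot_msg})
    messages.append({"role": "user", "content": f"{username}: {text}"})
    return messages

def ask_gpt(history, username, text):
    """Call gpt-3.5-turbo with the persona and replayed history."""
    import os
    import openai  # lazy import; requires the openai package
    openai.api_key = os.environ["OPENAI_API_KEY"]
    persona = "You are an AI VTuber. Answer chat in one short, playful sentence."
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=build_messages(persona, history, username, text),
        max_tokens=50,
        temperature=0.7,
    )
    return response["choices"][0]["message"]["content"]
```

After each exchange, append `(chat message, bot reply)` to `history` so the next call sees it — that is the feedback loop.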

https://www.youtube.com/watch?v=dkgJBcTitpE

Try to follow the same code structure as in the docs: https://twitchio.dev/en/latest/quickstart.html

from twitchio.ext import commands


class Bot(commands.Bot):

    def __init__(self):
        # Initialise our Bot with our access token, prefix and a list of channels to join on boot...
        # prefix can be a callable, which returns a list of strings or a string...
        # initial_channels can also be a callable which returns a list of strings...

        # --- GET YOUR TWITCH ACCESS_TOKEN USING: https://twitchtokengenerator.com/ ---

        super().__init__(token='ACCESS_TOKEN', prefix='?', initial_channels=['...'])

    async def event_ready(self):
        # Notify us when everything is ready!
        # We are logged in and ready to chat and use commands...
        print(f'Logged in as | {self.nick}')
        print(f'User id is | {self.user_id}')

    async def event_message(self, message):
        # Messages with echo set to True are messages sent by the bot...
        # For now we just want to ignore them...
        if message.echo:
            return

        # --------------- PUT YOUR OPENAI CODE HERE ---------------
        # Some variables you will want:
        # user's unique id (never changes per account): message.author.id
        # user's chat name: message.author.name
        # user's message to include in your prompt: message.content

        # Since we have commands and are overriding the default `event_message`
        # We must let the bot know we want to handle and invoke our commands...
        await self.handle_commands(message)

    # When you send ?hello it still goes through event_message(), then is caught by this method
    @commands.command()
    async def hello(self, ctx: commands.Context):
        # Here we have a command hello, we can invoke our command with our prefix and command name
        # e.g ?hello
        # We can also give our commands aliases (different names) to invoke with.

        # Send a hello back!
        # Sending a reply back to the channel is easy... Below is an example.
        await ctx.send(f'Hello {ctx.author.name}!')


bot = Bot()
bot.run()    # bot.run() is blocking and will stop execution of any below code here until stopped or closed.

In the OpenAI code block, integrate this technique to give the bot memory and a persona: https://github.com/daveshap/LongtermChatExternalSources
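A much simpler stand-in for that repo's approach is a rolling window of recent exchanges that you replay into the prompt each call; older turns fall off, keeping the prompt (and token bill) bounded. A sketch (the class name and the 5-turn limit are arbitrary choices):

```python
from collections import deque

class ShortTermMemory:
    """Keep only the most recent (chat message, bot reply) pairs."""

    def __init__(self, max_turns=5):
        # deque with maxlen silently drops the oldest turn when full
        self.turns = deque(maxlen=max_turns)

    def remember(self, user_msg, bot_msg):
        """Store one completed exchange."""
        self.turns.append((user_msg, bot_msg))

    def as_history(self):
        """Return the turns oldest-first, ready to replay in a prompt."""
        return list(self.turns)
```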