I have a Dialogflow agent that is normally able to speak. However, inside one particular function (which calls out to the Spotify API), nothing I write inside an "agent.add()" is spoken.
What makes it even stranger is that in my Firebase console for the function, the output of the Spotify API call is recorded whenever I put it inside a "console.log". This means the Spotify API call works as normal, but the Dialogflow agent cannot read out the result of that call, and I have no idea why (important code below).
/**
 * ---------------------------Google Assistant Fulfillment----------------------------------------------------------------------------------------
 * Below is the Dialogflow Firebase fulfillment code which controls what happens when the various intents are triggered:
 */
exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
  const agent = new WebhookClient({ request, response });

  /**
   * Controls what happens when the user replies 'yes' to 'Would you like to hear an angry song?'.
   * Uses the random number within the bounds of the angry songs to select and recommend a song
   * for the user.
   * @param agent The Dialogflow agent
   * @returns {Promise<admin.database.DataSnapshot | never>} The song of the desired emotion.
   */
  //4
  async function playAngrySong(agent) {
    return admin.database().ref(`${randomNumber}`).once('value').then((snapshot) => {
      // Get the song, artist and Spotify URI (with and without the preceding characters) from the Firebase Realtime Database
      const song = snapshot.child('song').val();
      const artist = snapshot.child('artist').val();
      const spotify_uri = snapshot.child('spotifyCode').val();
      const just_uri = snapshot.child('spotifyCode').val();

      // Agent vocalises the retrieved song to the user
      agent.add(`I recommend ${song} by ${artist}`);

      var tempo = '';
      agent.add(`Here is the tempo for the song (before the getAudioAnalysisForTrack call): ${tempo}`);

      /**
       * Callout to the Spotify API using the spotify-web-api-node package. LINK PACKAGE.
       * Agent vocalises the analysis extracted on the track.
       */
      Spotify.getAudioAnalysisForTrack('4AKUOaCRcoKTFnVI9LtsrN').then(
        function (data) {
          var analysis = console.log('Analyser Version', data.body.meta.analyzer_version);
          var temp = console.log('Track tempo', data.body.track.tempo);
          tempo = data.body.track.tempo;
          agent.add(`The track's tempo is ${tempo}, does this sound good or would you prefer something else?`);
          var textResponse = `The track's tempo is ${tempo}, does this sound good or would you prefer something else?`;
          agent.add(textResponse);
          agent.add(`Here is the song's tempo: ${tempo}`);
          return;
        },
        function (err) {
          console.error(err);
        }
      );

      // agent.add(`${agentSays}`);
      agent.add(`Here is the tempo for the song: ${tempo}`);
    });
  }
So in the above code, the user is asked by Google if they want a recommendation for an angry song. They say 'yes', which runs the function 'playAngrySong'. A song is selected from the database and the user is told the recommended song, e.g. "I recommend Suck My Kiss by Red Hot Chili Peppers". From that point in the code onwards (where it says var tempo), the agent does not speak anymore (by text or voice).
The console.log lines are written to the function logs, however:
var analysis = console.log('Analyser Version', data.body.meta.analyzer_version);
var temp = console.log('Track tempo', data.body.track.tempo);
Lastly, Google support sent this in reply to my concern (and have not emailed me back since): does anyone know what I should do based on their suggestion? I am new to JavaScript, so I have tried adding the 'async' keyword before the function (as shown in the code above), but I may have been wrong in thinking this was the right way to use it.
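In case it helps anyone answer, here is a minimal sketch of how I understood the 'async' suggestion: await the database and Spotify promises inside the handler so that every agent.add() runs before the webhook response is sent. The try/catch and the awaited calls are just my own guess at the structure (reusing the admin, Spotify and randomNumber variables from my code above), not anything Google support confirmed:

async function playAngrySong(agent) {
  // Wait for the database lookup to finish before continuing
  const snapshot = await admin.database().ref(`${randomNumber}`).once('value');
  const song = snapshot.child('song').val();
  const artist = snapshot.child('artist').val();
  agent.add(`I recommend ${song} by ${artist}`);
  try {
    // Wait for the Spotify call too, so the function does not return
    // (and Dialogflow does not send its response) until agent.add() has run
    const data = await Spotify.getAudioAnalysisForTrack('4AKUOaCRcoKTFnVI9LtsrN');
    const tempo = data.body.track.tempo;
    agent.add(`The track's tempo is ${tempo}, does this sound good or would you prefer something else?`);
  } catch (err) {
    console.error(err);
  }
}

Is this the right way to apply their suggestion, or did they mean something else?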