The new Voice Control API is super cool: it's the one that lets you speak to your iOS device to perform actions. (Video from Apple)
My only gripe is that finding information on it is kinda difficult. I don't see any WWDC videos on it, and I can't find any other documentation.
It's basically powered by accessibility labels. Since each accessibility element can really only have one accessibilityLabel, Voice Control is (from what I can see) limited to that single phrase per element.
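For anyone unfamiliar, here's roughly what I mean, just a minimal sketch (the button and label name are made up):

```swift
import UIKit

// Voice Control reads the accessibilityLabel, so a user can say
// "Tap Compose" to activate this button.
let composeButton = UIButton(type: .system)
composeButton.setImage(UIImage(systemName: "square.and.pencil"), for: .normal)

composeButton.isAccessibilityElement = true
// This one string appears to be the only phrase Voice Control matches.
composeButton.accessibilityLabel = "Compose"
```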
Is that accurate? Is there a way to provide users with more custom actions? For instance, there's the accessibility custom actions API, which lets VoiceOver users trigger extra actions on an element by swiping up/down, but those don't seem to be available in any way to Voice Control; it only seems to pick up the accessibilityLabel.
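This is the API I'm talking about, sketched on a hypothetical mail cell (the class and action names are just for illustration):

```swift
import UIKit

final class MessageCell: UITableViewCell {
    func configureAccessibility() {
        // VoiceOver users reach these by swiping up/down on the
        // focused cell; Voice Control doesn't appear to expose them.
        accessibilityCustomActions = [
            UIAccessibilityCustomAction(name: "Archive") { _ in
                // Perform the archive here; return true on success.
                return true
            },
            UIAccessibilityCustomAction(name: "Mark as Read") { _ in
                return true
            }
        ]
    }
}
```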
It's such a cool API, but with VoiceOver I can normally surface extra, more easily reachable actions through custom actions and the rotor, and I can't figure out how to do the same for a user who relies on Voice Control.
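For comparison, here's the rotor side of what I'd normally do, again just a sketch, assuming a hypothetical inbox screen with a precomputed list of unread elements:

```swift
import UIKit

final class InboxViewController: UIViewController {
    var unreadElements: [UIAccessibilityElement] = []

    func configureRotor() {
        // VoiceOver users pick "Unread" on the rotor and flick
        // up/down to jump between unread messages.
        let rotor = UIAccessibilityCustomRotor(name: "Unread") { [weak self] predicate in
            guard let self = self, !self.unreadElements.isEmpty else { return nil }
            // Find where we are now, then step forward or backward.
            var index = -1
            if let current = predicate.currentItem.targetElement as? UIAccessibilityElement,
               let found = self.unreadElements.firstIndex(of: current) {
                index = found
            }
            let next = predicate.searchDirection == .next ? index + 1 : index - 1
            guard self.unreadElements.indices.contains(next) else { return nil }
            return UIAccessibilityCustomRotorItemResult(
                targetElement: self.unreadElements[next],
                targetRange: nil
            )
        }
        accessibilityCustomRotors = [rotor]
    }
}
```

There's no equivalent I can find on the Voice Control side; if anyone knows of one, I'd love to hear it.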