How do I recognize and synthesize with Azure Speech on Android?

Last reviewed: 9/12/2023

HowTo Article ID: H032312

The information in this article applies to:

  • SpeechKit 12

Summary

The MCSSynthesizer and MCSRecognizer classes access the Microsoft Cognitive Services speech API, which provides Microsoft Azure Speech cloud resources to Android platforms.

More Information

Synthesizing and recognizing with Azure Speech using the SpeechKit classes MCSSynthesizer and MCSRecognizer is now available on Android. See the Android tabs in Knowledge Base article H032309.

A few code and configuration steps enable Azure Speech for speech recognition and speech synthesis in native Android apps and in cross-platform apps built with C++Builder and Delphi.

Native Android Apps

A new method, setActivity, enables the application to set the UI context so callback events run on the UI thread.

// For Azure Speech, set Activity for callbacks on UI thread
_Recognizer.setActivity(this);
_Synthesizer.setActivity(this);

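For context, a minimal Activity sketch follows. It assumes _Recognizer and _Synthesizer are instantiated per the SpeechKit class reference (instantiation and the SpeechKit imports are omitted here), and that onCreate is a reasonable place for the calls.

import android.os.Bundle;
import androidx.appcompat.app.AppCompatActivity;
// SpeechKit imports for MCSRecognizer and MCSSynthesizer go here (see the SpeechKit documentation)

public class MainActivity extends AppCompatActivity {
    private MCSRecognizer _Recognizer;     // created per the SpeechKit class reference
    private MCSSynthesizer _Synthesizer;   // created per the SpeechKit class reference

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // ... instantiate _Recognizer and _Synthesizer here ...

        // For Azure Speech, set Activity for callbacks on UI thread
        _Recognizer.setActivity(this);
        _Synthesizer.setActivity(this);
    }
}
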
C++Builder and Delphi cross-platform apps do not need to call this method because it is handled automatically.

Optionally, the SpeechKit Azure classes MCSSynthesizer and MCSRecognizer support the InitComplete event even though Azure has no equivalent event. This lets Android apps switch back and forth between speech APIs without conditionally handling the event depending on which API is in use. When using Azure, the event fires only if the setContext method is called; setContext itself is required only for Android Speech.
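
For illustration, the fragment below keeps a single initialization path across both APIs. Only setActivity and setContext are named in this article; the argument passed to setContext and the handler registration (not shown) are assumptions to verify against the SpeechKit class reference.

// Azure Speech path: MCSRecognizer sketch (InitComplete handler registration
// not shown; see the SpeechKit documentation).
_Recognizer.setActivity(this);  // run callback events on the UI thread
_Recognizer.setContext(this);   // optional under Azure: makes InitComplete fire so the
                                // handler shared with the Android Speech path runs unchanged
                                // (setContext is required only for Android Speech; the
                                // argument passed here is an assumption)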

Android Studio provides a simple way to add the Azure SDK files to the app project so that classes and deployment libraries bind correctly. See the Integrating SpeechKit/Android Java Applications documentation for the procedure to integrate the Azure SDK libraries directly from Maven.
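
As a rough sketch of that step, the dependency below uses the standard Azure Speech SDK Maven coordinate; treat the coordinate, repository, and version as assumptions to confirm against the SpeechKit documentation.

// App module build.gradle (Groovy DSL) - illustrative only
dependencies {
    // Replace <version> with the Azure Speech SDK version the SpeechKit 12
    // documentation specifies; the artifact is normally resolved from Maven Central.
    implementation 'com.microsoft.cognitiveservices.speech:client-sdk:<version>'
}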

Cross-platform C++Builder and Delphi Apps

For C++Builder and Delphi cross-platform apps to access Azure resources, the SDK files need to be added to the project. See Integrating SpeechKit/C++Builder Android Applications and Integrating SpeechKit/Delphi Android Applications for the steps to add the classes via the JAR library and the deployment files needed at runtime.

There are no additional considerations for using either speech API. Apps can recognize and synthesize with Azure Speech just as easily as they do with Android Speech.