Objective-C Applications
Develop iOS and Mac applications that speak and listen using Objective-C with Xcode.
The following sections describe the steps for integrating SpeechKit with Objective-C applications.
SpeechKit Objective-C Classes
SpeechKit includes Objective-C classes for developing applications that speak and listen.
To access the SpeechKit classes within your application, copy the following Chant SpeechKit classes to your application project:
- Program Files\Chant\SpeechKit 13\Objective-C\include\ChantShared.h,
- Program Files\Chant\SpeechKit 13\Objective-C\include\ChantShared.m,
- Program Files\Chant\SpeechKit 13\Objective-C\include\ChantSpeechKit.h, and
- Program Files\Chant\SpeechKit 13\Objective-C\include\ChantSpeechKit.m.
Import the SpeechKit classes in either your header or source file and optionally declare your objects as properties:
#import "ChantSpeechKit.h"
@property (strong, nonatomic) SPSpeechKit* speechKit;
@property (strong, nonatomic) SPChantRecognizer* recognizer;
@property (strong, nonatomic) SPChantSynthesizer* synthesizer;
Add the Speech.framework for speech recognition and/or add the AVFoundation.framework for speech synthesis.
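With modules enabled (the default for new Xcode projects), the Apple frameworks can also be imported directly in source; when automatic framework linking is on, @import links the framework as well. A minimal sketch:
@import Speech;        // speech recognition
@import AVFoundation;  // speech synthesis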
Object Instantiation
Instantiate SpeechKit and set the credentials. For speech recognition, instantiate a recognizer object, set its delegate, and define recognition event handlers. For speech synthesis, instantiate a synthesizer object, and optionally set its delegate and define synthesis event handlers.
_speechKit = [[SPSpeechKit alloc] init];
if (_speechKit != nil)
{
    // Set credentials
    [_speechKit setCredentials:@"Credentials"];

    // Speech recognition: create a recognizer and register for its events
    _recognizer = [_speechKit createChantRecognizer];
    if (_recognizer != nil)
    {
        [_recognizer setDelegate:(id<SPChantRecognizerDelegate>)self];
    }

    // Speech synthesis: create a synthesizer and register for its events
    _synthesizer = [_speechKit createChantSynthesizer];
    if (_synthesizer != nil)
    {
        [_synthesizer setDelegate:(id<SPChantSynthesizerDelegate>)self];
    }
}
Event Callbacks
Event callbacks are the mechanism by which a class object sends information back to the application, such as notification that speech recognition occurred, audio playback finished, or an error occurred.
For speech recognition, add the delegate protocol SPChantRecognizerDelegate to your interface declaration:
@interface ViewController : UIViewController <UIPickerViewDataSource, UIPickerViewDelegate, SPChantRecognizerDelegate>
Add the delegate protocol methods to your class only for the desired events, since protocol methods are optional in Objective-C.
...
-(void)recognitionDictation:(NSObject *)sender args:(SPRecognitionDictationEventArgs *)args
{
    // Append the recognized text to the text view's current contents
    NSString* newText = [NSString stringWithFormat:@"%@%@", [_textView1 text], [args text]];
    [_textView1 setText:newText];
}
...
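The sections above do not state which thread SpeechKit delivers delegate callbacks on. If they can arrive off the main thread (an assumption to verify against the Chant class reference), UIKit updates belong on the main queue; a hedged variant of the handler above:
-(void)recognitionDictation:(NSObject *)sender args:(SPRecognitionDictationEventArgs *)args
{
    // Assumption: the callback may arrive on a background thread, so hop
    // to the main queue before touching UIKit; harmless if already on it.
    dispatch_async(dispatch_get_main_queue(), ^{
        NSString* newText = [NSString stringWithFormat:@"%@%@", [self->_textView1 text], [args text]];
        [self->_textView1 setText:newText];
    });
}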
For speech synthesis, add the delegate protocol SPChantSynthesizerDelegate to your interface declaration:
@interface ViewController : UIViewController <UIPickerViewDataSource, UIPickerViewDelegate, SPChantSynthesizerDelegate>
Add the delegate protocol methods to your class only for the desired events, since protocol methods are optional in Objective-C.
...
-(void)audioDestStop:(NSObject*)sender args:(SPAudioEventArgs*)args
{
    // Playback finished; re-enable the Speak button
    _button1.enabled = YES;
}
-(void)rangeStart:(NSObject*)sender args:(SPRangeStartEventArgs*)args
{
    // Highlight the range of text currently being spoken
    [_textView1 setSelectedRange:NSMakeRange([args location], [args length])];
}
...
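For orientation on how these callbacks fit a typical flow: the action that starts speaking disables the button, and audioDestStop re-enables it when playback finishes. A sketch only; the speakString: selector below is a placeholder, not the documented Chant API, so substitute the actual speak method from the SPChantSynthesizer class reference.
- (IBAction)speakButtonPressed:(id)sender
{
    if (_synthesizer != nil)
    {
        // Disable the button until the audioDestStop callback re-enables it.
        _button1.enabled = NO;
        // Placeholder selector; use the actual SPChantSynthesizer speak method.
        [_synthesizer speakString:[_textView1 text]];
    }
}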
Permissions
Speech recognition requires the user to grant speech recognition permission and access to the microphone. The app's Info.plist must contain an NSSpeechRecognitionUsageDescription key and an NSMicrophoneUsageDescription key, each with a string value describing the usage.
Select the Info.plist file in the project and add two keys with usage description text:
- Add a key to the Information Property List by clicking the plus button.
- Select: Privacy - Speech Recognition Usage Description.
- Enter a string value description such as: speech recognition.
- Add a key to the Information Property List by clicking the plus button.
- Select: Privacy - Microphone Usage Description.
- Enter a string value description such as: mic for speech recognition.
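If you prefer to edit Info.plist as source (right-click the file and choose Open As > Source Code), the two entries look like this:
<key>NSSpeechRecognitionUsageDescription</key>
<string>speech recognition</string>
<key>NSMicrophoneUsageDescription</key>
<string>mic for speech recognition</string>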
macOS applications require an additional project setting to enable audio input. Select the project Signing & Capabilities tab. Select the Audio Input checkbox under App Sandbox/Hardware and under Hardened Runtime/Resource Access.
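If you edit the app's .entitlements file directly instead, both checkboxes map to the same com.apple.security.device.audio-input key (shown here with the sandbox key for a sandboxed app):
<key>com.apple.security.app-sandbox</key>
<true/>
<key>com.apple.security.device.audio-input</key>
<true/>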
Linking SpeechKit Static Library
Link the SpeechKit static library libChantSpeechKit.a to the application under the target's General settings tab: add it as a framework library, with the app as the target.
For iOS applications there are two libraries. One targets devices and the other targets the simulator. Copy the applicable library to the project:
- Program Files\Chant\SpeechKit 13\Objective-C\iOS\lib\libChantSpeechKit.a, or
- Program Files\Chant\SpeechKit 13\Objective-C\iOS\libSimulator\libChantSpeechKit.a.
For macOS applications there is only one library. Copy it to the project:
- Program Files\Chant\SpeechKit 13\Objective-C\macOS\lib\libChantSpeechKit.a.
Development and Deployment Checklist
When developing and deploying Objective-C applications, ensure you have a valid license, and bundle the correct Chant SpeechKit static library. Review the following checklist before developing and deploying your applications:
- Develop and deploy Objective-C applications to any system with a valid license from Chant. See the section License for more information about licensing Chant software.
- Build and link with the libChantSpeechKit.a library, Speech.framework for speech recognition, and AVFoundation.framework for speech synthesis.
Sample Projects
Objective-C sample projects are installed at the following location:
- Documents\Chant\SpeechKit 13\Objective-C.