Inherits from NSObject, BZEventsObserverDelegate
Declared in BZVoiceFlowController.swift

Overview

The BZVoiceFlowController class provides the main interface for the BZVoiceFlow framework. It exposes the methods that allow an application to execute configurable and runtime-adaptable speech-enabled interactions with its users. After an application creates an instance of BZVoiceFlowController and initializes its parameters, the application is ready to provide BZVoiceFlowController with a series of pre-configured structured data in JSON, referred to as Voiceflows, which, when processed by the BZVoiceFlow framework with support from the BZMedia framework, result in speech-enabled interactions between the application and its users.

Each Voiceflow comprises multiple connected Voiceflow Modules (VFMs) that provide the following capabilities, among others:
- Play audio.
- Record audio.
- Execute an Audio Dialog.
- Execute an Audio Listener.
- Execute Pause and Resume functionality during Voiceflow processing.
- Detect and handle audio session interruptions.
- Detect and handle application-initiated interruptions and alterations to Voiceflow processing.
- Detect and handle real-time Media and Voiceflow processing events with application notification.
- Adjust to runtime changes in Voiceflow parameter values.

A BZVoiceFlowController object is created as follows:

let bzVoiceFlowController = BZVoiceFlowController()


Before processing a Voiceflow, the application initializes the BZVoiceFlowController object, initializes the application audio session, initializes the media modules required by the application, and sets the URL locations of media resource files for playing audio, recording audio, performing custom speech recognition, and processing Voiceflows.

Example:

bzVoiceFlowController.initialize()
bzVoiceFlowController.initializeDefaultAudioSession()
bzVoiceFlowController.setMediaResourceLocation(fileCategory: .FC_PLAY_AUDIO, localURL: path1)
bzVoiceFlowController.setMediaResourceLocation(fileCategory: .FC_RECORD_AUDIO, localURL: path2)
bzVoiceFlowController.setMediaResourceLocation(fileCategory: .FC_SPEECH_RECOGNITION, localURL: path3)
bzVoiceFlowController.setMediaResourceLocation(fileCategory: .FC_VOICEFLOW, localURL: path4)

bzVoiceFlowController.loadAudioPromptModules(localFileURL: audioPromptModulesFile)
bzVoiceFlowController.loadVoiceflow(localFileURL: voiceFlowFile)
bzVoiceFlowController.runVoiceflow()


Note: A class initializing an instance of BZVoiceFlowController must conform to the BZVoiceFlowCallback protocol in order to receive Voiceflow callbacks from the BZVoiceFlow framework. The class must also conform to the BZEventsObserverDelegate protocol in order to receive real-time event notifications directly from the BZMedia framework.

Note: The Voiceflow processing callbacks from the BZVoiceFlow framework cover most of the events needed by an application, including a subset of the real-time event notifications from the BZMedia framework. To receive the complete set of real-time event notifications from the BZMedia framework, the application must implement the bzMedia_EventNotification function from BZEventsObserverDelegate.

Note: Voiceflow processing callbacks from the BZVoiceFlow framework and real-time event notifications from the BZMedia framework occur on the main thread of an application. The application should not tie up its main thread with complex and time-consuming tasks, so that these callbacks and events are received in a timely manner, and it should return from the callback and event notification methods quickly rather than using them to execute such tasks.

Example:

import BZVoiceFlow

public final class MyVoiceFlowClass: NSObject, BZVoiceFlowCallback, BZEventsObserverDelegate {

    var bzEventsObserver: BZEventsObserver? = nil
    var bzVoiceFlowController: BZVoiceFlowController? = nil

    func initializeBZVoiceFlowController() {
        bzVoiceFlowController = BZVoiceFlowController()
        _ = bzVoiceFlowController!.initialize()
        bzVoiceFlowController!.setVoiceFlowCallback(self)

        bzEventsObserver = BZEventsObserver()
        bzEventsObserver!.delegate = self
    }

    // Optional implementation of callback methods from BZVoiceFlowCallback protocol
    func BZVFC_PreModuleStart(vfModuleID: String) {
    }

    func BZVFC_PreModuleEnd(vfModuleID: String) {
    }

    func BZVFC_SRHypothesis(vfModuleID: String, srData: BZSRData) {
    }

    func BZVFC_MediaEvent(vfModuleID: String, mediaItemID: String, mediaFunction:BZNotifyMediaFunction, mediaEvent:BZNotifyMediaEvent, mediaEventData: [AnyHashable : Any]) {
    }

    func BZVFC_PlayAudioSegmentData(vfModuleID: String, promptID:String, audioSegmentType:BZAudioSegmentType, audioFile: String?, textString: String?, textFile: String?) {
    }

    func BZVFC_PermissionEvent(permissionEvent:BZNotifyMediaEvent) {
    }

    // Optional implementation of media event notification received directly from the BZMedia framework using the BZEventsObserverDelegate protocol
    func bzMedia_EventNotification(_ mediaJobID: String!, mediaItemID: String!, mediaFunction: BZNotifyMediaFunction, mediaEvent: BZNotifyMediaEvent, mediaEventData: [AnyHashable : Any]!) {
    }
}

Tasks

  • – setLogLevel

    Sets the log level of the BZVoiceFlow framework. This method can be invoked before initializing the framework.
    On Apple devices, unified logging is used. All logs are available in Apple’s Console application and are also visible in the Xcode output console when running the application from Xcode in debug mode.
    The following are the valid log levels:
    - “none”
    - “fault”
    - “error”
    - “default”
    - “info”
    - “debug”
    - “verbose”

    Default log level is: “default”.

    Sample implementation code:
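
    A minimal sketch; the log level string is taken from the list above, but the parameter label logLevel is an assumption since the method signature is not listed in this reference:

    let bzVoiceFlowController = BZVoiceFlowController()
    // Assumed label "logLevel"
    bzVoiceFlowController.setLogLevel(logLevel: "debug")
    _ = bzVoiceFlowController.initialize()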

  • – setMediaModulesLogLevels

    Sets the log levels of the BZMedia framework modules. This method can be invoked before initializing the BZVoiceFlow framework. BZMedia framework contains many media modules. Logging for each media module can be controlled independently.
    On Apple devices, unified logging is used. All logs are available in Apple’s Console application and are also visible in the Xcode output console when running the application from Xcode in debug mode.

    Here is a list of the media modules:
    - “MediaController”
    - “MediaPermissions”
    - “MediaEngineWrapper”
    - “MediaEngine”
    - “AudioStreamer”
    - “AudioSession”
    - “AudioPlayer”
    - “AudioRecorder”
    - “AudioFileRecorder”
    - “FliteSS”
    - “AppleSS”
    - “PocketSphinxSR”
    - “AppleSR”

    The following are the valid log levels:
    - “none”
    - “fault”
    - “error”
    - “default”
    - “info”
    - “debug”
    - “verbose”

    Default log level for all media modules is: “default”.

    Sample implementation code:
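
    A minimal sketch; the dictionary-based signature is an assumption since it is not listed in this reference:

    let bzVoiceFlowController = BZVoiceFlowController()
    // Assumed signature: setMediaModulesLogLevels(logLevels: [String:String]),
    // mapping media module names (listed above) to log levels
    bzVoiceFlowController.setMediaModulesLogLevels(logLevels: ["AudioPlayer": "debug", "AppleSR": "info"])
    _ = bzVoiceFlowController.initialize()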

  • – initialize

    Initializes the BZVoiceFlow framework. This method must be invoked after a BZVoiceFlowController object is created.

    Sample implementation code:
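
    let bzVoiceFlowController = BZVoiceFlowController()
    let bzResult = bzVoiceFlowController.initialize()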

  • – forBZResult

    Retrieves a textual representation of a BZ_RESULT constant value.

    Sample implementation code:
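
    let bzVoiceFlowController = BZVoiceFlowController()
    let bzResult = bzVoiceFlowController.initialize()
    let strResult = bzVoiceFlowController.forBZResult(bzResult: bzResult)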

  • – getLastError

    Gets the last BZ_RESULT error encountered. Use forBZResult method to get a textual representation of the error.

  • – setVoiceFlowCallback

    Sets the Voiceflow callback object implementing the BZVoiceFlowCallback protocol so that an application can receive callbacks from the BZVoiceFlow framework.

    A class initializing an instance of BZVoiceFlowController must conform to the BZVoiceFlowCallback protocol in order to receive Voiceflow processing callbacks from the BZVoiceFlow framework.

  • – initializeDefaultAudioSession

    Applies to iOS only. Initializes the audio session for the application with default audio session parameters.
    Default audio session parameters are:
    - Allow Bluetooth.
    - Duck others.
    - Default to device speaker.

    The default audio session parameters also set the audio session mode to ASM_Default as defined in BZAudioSessionMode.

    Please reference initializeAudioSession(jsonData) and initializeAudioSession(localFileURL) to initialize audio sessions with custom parameters.

    Sample implementation code:
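
    let bzVoiceFlowController = BZVoiceFlowController()
    var bzResult = bzVoiceFlowController.initialize()
    bzResult = bzVoiceFlowController.initializeDefaultAudioSession()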

  • – initializeAudioSession(jsonData)

    Applies to iOS only. Initializes the audio session for the application using audio session parameters obtained from a JSON structure which is passed to the method as a string.

    Here’s a sample JSON structure shown in the following sample implementation code:
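
    The following is condensed from the full example under initializeAudioSession(jsonData) in the Instance Methods section:

    let bzAudioSessionJSON:String = """
    {
        "categories": {
            "duckOthers": "no",
            "allowBluetooth": "yes",
            "defaultToSpeaker": "yes",
        },
        "mode": "default",
    }  """

    let bzVoiceFlowController = BZVoiceFlowController()
    var bzResult = bzVoiceFlowController.initialize()
    bzResult = bzVoiceFlowController.initializeAudioSession(jsonData: bzAudioSessionJSON)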

  • – initializeAudioSession(localFileURL)

    Applies to iOS only. Initializes the audio session for the application using audio session parameters obtained from a local file URL containing a JSON structure that defines the audio session parameters. The local file URL is passed to the method as a string.

    The JSON structure in the file should conform to a structure similar to the following:
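
    {
        "categories": {
            "mixWithOthers": "yes",
            "duckOthers": "no",
            "allowBluetooth": "yes",
            "defaultToSpeaker": "yes",
            "interruptSpokenAudioAndMixWithOthers": "no",
            "allowBluetoothA2DP": "no",
            "allowAirPlay": "no",
        },
        "mode": "default",
        "_comment": "Other values for mode are: voiceChat, gameChat and spokenAudio",
    }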

  • – initializeDefaultMediaModules

    Initializes the default media modules required to process the Voiceflows provided by an application.
    Media modules that will be initialized by default:
    - Audio player.
    - Audio recorder.
    - Apple speech synthesizer with “Samantha” as the default voice for US English.
    - Apple speech recognizer for US English.

    Please reference initializeMediaModules(jsonData) and initializeMediaModules(localFileURL) to initialize the modules needed to process the application Voiceflows.

    Sample implementation code:
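
    let bzVoiceFlowController = BZVoiceFlowController()
    var bzResult = bzVoiceFlowController.initialize()
    bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
    bzResult = bzVoiceFlowController.initializeDefaultMediaModules()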

  • – initializeMediaModules(jsonData)

    Initializes the media modules required to process the Voiceflows provided by an application. This method allows for initializing only the media modules needed by an application. The media modules are defined in a JSON structure that is passed to the method as a string. The following are the media modules that can be initialized by an application:
    - AudioPlayer
    - AudioRecorder
    - FliteSS
    - AppleSS
    - PocketSphinxSR
    - AppleSR

    Here’s a sample JSON structure shown in the following sample implementation code:
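
    The following is condensed from the full example under initializeMediaModules(jsonData) in the Instance Methods section:

    let bzMediaModulesJSON:String = """
    [
        {
            "module": "AudioPlayer",
            "enable": "yes",
        },
        {
            "module": "AppleSS",
            "enable": "yes",
            "voiceName": "Ava (Premium)",
        },
        {
            "module": "AppleSR",
            "enable": "yes",
            "languageCode": "en-us",
        },
    ]  """

    let bzVoiceFlowController = BZVoiceFlowController()
    var bzResult = bzVoiceFlowController.initialize()
    bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
    bzResult = bzVoiceFlowController.initializeMediaModules(jsonData: bzMediaModulesJSON)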

  • – initializeMediaModules(localFileURL)

    Initializes the media modules required to process the Voiceflows provided by an application. This method allows for initializing only the media modules needed by an application. The media modules are obtained from a local file URL containing a JSON structure. The local file URL is passed to the method as a string. The following are the media modules that can be initialized by an application:
    - AudioPlayer
    - AudioRecorder
    - FliteSS
    - AppleSS
    - PocketSphinxSR
    - AppleSR

    The JSON structure in the file should conform to a structure similar to the following:
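
    [
        {
            "module": "AudioPlayer",
            "enable": "yes",
        },
        {
            "module": "AudioRecorder",
            "enable": "yes",
        },
        {
            "module": "AppleSS",
            "enable": "yes",
            "voiceName": "Ava (Premium)",
        },
        {
            "module": "AppleSR",
            "enable": "yes",
            "languageCode": "en-us",
        },
    ]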

  • – getSSVoices

    Gets a list of all available speech synthesis voices associated with a speech synthesis engine.
    This method returns an array of speech synthesis voices. Each voice entry comprises the following voice property strings in order:

        - Name
        - Gender
        - LanguageCode
        - Quality
        - ID
    



    Sample implementation code:
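
    // After initialize(), initializeDefaultAudioSession() and initializeDefaultMediaModules()
    let AppleSSVoices = bzVoiceFlowController.getSSVoices(ssEngine: .SSE_APPLE)
    for SSVoiceEntry:[String] in AppleSSVoices {
        print("Name: \(SSVoiceEntry[0]), Gender: \(SSVoiceEntry[1]), LanguageCode: \(SSVoiceEntry[2]), Quality: \(SSVoiceEntry[3]), ID: \(SSVoiceEntry[4])")
    }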

  • – getActiveSSVoice

    Gets the active speech synthesis voice associated with a speech synthesis engine.
    This method returns a speech synthesis voice entry which comprises the following voice property strings in order:

        - Name
        - Gender
        - LanguageCode
        - Quality
        - ID
    



    Sample implementation code:
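
    // After initialize(), initializeDefaultAudioSession() and initializeDefaultMediaModules()
    let SSVoiceEntry = bzVoiceFlowController.getActiveSSVoice(ssEngine: .SSE_APPLE)
    print("Name: \(SSVoiceEntry[0]), Gender: \(SSVoiceEntry[1]), LanguageCode: \(SSVoiceEntry[2]), Quality: \(SSVoiceEntry[3]), ID: \(SSVoiceEntry[4])")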

  • – setMediaResourceLocation

    Sets the location of resources for access by BZVoiceFlow and BZMedia frameworks during Voiceflow processing.

    During Voiceflow processing, the frameworks access Voiceflow files, Audio Prompt Module list files, Audio-to-Text Map files, pre-recorded files for audio playback, speech recognition task files for customized speech recognition, locations to save recorded audio for various tasks, etc. This is an optional convenience method so that the application does not always have to specify the locations from which resource files are accessed or to which data and files are saved. An application can also specify or override the paths at the time it passes the files to the frameworks, or from within Voiceflow files.

  • – setLanguageCode

    Sets the language locale code for Voiceflow processing. The default language code is “en-US” for US English. When this method is called, the frameworks additionally treat this language code as the name of a possible folder under the localURL path set by calling setMediaResourceLocation; if that folder exists, media resource files are read from or saved to that path. If the path with the language code does not exist, only the localURL path is used.

    Calling this method may also cause the speech recognition language and the speech synthesis voice used during Voiceflow processing to execute with the newly selected language code. The getSSVoices method retrieves all available voices with their associated language codes. On iOS devices, additional voices and languages can be loaded in Settings. For more information about Apple speech products, please consult the Apple Developer website.
    Sample language codes: “bg-BG”, “ca-ES”, “cs-CZ”, “da-DK”, “de-DE”, “ar-001”, “es-ES”, “fr-CA”, etc.

    Sample implementation code:
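
    A minimal sketch; the parameter label languageCode is an assumption since the method signature is not listed in this reference:

    // Assumed label "languageCode"
    bzVoiceFlowController.setLanguageCode(languageCode: "fr-CA")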

  • – loadSSLexicon(lexiconDictionary)

    (Flite SS only). Loads a speech synthesis lexicon dictionary for custom pronunciation. This method allows the application to directly load a lexicon from a provided dictionary into a speech synthesizer before Voiceflows are processed.
    Custom pronunciation may be required to have a speech synthesizer correctly pronounce specific words it is not familiar with, such as foreign names and unknown or made-up words.

    Sample implementation code:
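
    A minimal sketch based on the signature shown under loadSSLexicon(lexiconDictionary) in the Instance Methods section; the .SSE_FLITE engine value is an assumption, as this reference only shows .SSE_APPLE explicitly:

    let lexDict:[String:String] = ["mounir": "m uw0 n iy1 r", "chalhoub": "sh ax0 l hh uw1 b"]
    // .SSE_FLITE is an assumed BZSSEngine value
    let bzResult = bzVoiceFlowController.loadSSLexicon(ssEngine: .SSE_FLITE, lexiconDictionary: lexDict)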

  • – loadSSLexicon(lexiconFile)

    (Flite SS only). Loads a speech synthesis lexicon file for custom pronunciation. This method allows the application to directly load a lexicon from a file into a speech synthesizer before Voiceflows are processed.
    Custom pronunciation may be required to have a speech synthesizer correctly pronounce specific words it is not familiar with, such as foreign names and unknown or made-up words.

  • – loadAudioToTextMap(jsonData)

    Loads a string in JSON format containing mappings between the names of audio files used for audio playback and the corresponding text. During Voiceflow processing, this can be used to automatically replace playback of a recorded audio file with playback of synthesized speech generated from the corresponding text. This guards against the unavailability of recorded audio files. It can also be used to test Voiceflow processing with synthesized text before substituting professionally recorded audio.

    The Audio-to-Text Map JSON string must conform to the JSON structure described by the Audio-to-Text JSON schema.

    Here’s a sample JSON structure shown in the following sample implementation code:
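
    The following is condensed from the full example under loadAudioToTextMap(jsonData) in the Instance Methods section:

    let bzAudioToTextMapJSON:String = """
    [
        {
            "audioFile": "Hello.wav",
            "textString": "Hello.",
            "textLanguageCode": "en-US",
        },
        {
            "audioFile": "Hi.wav",
            "textString": "Hi.",
        },
    ] """

    let bzResult = bzVoiceFlowController.loadAudioToTextMap(jsonData: bzAudioToTextMapJSON)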

  • – loadAudioToTextMap(localFileURL)

    Loads a file containing the mappings between the names of audio files used for audio playback and the corresponding text. During Voiceflow processing, this can be used to automatically replace playback of a recorded audio file with playback of synthesized speech generated from the corresponding text. This guards against the unavailability of recorded audio files. It can also be used to test Voiceflow processing with synthesized text before substituting professionally recorded audio.

    The content of an Audio-to-Text Map file must conform to the JSON structure described by the Audio-to-Text JSON schema.

    Here’s an example of the JSON content in an Audio-to-Text Map file:
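
    [
        {
            "audioFile": "Hello.wav",
            "textString": "Hello.",
            "textLanguageCode": "en-US",
        },
        {
            "audioFile": "HelloThere.wav",
            "textString": "Hello there.",
        },
    ]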

  • – requestMicrophonePermission

    Requests the permission for the application to use the microphone. This method must be invoked in order to be able to collect audio from the microphone.

    For macOS, this method always returns BZ_PERMISSION_GRANTED.

    For iOS, the first time this method is invoked, it presents the user with a request to approve microphone usage and returns BZ_PERMISSION_WAIT to the calling application. The result of the interaction with the user to approve or deny the application’s use of the microphone is posted to the application using the callback method BZVFC_PermissionEvent provided by the BZVoiceFlowCallback protocol.

    A class initializing an instance of BZVoiceFlowController must conform to the BZVoiceFlowCallback protocol and must implement the callback method BZVFC_PermissionEvent in order to receive the result of the user accepting or rejecting microphone usage.

    Sample implementation code:
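
    A minimal sketch; the result type of the call is not shown in this reference, and the constants in the comment are those named in the discussion above:

    // macOS always returns BZ_PERMISSION_GRANTED; on iOS the first call returns BZ_PERMISSION_WAIT
    // and the user's choice is delivered through the BZVFC_PermissionEvent callback.
    let permissionResult = bzVoiceFlowController.requestMicrophonePermission()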

  • – requestSpeechRecognitionPermission

    Requests the permission for the application to perform automatic speech recognition. This method must be invoked in order to be able to perform speech recognition on collected speech utterances.

    The first time this method is invoked, it presents the user with a request to approve speech recognition usage and returns BZ_PERMISSION_WAIT to the calling application. The result of the interaction with the user to approve or deny the application’s use of speech recognition is posted to the application using the callback method BZVFC_PermissionEvent provided by the BZVoiceFlowCallback protocol.

    A class initializing an instance of BZVoiceFlowController must conform to the BZVoiceFlowCallback protocol and must implement the callback method BZVFC_PermissionEvent in order to receive the result of the user accepting or rejecting speech recognition usage.

    Sample implementation code:
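
    A minimal sketch; the result type of the call is not shown in this reference:

    // The first call returns BZ_PERMISSION_WAIT; the user's choice is delivered
    // through the BZVFC_PermissionEvent callback.
    let permissionResult = bzVoiceFlowController.requestSpeechRecognitionPermission()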

  • – isDeviceSpeakerEnabled

    Checks if the device speaker is being used for audio playback instead of another speaker that may be present on the device, such as a phone speaker.

    Sample implementation code:
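
    if bzVoiceFlowController.isDeviceSpeakerEnabled() {
        // Device speaker is used for audio playback
    } else {
        // Device speaker is not used for audio playback
    }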

  • – enableDeviceSpeaker

    Switches audio playback between the device speaker and another speaker, such as a phone speaker.

    Sample implementation code:
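
    if bzVoiceFlowController.isDeviceSpeakerEnabled() {
        // Switch audio playback to use the phone speaker
        bzVoiceFlowController.enableDeviceSpeaker(enable: false)
    } else {
        // Switch audio playback to use the device speaker
        bzVoiceFlowController.enableDeviceSpeaker(enable: true)
    }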

  • – setVoiceflowRuntimeField

    Sets the runtime value of a field name during Voiceflow processing. During Voiceflow processing, the interpretation of the JSON structure detects whether the value of a JSON key (aka field name) is a dynamic value that needs to be retrieved from an internal runtime repository engine. An application sets this dynamic value, and Voiceflow processing accesses it when required. The application usually sets the runtime value for a field name during a Voiceflow callback to the application.

    In a Voiceflow, a JSON value for a field name is a dynamic value that can be set at runtime by an application if the value is made up of another shared key string surrounded by $[ and ]. For example, with "promptID": "$[Prompt_AIChat_WhatToChatAbout]", the value of the field name promptID is the value of the shared key Prompt_AIChat_WhatToChatAbout set by the application and retrieved by Voiceflow processing at runtime.

    Sample implementation code:
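
    // Typically called during a Voiceflow callback such as BZVFC_PreModuleStart.
    // Sets the shared key referenced by "promptID": "$[Prompt_AIChat_WhatToChatAbout]" above;
    // the prompt ID value used here is illustrative.
    bzVoiceFlowController.setVoiceflowRuntimeField(name: "Prompt_AIChat_WhatToChatAbout", value: "P_AIChat_ChatResponseText")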

  • – getVoiceflowRuntimeField

    Gets the runtime value of a shared field name during Voiceflow processing. Voiceflow processing has the ability to set a runtime value for a field name that the application can obtain at runtime, usually during a Voiceflow callback to the application.

    Voiceflow processing can set a value for a shared field name if the field name is surrounded by $[ and ]. For example, the following Voiceflow JSON structure sets values for two shared field names:

         "processParams": {
             "setValueForFieldNameCollection": [
                 {
                     "name": "$[AIChat_WhatToChatAbout]",
                     "value": "AIChat_Select_WhatToChatAbout",
                 },
                 {
                     "name": "$[CompletedPlayiText]",
                     "value": true,
                 },
             ],
         },
    

    The values of these shared field names are retrieved by the application during Voiceflow processing.

    Sample implementation code:
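
    func BZVFC_PreModuleEnd(vfModuleID: String) {
        if vfModuleID == "AIChat_Select_WhatToChatAbout" {
            let isCompletedPlayText = bzVoiceFlowController.getVoiceflowRuntimeField(name: "CompletedPlayText") as? Bool ?? false
        }
    }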

  • – resetVoiceflowRuntimeField

    Resets the value of a shared field name and, with that, removes the shared field name from the internal runtime repository engine.
    In a Voiceflow, a shared field name between a Voiceflow and an application is one that is surrounded by $[ and ].

    Sample implementation code:
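
    A minimal sketch; the parameter label name is an assumption, matching getVoiceflowRuntimeField(name:):

    // Assumed label "name"
    bzVoiceFlowController.resetVoiceflowRuntimeField(name: "CompletedPlayText")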

  • – setUserIntent

    Sets the user intent to a string value and passes it to Voiceflow processing. An application usually evaluates a speech recognition hypothesis to some user intent and submits that intent to Voiceflow processing to take action on. The user intent is an internal field named intent and is evaluated in a Voiceflow “audioDialog” or “audioListener” Voiceflow Module type as follows:

         "userIntentCollection": [
             {
                 "intent": "AIChatSubmitted",
                 "goTo": "AIChat_AudioDialog_AIChatWait",
             },
             {
                 "intent": "AudioListenerCommand",
                 "goTo": "AIChat_Process_AudioListenerCommand",
             },
             {
                 "intent": "TransitionToSleepMode",
                 "goTo": "AIChat_Process_SleepModeRequested",
             },
        ]
    



    Sample implementation code:
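
    A minimal sketch; the parameter label intent is an assumption since the method signature is not listed in this reference:

    // Typically called after evaluating a speech recognition hypothesis in BZVFC_SRHypothesis
    bzVoiceFlowController.setUserIntent(intent: "AIChatSubmitted")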

  • – resetUserIntent

    Resets the user intent to nil. Voiceflow processing automatically resets the user intent before processing Voiceflow “audioDialog” or “audioListener” Voiceflow Module types. An application can also reset the user intent by calling this method.

    Sample implementation code:
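
    A minimal sketch, assuming the method takes no arguments:

    bzVoiceFlowController.resetUserIntent()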

  • – loadAudioPromptModules(localFileURL)

    Loads a file containing configured Audio Prompt Modules that are accessed during Voiceflow processing to execute audio playback.

    The content of an Audio Prompt Modules file must conform to the JSON structure described by the Audio Prompt Module JSON schema.

    Here’s an example of the JSON content in an Audio Prompt Modules file:
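
    The following is condensed from the full example under loadAudioPromptModules(localFileURL) in the Instance Methods section:

    [
        {
            "id": "P_Okay",
            "style": "single",
            "audioFile": "Okay.wav",
            "textString": "Okay.",
        },
        {
            "id": "P_TurningOff",
            "style": "single",
            "audioFile": "TurningOff.wav",
            "textString": "Turning off.",
        },
    ]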

  • – loadAudioPromptModules(jsonData)

    Loads a string in JSON format containing configured Audio Prompt Modules that are accessed during Voiceflow processing to execute audio playback.

    The Audio Prompt Modules JSON string must conform to the JSON structure described by the Audio Prompt Module JSON schema.

    Here’s a sample JSON structure shown in the following sample implementation code:
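
    The following is condensed from the full example under loadAudioPromptModules(jsonData) in the Instance Methods section:

    let bzAudioPromptModulesJSON:String = """
    [
        {
            "id": "P_Okay",
            "style": "single",
            "audioFile": "Okay.wav",
            "textString": "Okay.",
        },
        {
            "id": "P_TurningOff",
            "style": "single",
            "audioFile": "TurningOff.wav",
            "textString": "Turning off.",
        },
    ]  """

    let bzResult = bzVoiceFlowController.loadAudioPromptModules(jsonData: bzAudioPromptModulesJSON)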

  • – loadVoiceflow(localFileURL)

    Loads a Voiceflow file containing configured Voiceflow Modules that are processed to generate a conversational interaction with an application user.

    The content of a Voiceflow file must conform to the JSON structure described by the Voiceflow JSON schema.

    Here’s an example of the JSON content in a Voiceflow file:
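
    A condensed sketch based on the audioListener Voiceflow Module shown under loadAudioListenerPrompt; the enclosing JSON array is an assumption about the file layout:

    [
        {
            "id": "VA_VFM_AIChat_AudioListener_ChatResponse",
            "type": "audioListener",
            "name": "VA_VFM_AIChat_AudioListener_ChatResponse",
            "audioListenerParams": {
                "promptID": "P_AIChat_ChatResponseText",
            },
            "goTo": {
                "DEFAULT": "VA_VFM_AIChat_Process_RentryModule",
            },
        },
    ]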

  • – loadVoiceflow(jsonData)

    Loads a string in JSON format containing configured Voiceflow Modules that are processed to generate a conversational interaction with an application user.

    The Voiceflow Modules JSON string must conform to the JSON structure described by the Voiceflow JSON schema.

    Here’s a sample JSON structure shown in the following sample implementation code:
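
    // bzVoiceFlowJSON is a String holding the Voiceflow Modules in JSON format,
    // structured as in the loadVoiceflow(localFileURL) example above
    let bzResult = bzVoiceFlowController.loadVoiceflow(jsonData: bzVoiceFlowJSON)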

  • – runVoiceflow

    Interprets and processes the loaded Voiceflow Modules, Audio Prompt Modules, and optional Audio-to-Text Maps to generate a conversational Voiceflow interaction between an application and its user. The loadAudioPromptModules and loadVoiceflow methods must be invoked successfully at least once before calling this method.

    This method processes the Voiceflow asynchronously and ends when Voiceflow processing reaches a VF_END Voiceflow Module, when it is stopped, or when it is interrupted. During Voiceflow processing, events with event data are posted to the application using the callback methods provided by the BZVoiceFlowCallback protocol.

    Sample implementation code:
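
    // bzAudioPromptModulesJSON and bzVoiceFlowJSON are JSON strings, as in the
    // loadAudioPromptModules(jsonData) and loadVoiceflow(jsonData) examples
    var bzResult = bzVoiceFlowController.loadAudioPromptModules(jsonData: bzAudioPromptModulesJSON)
    bzResult = bzVoiceFlowController.loadVoiceflow(jsonData: bzVoiceFlowJSON)
    bzResult = bzVoiceFlowController.runVoiceflow()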

  • – stopVoiceflow

    Stops and ends active Voiceflow processing. If successful, this method executes asynchronously. While Voiceflow processing is stopping, events with event data are posted to an application using the callback methods provided by the BZVoiceFlowCallback protocol.

    Sample implementation code:
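
    A minimal sketch, assuming the method takes no arguments:

    // Stop an active Voiceflow, for example when the user leaves the interaction
    let bzResult = bzVoiceFlowController.stopVoiceflow()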

  • – interruptVoiceflow

    Interrupts active Voiceflow processing and directs it to resume at another Voiceflow Module identified by its unique id. If successful, this method executes asynchronously. While Voiceflow processing is being interrupted, events with event data are posted to an application using the callback methods provided by the BZVoiceFlowCallback protocol.

    Sample implementation code:
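
    // Interrupt the active Voiceflow and resume processing at another Voiceflow Module
    let bzResult = bzVoiceFlowController.interruptVoiceflow(gotoVFMID: "AudioDialog_WhatToChatAbout")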

  • – resumeVoiceflow

    Instructs Voiceflow processing to resume after it was paused. Voiceflow processing pauses when it processes a Voiceflow Module of type pauseResume, and it remains paused until an application calls this method. If successful, this method executes asynchronously. While and after Voiceflow processing resumes, events with event data are posted to an application using the callback methods provided by the BZVoiceFlowCallback protocol.

    Sample implementation code:
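
    A minimal sketch, assuming the method takes no arguments:

    // Resume after a pauseResume Voiceflow Module paused Voiceflow processing
    let bzResult = bzVoiceFlowController.resumeVoiceflow()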

  • – loadAudioListenerPrompt

    Loads an Audio Prompt Module with a specific ID for Voiceflow processing while a Voiceflow Module of type audioListener is being processed. An audioListener Voiceflow Module performs continuous audio playback of large amounts of text and recorded audio files while listening in the background for user speech input or commands. audioListener type VFMs take only one Audio Prompt Module to process a single audio segment for audio playback. The text or audio for playback is set dynamically by an application using the setVoiceflowRuntimeField method to define the Audio Prompt Module ID (if configured to be set dynamically) and its audio playback parameters, which include the text or audio to play.

    When Voiceflow processing starts to process an audioListener type Voiceflow Module, it makes a BZVFC_PreModuleStart(vfModuleID: String) callback to the application. During this callback, the application sets the Audio Prompt Module ID (if configured to be set dynamically) and the audio or text to play in its audio segment by calling the setVoiceflowRuntimeField method. When the audio segment completes audio playback, Voiceflow processing makes a BZVFC_MediaEvent callback to the application with the media function set to BZNotifyMediaFunction.NMF_PLAY_AUDIO_SEGMENT and the media event set to BZNotifyMediaEvent.NME_ENDED. During this callback, the application calls loadAudioListenerPrompt to add an Audio Prompt Module, with its runtime-configured audio segment, to continue audio playback.

    The following is an example of an audioListener Voiceflow Module referencing an Audio Prompt Module with ID P_AIChat_ChatResponseText:

    {
         "id": "VA_VFM_AIChat_AudioListener_ChatResponse",
         "type": "audioListener",
         "name": "VA_VFM_AIChat_AudioListener_ChatResponse",
         "recognizeAudioParams": {
             "srEngine": "apple",
             "languageCode": "en-US",
             "appleSRSessionParams": {
                 "enablePartialResults": true,
             },
         },
         "audioListenerParams": {
             "promptID": "P_AIChat_ChatResponseText",
         },
         "recordAudioParams": {
             "vadParams": {
                 "enableVAD": true,
             },
             "stopRecordParams": {
                 "maxRecordLengthMs": 0,
                 "maxAudioLengthMs": 0,
                 "maxSpeechLengthMs": 0,
                 "maxPreSpeechSilenceLengthMs": 8000,
                 "maxPostSpeechSilenceLengthMs": 1000,
             },
         },
         "goTo": {
             "maxSRErrorCount": "VA_VFM_AIChat_PlayAudio_NotAbleToListen",
             "loadPromptFailure": "VA_VFM_AIChat_PlayAudio_CannotPlayPrompt",
             "internalFailure": "VA_VFM_AIChat_PlayAudio_HavingTechnicalIssueListening",
             "userIntentCollection": [
                 {
                     "intent": "AudioListenerCommand",
                     "goTo": "VA_VFM_AIChat_Process_AudioListenerCommand",
                 },
                 {
                     "intent": "TransitionToSleepMode",
                     "goTo": "VA_VFM_AIChat_Process_SleepModeRequested",
                 },
                 {
                     "intent": "TransitionToShutdownMode",
                     "goTo": "VA_VFM_AIChat_Process_ShutdownModeRequested",
                 },
             ],
             "DEFAULT": "VA_VFM_AIChat_Process_RentryModule",
         },
    },
    



    The Audio Prompt Module with ID P_AIChat_ChatResponseText is configured as follows to play text contained in the runtime field ChatResponseSectionText. Optionally, the runtime field ChatResponseParagraphStartPlayPosition is used to start audio playback from a specific position:

    {
         "id": "P_AIChat_ChatResponseText",
         "style": "single",
         "textString": "$[ChatResponseSectionText]",
         "audioPlaybackParams": {
             "startPosMsRuntime": "$[ChatResponseParagraphStartPlayPosition]",
         },
    },
    



Instance Methods

enableDeviceSpeaker

public func enableDeviceSpeaker(enable:Bool)

Discussion

Switches audio playback between the device speaker and another speaker, such as a phone speaker.

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult = bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()

if bzVoiceFlowController.isDeviceSpeakerEnabled() {
    // Switch audio playback to use the phone speaker
    bzVoiceFlowController.enableDeviceSpeaker(enable: false)
} else {
    // Switch audio playback to use the device speaker
    bzVoiceFlowController.enableDeviceSpeaker(enable: true)
}


Parameters

enable

Boolean. If true, route audio playback to the device speaker. If false, route audio playback to another speaker, such as a phone speaker.

Declared In

BZVoiceFlowController.swift

forBZResult

public func forBZResult(bzResult:BZ_RESULT) -> String

Discussion

Retrieves a textual representation of a BZ_RESULT constant value.

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
let bzResult = bzVoiceFlowController.initialize()
let strResult = bzVoiceFlowController.forBZResult(bzResult: bzResult)


Parameters

bzResult

The BZ_RESULT constant value.

Return Value

A String containing a textual representation of bzResult.

Declared In

BZVoiceFlowController.swift

getActiveSSVoice

public func getActiveSSVoice(ssEngine: BZSSEngine) -> [String]

Discussion

Gets the active speech synthesis voice associated with a speech synthesis engine.
This method returns a speech synthesis voice entry which comprises the following voice property strings in order:

    - Name
    - Gender
    - LanguageCode
    - Quality
    - ID



Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult = bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
bzResult = bzVoiceFlowController.initializeDefaultMediaModules()

let SSVoiceEntry = bzVoiceFlowController.getActiveSSVoice(ssEngine: .SSE_APPLE)
print("Name: \(SSVoiceEntry[0]), Gender: \(SSVoiceEntry[1]), LanguageCode: \(SSVoiceEntry[2]), Quality: \(SSVoiceEntry[3]), ID: \(SSVoiceEntry[4])")


Parameters

ssEngine

The speech synthesizer engine as defined in BZSSEngine.

Return Value

[String]. An active speech synthesis voice entry.

Declared In

BZVoiceFlowController.swift

getLastError

public func getLastError() -> BZ_RESULT

Discussion

Gets the last BZ_RESULT error encountered. Use forBZResult method to get a textual representation of the error.

Return Value

BZ_RESULT. The last BZ_RESULT error encountered.

Declared In

BZVoiceFlowController.swift

getSSVoices

public func getSSVoices(ssEngine: BZSSEngine) -> [[String]]

Discussion

Gets a list of all available speech synthesis voices associated with a speech synthesis engine.
This method returns an array of speech synthesis voices. Each voice entry comprises the following voice property strings in order:

    - Name
    - Gender
    - LanguageCode
    - Quality
    - ID



Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult = bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
bzResult = bzVoiceFlowController.initializeDefaultMediaModules()

let AppleSSVoices = bzVoiceFlowController.getSSVoices(ssEngine: .SSE_APPLE)
for SSVoiceEntry:[String] in AppleSSVoices {
    print("Name: \(SSVoiceEntry[0]), Gender: \(SSVoiceEntry[1]), LanguageCode: \(SSVoiceEntry[2]), Quality: \(SSVoiceEntry[3]), ID: \(SSVoiceEntry[4])")
}


Parameters

ssEngine

The speech synthesizer engine as defined in BZSSEngine.

Return Value

[[String]]. An array of speech synthesis voice entries.

Declared In

BZVoiceFlowController.swift

getVoiceflowRuntimeField

public func getVoiceflowRuntimeField(name:String) -> Any?

Discussion

Gets the runtime value of a shared field name during Voiceflow processing. Voiceflow processing has the ability to set a runtime value for a field name that the application can obtain at runtime, usually during a Voiceflow callback to the application.

Voiceflow processing can set a value for a shared field name if the field name is surrounded by $[ and ]. For example, the following Voiceflow JSON structure sets values for two shared field names:

     "processParams": {
         "setValueForFieldNameCollection": [
             {
                 "name": "$[AIChat_WhatToChatAbout]",
                 "value": "AIChat_Select_WhatToChatAbout",
             },
             {
                 "name": "$[CompletedPlayiText]",
                 "value": true,
             },
         ],
     },

The values of these shared field names are retrieved by the application during Voiceflow processing.

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult = bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
bzResult = bzVoiceFlowController.initializeDefaultMediaModules()
bzVoiceFlowController.setVoiceFlowCallback(self)

bzVoiceFlowController.loadAudioPromptModules(...)
bzVoiceFlowController.loadVoiceflow(...)
bzVoiceFlowController.runVoiceflow()

// Optional implementation of callback methods from BZVoiceFlowCallback protocol

func BZVFC_PreModuleEnd(vfModuleID: String) {
    if vfModuleID == "AIChat_Select_WhatToChatAbout" {
        let isCompletedPlayText = bzVoiceFlowController.getVoiceflowRuntimeField(name: "CompletedPlayText") as? Bool ?? false
    }
}


Parameters

name

The name of the shared field.

Return Value

Any?. The returned value can be cast to a concrete type such as String, Bool, or Int.

Declared In

BZVoiceFlowController.swift

initializeAudioSession(jsonData)

public func initializeAudioSession(jsonData:String) -> BZ_RESULT

Discussion

Applies to iOS only. Initializes the audio session for the application using audio session parameters obtained from a JSON structure which is passed to the method as a string.

Here’s a sample JSON structure shown in the following sample implementation code:

let bzAudioSessionJSON:String = """
 {
     "categories": {
         "mixWithOthers": "yes",
         "duckOthers": "no",
         "allowBluetooth": "yes",
         "defaultToSpeaker": "yes",
         "interruptSpokenAudioAndMixWithOthers": "no",
         "allowBluetoothA2DP": "no",
         "allowAirPlay": "no",
     },
     "mode": "default",
     "_comment": "Other values for mode are: voiceChat, gameChat and spokenAudio",
 }  """

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult = bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeAudioSession(jsonData: bzAudioSessionJSON)


Parameters

jsonData

The string containing the JSON structure that defines the audio session parameters.

Return Value

BZ_RESULT. BZ_SUCCESS if the call succeeds; otherwise a BZ_RESULT error value.

Declared In

BZVoiceFlowController.swift

initializeAudioSession(localFileURL)

public func initializeAudioSession(localFileURL:String) -> BZ_RESULT

Discussion

Applies to iOS only. Initializes the audio session for the application using audio session parameters obtained from a local file URL containing a JSON structure that defines the audio session parameters. The local file URL is passed to the method as a string.

The JSON structure in the file should conform to a structure similar to the following:

    {
     "categories": {
         "mixWithOthers": "yes",
         "duckOthers": "no",
         "allowBluetooth": "yes",
         "defaultToSpeaker": "yes",
         "interruptSpokenAudioAndMixWithOthers": "no",
         "allowBluetoothA2DP": "no",
         "allowAirPlay": "no",
     },
     "mode": "default",
     "_comment": "Other values for mode are: voiceChat, gameChat and spokenAudio",
    }



Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult = bzVoiceFlowController.initialize()

guard let path = Bundle.main.path(forResource: "MZMedia/Session/AudioSession", ofType: "json") else {
    if bzVoiceFlowController.initializeDefaultAudioSession() == .BZ_SUCCESS {
        // Initialized with the default audio session
    }
    else {
        // Error: Failed to initialize with the default audio session
    }
    return
}
if bzVoiceFlowController.initializeAudioSession(localFileURL: path) == .BZ_SUCCESS {
    print("Initialized with audio session param file ", path)
}
else {
    // check the error
}


Parameters

localFileURL

The file containing the JSON structure that defines the audio session parameters.

Return Value

BZ_RESULT. BZ_SUCCESS if the call succeeds; otherwise a BZ_RESULT error value.

Declared In

BZVoiceFlowController.swift

initializeDefaultAudioSession

public func initializeDefaultAudioSession() -> BZ_RESULT

Discussion

Applies to iOS only. Initializes the audio session for the application with default audio session parameters.
Default audio session parameters are:
- Allow Bluetooth.
- Duck others.
- Default to device speaker.

The default audio session parameters also set the audio session mode to ASM_Default as defined in BZAudioSessionMode.

Please reference initializeAudioSession(jsonData) and initializeAudioSession(localFileURL) to initialize audio sessions with custom parameters.

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult =  bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()


Return Value

BZ_RESULT. BZ_SUCCESS if the call succeeds; otherwise a BZ_RESULT error value.

Declared In

BZVoiceFlowController.swift

initializeDefaultMediaModules

public func initializeDefaultMediaModules() -> BZ_RESULT

Discussion

Initializes the default media modules required to process the Voiceflows provided by an application.
Media modules that will be initialized by default:
- Audio player.
- Audio recorder.
- Apple speech synthesizer with “Samantha” as the default voice for US English.
- Apple speech recognizer for US English.

Please reference initializeMediaModules(jsonData) and initializeMediaModules(localFileURL) to initialize the modules needed to process the application Voiceflows.

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult =  bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()

bzResult = bzVoiceFlowController.initializeDefaultMediaModules()


Declared In

BZVoiceFlowController.swift

initializeMediaModules(jsonData)

public func initializeMediaModules(jsonData:String) -> BZ_RESULT

Discussion

Initializes the media modules required to process the Voiceflows provided by an application. This method allows for initializing only the media modules needed by an application. The media modules are defined in a JSON structure that is passed to the method as a string. The following are the media modules that can be initialized by an application:
- AudioPlayer
- AudioRecorder
- FliteSS
- AppleSS
- PocketSphinxSR
- AppleSR

Here’s a sample JSON structure shown in the following sample implementation code:

let bzMediaModulesJSON:String = """
 [
     {
         "module": "AudioPlayer",
         "enable": "yes",
     },
     {
         "module": "AudioRecorder",
         "enable": "yes",
     },
     {
         "module": "FliteSS",
         "enable": "yes",
         "voiceName": "default",
     },
     {
         "module": "AppleSS",
         "enable": "yes",
         "voiceName": "Ava (Premium)",
     },
     {
         "module": "PocketSphinxSR",
         "enable": "yes",
     },
     {
         "module": "AppleSR",
         "enable": "yes",
         "languageCode": "en-us",
     },
 ]  """

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult = bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()

bzResult = bzVoiceFlowController.initializeMediaModules(jsonData: bzMediaModulesJSON)


Parameters

jsonData

The string containing the JSON structure that defines the media modules to initialize.

Return Value

BZ_RESULT. BZ_SUCCESS if the call succeeds; otherwise a BZ_RESULT error value.

Declared In

BZVoiceFlowController.swift

initializeMediaModules(localFileURL)

public func initializeMediaModules(localFileURL:String) -> BZ_RESULT

Discussion

Initializes the media modules required to process the Voiceflows provided by an application. This method allows for initializing only the media modules needed by an application. The media modules are obtained from a local file URL containing a JSON structure. The local file URL is passed to the method as a string. The following are the media modules that can be initialized by an application:
- AudioPlayer
- AudioRecorder
- FliteSS
- AppleSS
- PocketSphinxSR
- AppleSR

The JSON structure in the file should conform to a structure similar to the following:

    [
         {
             "module": "AudioPlayer",
             "enable": "yes",
         },
         {
             "module": "AudioRecorder",
             "enable": "yes",
         },
         {
             "module": "FliteSS",
             "enable": "yes",
             "voiceName": "default",
         },
         {
             "module": "AppleSS",
             "enable": "yes",
             "voiceName": "Ava (Premium)",
         },
         {
             "module": "PocketSphinxSR",
             "enable": "yes",
         },
         {
             "module": "AppleSR",
             "enable": "yes",
             "languageCode": "en-us",
         },
    ]



Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult = bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()

guard let path = Bundle.main.path(forResource: "MZMedia/Session/MediaModules", ofType: "json") else {
    if bzVoiceFlowController.initializeDefaultMediaModules() == .BZ_SUCCESS {
        // Initialized with the default media modules
    }
    else {
        // Error: Failed to initialize with the default media modules
    }
    return
}
if bzVoiceFlowController.initializeMediaModules(localFileURL: path) == .BZ_SUCCESS {
    print("Initialized with media modules param file ", path)
}
else {
    // check the error
}


Parameters

localFileURL

The file containing the JSON structure that defines the media modules to initialize.

Return Value

BZ_RESULT. BZ_SUCCESS if the call succeeds; otherwise a BZ_RESULT error value.

Declared In

BZVoiceFlowController.swift

initialize

public func initialize() -> BZ_RESULT

Discussion

Initializes the BZVoiceFlow framework. This method must be invoked after a BZVoiceFlowController object is created.

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
let bzResult = bzVoiceFlowController.initialize()


Return Value

BZ_RESULT. BZ_SUCCESS if the call succeeds; otherwise a BZ_RESULT error value.

Declared In

BZVoiceFlowController.swift

interruptVoiceflow

public func interruptVoiceflow(gotoVFMID:String) -> BZ_RESULT

Discussion

Interrupts active Voiceflow processing and directs it to resume at another Voiceflow Module identified by its unique id. If successful, this method executes asynchronously. While Voiceflow processing is being interrupted, events with event data are posted to an application using the callback methods provided by the BZVoiceFlowCallback protocol.

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult =  bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
bzResult = bzVoiceFlowController.initializeDefaultMediaModules()

// At some point an application executes the following methods
bzResult = bzVoiceFlowController.loadAudioPromptModules(jsonData: bzAudioPromptModulesJSON)
bzResult = bzVoiceFlowController.loadVoiceflow(jsonData: bzVoiceFlowJSON)
bzResult = bzVoiceFlowController.runVoiceflow()

// Later, the application decides to interrupt Voiceflow processing and instructs it to resume processing at another Voiceflow Module
bzResult = bzVoiceFlowController.interruptVoiceflow(gotoVFMID: "AudioDialog_WhatToChatAbout")



Parameters

gotoVFMID

The id of a Voiceflow Module configured in the current Voiceflow being processed.

Return Value

BZ_RESULT. BZ_SUCCESS if the call succeeds; otherwise a BZ_RESULT error value.

Declared In

BZVoiceFlowController.swift

isDeviceSpeakerEnabled

public func isDeviceSpeakerEnabled() -> Bool

Discussion

Checks if the device speaker is being used for audio playback instead of another speaker that may be present on the device, such as a phone speaker.

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult = bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()

if bzVoiceFlowController.isDeviceSpeakerEnabled() {
    // Device speaker is used for audio playback
} else {
    // Device speaker is not used for audio playback
}


Return Value

true if the device speaker is being used; false if another speaker, such as a phone speaker, is being used.

Declared In

BZVoiceFlowController.swift

loadAudioListenerPrompt

public func loadAudioListenerPrompt(promptID:String) -> BZ_RESULT

Discussion

Loads an Audio Prompt Module with a specific ID for Voiceflow processing while a Voiceflow Module of type audioListener is being processed. An audioListener Voiceflow Module performs continuous audio playback of large amounts of text and recorded audio files while listening in the background for user speech input or commands. audioListener type VFMs take only one Audio Prompt Module to process a single audio segment for audio playback. The text or audio for playback is set dynamically by an application using the setVoiceflowRuntimeField method to define the Audio Prompt Module ID (if configured to be set dynamically) and its audio playback parameters, which include the text or audio to play.

When Voiceflow processing starts to process an audioListener type Voiceflow Module, it makes a BZVFC_PreModuleStart(vfModuleID: String) callback to the application. During this callback, the application sets the Audio Prompt Module ID (if configured to be set dynamically) and the audio or text to play in its audio segment by calling the setVoiceflowRuntimeField method. When the audio segment completes audio playback, Voiceflow processing makes a BZVFC_MediaEvent callback to the application with the media function set to BZNotifyMediaFunction.NMF_PLAY_AUDIO_SEGMENT and the media event set to BZNotifyMediaEvent.NME_ENDED. During this callback, the application calls loadAudioListenerPrompt to add an Audio Prompt Module, with its runtime-configured audio segment, to continue audio playback.

The following is an example of an audioListener Voiceflow Module referencing an Audio Prompt Module with ID P_AIChat_ChatResponseText:

{
     "id": "VA_VFM_AIChat_AudioListener_ChatResponse",
     "type": "audioListener",
     "name": "VA_VFM_AIChat_AudioListener_ChatResponse",
     "recognizeAudioParams": {
         "srEngine": "apple",
         "languageCode": "en-US",
         "appleSRSessionParams": {
             "enablePartialResults": true,
         },
     },
     "audioListenerParams": {
         "promptID": "P_AIChat_ChatResponseText",
     },
     "recordAudioParams": {
         "vadParams": {
             "enableVAD": true,
         },
         "stopRecordParams": {
             "maxRecordLengthMs": 0,
             "maxAudioLengthMs": 0,
             "maxSpeechLengthMs": 0,
             "maxPreSpeechSilenceLengthMs": 8000,
             "maxPostSpeechSilenceLengthMs": 1000,
         },
     },
     "goTo": {
         "maxSRErrorCount": "VA_VFM_AIChat_PlayAudio_NotAbleToListen",
         "loadPromptFailure": "VA_VFM_AIChat_PlayAudio_CannotPlayPrompt",
         "internalFailure": "VA_VFM_AIChat_PlayAudio_HavingTechnicalIssueListening",
         "userIntentCollection": [
             {
                 "intent": "AudioListenerCommand",
                 "goTo": "VA_VFM_AIChat_Process_AudioListenerCommand",
             },
             {
                 "intent": "TransitionToSleepMode",
                 "goTo": "VA_VFM_AIChat_Process_SleepModeRequested",
             },
             {
                 "intent": "TransitionToShutdownMode",
                 "goTo": "VA_VFM_AIChat_Process_ShutdownModeRequested",
             },
         ],
         "DEFAULT": "VA_VFM_AIChat_Process_RentryModule",
     },
},



The Audio Prompt Module with ID P_AIChat_ChatResponseText is configured as follows to play text contained in the runtime field ChatResponseSectionText. Optionally, the runtime field ChatResponseParagraphStartPlayPosition is used to start audio playback from a specific position:

{
     "id": "P_AIChat_ChatResponseText",
     "style": "single",
     "textString": "$[ChatResponseSectionText]",
     "audioPlaybackParams": {
         "startPosMsRuntime": "$[ChatResponseParagraphStartPlayPosition]",
     },
},



With that, the following sample implementation keeps setting the runtime field ChatResponseSectionText to more text for audio playback each time the audio playback of the previous audio segment completes:

public func BZVFC_PreModuleStart(vfModuleID: String) {
    if vfModuleID == "VA_VFM_AIChat_AudioListener_ChatResponse" {
        bzVoiceFlowController.setVoiceflowRuntimeField(name: "ChatResponseSectionText", value: "Let's go ahead and start chatting.")

        bzVoiceFlowController.setVoiceflowRuntimeField(name: "ChatResponseParagraphStartPlayPosition", value: 0)
    }
}

public func BZVFC_MediaEvent(vfModuleID: String, mediaItemID: String, mediaFunction: BZNotifyMediaFunction, mediaEvent: BZNotifyMediaEvent, mediaEventData: [AnyHashable : Any]) {
    if vfModuleID == "VA_VFM_AIChat_AudioListener_ChatResponse" {
        if mediaItemID == "P_AIChat_ChatResponseText" {
            if mediaFunction == .NMF_PLAY_AUDIO_SEGMENT {
                if mediaEvent == .NME_ENDED {
                    bzVoiceFlowController.setVoiceflowRuntimeField(name: "ChatResponseSectionText", value: "let's just keep chatting")
                    _ = bzVoiceFlowController.loadAudioListenerPrompt(promptID: "P_AIChat_ChatResponseText")
                }
            }
        }
    }
}



Parameters

promptID

The ID of the Audio Prompt Module to pass to Voiceflow processing while an audioListener type Voiceflow Module is being processed.

Return Value

BZ_RESULT. BZ_SUCCESS if the call succeeds; otherwise a BZ_RESULT error value.

Declared In

BZVoiceFlowController.swift

loadAudioPromptModules(localFileURL)

public func loadAudioPromptModules(localFileURL:String) -> BZ_RESULT

Discussion

Loads a file containing configured Audio Prompt Modules that are accessed during Voiceflow processing to execute audio playback.

The content of an Audio Prompt Modules file must conform to the JSON structure described by the Audio Prompt Module JSON schema.

Here’s an example of the JSON content in an Audio Prompt Modules file:

[
     {
         "id": "P_Okay",
         "style": "single",
         "audioFile": "Okay.wav",
         "textString": "Okay.",
     },
     {
         "id": "P_Sure",
         "style": "single",
         "_audioFile": "Sure.wav",
         "textString": "Sure.",
     },
     {
         "id": "P_PreShutdown_Moderate",
         "style": "select",
         "promptCollection": [
             {
                 "promptID": "P_Okay",
             },
             {
                 "promptID": "P_Sure",
             },
         ]
     },
     {
         "id": "P_TurningOff",
         "style": "single",
         "audioFile": "TurningOff.wav",
         "textString": "Turning off.",
     },
]

Note: If an Audio Prompt Modules file is provided with an absolute path, that path is checked to verify the file exists. If the file is provided as just a file name or with a relative path, the media resource location for FC_VOICEFLOW (set using the setMediaResourceLocation method) and the language code (set using the setLanguageCode method) are used to construct an absolute path to the file and to verify that it exists. If a language code is set, a file found at a location whose path includes the language code is prioritized.

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult = bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
bzResult = bzVoiceFlowController.initializeDefaultMediaModules()

bzResult = bzVoiceFlowController.loadAudioPromptModules(localFileURL: "AudioPromptModules.json")


Parameters

localFileURL

The file containing the Audio Prompt Modules structured in JSON Format.

Return Value

BZ_RESULT. BZ_SUCCESS if the call succeeds; otherwise a BZ_RESULT error value.

Declared In

BZVoiceFlowController.swift

loadAudioPromptModules(jsonData)

public func loadAudioPromptModules(jsonData:String) -> BZ_RESULT

Discussion

Loads a string in JSON format containing configured Audio Prompt Modules that are accessed during Voiceflow processing to execute audio playback.

The Audio Prompt Modules JSON string must conform to the JSON structure described by the Audio Prompt Module JSON schema.

Here’s a sample JSON structure shown in the following sample implementation code:

 let bzAudioPromptModulesJSON:String = """

 [
     {
         "id": "P_Okay",
         "style": "single",
         "audioFile": "Okay.wav",
         "textString": "Okay.",
     },
     {
         "id": "P_Sure",
         "style": "single",
         "_audioFile": "Sure.wav",
         "textString": "Sure.",
     },
     {
         "id": "P_PreShutdown_Moderate",
         "style": "select",
         "promptCollection": [
             {
                 "promptID": "P_Okay",
             },
             {
                 "promptID": "P_Sure",
             },
         ]
     },
     {
         "id": "P_TurningOff",
         "style": "single",
         "audioFile": "TurningOff.wav",
         "textString": "Turning off.",
     },
 ]  """

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult =  bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
bzResult = bzVoiceFlowController.initializeDefaultMediaModules()

bzResult = bzVoiceFlowController.loadAudioPromptModules(jsonData: bzAudioPromptModulesJSON)



Parameters

jsonData

The string containing the Audio Prompt Modules structured in JSON format.

Return Value

BZ_RESULT. BZ_SUCCESS if the call succeeds; otherwise a BZ_RESULT error value.

Declared In

BZVoiceFlowController.swift

loadAudioToTextMap(jsonData)

public func loadAudioToTextMap(jsonData:String) -> BZ_RESULT

Discussion

Loads a string in JSON format containing mappings between the names of audio files used for audio playback and the corresponding text. During Voiceflow processing, this can be used to automatically replace playback of a recorded audio file with playback of synthesized speech generated from the corresponding text. This guards against the unavailability of recorded audio files. It can also be used to test Voiceflow processing with synthesized text before substituting professionally recorded audio.

The Audio-to-Text Map JSON string must conform to the JSON structure described by the Audio-to-Text JSON schema.

Here’s a sample JSON structure shown in the following sample implementation code:

let bzAudioToTextMapJSON:String = """
 [
     {
         "audioFile": "Hello.wav",
         "textString": "Hello.",
         "textLanguageCode": "en-US",
     },
     {
         "audioFile": "Bonjour.wav",
         "textString": "Bonjour.",
         "textLanguageCode": "fr-FR",
     },
     {
         "audioFile": "HelloHello.wav",
         "textString": "Hello Hello. I am your assistant.",
     },
     {
         "audioFile": "HelloThere.wav",
         "textString": "Hello there.",
     },
     {
         "audioFile": "Hi.wav",
         "textString": "Hi.",
     },
 ] """

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult =  bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
bzResult = bzVoiceFlowController.initializeDefaultMediaModules()

bzResult = bzVoiceFlowController.loadAudioToTextMap(jsonData: bzAudioToTextMapJSON)



Parameters

jsonData

The string containing the Audio-to-Text Map data structured in JSON format.

Return Value

BZ_RESULT.

Declared In

BZVoiceFlowController.swift

loadAudioToTextMap(localFileURL)

public func loadAudioToTextMap(localFileURL:String) -> BZ_RESULT

Discussion

Loads a file containing mappings between the names of audio files used for audio playback and their corresponding text. During Voiceflow processing, these mappings can be used to automatically replace playback of a recorded audio file with playback of synthesized speech generated from the corresponding text. This guards against the unavailability of recorded audio files. It can also be used to test Voiceflow processing with synthesized speech before substituting professionally recorded audio.

The content of an Audio-to-Text Map file must conform to the JSON structure described by the Audio-to-Text JSON schema.

Here’s an example of the JSON content in an Audio-to-Text Map file:

[
     {
         "audioFile": "Hello.wav",
         "textString": "Hello.",
         "textLanguageCode": "en-US",
     },
     {
         "audioFile": "Bonjour.wav",
         "textString": "Bonjour.",
         "textLanguageCode": "fr-FR",
     },
     {
         "audioFile": "HelloHello.wav",
         "textString": "Hello Hello. I am your assistant.",
     },
     {
         "audioFile": "HelloThere.wav",
         "textString": "Hello there.",
     },
     {
         "audioFile": "Hi.wav",
         "textString": "Hi.",
     },
]

Note: If an Audio-to-Text Map file is provided with an absolute path, that path is checked to verify the file exists. If the file is provided as just a file name or with a relative path, the media resource location for FC_VOICEFLOW (set using the method setMediaResourceLocation) and the language code (set using the method setLanguageCode) are used to construct an absolute path to the file and verify the file exists. If a language code is set, a file found at a location whose path includes the language code is prioritized.
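
The following is a minimal sketch, not framework API, illustrating the lookup order described in this note; the function resolveMediaFile and its parameters are hypothetical, and only FileManager is standard:

import Foundation

// Hypothetical sketch of the lookup order described in the note above.
func resolveMediaFile(_ file: String, resourceLocation: String, langCode: String?) -> String? {
    if file.hasPrefix("/") {
        // Absolute path: verified as-is.
        return FileManager.default.fileExists(atPath: file) ? file : nil
    }
    if let langCode = langCode {
        // A file under a language-code subfolder is prioritized.
        let langPath = resourceLocation + "/" + langCode + "/" + file
        if FileManager.default.fileExists(atPath: langPath) {
            return langPath
        }
    }
    // Fall back to the media resource location itself.
    let path = resourceLocation + "/" + file
    return FileManager.default.fileExists(atPath: path) ? path : nil
}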

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult = bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
bzResult = bzVoiceFlowController.initializeDefaultMediaModules()

bzResult = bzVoiceFlowController.loadAudioToTextMap(localFileURL: "AudioTextMap.json")


Parameters

localFileURL

The file containing the Audio-to-Text Map entries structured in JSON format.

Return Value

BZ_RESULT.

Declared In

BZVoiceFlowController.swift

loadSSLexicon(lexiconDictionary)

public func loadSSLexicon(ssEngine:BZSSEngine, lexiconDictionary:[String:String]) -> BZ_RESULT

Discussion

(Flite SS only.) Loads a speech synthesis lexicon dictionary for custom pronunciation. This method allows the application to load a lexicon directly from a provided dictionary into a speech synthesizer before Voiceflows are processed.
Custom pronunciation may be required to have a speech synthesizer correctly pronounce specific words the synthesizer is not familiar with, such as foreign names and unknown or made-up words.

Sample implementation code:

let lexDict:[String:String] = ["sleekit": "s l iy1 k ih0 t", "trochled": "t r ao1 k ax0 l d", "mounir": "m uw0 n iy1 r", "chalhoub": "sh ax0 l hh uw1 b"]

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult =  bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
bzResult = bzVoiceFlowController.initializeDefaultMediaModules()

bzResult = bzVoiceFlowController.loadSSLexicon(ssEngine: .SSE_FLITE, lexiconDictionary:lexDict)

Parameters

ssEngine

The speech synthesizer engine as defined in BZSSEngine.

lexiconDictionary

The speech synthesis lexicon dictionary containing words and their lexicons.

Return Value

BZ_RESULT.

Declared In

BZVoiceFlowController.swift

loadSSLexicon(lexiconFile)

public func loadSSLexicon(ssEngine:BZSSEngine, lexiconFile:String) -> BZ_RESULT

Discussion

(Flite SS only.) Loads a speech synthesis lexicon file for custom pronunciation. This method allows the application to load a lexicon directly from a file into a speech synthesizer before Voiceflows are processed.
Custom pronunciation may be required to have a speech synthesizer correctly pronounce specific words the synthesizer is not familiar with, such as foreign names and unknown or made-up words.

The entries in the lexicon file should conform to the following entry style:

    #
    # External Addenda lexicon
    #
    #  #Comment
    #  Blank lines are ignored
    #  "headword" [pos] : phone phone phone phone ...
    #
    #  phone *must* be in the phoneset for the lexicon, thus vowels must be
    #  appended with 0 or 1 stress values (if you have them in your language)
    #  head word should be quoted if it contains non-ascii or whitespace
    #
    
    sleekit : s l iy1 k ih0 t
    trochled : t r ao1 k ax0 l d
    mounir : m uw0 n iy1 r
    chalhoub : sh ax0 l hh uw1 b

Note: If a lexicon file is provided with an absolute path, that path is checked to verify the file exists. If the file is provided as just a file name or with a relative path, the media resource location for FC_SPEECH_SYNTHESIS (set using the method setMediaResourceLocation) and the language code (set using the method setLanguageCode) are used to construct an absolute path to the file and verify the file exists. If a language code is set, a file found at a location whose path includes the language code is prioritized.

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult =  bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
bzResult = bzVoiceFlowController.initializeDefaultMediaModules()

bzResult = bzVoiceFlowController.loadSSLexicon(ssEngine: .SSE_FLITE, lexiconFile:"Lexicon.lex")

Parameters

ssEngine

The speech synthesizer engine as defined in BZSSEngine.

lexiconFile

The speech synthesis lexicon file.

Return Value

BZ_RESULT.

Declared In

BZVoiceFlowController.swift

loadVoiceflow(localFileURL)

public func loadVoiceflow(localFileURL:String) -> BZ_RESULT

Discussion

Loads a Voiceflow file containing configured Voiceflow Modules that are processed to generate a conversational interaction with an application user.

The content of a Voiceflow file must conform to the JSON structure described by the Voiceflow JSON schema.

Here’s an example of the JSON content in a Voiceflow file:

[
     {
         "id": "VF_START",
         "type": "node",
         "name": "VA_VFM_Shutdown_Node_VF_START",
         "goTo": {
             "DEFAULT": "VA_VFM_Shutdown_Process_LoadPreStart",
         },
     },
     {
         "id": "VA_VFM_Shutdown_Process_LoadPreStart",
         "type": "process",
         "name": "VA_VFM_Shutdown_Process_LoadPreStart",
         "goTo": {
             "DEFAULT": "$[ShutdownModeFlowPlayStartModule]",
         },
     },
     {
         "id": "VA_VFM_Shutdown_PlayAudio_DefaultShutdown",
         "type": "playAudio",
         "name": "VA_VFM_Shutdown_PlayAudio_DefaultShutdown",
         "playAudioParams": {
             "ssEngine": "apple",
             "style": "combo",
             "promptCollection": [
                 {
                     "promptID": "P_TurningOffShuttingDown",
                 },
                 {
                     "promptID": "P_GoodBye",
                 },
             ],
         },
         "goTo": {
             "DEFAULT": "VA_VFM_Shutdown_Process_TransitionToShutdownMode",
         },
     },
     {
         "id": "VA_VFM_Shutdown_Process_TransitionToShutdownMode",
         "type": "process",
         "name": "VA_VFM_Shutdown_Process_TransitionToShutdownMode",
         "goTo": {
             "DEFAULT": "VF_END",
         },
     },
     {
         "id": "VF_END",
         "type": "node",
         "name": "VA_VFM_Shutdown_Node_VF_END",
         "goTo": {
             "DEFAULT": "",
         },
     },
]

Note: If a Voiceflow file is provided with an absolute path, that path is checked to verify the file exists. If the file is provided as just a file name or with a relative path, the media resource location for FC_VOICEFLOW (set using the method setMediaResourceLocation) and the language code (set using the method setLanguageCode) are used to construct an absolute path to the file and verify the file exists. If a language code is set, a file found at a location whose path includes the language code is prioritized.

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult =  bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
bzResult = bzVoiceFlowController.initializeDefaultMediaModules()

bzResult = bzVoiceFlowController.loadAudioPromptModules(jsonData: bzAudioPromptModulesJSON)
bzResult = bzVoiceFlowController.loadVoiceflow(localFileURL: "Voiceflow.json")


Parameters

localFileURL

The file containing the Voiceflow Modules structured in JSON Format.

Return Value

BZ_RESULT.

Declared In

BZVoiceFlowController.swift

loadVoiceflow(jsonData)

public func loadVoiceflow(jsonData:String) -> BZ_RESULT

Discussion

Loads a string in JSON format containing configured Voiceflow Modules that are processed to generate a conversational interaction with an application user.

The Voiceflow Modules JSON string must conform to the JSON structure described by the Voiceflow JSON schema.

Here’s a sample JSON structure, as used in the following sample implementation code:

 let bzVoiceFlowJSON:String = """

 [
     {
         "id": "VF_START",
         "type": "node",
         "name": "VA_VFM_Shutdown_Node_VF_START",
         "goTo": {
             "DEFAULT": "VA_VFM_Shutdown_Process_LoadPreStart",
         },
     },
     {
         "id": "VA_VFM_Shutdown_Process_LoadPreStart",
         "type": "process",
         "name": "VA_VFM_Shutdown_Process_LoadPreStart",
         "goTo": {
             "DEFAULT": "$[ShutdownModeFlowPlayStartModule]",
         },
     },
     {
         "id": "VA_VFM_Shutdown_PlayAudio_DefaultShutdown",
         "type": "playAudio",
         "name": "VA_VFM_Shutdown_PlayAudio_DefaultShutdown",
         "playAudioParams": {
             "ssEngine": "apple",
             "style": "combo",
             "promptCollection": [
                 {
                     "promptID": "P_TurningOffShuttingDown",
                 },
                 {
                     "promptID": "P_GoodBye",
                 },
             ],
         },
         "goTo": {
             "DEFAULT": "VA_VFM_Shutdown_Process_TransitionToShutdownMode",
         },
     },
     {
         "id": "VA_VFM_Shutdown_Process_TransitionToShutdownMode",
         "type": "process",
         "name": "VA_VFM_Shutdown_Process_TransitionToShutdownMode",
         "goTo": {
             "DEFAULT": "VF_END",
         },
     },
     {
         "id": "VF_END",
         "type": "node",
         "name": "VA_VFM_Shutdown_Node_VF_END",
         "goTo": {
             "DEFAULT": "",
         },
     },
 ]  """

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult =  bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
bzResult = bzVoiceFlowController.initializeDefaultMediaModules()

bzResult = bzVoiceFlowController.loadAudioPromptModules(jsonData: bzAudioPromptModulesJSON)
bzResult = bzVoiceFlowController.loadVoiceflow(jsonData: bzVoiceFlowJSON)



Parameters

jsonData

The string containing the Voiceflow Modules structured in JSON format.

Return Value

BZ_RESULT.

Declared In

BZVoiceFlowController.swift

requestMicrophonePermission

public func requestMicrophonePermission() -> BZ_RESULT

Discussion

Requests permission for the application to use the microphone. This method must be invoked in order to collect audio from the microphone.

For macOS, this method always returns BZ_PERMISSION_GRANTED.

For iOS, the first time this method is invoked, it presents the user with a request to approve microphone usage and returns BZ_PERMISSION_WAIT to the calling application. The result of the user approving or denying microphone usage is posted to the application using the callback method BZVFC_PermissionEvent provided by the BZVoiceFlowCallback protocol.

A class initializing an instance of BZVoiceFlowController must adopt the BZVoiceFlowCallback protocol and implement the callback method BZVFC_PermissionEvent in order to receive the result of the user accepting or rejecting microphone usage.

Sample implementation code:

import BZVoiceFlow_Framework

public final class MyVoiceFlowClass: NSObject, BZVoiceFlowCallback {

    var bzVoiceFlowController: BZVoiceFlowController? = nil

    func InitializeBZVoiceFlowController () {
        bzVoiceFlowController = BZVoiceFlowController()
        var bzResult = bzVoiceFlowController!.initialize()
        bzResult = bzVoiceFlowController!.initializeDefaultAudioSession()
        bzResult = bzVoiceFlowController!.initializeDefaultMediaModules()
        bzVoiceFlowController!.setVoiceFlowCallback(voiceFlowCallback: self)

        bzResult = bzVoiceFlowController!.requestMicrophonePermission()

        if bzResult == .BZ_PERMISSION_GRANTED {
            // Code here
        } else if bzResult == .BZ_PERMISSION_DENIED {
            // Code here
        } else if bzResult == .BZ_PERMISSION_WAIT {
            // Wait for BZVFC_PermissionEvent callback method for permission result
        }
    }

    // Optional implementation of callback methods from BZVoiceFlowCallback protocol
    func BZVFC_PermissionEvent(permissionEvent: BZNotifyMediaEvent) {
        if permissionEvent == .NME_MICROPHONE_PERMISSION_GRANTED {
            // Code here
        } else if permissionEvent == .NME_MICROPHONE_PERMISSION_DENIED {
            // Code here
        }
    }
}


Return Value

BZ_RESULT, such as BZ_PERMISSION_GRANTED, BZ_PERMISSION_DENIED, or BZ_PERMISSION_WAIT.

Declared In

BZVoiceFlowController.swift

requestSpeechRecognitionPermission

public func requestSpeechRecognitionPermission() -> BZ_RESULT

Discussion

Requests permission for the application to perform automatic speech recognition. This method must be invoked in order to perform speech recognition on collected speech utterances.

The first time this method is invoked, it presents the user with a request to approve speech recognition usage and returns BZ_PERMISSION_WAIT to the calling application. The result of the user approving or denying speech recognition usage is posted to the application using the callback method BZVFC_PermissionEvent provided by the BZVoiceFlowCallback protocol.

A class initializing an instance of BZVoiceFlowController must adopt the BZVoiceFlowCallback protocol and implement the callback method BZVFC_PermissionEvent in order to receive the result of the user accepting or rejecting speech recognition usage.

Sample implementation code:

import BZVoiceFlow_Framework

public final class MyVoiceFlowClass: NSObject, BZVoiceFlowCallback {

    var bzVoiceFlowController: BZVoiceFlowController? = nil

    func InitializeBZVoiceFlowController () {
        bzVoiceFlowController = BZVoiceFlowController()
        var bzResult = bzVoiceFlowController!.initialize()
        bzResult = bzVoiceFlowController!.initializeDefaultAudioSession()
        bzResult = bzVoiceFlowController!.initializeDefaultMediaModules()
        bzVoiceFlowController!.setVoiceFlowCallback(voiceFlowCallback: self)

        bzResult = bzVoiceFlowController!.requestSpeechRecognitionPermission()

        if bzResult == .BZ_PERMISSION_GRANTED {
            // Code here
        } else if bzResult == .BZ_PERMISSION_DENIED {
            // Code here
        } else if bzResult == .BZ_PERMISSION_RESTRICTED {
            // Code here
        } else if bzResult == .BZ_PERMISSION_WAIT {
            // Wait for BZVFC_PermissionEvent callback method for permission result
        }
    }

    // Optional implementation of callback methods from BZVoiceFlowCallback protocol
    func BZVFC_PermissionEvent(permissionEvent: BZNotifyMediaEvent) {
        if permissionEvent == .NME_SPEECHRECOGNIZER_PERMISSION_GRANTED {
            // Code here
        } else if permissionEvent == .NME_SPEECHRECOGNIZER_PERMISSION_DENIED {
            // Code here
        }
    }
}


Return Value

BZ_RESULT, such as BZ_PERMISSION_GRANTED, BZ_PERMISSION_DENIED, BZ_PERMISSION_RESTRICTED, or BZ_PERMISSION_WAIT.

Declared In

BZVoiceFlowController.swift

resetUserIntent

public func resetUserIntent() -> Bool

Discussion

Resets the user intent to nil. Voiceflow processing automatically resets the user intent before processing Voiceflow “audioDialog” or “audioListener” Voiceflow Module types. An application can also reset the user intent by calling this method.

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult = bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
bzResult = bzVoiceFlowController.initializeDefaultMediaModules()
bzVoiceFlowController.setVoiceFlowCallback(voiceFlowCallback: self)

bzVoiceFlowController.loadAudioPromptModules(...)
bzVoiceFlowController.loadVoiceflow(...)
bzVoiceFlowController.runVoiceflow()

// Optional implementation of callback methods from BZVoiceFlowCallback protocol
func BZVFC_PreModuleStart(vfModuleID: String) {
    if vfModuleID == "AIChat_AudioDialog_AIChat" {
        let bResult = bzVoiceFlowController.resetUserIntent()
    }
}


Return Value

Bool.

Declared In

BZVoiceFlowController.swift

resetVoiceflowRuntimeField

public func resetVoiceflowRuntimeField(name:String) -> Bool

Discussion

Resets the value of a shared field name and, with that, removes the shared field from the internal runtime repository engine.
In a Voiceflow, a shared field name between a Voiceflow and an application is one that is surrounded by $[ and ].
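
For illustration, a hypothetical Voiceflow Module could route through a shared field named NextModuleID as follows; the module ID and field name are illustrative only:

    {
        "id": "AIChat_Process_LoadNextModule",
        "type": "process",
        "name": "AIChat_Process_LoadNextModule",
        "goTo": {
            "DEFAULT": "$[NextModuleID]",
        },
    },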

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult = bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
bzResult = bzVoiceFlowController.initializeDefaultMediaModules()
bzVoiceFlowController.setVoiceFlowCallback(voiceFlowCallback: self)

bzVoiceFlowController.loadAudioPromptModules(...)
bzVoiceFlowController.loadVoiceflow(...)
bzVoiceFlowController.runVoiceflow()

// Optional implementation of callback methods from BZVoiceFlowCallback protocol
func BZVFC_PreModuleEnd(vfModuleID: String) {
    if vfModuleID == "AIChat_Select_WhatToChatAbout" {
        let isCompletedPlayText = bzVoiceFlowController.getVoiceflowRuntimeField("CompletedPlayText") as? Bool ?? false
        bzVoiceFlowController.resetVoiceflowRuntimeField(name: "CompletedPlayText")
    }
}

Parameters

name

The name of the shared field.

Return Value

Bool. true if successful; otherwise false.

Declared In

BZVoiceFlowController.swift

resumeVoiceflow

public func resumeVoiceflow() -> BZ_RESULT

Discussion

Instructs Voiceflow processing to resume after it was paused. Voiceflow processing pauses when it processes a Voiceflow Module of type pauseResume and remains paused until an application calls this method. If successful, this method executes asynchronously. While and after resuming Voiceflow processing, events with event data are posted to an application using the callback methods provided by the BZVoiceFlowCallback protocol.
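
For illustration, a hypothetical Voiceflow Module of type pauseResume could be configured as follows; the module IDs are illustrative, and the Voiceflow JSON schema remains the authoritative reference:

    {
        "id": "VA_VFM_Pause_WaitForApp",
        "type": "pauseResume",
        "name": "VA_VFM_Pause_WaitForApp",
        "goTo": {
            "DEFAULT": "VA_VFM_NextModule",
        },
    },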

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult =  bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
bzResult = bzVoiceFlowController.initializeDefaultMediaModules()

// At some point an application executes the following methods
bzResult = bzVoiceFlowController.loadAudioPromptModules(jsonData: bzAudioPromptModulesJSON)
bzResult = bzVoiceFlowController.loadVoiceflow(jsonData: bzVoiceFlowJSON)
bzResult = bzVoiceFlowController.runVoiceflow()

// Active Voiceflow processing pauses when it encounters a Voiceflow Module of type "pauseResume"

// Later, the application instructs Voiceflow processing to resume from the Voiceflow Module of type "pauseResume"
bzResult = bzVoiceFlowController.resumeVoiceflow()



Return Value

BZ_RESULT.

Declared In

BZVoiceFlowController.swift

runVoiceflow

public func runVoiceflow() -> BZ_RESULT

Discussion

Interprets and processes the loaded Voiceflow Modules, Audio Prompt Modules and optional Audio-to-Text Maps to generate a conversational Voiceflow interaction between an application and its user. loadAudioPromptModules and loadVoiceflow methods must be invoked successfully at least once before calling this method.

This method processes the Voiceflow asynchronously and ends when Voiceflow processing reaches a VF_END Voiceflow Module, when it is stopped, or when it is interrupted. During Voiceflow processing, events with event data are posted to the application using the callback methods provided by the BZVoiceFlowCallback protocol.

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult =  bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
bzResult = bzVoiceFlowController.initializeDefaultMediaModules()

// At some point an application executes the following methods
bzResult = bzVoiceFlowController.loadAudioPromptModules(jsonData: bzAudioPromptModulesJSON)
bzResult = bzVoiceFlowController.loadVoiceflow(jsonData: bzVoiceFlowJSON)
bzResult = bzVoiceFlowController.runVoiceflow()



Return Value

BZ_RESULT.

Declared In

BZVoiceFlowController.swift

setLanguageCode

public func setLanguageCode(langCode:String) -> BZ_RESULT

Discussion

Sets the language locale code for Voiceflow processing. The default language code is “en-US” for US English. When this method is called, the frameworks additionally treat this language code as a possible existing folder name under the localURL path set by calling the method setMediaResourceLocation; if that folder exists, media resource files are read from or saved to that path. If the path with the language code string does not exist, only the localURL path is used.

Calling this method may also cause the speech recognition language and the speech synthesis voice used during Voiceflow processing to execute with the newly selected language code. The method getSSVoices retrieves all available voices with associated language codes. On iOS devices, additional voices and languages can be loaded in Settings. For more information about Apple speech products, consult the Apple Developer website.
Sample language codes: “bg-BG”, “ca-ES”, “cs-CZ”, “da-DK”, “de-DE”, “ar-001”, “es-ES”, “fr-CA”, etc.
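
For illustration, assuming the media resource location for playing audio points to a folder named AudioPrompts, a hypothetical language-specific layout could look like this:

    AudioPrompts/
        Hello.wav
        fr-FR/
            Bonjour.wav

With the language code set to "fr-FR", a file found under AudioPrompts/fr-FR is prioritized; otherwise the file in AudioPrompts is used.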

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult = bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
bzResult = bzVoiceFlowController.initializeDefaultMediaModules()

bzResult = bzVoiceFlowController.setLanguageCode(langCode: "en-US")

// Changing the language code to French
bzResult = bzVoiceFlowController.setLanguageCode(langCode: "fr-FR")


Parameters

langCode

The language locale code.

Return Value

BZ_RESULT.

Declared In

BZVoiceFlowController.swift

setLogLevel

public func setLogLevel(logLevel:String?)

Discussion

Sets the log level of the BZVoiceFlow framework. This method can be invoked before initializing the framework.
On Apple devices, Unified Logging is utilized. All logs are available in Apple’s Console application, and all logs are also visible in the Xcode output console when running the application in Xcode in debug mode.
The following are the valid log levels:
- “none”
- “fault”
- “error”
- “default”
- “info”
- “debug”
- “verbose”

Default log level is: “default”.

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
bzVoiceFlowController.setLogLevel(logLevel: "debug")
bzVoiceFlowController.initialize()


Parameters

logLevel

The log level.

Declared In

BZVoiceFlowController.swift

setMediaModulesLogLevels

public func setMediaModulesLogLevels(logLevels:[String:String?])

Discussion

Sets the log levels of the BZMedia framework modules. This method can be invoked before initializing the BZVoiceFlow framework. BZMedia framework contains many media modules. Logging for each media module can be controlled independently.
On Apple devices, Unified Logging is utilized. All logs are available in Apple’s Console application, and all logs are also visible in the Xcode output console when running the application in Xcode in debug mode.

Here is a list of the media modules:
- “MediaController”
- “MediaPermissions”
- “MediaEngineWrapper”
- “MediaEngine”
- “AudioStreamer”
- “AudioSession”
- “AudioPlayer”
- “AudioRecorder”
- “AudioFileRecorder”
- “FliteSS”
- “AppleSS”
- “PocketSphinxSR”
- “AppleSR”

The following are the valid log levels:
- “none”
- “fault”
- “error”
- “default”
- “info”
- “debug”
- “verbose”

Default log level for all media modules is: “default”.

Sample implementation code:

let logLevels:[String:String?] = ["MediaController":"debug", "AudioPlayer":"verbose", "AppleSS":"error", "AudioStreamer":"none"]

let bzVoiceFlowController = BZVoiceFlowController()
bzVoiceFlowController.setLogLevel(logLevel: "debug")
bzVoiceFlowController.setMediaModulesLogLevels(logLevels: logLevels)
bzVoiceFlowController.initialize()


Parameters

logLevels

A dictionary of key-value pairs, where the key is the media module name and the value is the log level.

Declared In

BZVoiceFlowController.swift

setMediaResourceLocation

public func setMediaResourceLocation(fileCategory:BZFileCategory, localURL:String) -> BZ_RESULT

Discussion

Sets the location of resources for access by BZVoiceFlow and BZMedia frameworks during Voiceflow processing.

During Voiceflow processing, the frameworks access Voiceflow files, Audio Prompt Module list files, Audio-to-Text Map files, pre-recorded files for audio playback, speech recognition task files for customized speech recognition, locations to save recorded audio for various tasks, etc. This is an optional convenience method that removes the need to always specify the locations from which to access resource files or to which to save data and files. An application can also specify or override the paths at the time it passes the files to the frameworks or from Voiceflow files.

Note: If a LanguageCode string is set using the setLanguageCode method, the frameworks additionally treat this string as an additional folder name under localURL; if that folder exists, files are read from or saved to it. If the folder with the LanguageCode string does not exist, only localURL is used.

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult = bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()

// the following assumes that `MZMedia/AudioPrompts` is a valid folder in the application bundle containing audio files to be processed for audio playback.

bzResult = bzVoiceFlowController.setMediaResourceLocation(fileCategory: .FC_PLAY_AUDIO, localURL: Bundle.main.path(forResource: "MZMedia/AudioPrompts", ofType: "")!)


// the following assumes that `MZMedia/AudioText` is a valid folder in the application bundle containing text files to be processed for audio playback using speech synthesis.

bzResult = bzVoiceFlowController.setMediaResourceLocation(fileCategory: .FC_PLAY_TEXT, localURL: Bundle.main.path(forResource: "MZMedia/AudioText", ofType: "")!)


// the following assumes that `/Users/username/Data/RecordedAudio` is a valid folder for storing files containing recorded audio.

bzResult = bzVoiceFlowController.setMediaResourceLocation(fileCategory: .FC_RECORD_AUDIO, localURL: "/Users/username/Data/RecordedAudio")


// the following assumes that `MZMedia/SR` is a valid folder in the application bundle containing speech recognition task files, contextual phrase files, custom dictionary files, etc.

bzResult = bzVoiceFlowController.setMediaResourceLocation(fileCategory: .FC_SPEECH_RECOGNITION, localURL: Bundle.main.path(forResource: "MZMedia/SR", ofType: "")!)


// the following assumes that `MZMedia/SS` is a valid folder in the application bundle containing speech synthesis resource files to be used for customized speech synthesis.

bzResult = bzVoiceFlowController.setMediaResourceLocation(fileCategory: .FC_SPEECH_SYNTHESIS, localURL: Bundle.main.path(forResource: "MZMedia/SS", ofType: "")!)

// the following assumes that `MZMedia/VoiceFlows` is a valid folder in the application bundle containing the application Voiceflow files.

bzResult = bzVoiceFlowController.setMediaResourceLocation(fileCategory: .FC_VOICEFLOW, localURL: Bundle.main.path(forResource: "MZMedia/VoiceFlows", ofType: "")!)


Parameters

fileCategory

The file resource category as defined in BZFileCategory.

localURL

The local path URL that specifies the location of where files can be read from or where files can be saved to.

Return Value

BZ_RESULT.

Declared In

BZVoiceFlowController.swift

setUserIntent

public func setUserIntent(userIntent:String) -> Bool

Discussion

Sets the user intent to a string value and passes it to Voiceflow processing. An application usually evaluates a speech recognition hypothesis to some user intent, and submits that user intent to Voiceflow processing to act on. The user intent is an internal field named intent and is evaluated in a Voiceflow “audioDialog” or “audioListener” Voiceflow Module type as follows:

     "userIntentCollection": [
         {
             "intent": "AIChatSubmitted",
             "goTo": "AIChat_AudioDialog_AIChatWait",
         },
         {
             "intent": "AudioListenerCommand",
             "goTo": "AIChat_Process_AudioListenerCommand",
         },
         {
             "intent": "TransitionToSleepMode",
             "goTo": "AIChat_Process_SleepModeRequested",
         },
    ]



Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult = bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
bzResult = bzVoiceFlowController.initializeDefaultMediaModules()
bzVoiceFlowController.setVoiceFlowCallback(voiceFlowCallback: self)

bzVoiceFlowController.loadAudioPromptModules(...)
bzVoiceFlowController.loadVoiceflow(...)
bzVoiceFlowController.runVoiceflow()

// Optional implementation of callback methods from BZVoiceFlowCallback protocol

func BZVFC_SRHypothesis(vfModuleID: String, srData: BZSRData) {
    if srData.srHypothesis != nil && !srData.srHypothesis!.isEmpty {
        if vfModuleID == "AIChat_AudioDialog_AIChat" && srData.srHypothesis!.caseInsensitiveCompare("go to sleep") == .orderedSame {
            let bResult = bzVoiceFlowController.setUserIntent(userIntent: "TransitionToSleepMode")
        }
    }
}


Parameters

userIntent

The intent of the user derived from a speech recognition hypothesis.

Return Value

Bool.

Declared In

BZVoiceFlowController.swift

setVoiceFlowCallback

public func setVoiceFlowCallback(voiceFlowCallback:BZVoiceFlowCallback) -> Bool

Discussion

Sets the Voiceflow callback object that implements the BZVoiceFlowCallback protocol so an application can receive callbacks from the BZVoiceFlow framework.

A class initializing an instance of BZVoiceFlowController must adopt the BZVoiceFlowCallback protocol in order to receive Voiceflow processing callbacks from the BZVoiceFlow framework.

Note: Voiceflow processing callbacks from the BZVoiceFlow framework occur on the main thread of an application. The application should be careful not to tie up its main thread with complex and time-consuming tasks so these callbacks and events are received in a timely manner. The application should also return from the callback methods quickly, without leveraging these methods to execute complex and time-consuming tasks.
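
For example, a callback can return quickly by handing time-consuming work off the main thread. The following is a minimal sketch using the standard Dispatch framework; the processing placeholder is an assumption:

func BZVFC_SRHypothesis(vfModuleID: String, srData: BZSRData) {
    // Return from the callback quickly; defer heavy work off the main thread.
    DispatchQueue.global(qos: .userInitiated).async {
        // Time-consuming processing of srData goes here.
    }
}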

Sample implementation code:

public final class MyVoiceFlowClass: NSObject, BZVoiceFlowCallback {

    var bzVoiceFlowController: BZVoiceFlowController? = nil

    func InitializeBZVoiceFlowController () {
        bzVoiceFlowController = BZVoiceFlowController()
        var bzResult = bzVoiceFlowController!.initialize()
        bzVoiceFlowController!.setVoiceFlowCallback(voiceFlowCallback: self)
    }

    // Optional implementation of callback methods from BZVoiceFlowCallback protocol
    func BZVFC_PreModuleStart(vfModuleID: String) {
    }

    func BZVFC_PreModuleEnd(vfModuleID: String) {
    }

    func BZVFC_SRHypothesis(vfModuleID: String, srData: BZSRData) {
    }

    func BZVFC_MediaEvent(vfModuleID: String, mediaItemID: String, mediaFunction:BZNotifyMediaFunction, mediaEvent:BZNotifyMediaEvent, mediaEventData: [AnyHashable : Any]) {
    }

    func BZVFC_PlayAudioSegmentData(vfModuleID: String, promptID:String, audioSegmentType:BZAudioSegmentType, audioFile: String?, textString: String?, textFile: String?) {
    }

    func BZVFC_PermissionEvent(permissionEvent:BZNotifyMediaEvent) {
    }
}

Parameters

voiceFlowCallback

Usually set to self when the implementing class adopts the BZVoiceFlowCallback protocol.

Return Value

Bool. false if BZVoiceFlowController is not initialized; otherwise true.

Declared In

BZVoiceFlowController.swift

setVoiceflowRuntimeField

public func setVoiceflowRuntimeField(name:String, value:Any) -> Bool

Discussion

Sets the runtime value of a field name during Voiceflow processing. During Voiceflow processing, the interpretation of the JSON structure detects whether the value of a JSON key (aka field name) is a dynamic value that needs to be retrieved from an internal runtime repository engine. An application sets this dynamic value, and Voiceflow processing accesses it when required. The application usually sets the runtime value for a field name during a Voiceflow callback to the application.

In a Voiceflow, a JSON value for a field name is a dynamic value that can be set at runtime by an application if the value is made up of another shared key string surrounded by $[ and ]. For example, with "promptID": "$[Prompt_AIChat_WhatToChatAbout]", the value of the field name promptID is the value of the shared key Prompt_AIChat_WhatToChatAbout set by the application and retrieved by Voiceflow processing at runtime.
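
For illustration, a hypothetical playAudio Voiceflow Module could reference a shared prompt field that the application sets at runtime; the IDs and values below are illustrative only:

    "playAudioParams": {
        "style": "single",
        "promptCollection": [
            {
                "promptID": "$[Prompt_AIChat_WhatToChatAbout]",
            },
        ],
    },

// The application supplies the value before the module is processed.
bzVoiceFlowController.setVoiceflowRuntimeField(name: "Prompt_AIChat_WhatToChatAbout", value: "P_WhatToChatAbout_Casual")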

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult = bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
bzResult = bzVoiceFlowController.initializeDefaultMediaModules()
bzVoiceFlowController.setVoiceFlowCallback(voiceFlowCallback: self)

bzVoiceFlowController.loadAudioPromptModules(...)
bzVoiceFlowController.loadVoiceflow(...)
bzVoiceFlowController.runVoiceflow()

// Optional implementation of callback methods from BZVoiceFlowCallback protocol

func BZVFC_SRHypothesis(vfModuleID: String, srData: BZSRData) {
    if srData.srHypothesis != nil && !srData.srHypothesis!.isEmpty {
        if vfModuleID == "AIChat_AudioDialog_AIChat" {
            let bResult = bzVoiceFlowController.setVoiceflowRuntimeField(name: "ChatResponseText", value: "Thank you for your response")
        }
    }
}


Parameters

name

The name of the shared field.

value

The value of the field. The value must align with a value format used in JSON, for example, string, Boolean, or integer.

Return Value

Bool. true if successful; otherwise false.

Declared In

BZVoiceFlowController.swift

stopVoiceflow

public func stopVoiceflow() -> BZ_RESULT

Discussion

Stops and ends active Voiceflow processing. If successful, this method executes asynchronously. While stopping Voiceflow processing, events with event data are posted to an application using the callback methods provided by the BZVoiceFlowCallback protocol.

Sample implementation code:

let bzVoiceFlowController = BZVoiceFlowController()
var bzResult =  bzVoiceFlowController.initialize()
bzResult = bzVoiceFlowController.initializeDefaultAudioSession()
bzResult = bzVoiceFlowController.initializeDefaultMediaModules()

// At some point an application executes the following methods
bzResult = bzVoiceFlowController.loadAudioPromptModules(jsonData: bzAudioPromptModulesJSON)
bzResult = bzVoiceFlowController.loadVoiceflow(jsonData: bzVoiceFlowJSON)
bzResult = bzVoiceFlowController.runVoiceflow()

// Later, the application decides to stop Voiceflow processing before it completes on its own and calls the following method
bzResult = bzVoiceFlowController.stopVoiceflow()



Return Value

BZ_RESULT.

Declared In

BZVoiceFlowController.swift