enableAudioProcessor method
@detail api
@author gongzhengduo
@brief Enable audio frames callback for custom processing and set the format for the specified type of audio frames.
@param method The types of audio frames. See ByteRTCAudioFrameMethod{@link #ByteRTCAudioFrameMethod}. Set this parameter to a combination of values to process multiple types of audio at once.
Depending on the value, you will receive the corresponding callback:
- For locally captured audio, you will receive onProcessRecordAudioFrame:{@link #ByteRTCAudioFrameProcessor#onProcessRecordAudioFrame}.
- For mixed remote audio, you will receive onProcessPlayBackAudioFrame:{@link #ByteRTCAudioFrameProcessor#onProcessPlayBackAudioFrame}.
- For audio from remote users, you will receive onProcessRemoteUserAudioFrame:info:audioFrame:{@link #ByteRTCAudioFrameProcessor#onProcessRemoteUserAudioFrame:info:audioFrame}.
- For SDK-level in-ear monitoring audio, you will receive onProcessEarMonitorAudioFrame:{@link #ByteRTCAudioFrameProcessor#onProcessEarMonitorAudioFrame} (iOS only).
- For shared-screen audio, you will receive onProcessScreenAudioFrame:{@link #ByteRTCAudioFrameProcessor#onProcessScreenAudioFrame}.
@param format The format of the returned audio frame. See ByteRTCAudioFormat{@link #ByteRTCAudioFormat}.
@return
- 0: Success.
- < 0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for details.
@note
- Before calling this API, call registerAudioProcessor:{@link #ByteRTCEngine#registerAudioProcessor} to register a processor.
- To disable custom audio processing, call disableAudioProcessor:{@link #ByteRTCEngine#disableAudioProcessor}.
Implementation
FutureOr<int> enableAudioProcessor(
    ByteRTCAudioFrameMethod method, ByteRTCAudioFormat format) async {
  // Bridge to the native 'enableAudioProcessor:audioFormat:' selector,
  // passing the raw enum value and the format object.
  return await nativeCall(
      'enableAudioProcessor:audioFormat:', [method.$value, format]);
}
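
A minimal usage sketch of the call sequence described in the notes above (register a processor, then enable callbacks). The instance name `rtcEngine`, the enum member `record`, and the `ByteRTCAudioFormat` constructor parameters shown here are assumptions for illustration; check the SDK headers for the exact names:

```dart
// Hypothetical sketch: enable custom processing of locally captured
// (record) audio at 48 kHz stereo. `record`, `sampleRate`, and
// `channel` are assumed names, not confirmed SDK identifiers.
rtcEngine.registerAudioProcessor(myProcessor);
final ret = await rtcEngine.enableAudioProcessor(
    ByteRTCAudioFrameMethod.record,
    ByteRTCAudioFormat(sampleRate: 48000, channel: 2));
if (ret != 0) {
  // A negative value maps to a ByteRTCReturnStatus error code.
  print('enableAudioProcessor failed: $ret');
}
```

Once enabled, locally captured frames would arrive via the registered processor's onProcessRecordAudioFrame callback; call disableAudioProcessor to stop receiving them.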