pushScreenAudioFrame method
@detail api
@author liyi.000
@brief Pushes custom-captured screen audio frames to the RTC SDK for encoding and further processing during screen sharing.
@param audioFrame Audio data frame. See ByteRTCAudioFrame{@link #ByteRTCAudioFrame}
- The audio sample format must be S16. The audio buffer must contain PCM data, and its capacity must be samples × frame.channel × 2 bytes.
- You must specify an exact sample rate and channel count; setting either to automatic is not supported.
@return Method call result
- 0: Success.
- < 0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for details.
@note
- You must call this API after calling setScreenAudioSourceType:{@link #ByteRTCEngine#setScreenAudioSourceType} to enable custom capture of the screen audio.
- Call this method every 10 milliseconds to push a custom-captured audio frame. Each pushed frame must contain frame.sample_rate/100 sample points. For example, at a 48000 Hz sample rate, push 480 sample points each time.
- After pushing custom-captured audio frames to the RTC SDK with this method, you must call publishScreenAudio: to publish the captured screen audio to the remote end. Audio frames pushed to the RTC SDK before publishScreenAudio: is called are discarded.
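Putting the notes above together, a 10 ms push loop might look like the following Dart sketch. The ByteRTCAudioFrame field names (buffer, samples, sampleRate, channel) and the capture helper captureScreenPcm are assumptions for illustration only; consult the SDK's actual type definitions.

```dart
import 'dart:async';
import 'dart:typed_data';

// Assumed frame parameters: 48000 Hz, stereo, S16 PCM.
const int kSampleRate = 48000;
const int kChannels = 2;
const int kSamplesPer10ms = kSampleRate ~/ 100; // 480 sample points

void startScreenAudioPush(ByteRTCEngine engine) {
  // Push one 10 ms frame of custom-captured screen audio every 10 ms.
  Timer.periodic(const Duration(milliseconds: 10), (timer) async {
    // captureScreenPcm() is a placeholder for your own capture code;
    // it must return kSamplesPer10ms * kChannels * 2 bytes of S16 PCM.
    final Uint8List pcm = captureScreenPcm(kSamplesPer10ms, kChannels);

    // Field names below are assumed for illustration.
    final frame = ByteRTCAudioFrame(
      buffer: pcm,
      samples: kSamplesPer10ms,
      sampleRate: kSampleRate,
      channel: kChannels,
    );

    final int ret = await engine.pushScreenAudioFrame(frame);
    if (ret < 0) {
      // Push failed; see ByteRTCReturnStatus for the reason.
    }
  });
}
```

Remember that setScreenAudioSourceType: must have been called before starting this loop, and publishScreenAudio: must be called for the pushed frames to reach remote users.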
Implementation
FutureOr<int> pushScreenAudioFrame(ByteRTCAudioFrame audioFrame) async {
return await nativeCall('pushScreenAudioFrame:', [audioFrame]);
}