RTCEngine class

Inheritance
  • Object
  • NativeClass
  • RTCEngine

Constructors

RTCEngine([NativeClassOptions? options])

Properties

$resource → NativeResource
no setter, inherited
hashCode → int
The hash code for this object.
no setter, inherited
ready → Future<void>
A future that completes when the instance is initialized.
no setter, inherited
runtimeType → Type
A representation of the runtime type of the object.
no setter, inherited

Methods

clearVideoWatermark() → FutureOr<int>
@detail api @hiddensdk(audiosdk) @author zhushufan.ref @brief Removes the video watermark from the designated video stream. @return - 0: Success. - < 0: Fail. See ReturnStatus{@link #ReturnStatus} for more details
createGameRoom(String roomId, GameRoomConfig config) → FutureOr<IGameRoom>
@detail api @author luomingkang @brief Create a game room instance.
This API only returns a room instance. You still need to call joinRoom{@link #RTCRoom#joinRoom} to actually create/join the room.
Each call of this API creates one RTCRoom{@link #RTCRoom} instance. Call this API as many times as the number of rooms you need, and then call joinRoom{@link #RTCRoom#joinRoom} of each RTCRoom instance to join multiple rooms at the same time.
In multi-room mode, a user can subscribe to media streams in the joined rooms at the same time. @param roomId The string matches the regular expression: [a-zA-Z0-9_\@\\-\\.]{1,128}. @param config The game room configuration. See GameRoomConfig{@link #GameRoomConfig}. @return RTCRoom{@link #RTCRoom} instance. If you get NULL instead of an RTCRoom instance, check that the roomId is valid and that a room instance with the same roomId has not already been created. @note - If the room that you wish to join already exists, you still need to call this API first to create the RTCRoom instance, and then call joinRoom{@link #RTCRoom#joinRoom}. - Do not create multiple rooms with the same roomId, otherwise the newly created room instance will replace the old one. - To forward streams to the other rooms, call startForwardStreamToRooms{@link #RTCRoom#startForwardStreamToRooms} instead of enabling Multi-room mode.
createRTCRoom(String roomId) → FutureOr<RTCRoom>
@detail api @author shenpengliang @brief Create an RTCRoom instance.
This API only returns a room instance. You still need to call joinRoom{@link #RTCRoom#joinRoom} to actually create/join the room.
Each call of this API creates one RTCRoom{@link #RTCRoom} instance. Call this API as many times as the number of rooms you need, and then call joinRoom{@link #RTCRoom#joinRoom} of each RTCRoom instance to join multiple rooms at the same time.
In multi-room mode, a user can subscribe to media streams in the joined rooms at the same time. @param roomId The string matches the regular expression: [a-zA-Z0-9_\@\\-\\.]{1,128}. @return RTCRoom{@link #RTCRoom} instance. If you get NULL instead of an RTCRoom instance, check that the roomId is valid and that a room instance with the same roomId has not already been created. @note - If the room that you wish to join already exists, you still need to call this API first to create the RTCRoom instance, and then call joinRoom{@link #RTCRoom#joinRoom}. - Do not create multiple rooms with the same roomId, otherwise the newly created room instance will replace the old one. - To forward streams to the other rooms, call startForwardStreamToRooms{@link #RTCRoom#startForwardStreamToRooms} instead of enabling Multi-room mode.
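As a usage sketch, the create-then-join flow described above might look like this in application code (a minimal sketch; the joinRoom argument list is a placeholder — see joinRoom{@link #RTCRoom#joinRoom} for the actual signature):

```dart
// Create two room instances, then join each one. createRTCRoom only
// returns an instance; joinRoom actually creates/joins the room.
final RTCRoom roomA = await engine.createRTCRoom('room_a');
final RTCRoom roomB = await engine.createRTCRoom('room_b');

// The argument lists below are hypothetical placeholders.
await roomA.joinRoom(/* tokenA, userInfo, roomConfig */);
await roomB.joinRoom(/* tokenB, userInfo, roomConfig */);
```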
createRTSRoom(String roomId) → FutureOr<RTSRoom>
@detail api @brief Create a RTSRoom{@link #RTSRoom} instance.
This API only returns a room instance. You still need to call joinRTSRoom{@link #RTSRoom#joinRTSRoom} to actually create/join the room.
Each call of this API creates one RTSRoom{@link #RTSRoom} instance. Call this API as many times as the number of rooms you need, and then call joinRTSRoom{@link #RTSRoom#joinRTSRoom} of each RTSRoom instance to join multiple rooms at the same time.
@param roomId The string matches the regular expression: [a-zA-Z0-9_\@\\-\\.]{1,128}. @return RTSRoom{@link #RTSRoom} instance. If you get NULL instead of an RTSRoom instance, check that the roomId is valid and that a room instance with the same roomId has not already been created. @note - If the room that you wish to join already exists, you still need to call this API first to create the RTSRoom instance, and then call joinRTSRoom{@link #RTSRoom#joinRTSRoom}. - Do not create multiple rooms with the same roomId, otherwise the newly created room instance will replace the old one.
destroy() → void
inherited
disableAlphaChannelVideoEncode() → FutureOr<int>
@valid since 3.58 @detail api @hiddensdk(audiosdk) @region Video Management @brief Disables the Alpha channel encoding feature for externally captured video frames. @return Method call result:
- 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note This API must be called after stopping the publish of the video stream.
disableAudioFrameCallback(AudioFrameCallbackMethod method) → FutureOr<int>
@detail api @author gongzhengduo @brief Disables audio data callback. @param method Audio data callback method. See AudioFrameCallbackMethod{@link #AudioFrameCallbackMethod}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note Call this API after calling enableAudioFrameCallback{@link #RTCEngine#enableAudioFrameCallback}.
disableAudioProcessor(AudioProcessorMethod method) → FutureOr<int>
@detail api @author gongzhengduo @brief Disable custom audio processing. @param method Audio Frame type. See AudioProcessorMethod{@link #AudioProcessorMethod}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details
enableAlphaChannelVideoEncode(AlphaLayout alphaLayout) → FutureOr<int>
@valid since 3.58 @detail api @hiddensdk(audiosdk) @region Video Management @brief Enables the Alpha channel encoding feature for custom captured video frames.
Suitable for scenarios where the video subject and background need to be separated at the push stream end, and the background can be custom rendered at the pull stream end. @param alphaLayout The relative position of the separated Alpha channel to the RGB channel information. Currently, only AlphaLayout.TOP is supported, which means it is placed above the RGB channel information. @return Method call result:
- 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - This API only applies to custom captured video frames that use the RGBA color model, including VideoPixelFormat.TEXTURE_2D, VideoPixelFormat.TEXTURE_OES, VideoPixelFormat.RGBA. - This API must be called before publishing the video stream. - After calling this API to enable Alpha channel encoding, you must call pushExternalVideoFrame{@link #RTCEngine#pushExternalVideoFrame} to push the custom captured video frames to the RTC SDK. If a video frame format that is not supported is pushed, calling pushExternalVideoFrame{@link #RTCEngine#pushExternalVideoFrame} will return the error code ReturnStatus.RETURN_STATUS_PARAMETER_ERR.
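A minimal sketch of the call order described above, assuming `engine` is an initialized RTCEngine and `rgbaFrame` is a custom captured frame in one of the supported RGBA-model formats:

```dart
// Enable Alpha channel encoding before publishing the video stream.
final int ret = await engine.enableAlphaChannelVideoEncode(AlphaLayout.TOP);
if (ret == 0) {
  // Push custom captured RGBA frames afterwards. A frame in an
  // unsupported format would make this call return
  // ReturnStatus.RETURN_STATUS_PARAMETER_ERR.
  await engine.pushExternalVideoFrame(rgbaFrame);
}
```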
enableAudioAEDReport(int interval) → FutureOr<int>
@hidden 3.60 for internal use only @detail api @author gengjunjie @brief Enables AED detection. After that, you will receive onAudioAEDStateUpdate{@link #IRTCEngineEventHandler#onAudioAEDStateUpdate}. @param interval Callback interval, in milliseconds.
+ <= 0: Disable AED detection. + [100, 3000]: Enable AED detection and set the callback interval to this value. It is recommended to set it to 2000. + Invalid interval value: If the value is less than 100, it is set to 100. If the value is greater than 3000, it is set to 3000. @return + 0: Success. + <0: Fail. See ReturnStatus{@link #ReturnStatus} for more details.
enableAudioDecoding(bool enable) → FutureOr<void>
@hidden for internal use only @region custom audio acquisition rendering @brief Whether to use SDK audio decoding. @param enable Whether to use audio decoding.
- true: Audio decoding is turned on. (Default) - false: Audio decoding is turned off. @note - Use before registerRemoteEncodedAudioFrameObserver.
enableAudioEncoding(bool enable) → FutureOr<void>
@hidden for internal use only @region custom audio acquisition rendering @brief Whether to use SDK audio encoding. @param enable Whether to use audio encoding.
- true: Audio encoding is turned on. (Default) - false: Audio encoding is turned off. @note - Use before pushExternalEncodedAudioFrame{@link #RTCEngine#pushExternalEncodedAudioFrame}.
enableAudioFrameCallback(AudioFrameCallbackMethod method, AudioFormat format) → FutureOr<int>
@detail api @author gongzhengduo @brief Enable audio frames callback and set the format for the specified type of audio frames. @param method Audio data callback method. See AudioFrameCallbackMethod{@link #AudioFrameCallbackMethod}.
If method is set as AUDIO_FRAME_CALLBACK_RECORD(0), AUDIO_FRAME_CALLBACK_PLAYBACK(1), AUDIO_FRAME_CALLBACK_MIXED(2), or AUDIO_FRAME_CALLBACK_CAPTURE_MIXED(5), set format to the accurate value listed in the audio parameters format.
If method is set as AUDIO_FRAME_CALLBACK_REMOTE_USER(3), set format to auto. @param format Audio parameters format. See AudioFormat{@link #AudioFormat}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note After calling this API and registerAudioFrameObserver{@link #RTCEngine#registerAudioFrameObserver}, IAudioFrameObserver{@link #IAudioFrameObserver} will receive the corresponding audio data callback. However, these two APIs are independent of each other and the calling order is not restricted.
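The two independent calls mentioned in the note can be sketched as follows (the AudioFrameCallbackMethod member and AudioFormat constructor shown here are assumptions; check the linked types for the actual names):

```dart
// Register the observer and enable mixed-audio callbacks; the order
// of these two calls is not restricted.
await engine.registerAudioFrameObserver(myObserver);
await engine.enableAudioFrameCallback(
  AudioFrameCallbackMethod.mixed,             // AUDIO_FRAME_CALLBACK_MIXED(2)
  AudioFormat(sampleRate: 48000, channel: 2), // hypothetical field names
);
```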
enableAudioProcessor(AudioProcessorMethod method, AudioFormat format) → FutureOr<int>
@detail api @author gongzhengduo @brief Enable audio frames callback for custom processing and set the format for the specified type of audio frames. @param method The types of audio frames. See AudioProcessorMethod{@link #AudioProcessorMethod}. Set this parameter to process multiple types of audio.
With different values, you will receive the corresponding callback:
- For locally captured audio, you will receive onProcessRecordAudioFrame{@link #IAudioFrameProcessor#onProcessRecordAudioFrame}. - For mixed remote audio, you will receive onProcessPlayBackAudioFrame{@link #IAudioFrameProcessor#onProcessPlayBackAudioFrame}. - For audio from remote users, you will receive onProcessRemoteUserAudioFrame{@link #IAudioFrameProcessor#onProcessRemoteUserAudioFrame}. - For SDK-level in-ear monitoring audio, you will receive onProcessEarMonitorAudioFrame{@link #IAudioFrameProcessor#onProcessEarMonitorAudioFrame}. - For shared-screen audio, you will receive onProcessScreenAudioFrame{@link #IAudioFrameProcessor#onProcessScreenAudioFrame}. @param format The format of audio frames. See AudioFormat{@link #AudioFormat}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - Before calling this API, call registerAudioProcessor{@link #RTCEngine#registerAudioProcessor} to register a processor. - To disable custom audio processing, call disableAudioProcessor{@link #RTCEngine#disableAudioProcessor}.
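A sketch of the register-then-enable sequence from the note (the enum member and AudioFormat field names are assumptions):

```dart
// 1. Register a custom processor first.
await engine.registerAudioProcessor(myProcessor);
// 2. Enable processing for locally captured audio; this triggers
//    onProcessRecordAudioFrame on the processor.
await engine.enableAudioProcessor(
  AudioProcessorMethod.record,                // hypothetical member name
  AudioFormat(sampleRate: 48000, channel: 1), // hypothetical field names
);
// 3. Later, stop custom processing for that frame type.
await engine.disableAudioProcessor(AudioProcessorMethod.record);
```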
enableAudioPropertiesReport(AudioPropertiesConfig config) → FutureOr<int>
@detail api @author wangjunzheng @brief Enable audio information prompts. After that, you will receive onLocalAudioPropertiesReport{@link #IRTCEngineEventHandler#onLocalAudioPropertiesReport}, onRemoteAudioPropertiesReport{@link #IRTCEngineEventHandler#onRemoteAudioPropertiesReport}, and onActiveSpeaker{@link #IRTCEngineEventHandler#onActiveSpeaker}. @param config See AudioPropertiesConfig{@link #AudioPropertiesConfig} @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details
enableAudioVADReport(int interval) → FutureOr<int>
@hidden 3.60 for internal use only @detail api @author gengjunjie @brief Enables VAD detection. After that, you will receive onAudioVADStateUpdate{@link #IRTCEngineEventHandler#onAudioVADStateUpdate}. @param interval Callback interval, in milliseconds.
+ <= 0: Disable VAD detection. + [100, 3000]: Enable VAD detection and set the callback interval to this value. + Invalid interval value: If the value is less than 100, it is set to 100. If the value is greater than 3000, it is set to 3000. @return + 0: Success. + <0: Fail. See ReturnStatus{@link #ReturnStatus} for more details.
enableCameraAutoExposureFaceMode(bool enable) → FutureOr<int>
@valid since 3.53 @detail api @author yinkaisheng @brief Enable or disable face auto exposure mode during internal video capture. This mode fixes the problem that the face is too dark under strong backlight, but it can also make the area outside the ROI region too bright or too dark. @param enable Whether to enable the mode. True by default for iOS, False by default for Android. @return - 0: Success. - < 0: Failure. @note You must call this API before calling startVideoCapture{@link #RTCEngine#startVideoCapture} to enable internal capture for the setting to take effect.
enableEffectBeauty(bool enable) → FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangjunlin.3182 @brief Enables/Disables basic beauty effects. @param enable Whether to enable basic beauty effects.
- true: Enables basic beauty effects. - false: (Default) Disables basic beauty effects. @return - 0: Success. - -1001: This method is not available for your current RTC SDK. - -12: This method is not available in the Audio SDK. - <0: Failure. Effect SDK internal error. For specific error code, see Error Code Table. @note - You cannot use the basic beauty effects and the advanced effect features at the same time. See how to use advanced effect features for more information. - You need to integrate Effect SDK before calling this API. Effect SDK v4.4.2+ is recommended. - Call setBeautyIntensity{@link #RTCEngine#setBeautyIntensity} to set the beauty effect intensity. If you do not set the intensity before calling this API, the default intensity will be enabled. The default values for the intensity of each beauty mode are as follows: 0.7 for brightening, 0.8 for smoothing, 0.5 for sharpening, and 0.7 for clarity. - This API is not applicable to screen capturing.
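A sketch of setting an intensity before enabling, as the note recommends (the beauty-mode enum and its member name here are hypothetical placeholders; see setBeautyIntensity{@link #RTCEngine#setBeautyIntensity} for the actual parameters):

```dart
// Set an intensity before enabling, otherwise the defaults apply
// (e.g. 0.8 for smoothing). EffectBeautyMode.smooth is a
// hypothetical placeholder name.
await engine.setBeautyIntensity(EffectBeautyMode.smooth, 0.6);
final int ret = await engine.enableEffectBeauty(true);
if (ret == -1001) {
  // The current RTC SDK build does not provide this method.
}
```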
enableExternalSoundCard(bool enable) → FutureOr<int>
@detail api @author zhangyuanyuan.0101 @brief Enable the audio processing mode for an external sound card. @param enable
- true: enable - false: disable (by default) @return - 0: Success. - < 0: Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - When you use an external sound card for audio capture, enable this mode for better audio quality. - When using this mode, you can only use earphones. If you need to use the internal or an external speaker, disable this mode.
enableLocalVoiceReverb(bool enable) → FutureOr<int>
@detail api @author wangjunzheng @brief Enable the reverb effect for the local captured voice. @param enable Whether to enable. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note Call setLocalVoiceReverbParam{@link #RTCEngine#setLocalVoiceReverbParam} to set the reverb effect.
enablePlaybackDucking(bool enable) → FutureOr<int>
@detail api @author majun.lvhiei @brief Enables/disables the playback ducking function. This function is usually used in scenarios where short videos or music will be played simultaneously during RTC calls.
With the function on, if remote voice is detected, the local media volume of RTC will be lowered to ensure the clarity of the remote voice. If remote voice disappears, the local media volume of RTC restores. @param enable Whether to enable playback ducking:
- true: Yes - false: No @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details
enableVocalInstrumentBalance(bool enable) → FutureOr<int>
@detail api @author majun.lvhiei @brief Enables/disables the loudness equalization function.
If you call this API with the parameter set to True, the loudness of the user's voice will be adjusted to -16 LUFS. If you then also call setAudioMixingLoudness and import the original loudness of the audio data used in audio mixing, the loudness will be adjusted to -20 LUFS when the audio data starts to play. @param enable Whether to enable the loudness equalization function:
- true: Yes - false: No @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note You must call this API before starting to play the audio file with start{@link #IAudioEffectPlayer#start}.
feedback(List<ProblemFeedbackOption> types, ProblemFeedbackInfo info) → FutureOr<int>
@detail api @author wangzhanqiang @brief Report user feedback to RTC. @param types List of preset problems. See ProblemFeedbackOption{@link #ProblemFeedbackOption} @param info Specific description of problems other than the preset ones, and the room's information. See ProblemFeedbackInfo{@link #ProblemFeedbackInfo} @return - 0: Success. - -3: Failure. @note - If the user is in a room when reporting, the report is associated with the room(s) the user is currently in.
- If the user is not in a room when reporting, the report is associated with the room they last exited.
getAudioDeviceManager() → FutureOr<IRTCAudioDeviceManager>
@detail api @author dixing @brief Gets audio device management API class. @return See IRTCAudioDeviceManager{@link #IRTCAudioDeviceManager}.
getAudioEffectPlayer() → FutureOr<IAudioEffectPlayer>
@valid since 3.53 @detail api @brief Create an instance for audio effect player. @return See IAudioEffectPlayer{@link #IAudioEffectPlayer}.
getAudioRoute() → FutureOr<AudioRoute>
@detail api @author dixing @brief Gets the information of the audio playback route currently in use. @return See AudioRoute{@link #AudioRoute}. @note To set the audio playback route, see setAudioRoute{@link #RTCEngine#setAudioRoute}.
getCameraZoomMaxRatio() → FutureOr<double>
@detail api @author zhangzhenyu.samuel @brief Get the maximum zoom factor of the currently used camera (front/rear). @return Maximum zoom factor. @note The maximum zoom factor can only be detected after calling startVideoCapture{@link #RTCEngine#startVideoCapture} to capture video using the SDK internal capture module.
getMediaPlayer(int playerId) → FutureOr<IMediaPlayer>
@valid since 3.53 @detail api @brief Create a media player instance. @param playerId Media player id. The range is [0, 3]. You can create up to 4 instances at the same time. If the id exceeds the range, null will be returned. @return Media player instance. See IMediaPlayer{@link #IMediaPlayer}.
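For example, since playerId is limited to [0, 3], a creation sketch might look like this (the nullable annotation on the result is an assumption about how the binding surfaces the "no instance" case):

```dart
// Valid ids are 0..3, so at most four players can exist at once.
final IMediaPlayer? player = await engine.getMediaPlayer(0);  // in range
final IMediaPlayer? none = await engine.getMediaPlayer(4);    // out of range
```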
getNativeHandle() → FutureOr<int>
@detail api @brief Get the IRTCEngine in the C++ layer. @return - > 0: Success. Returns the address of the IRTCEngine in the C++ layer. - -1: Failure. @note In some scenarios, getting and working with the IRTCEngine in the C++ layer is much more efficient than going through the Java encapsulation layer. Typical scenarios include custom processing of video/audio frames, encryption of audio and video calls, etc.
getNetworkTimeInfo() → FutureOr<NetworkTimeInfo>
@detail api @author songxiaomeng.19 @brief Obtain the synchronized network time information. @return See NetworkTimeInfo{@link #NetworkTimeInfo}. @note - When you call this API for the first time, the SDK starts synchronizing the network time information and returns 0. After the synchronization finishes, you will receive onNetworkTimeSynchronized{@link #IRTCEngineEventHandler#onNetworkTimeSynchronized}. After that, calling this API will get you the correct network time. - In a chorus scenario, participants shall start audio mixing at the same network time.
getPeerOnlineStatus(String peerUserID) → FutureOr<int>
@detail api @author hanchenchen.c @brief Query the login status of a remote user or the local user. @param peerUserID The user ID to be queried. @return - 0: Success. - < 0: Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - You must call login{@link #RTCEngine#login} to log in before calling this interface. - After calling this interface, the SDK notifies the query result using the onGetPeerOnlineStatus{@link #IRTCEngineEventHandler#onGetPeerOnlineStatus} callback. - Before sending an out-of-room message, the user can use this interface to know whether the peer user is logged in, to decide whether to send the message. You can also check your own login status through this interface.
getVideoDeviceManager() → FutureOr<IVideoDeviceManager>
@valid since 3.56 @detail api @author likai.666 @brief Create a video device management instance. @return Video device management instance. See IVideoDeviceManager{@link #IVideoDeviceManager}
getVideoEffectInterface() → FutureOr<IVideoEffect>
@detail api @author zhushufan.ref @brief Gets video effect interfaces. @return Video effect interfaces. See IVideoEffect{@link #IVideoEffect}.
getWTNStream() → FutureOr<IWTNStream>
@detail api @author lihuan.wuti2ha @brief Get the WTN stream interfaces. @return WTN stream interfaces. See IWTNStream{@link #IWTNStream}.
isCameraExposurePositionSupported() → FutureOr<bool>
@detail api @author zhangzhenyu.samuel @brief Checks if manual exposure setting is available for the currently used camera. @return - true: Available. - false: Unavailable. @note You must call startVideoCapture{@link #RTCEngine#startVideoCapture} to start SDK internal video capturing before calling this API.
isCameraFocusPositionSupported() → FutureOr<bool>
@detail api @author zhangzhenyu.samuel @brief Checks if manual focus is available for the currently used camera. @return - true: Available. - false: Unavailable. @note You must call startVideoCapture{@link #RTCEngine#startVideoCapture} to start SDK internal video capturing before calling this API.
isCameraTorchSupported() → FutureOr<bool>
@detail api @author zhangzhenyu.samuel @brief Detects whether the currently used camera (front/rear) supports flash. @return - true: Supported - false: Not supported @note Flash capability can only be detected after calling startVideoCapture{@link #RTCEngine#startVideoCapture} to capture video using the SDK internal capture module.
isCameraZoomSupported() → FutureOr<bool>
@detail api @author zhangzhenyu.samuel @brief Detects whether the currently used camera (front/rear) supports zoom (digital/optical zoom). @return - true: Supported - false: Not supported @note Camera zoom capability can only be detected after calling startVideoCapture{@link #RTCEngine#startVideoCapture} to capture video using the SDK internal capture module.
login(String token, String uid) → FutureOr<int>
@detail api @author hanchenchen.c @brief Log in to call sendUserMessageOutsideRoom{@link #RTCEngine#sendUserMessageOutsideRoom} and sendServerMessage{@link #RTCEngine#sendServerMessage} to send P2P messages or send messages to a server without joining the RTC room.
To log out, call logout{@link #RTCEngine#logout}. @param token
Token is required during login for authentication.
This token is different from the one required by calling joinRoom. You can assign any value, even null, to roomId to generate a login token. During development and testing, you can use temporary tokens generated on the console. Deploy the token-generating application on your server. @param uid
User ID
User ID is unique within one appid. @return - 0: Success. - <0: Failure. See ReturnStatus{@link #ReturnStatus} for specific reasons. @note You will receive onLoginResult{@link #IRTCEngineEventHandler#onLoginResult} after calling this API and log in successfully. But remote users will not receive notification about that.
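A sketch of the out-of-room messaging lifecycle built from login, getPeerOnlineStatus, and logout (the sendUserMessageOutsideRoom parameter list shown here is an assumption):

```dart
// Log in; the result arrives via onLoginResult.
await engine.login(loginToken, 'local_uid');

// Check whether the peer is logged in before messaging; the result
// arrives via onGetPeerOnlineStatus.
await engine.getPeerOnlineStatus('peer_uid');

// Hypothetical parameter list for an out-of-room message.
await engine.sendUserMessageOutsideRoom('peer_uid', 'hello');

// Log out; onLogout is delivered to the local user only.
await engine.logout();
```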
logout() → FutureOr<int>
@detail api @author hanchenchen.c @brief Call this method to log out. After logging out, you can no longer call methods related to out-of-room messages and end-to-server messages, or receive related callbacks. @return - 0: Success. - < 0: Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - After calling this interface to log out, you must call login{@link #RTCEngine#login} again before using those features. - After the local user calls this method to log out, they will receive the onLogout{@link #IRTCEngineEventHandler#onLogout} callback notification; remote users will not receive any notification.
muteAudioCapture(bool mute) → FutureOr<int>
@valid since 3.58.1 @detail api @author shiyayun @brief Set whether to mute the recording signal (without changing the local hardware). @param mute Whether to mute audio capture.
- true: Mute (disable microphone) - false: (Default) Enable microphone @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - Calling this API does not affect the status of SDK audio stream publishing. - Adjusting the volume by calling setCaptureVolume{@link #RTCEngine#setCaptureVolume} after muting will not cancel the mute state. The volume setting will be retained until unmuted. - You can use this interface to mute audio capture before or after calling startAudioCapture{@link #RTCEngine#startAudioCapture} to enable audio capture.
muteScreenAudioCapture(bool mute) → FutureOr<int>
@valid since 3.60. @detail api @author shiyayun @brief Mutes/unmutes the audio captured when screen sharing.
Calling this method sends muted data instead of the screen audio data; it does not affect the local audio device capture status or the SDK audio stream publishing status. @param mute Whether to mute the audio capture when screen sharing.
- true: Mute the audio capture when screen sharing.
- false: (Default) Unmute the audio capture when screen sharing. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - Adjusting the volume by calling setCaptureVolume{@link #RTCEngine#setCaptureVolume} after muting will not cancel the mute state. The volume setting will be retained until unmuted. - You can use this interface to mute the screen audio capture before or after calling startAudioCapture{@link #RTCEngine#startAudioCapture} to enable audio capture.
nativeCall<T>(String method, [List? args, NativeMethodMeta? meta]) Future<T>
Call instance method
inherited
noSuchMethod(Invocation invocation) → dynamic
Invoked when a nonexistent method or property is accessed.
inherited
pullExternalAudioFrame(AudioFrame audioFrame) → FutureOr<int>
@detail api @author gongzhengduo @brief Pulls audio data for external playback.
After calling this method, the SDK will actively fetch the audio data to play, including the decoded and mixed audio data from the remote source, for external playback. @param audioFrame Audio data frame. See AudioFrame{@link #AudioFrame} @return Method call result
- 0: Setup succeeded - < 0: Setup failed @note - Before pulling external audio data, setAudioRenderType{@link #RTCEngine#setAudioRenderType} must be called to enable custom audio rendering. - You should pull audio data every 10 milliseconds since the duration of an RTC SDK audio frame is 10 milliseconds. Samples x call frequency = audioFrame's sample rate. Assuming the sampling rate is set to 48000 and this API is called every 10 ms, 480 sampling points should be pulled each time. - The audio sampling format is S16. The data format in the audio buffer is PCM data, and its capacity size is audioFrame.samples × audioFrame.channel × 2.
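The 10 ms cadence and buffer-size arithmetic in the note can be sketched like this (the AudioFrame construction is a hypothetical placeholder):

```dart
import 'dart:async';

const int sampleRate = 48000;
const int channels = 2;
// One RTC audio frame covers 10 ms: 48000 / 100 = 480 samples.
const int samplesPer10ms = sampleRate ~/ 100;
// S16 PCM: samples × channels × 2 bytes = 1920 bytes per frame.
const int bufferBytes = samplesPer10ms * channels * 2;

void startPulling(RTCEngine engine) {
  Timer.periodic(const Duration(milliseconds: 10), (_) async {
    final frame = AudioFrame(/* samples: samplesPer10ms, ... */);
    await engine.pullExternalAudioFrame(frame);
  });
}
```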
pushClientMixedStreamExternalVideoFrame(String uid, VideoFrameData frame) → FutureOr<int>
pushExternalAudioFrame(AudioFrame audioFrame) → FutureOr<int>
@detail api @author gongzhengduo @brief Push custom captured audio data to the RTC SDK. @param audioFrame Audio data frame. See AudioFrame{@link #AudioFrame}
- The audio sampling format must be S16. The data format within the audio buffer must be PCM, and its capacity size should be audioFrame.samples × audioFrame.channel × 2. - Specific sample rates and the number of channels must be designated; automatic settings are not supported. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - Before pushing external audio data, you must call setAudioSourceType{@link #RTCEngine#setAudioSourceType} to enable custom audio capture. - You must push custom captured audio data every 10 milliseconds. The samples (number of audio sampling points) of a single push should be the sample rate/100. For example, when the sampling rate is set to 48000, data of 480 sampling points should be pushed each time.
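Likewise for the push direction, a sketch of one 10 ms push of 48 kHz mono S16 PCM (the AudioFrame fields are assumptions):

```dart
import 'dart:typed_data';

Future<void> pushOneFrame(RTCEngine engine) async {
  const int sampleRate = 48000;
  const int samples = sampleRate ~/ 100;          // 480 points per 10 ms push
  final Uint8List pcm = Uint8List(samples * 1 * 2); // samples × channels × 2

  // Hypothetical constructor; see AudioFrame for the real fields.
  final frame = AudioFrame(/* buffer: pcm, samples: samples, ... */);
  await engine.pushExternalAudioFrame(frame);
}
```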
pushExternalEncodedVideoFrame(int videoIndex, RTCEncodedVideoFrame encodedVideoFrame) → FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief Push a custom encoded video stream. @param videoIndex The subscript of the corresponding encoded stream, starting from 0. If setVideoEncoderConfig{@link #RTCEngine#setVideoEncoderConfig} has been called to set multiple streams, the subscript here must be consistent with it. @param encodedVideoFrame The encoded video frame information. See RTCEncodedVideoFrame{@link #RTCEncodedVideoFrame}. @return API call result:
- 0: Success. - <0: Failure. See ReturnStatus{@link #ReturnStatus} for specific reasons. @note - Currently, only video frames in H264 and ByteVC1 formats are supported, and the video stream protocol must be in Annex B format. - This function runs within the calling thread. - Before pushing a custom encoded video frame, you must call setVideoSourceType{@link #RTCEngine#setVideoSourceType} to switch the video input source to the custom encoded video source.
pushExternalVideoFrame(VideoFrameData frame) → FutureOr<int>
@detail api @hiddensdk(audiosdk) @author liuyangyang @brief Pushes external video frames. @param frame The data information of the video frame @return API call result:
- 0: Success. - <0: Failure. See ReturnStatus{@link #ReturnStatus} for specific reasons. @note - This method actively encapsulates the video frame data with the VideoFrameData{@link #VideoFrameData} class and passes it to the SDK. - Make sure that setVideoSourceType{@link #RTCEngine#setVideoSourceType} is set to custom video capture before you call this method. - When using texture data, make sure eglContext in createRTCEngine{@link #RTCEngine#createRTCEngine} is a sharedContext or the same as eglContext in frame, otherwise encoding will fail. - Raw data in I420, NV12, and RGBA formats, and textures of Texture2D and TextureOES types are supported.
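Putting the note's precondition together with the push call (the VideoSourceType member name is a hypothetical placeholder; see setVideoSourceType{@link #RTCEngine#setVideoSourceType} for the actual values):

```dart
// Switch the input source to custom capture before pushing frames.
await engine.setVideoSourceType(VideoSourceType.external /* hypothetical */);

// frame may carry I420/NV12/RGBA raw data, or a Texture2D/TextureOES
// texture (sharing the eglContext passed to createRTCEngine).
final int ret = await engine.pushExternalVideoFrame(frame);
```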
pushReferenceAudioPCMData(AudioFrame audioFrame) → FutureOr<int>
@detail api @region Custom Audio AEC Reference @author cuiyao @brief Push custom AEC reference audio data to the RTC SDK. @param audioFrame Audio data frame. See AudioFrame{@link #AudioFrame} @return Method call result
+ 0: Success
+ < 0: Failure
@note
+ You should push audio data every 10 milliseconds since the duration of an RTC SDK audio frame is 10 milliseconds. Samples x call frequency = audioFrame's sample rate. Assuming the sampling rate is set to 48000 and this API is called every 10 ms, 480 sampling points should be pushed each time.
+ The audio sampling format is S16. The data format in the audio buffer is PCM data, and its capacity size is audioFrame.samples × audioFrame.channel × 2.
pushScreenAudioFrame(AudioFrame audioFrame) → FutureOr<int>
@detail api @author liyi.000 @brief When capturing screen audio with a custom capture method during screen sharing, push the audio frames to the RTC SDK for encoding and other processing. @param audioFrame Audio data frame. See AudioFrame{@link #AudioFrame} @return Method call result
- 0: Setup succeeded. - < 0: Setup failed. See ReturnStatus{@link #ReturnStatus} for more details. @note - Before calling this API to push custom collected audio data, you must call setScreenAudioSourceType{@link #RTCEngine#setScreenAudioSourceType} to start custom capture of the screen audio. - You should call this method every 10 milliseconds to push a custom captured audio frame. Each pushed audio frame should contain frame.sample_rate/100 audio sample points. For example, if the sampling rate is 48000 Hz, 480 sampling points should be pushed each time. - The audio sampling format is S16. The data format in the audio buffer must be PCM data, and its capacity size should be samples × frame.channel × 2. - After calling this interface to push the custom captured audio frame to the RTC SDK, you must call publishScreenAudio to push the captured screen audio to the remote end. Audio frame information pushed to the RTC SDK before calling publishScreenAudio is lost. @order 8
registerAudioFrameObserver(IAudioFrameObserver observer) FutureOr<int>
@detail api @author gongzhengduo @brief Register an audio frame observer. @param observer Audio data callback observer. See IAudioFrameObserver{@link #IAudioFrameObserver}. Use null to cancel the registration. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note After calling this API and enableAudioFrameCallback{@link #RTCEngine#enableAudioFrameCallback}, IAudioFrameObserver{@link #IAudioFrameObserver} receives the corresponding audio data callback. You can retrieve the audio data and perform processing on it without affecting the audio that RTC SDK uses to encode or render.
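A minimal registration sketch, assuming an already-initialized engine and an observer implementation (`myObserver`) of IAudioFrameObserver; the enableAudioFrameCallback argument list is a placeholder, since its signature is not shown here:

```dart
final engine = RTCEngine();
await engine.ready;

// Enable the audio-data callback first, then register the observer.
// (The arguments to enableAudioFrameCallback depend on your needs.)
await engine.enableAudioFrameCallback(/* callback method & format */);
await engine.registerAudioFrameObserver(myObserver);

// Pass null later to cancel the registration.
await engine.registerAudioFrameObserver(null);
```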
registerAudioProcessor(IAudioFrameProcessor processor) FutureOr<int>
@detail api @author gongzhengduo @brief Register a custom audio preprocessor.
After that, you can call enableAudioProcessor{@link #RTCEngine#enableAudioProcessor} to process the audio streams that are either captured locally or received from the remote side. The RTC SDK then encodes or renders the processed data. @param processor Custom audio processor. See IAudioFrameProcessor{@link #IAudioFrameProcessor}.
The SDK only holds a weak reference to the processor, so you must guarantee its lifetime. To cancel the registration, set the parameter to null. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details.
registerLocalEncodedVideoFrameObserver(ILocalEncodedVideoFrameObserver observer) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author liuyangyang @brief Register a local video frame observer.
This method applies to both internal capturing and custom capturing.
After calling this API, SDK triggers onLocalEncodedVideoFrame{@link #ILocalEncodedVideoFrameObserver#onLocalEncodedVideoFrame} whenever a video frame is captured. @param observer Local video frame observer. See ILocalEncodedVideoFrameObserver{@link #ILocalEncodedVideoFrameObserver}. You can cancel the registration by setting it to null. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details. @note You can call this API before or after entering the RTC room. Calling this API before entering the room ensures that video frames are monitored and callbacks are triggered as early as possible.
registerLocalVideoProcessor(IVideoProcessor processor, VideoPreprocessorConfig config) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author zhushufan.ref @brief Set up a custom video preprocessor.
Using this video preprocessor, you can call processVideoFrame{@link #IVideoProcessor#processVideoFrame} to preprocess the video frames collected by the RTC SDK, and use the processed video frames for RTC audio & video communication. @param processor Custom video processor. See IVideoProcessor{@link #IVideoProcessor}. If null is passed in, the video frames captured by the RTC SDK are not preprocessed.
The SDK only holds a weak reference to the processor, so you must guarantee its lifetime. @param config Settings applicable to the custom video preprocessor. See VideoPreprocessorConfig{@link #VideoPreprocessorConfig}.
Currently, the required_pixel_format in config only supports I420, TEXTURE_2D, and Unknown:
- When set to Unknown, the RTC SDK passes video frames to the processor in the format in which they were captured. You can get the actual captured video frame format through pixelFormat{@link #IVideoFrame#pixelFormat}. The supported formats are I420, TEXTURE_2D, and TEXTURE_OES.
- When set to I420 or TEXTURE_2D, the RTC SDK converts the captured video frames into the corresponding format before preprocessing. The method call fails when it is set to any other value. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note After preprocessing, the video frame format returned to the RTC SDK only supports I420 and TEXTURE_2D.
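A schematic example of plugging in a preprocessor; the IVideoProcessor subclass body and the VideoPreprocessorConfig construction shown are assumptions — check those definitions for the real signatures:

```dart
// Hypothetical processor that passes frames through unchanged.
class PassthroughProcessor extends IVideoProcessor {
  @override
  IVideoFrame processVideoFrame(IVideoFrame frame) {
    // After preprocessing, return a frame in I420 or TEXTURE_2D format.
    return frame;
  }
}

final processor = PassthroughProcessor(); // keep a strong reference:
                                          // the SDK only holds it weakly
final config = VideoPreprocessorConfig(/* requiredPixelFormat: ... */);
await engine.registerLocalVideoProcessor(processor, config);
```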
registerRemoteEncodedVideoFrameObserver(IRemoteEncodedVideoFrameObserver observer) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief Register a remote encoded video frame observer.
After registration, when the SDK detects a remote encoded video frame, it triggers the onRemoteEncodedVideoFrame{@link #IRemoteEncodedVideoFrameObserver#onRemoteEncodedVideoFrame} callback. @param observer Remote encoded video frame observer. See IRemoteEncodedVideoFrameObserver{@link #IRemoteEncodedVideoFrameObserver} @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - See Custom Video Encoding and Decoding for more details about custom video decoding. - This method applies to manual subscription mode and can be called either before or after entering the room. We recommend calling it before entering the room. - Unregister the observer before destroying the engine by calling this method with the parameter set to null.
requestRemoteVideoKeyFrame(String streamId) FutureOr<int>
@detail api @brief Request a keyframe after subscribing to the remote video stream. @param streamId Remote stream ID. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - This method is only suitable for manual subscription mode and should be used after successfully subscribing to the remote stream. - This method applies when custom decoding has been enabled by calling setVideoDecoderConfig{@link #RTCEngine#setVideoDecoderConfig} and the custom decoding fails.
sendInstanceGet<T>(String property) Future<T>
Get instance property
inherited
sendInstancePropertiesGet(dynamic nativeClass) Future<Map<String, dynamic>>
Get instance properties
inherited
sendInstanceSet(String property, dynamic value) Future<void>
Set instance property
inherited
sendPublicStreamSEIMessage(int channelId, ArrayBuffer message, int repeatCount, SEICountPerFrame mode) FutureOr<int>
@hidden for internal use only @valid since 3.56 @detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief <span id="IRTCEngine-sendseimessage-2"></span> Sends SEI data over the WTN stream. @param channelId SEI message channel ID. The value range is [0, 255]. With this parameter, you can set different channel IDs for different recipients, so that each recipient can filter SEI messages based on the channel ID received in the callback. @param message SEI data. @param repeatCount Number of times a message is sent repeatedly. The value range is [0, max{29, %{video frame rate}-1}]. Recommended range: [2, 4].
After calling this API, the SEI data will be added to a consecutive repeatCount+1 video frames starting from the current frame. @param mode SEI sending mode. See SEICountPerFrame{@link #SEICountPerFrame}. @return - < 0: Failure - = 0: You are unable to send SEI because the current send queue is full. - > 0: Success, and the value represents the number of SEI messages sent. @note - We recommend that the number of SEI messages per second not exceed the current video frame rate. - In a video call, custom captured video frames can also be used for sending SEI data, provided the original video frame contains no SEI data; otherwise calling this method will not take effect. - Each video frame carries only the SEI data received within the 2s before and after it. In a voice call scenario, if no SEI data is sent within 1 min after calling this API, the SDK automatically stops publishing black frames. - After the message is sent successfully, the remote users who subscribed to your video stream will receive onWTNSEIMessageReceived{@link #IWTNStreamEventHandler#onWTNSEIMessageReceived}. - When the call fails, neither the local nor the remote side receives a callback.
sendSEIMessage(ArrayBuffer message, int repeatCount, SEICountPerFrame mode) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief <span id="RTCEngine-sendseimessage-2"></span>Sends SEI data.
In a video call scenario, SEI is sent with the video frame, while in a voice call scenario, the SDK automatically publishes a black frame with a resolution of 16 × 16 pixels to carry the SEI data. @param message SEI data. No more than 4 KB of SEI data per frame is recommended. @param repeatCount Number of times a message is sent repeatedly. The value range is [0, max{29, %{video frame rate}-1}]. Recommended range: [2, 4].
After calling this API, the SEI data will be added to a consecutive %{repeatCount}+1 video frames starting from the current frame. @param mode SEI sending mode. See SEICountPerFrame{@link #SEICountPerFrame}. @return - >= 0: The number of SEI messages to be added to the video frames - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - We recommend that the number of SEI messages per second not exceed the current video frame rate. In a voice call, the black-frame rate is 15 fps. - In a voice call, this API can be called to send SEI data only in internal capture mode. - In a video call, custom captured video frames can also be used for sending SEI data, provided the original video frame contains no SEI data; otherwise calling this method will not take effect. - Each video frame carries only the SEI data received within the 2s before and after it. In a voice call scenario, if no SEI data is sent within 1 min after calling this API, the SDK automatically stops publishing black frames. - After the message is sent successfully, the remote users who subscribed to your video stream will receive onSEIMessageReceived{@link #IRTCEngineEventHandler#onSEIMessageReceived}. - When you switch from a voice call to a video call, SEI data automatically starts to be sent with normally captured video frames instead of black frames.
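A hedged usage sketch; the SEICountPerFrame value and the ArrayBuffer construction are assumptions about this binding, not confirmed API:

```dart
import 'dart:convert';

// Attach a small SEI payload (e.g. a lyric-sync marker) to upcoming frames.
final payload = ArrayBuffer(utf8.encode('{"lyricLine":12}'));

// repeatCount = 3: the data rides on 4 consecutive video frames.
final result = await engine.sendSEIMessage(
    payload, 3, SEICountPerFrame.single /* hypothetical enum value */);
if (result < 0) {
  // Failed; see ReturnStatus for details.
}
```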
sendServerBinaryMessage(ArrayBuffer buffer) FutureOr<long>
@detail api @author hanchenchen.c @brief The client sends a binary message to the application server (P2Server). @param buffer
The binary message content to send.
The message must not exceed 46 KB. @return - > 0: Sent successfully; returns the serial number of the message, incrementing from 1. - -1: Sending failed because the message is empty. @note - Before sending a binary message to the application server, you must first call login{@link #RTCEngine#login} to log in, and then call setServerParams{@link #RTCEngine#setServerParams} to set up the application server. - After calling this API, you will receive an onServerMessageSendResult{@link #IRTCEngineEventHandler#onServerMessageSendResult} callback informing the sender whether the sending succeeded. - If the binary message is sent successfully, the application server previously set via setServerParams{@link #RTCEngine#setServerParams} will receive the message.
sendServerMessage(String message) FutureOr<long>
@detail api @author hanchenchen.c @brief The client sends a text message to the application server (P2Server). @param message
The text message content to send.
The message must not exceed 64 KB. @return - > 0: Sent successfully; returns the serial number of the message, incrementing from 1. @note - Before sending a text message to the application server, you must first call login{@link #RTCEngine#login} to log in, and then call setServerParams{@link #RTCEngine#setServerParams} to set up the application server. - After calling this API, you will receive an onServerMessageSendResult{@link #IRTCEngineEventHandler#onServerMessageSendResult} callback informing the sender whether the message was sent successfully. - If the text message is sent successfully, the application server previously set via setServerParams{@link #RTCEngine#setServerParams} will receive the message.
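The required call order can be sketched as below; the login and setServerParams parameter lists are placeholders, since their exact signatures are not shown here:

```dart
// 1. Log in to the RTC messaging service.
await engine.login(token, uid); // parameters are placeholders

// 2. Point the SDK at your application server (P2Server).
await engine.setServerParams(/* signature, url — placeholders */);

// 3. Send; a positive return value is the message serial number (from 1).
final msgId = await engine.sendServerMessage('{"event":"heartbeat"}');
// The delivery outcome arrives via onServerMessageSendResult.
```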
sendStreamSyncInfo(ArrayBuffer data, StreamSyncInfoConfig config) FutureOr<int>
@detail api @author wangjunzheng @brief Send audio stream synchronization information. The message is sent to the remote end through the audio stream and kept synchronized with it. After this API is called successfully, the remote user receives an onStreamSyncInfoReceived{@link #IRTCEngineEventHandler#onStreamSyncInfoReceived} callback. @param data Message content. @param config Configuration related to audio stream synchronization information. See StreamSyncInfoConfig{@link #StreamSyncInfoConfig}. @return - >= 0: Message sent successfully. Returns the number of successful sends. - -1: Message sending failed. The message length exceeds 16 bytes. - -2: Message sending failed. The content of the message is empty. - -3: Message sending failed. The screen stream was not published when synchronizing the message through the screen stream. - -4: Message sending failed. The audio stream was not yet published when synchronizing the message with an audio stream captured by a microphone or custom device, as described in ErrorCode{@link #ErrorCode}.
sendUserBinaryMessageOutsideRoom(String uid, ArrayBuffer message, MessageConfig config) FutureOr<long>
@detail api @author hanchenchen.c @brief Send a binary message (P2P) to a specified user outside the room. @param uid User ID of the message receiver. @param message
The binary message content to send.
The message must not exceed 46 KB. @param config Message type. See MessageConfig{@link #MessageConfig}. @return - > 0: Sent successfully; returns the serial number of the message, incrementing from 1. - -1: Sending failed because the message is empty. @note - Before sending an out-of-room binary message, call login{@link #RTCEngine#login} first. - After calling this API, you will receive an onUserMessageSendResultOutsideRoom{@link #IRTCEngineEventHandler#onUserMessageSendResultOutsideRoom} callback notifying whether the message was sent successfully. - If the binary message is sent successfully, the user specified by uid receives the message through the onUserBinaryMessageReceivedOutsideRoom{@link #IRTCEngineEventHandler#onUserBinaryMessageReceivedOutsideRoom} callback.
sendUserMessageOutsideRoom(String uid, String message, MessageConfig config) FutureOr<long>
@detail api @author hanchenchen.c @brief Send a text message (P2P) to a specified user outside the room. @param uid User ID of the message receiver. @param message
The text message content to send.
The message must not exceed 64 KB. @param config Message type. See MessageConfig{@link #MessageConfig}. @return - > 0: Sent successfully; returns the serial number of the message, incrementing from 1. @note - Before sending an out-of-room text message, you must call login{@link #RTCEngine#login} to log in. - After calling this API, you will receive an onUserMessageSendResultOutsideRoom{@link #IRTCEngineEventHandler#onUserMessageSendResultOutsideRoom} callback notifying whether the message was sent successfully. - If the text message is sent successfully, the user specified by uid receives the message via the onUserMessageReceivedOutsideRoom{@link #IRTCEngineEventHandler#onUserMessageReceivedOutsideRoom} callback.
setAnsMode(AnsMode ansMode) FutureOr<int>
@valid since 3.52 @detail api @author liuchuang @brief Set the Active Noise Cancellation (ANC) mode during audio and video communications. @param ansMode ANC mode. See AnsMode{@link #AnsMode}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - You can call this API before or after entering a room. When you call it repeatedly, only the last call takes effect.
- The noise reduction algorithm includes both traditional noise reduction and AI noise reduction. Traditional noise reduction is primarily aimed at suppressing steady noises, such as the hum of air conditioners and the whir of fans. AI noise reduction, on the other hand, is mainly designed to suppress non-stationary noises, like the tapping of keyboards and the clattering of tables and chairs.
- AI noise reduction can only be enabled through this API in the following ChannelProfile{@link #ChannelProfile} scenarios:
- Gaming voice mode: CHANNEL_PROFILE_GAME(2) - High-fidelity gaming mode: CHANNEL_PROFILE_GAME_HD(8) - Cloud gaming mode: CHANNEL_PROFILE_CLOUD_GAME(3) - 1 vs 1 audio/video call: CHANNEL_PROFILE_CHAT(5) - Multi-client synchronized audio/video playback: CHANNEL_PROFILE_LW_TOGETHER(7) - Personal devices in cloud meetings: CHANNEL_PROFILE_MEETING - Meeting room terminals in cloud meetings: CHANNEL_PROFILE_MEETING_ROOM(17) - Classroom interaction mode: CHANNEL_PROFILE_CLASSROOM(18)
setAudioAlignmentProperty(String streamId, AudioAlignmentMode mode) FutureOr<int>
@detail api @hidden internal use only @author majun.lvhiei @brief On the listener side, precisely align the timing of all subscribed audio streams. @param streamId Stream ID of the remote audio stream used as the benchmark during time alignment. We recommend using the audio stream from the lead singer.
You must call this API after receiving onUserPublishStreamAudio{@link #IRTCRoomEventHandler#onUserPublishStreamAudio}. @param mode Whether to enable the alignment. Disabled by default. See AudioAlignmentMode{@link #AudioAlignmentMode}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - You can only use this function when all participants set ChannelProfile{@link #ChannelProfile} to CHANNEL_PROFILE_CHORUS when joining the room. - All remote participants must call startAudioMixing to play background music and set syncProgressToRecordFrame of AudioMixingConfig to true. - If the subscribed audio stream is delayed too much, it may not be precisely aligned. - The chorus participants must not enable the alignment. If you wish to change the role from listener to participant, disable the alignment first.
setAudioProfile(AudioProfileType audioProfile) FutureOr<int>
@detail api @author zhangyuanyuan.0101 @brief Sets the sound quality. Call this API to change the sound quality if the audio settings in the current ChannelProfile{@link #ChannelProfile} cannot meet your requirements. @param audioProfile Sound quality. See AudioProfileType{@link #AudioProfileType} @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - This method can be called before or after entering the room. - Dynamic switching of sound quality during a call is supported.
setAudioRenderType(AudioRenderType type) FutureOr<int>
@detail api @author gongzhengduo @brief Switch the audio render type. @param type Audio output source type. See AudioRenderType{@link #AudioRenderType}.
The internal audio renderer is used by default. The audio capture type and the audio render type may differ from each other. @return Method call result:
- 0: Success. - < 0: Failure. @note - You can call this API before or after joining the room. - After calling this API to enable custom audio rendering, call pullExternalAudioFrame{@link #RTCEngine#pullExternalAudioFrame} to pull audio data.
setAudioRoute(AudioRoute audioRoute) FutureOr<int>
@detail api @author dixing @brief Set the current audio playback route. The default device is set via setDefaultAudioRoute{@link #RTCEngine#setDefaultAudioRoute}.
When the audio playback route changes, you will receive onAudioRouteChanged{@link #IRTCEngineEventHandler#onAudioRouteChanged}. @param audioRoute Audio route. Refer to AudioRoute{@link #AudioRoute}.
For Android devices, the valid audio playback devices may vary with the audio device connection status. See Set the Audio Route. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - You can implement most scenarios with setDefaultAudioRoute{@link #RTCEngine#setDefaultAudioRoute} and the RTC SDK's default audio route switching strategy. For details about the strategy, see Set the Audio Route. Use this API only in a few exceptional scenarios, such as manually switching the audio route while an external audio device is connected. - This API is only supported in communication mode. - For the volume type in different audio scenarios, refer to AudioScenarioType{@link #AudioScenarioType}.
setAudioScenario(AudioScenarioType audioScenario) FutureOr<int>
@hidden(macOS,Windows,Linux) @valid since 3.60. @detail api @author gongzhengduo @brief Sets the audio scenarios.
After selecting the audio scenario, SDK will automatically switch to the proper volume modes (the call/media volume) according to the scenarios and the best audio configurations under such scenarios.
Do not use this API together with its earlier version. @param audioScenario Audio scenarios. See AudioScenarioType{@link #AudioScenarioType}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - You can use this API both before and after joining the room. - Call volume is more suitable for calls, meetings, and other scenarios that demand information accuracy. Call volume activates the system hardware signal processor, making the sound clearer; the volume cannot be reduced to 0. - Media volume is more suitable for entertainment scenarios, which require musical expression. The volume can be reduced to 0.
setAudioSourceType(AudioSourceType type) FutureOr<int>
@detail api @author gongzhengduo @brief Switch the audio capture type. @param type Audio input source type. See AudioSourceType{@link #AudioSourceType}
Use internal audio capture by default. The audio capture type and the audio render type may be different from each other. @return Method call result:
- =0: Success. - <0: Failure. @note - You can call this API before or after joining the room. - If you call this API to switch from internal audio capture to custom capture, the internal audio capture is automatically disabled. You must call pushExternalAudioFrame{@link #RTCEngine#pushExternalAudioFrame} to push custom captured audio data to RTC SDK for transmission. - If you call this API to switch from custom capture to internal capture, you must then call startAudioCapture{@link #RTCEngine#startAudioCapture} to enable internal capture.
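A sketch of switching capture types per the notes above; the AudioSourceType enum values named here are assumptions:

```dart
// Switch to custom capture; internal capture stops automatically.
await engine.setAudioSourceType(AudioSourceType.external /* assumed name */);
// From now on, push your own 10 ms frames to the SDK:
await engine.pushExternalAudioFrame(myAudioFrame);

// Switching back to internal capture requires restarting capture:
await engine.setAudioSourceType(AudioSourceType.internal /* assumed name */);
await engine.startAudioCapture();
```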
setBeautyIntensity(EffectBeautyMode beautyMode, float intensity) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangjunlin.3182 @brief Sets the beauty effect intensity. @param beautyMode Basic beauty effect. See EffectBeautyMode{@link #EffectBeautyMode}. @param intensity Beauty effect intensity in the range [0, 1]. When you set it to 0, the beauty effect is turned off.
The default intensity of each beauty mode is as follows: 0.7 for brightening, 0.8 for smoothing, 0.5 for sharpening, and 0.7 for clarity. @return - 0: Success. - -2: intensity is out of range. - -1001: This API is not available in your current RTC SDK. - < 0: Failure. Effect SDK internal error. For specific error codes, see error codes. @note - If you call this API before calling enableEffectBeauty{@link #RTCEngine#enableEffectBeauty}, the default beauty effect intensity is adjusted accordingly. - If you destroy the engine, the beauty effect settings become invalid.
setBusinessId(String businessId) FutureOr<int>
@detail api @author wangzhanqiang @brief Sets the business ID
You can use businessId to distinguish different business scenarios. You can customize your businessId as a sub AppId, which shares and refines the function of the AppId but requires no authentication. @param businessId
Your customized businessId.
businessId is a tag, and you can customize its granularity. @return - 0: Success. - -2: The input is invalid. Legal characters include lowercase letters, uppercase letters, digits, and four other characters: '.', '-', '_', and '@'. @note You must call this API before entering the room; otherwise it will be invalid.
setCameraAdaptiveMinimumFrameRate(int framerate) FutureOr<int>
@hidden(macOS) @valid since 3.53 @detail api @brief Set the minimum frame rate of the dynamic framerate mode during internal video capture. @param framerate The minimum value in fps. The default value is 7.
The maximum value of the dynamic framerate mode is set by calling setVideoCaptureConfig{@link #RTCEngine#setVideoCaptureConfig}. When the minimum value exceeds the maximum value, the frame rate is fixed at the maximum value; otherwise, dynamic framerate mode is enabled. @return - 0: Success. - !0: Failure. @note - You must call this API before calling startVideoCapture{@link #RTCEngine#startVideoCapture} to enable internal capture for the setting to take effect. - If the maximum frame rate changes due to performance degradation, static adaptation, etc., the set minimum value is re-compared with the new maximum value; a change in the comparison result may switch between fixed and dynamic framerate modes. - For Android, dynamic framerate mode is enabled. - For iOS, dynamic framerate mode is disabled.
setCameraExposureCompensation(float val) FutureOr<int>
@detail api @author zhangzhenyu.samuel @brief Sets the exposure compensation for the currently used camera. @param val Exposure compensation in the range [-1, 1]. Defaults to 0, which means no exposure compensation. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - You must call startVideoCapture{@link #RTCEngine#startVideoCapture} to start SDK internal video capturing, and use SDK internal rendering, before calling this API. - The camera exposure compensation setting becomes invalid after calling stopVideoCapture{@link #RTCEngine#stopVideoCapture} to stop internal capturing.
setCameraExposurePosition(float x, float y) FutureOr<int>
@detail api @author zhangzhenyu.samuel @brief Sets the manual exposure position for the currently used camera. @param x The x-coordinate of the exposure point in the range [0, 1]. The upper-left corner of the canvas is the origin. @param y The y-coordinate of the exposure point in the range [0, 1]. The upper-left corner of the canvas is the origin. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - You must call startVideoCapture{@link #RTCEngine#startVideoCapture} to start SDK internal video capturing, and use SDK internal rendering, before calling this API. - The exposure point setting is canceled when you move the device. - The camera exposure point setting becomes invalid after calling stopVideoCapture{@link #RTCEngine#stopVideoCapture} to stop internal capturing.
setCameraFocusPosition(float x, float y) FutureOr<int>
@detail api @author zhangzhenyu.samuel @brief Sets the manual focus position for the currently used camera. @param x The x-coordinate of the focus point in the range [0, 1]. The upper-left corner of the canvas is the origin. @param y The y-coordinate of the focus point in the range [0, 1]. The upper-left corner of the canvas is the origin. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - You must call startVideoCapture{@link #RTCEngine#startVideoCapture} to start SDK internal video capturing, and use SDK internal rendering, before calling this API. - The focus point setting is canceled when you move the device. - The camera focus point setting becomes invalid after calling stopVideoCapture{@link #RTCEngine#stopVideoCapture} to stop internal capturing.
setCameraTorch(TorchState torchState) FutureOr<int>
@detail api @author zhangzhenyu.samuel @brief Turn on/off the flash of the currently used camera (front/rear). @param torchState Flash state. Refer to TorchState{@link #TorchState} @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - The flash can only be set after calling startVideoCapture{@link #RTCEngine#startVideoCapture} to capture video using the SDK internal capture module. - The setting becomes invalid after calling stopVideoCapture{@link #RTCEngine#stopVideoCapture} to stop internal capture.
setCameraZoomRatio(float zoom) FutureOr<int>
@detail api @author zhangzhenyu.samuel @brief Change the optical zoom magnification. @param zoom Zoom magnification of the currently used camera (front/rear). The value range is [1, <max zoom ratio>].
The maximum zoom ratio can be obtained by calling getCameraZoomMaxRatio{@link #RTCEngine#getCameraZoomMaxRatio}. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - The camera zoom ratio can only be set after calling startVideoCapture{@link #RTCEngine#startVideoCapture} to capture video using the SDK internal capture module. - The setting becomes invalid after calling stopVideoCapture{@link #RTCEngine#stopVideoCapture} to stop internal capture. - Call setVideoDigitalZoomConfig{@link #RTCEngine#setVideoDigitalZoomConfig} to set digital zoom. Call setVideoDigitalZoomControl{@link #RTCEngine#setVideoDigitalZoomControl} to perform digital zoom.
setCaptureVolume(int volume) FutureOr<int>
@detail api @author huangshouqin @brief Adjust the volume of the audio capture. @param volume Ratio of the capture volume to the original volume.
This changes the volume property of the audio data rather than the hardware volume.
Range: [0, 400]. Unit: %
- 0: Mute - 100: Original volume. To ensure the audio quality, we recommend a value within [0, 100]. - 400: Four times the original volume with signal-clipping protection. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note Call this API to set the capture volume before or during audio capture.
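For example (values per the range described above):

```dart
await engine.setCaptureVolume(100); // original volume (recommended ceiling)
await engine.setCaptureVolume(0);   // mute the captured audio
await engine.setCaptureVolume(400); // 4x volume, with signal-clipping protection
```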
setCellularEnhancement(MediaTypeEnhancementConfig config) FutureOr<int>
@detail api @hiddensdk(audiosdk) @brief Enable cellular network assisted communication to improve call quality. @param config See MediaTypeEnhancementConfig{@link #MediaTypeEnhancementConfig}. @return Method call result:
- 0: Success. - -1: Failure, internal error. - -2: Failure, invalid parameters. @note The function is off by default.
setClientMixedStreamObserver(IClientMixedStreamObserver observer) FutureOr<int>
setDefaultAudioRoute(AudioRoute route) FutureOr<int>
@detail api @author dixing @brief Set the speaker or earpiece as the default audio playback device. @param route Audio playback device. Refer to AudioRoute{@link #AudioRoute}. You can only use earpiece and speakerphone. @return - 0: Success. - < 0: failure. It fails when the device designated is neither a speaker nor an earpiece. @note For the default audio route switching strategy of the RTC SDK, see Set the Audio Route.
setDummyCaptureImagePath(String filePath) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author liuyangyang @brief Set an alternative image when the local internal video capture is not enabled.
When you call stopVideoCapture, an alternative image will be pushed. You can set the path to null or open the camera to stop publishing the image.
You can repeatedly call this API to update the image. @param filePath Set the path of the static image.
You can use the absolute path (file://xxx) or the asset directory path (/assets/xx.png). The maximum size for the path is 512 bytes.
You can upload a .JPG, .JPEG, .PNG, or .BMP file.
When the aspect ratio of the image is inconsistent with the video encoder configuration, the image will be proportionally resized, with the remaining pixels rendered black. The framerate and the bitrate are consistent with the video encoder configuration. @return - 0: Success. - -2: Failure. Ensure that the filePath is valid. - -12: This method is not available in the Audio SDK. @note - The API is only effective when publishing an internally captured video. - You cannot locally preview the image. - You can call this API before and after joining an RTC room. In the multi-room mode, the image can be only displayed in the room you publish the stream. - You cannot apply effects like filters and mirroring to the image, while you can watermark the image. - The image is not effective for a screen-sharing stream. - When you enable the simulcast mode, the image will be added to all video streams, and it will be proportionally scaled down to smaller encoding configurations.
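A usage sketch; the asset path is a placeholder:

```dart
// Stop the camera; the image below is published in its place.
await engine.stopVideoCapture();
await engine.setDummyCaptureImagePath('/assets/placeholder.png');

// Re-opening the camera stops publishing the image:
await engine.startVideoCapture();
```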
setEarMonitorMode(EarMonitorMode mode, EarMonitorAudioFilter filter) FutureOr<int>
@detail api @author majun.lvhiei @brief Enables/disables in-ear monitoring. @param mode Whether to enable in-ear monitoring. See EarMonitorMode{@link #EarMonitorMode}. @param filter The audio filter applied to in-ear monitoring. See EarMonitorAudioFilter{@link #EarMonitorAudioFilter}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details. @note - In-ear monitoring is effective for audio captured by the RTC SDK. - We recommend that you use wired earbuds/headphones for a low-latency experience. - The RTC SDK supports both hardware-level and SDK-level in-ear monitoring. Hardware-level monitoring typically offers lower latency and better audio quality. If your App is in the manufacturer's trusted list for this feature and the environment meets the required conditions, the RTC SDK automatically defaults to hardware-level in-ear monitoring when enabled.
setEarMonitorVolume(int volume) FutureOr<int>
@detail api @author majun.lvhiei @brief Set the monitoring volume. @param volume The monitoring volume with the adjustment range between 0% and 100%. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - Call setEarMonitorMode{@link #RTCEngine#setEarMonitorMode} before setting the volume.
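The two ear-monitoring calls are typically used together; a hedged sketch (the enum member names below are illustrative assumptions, not confirmed by this reference):

```dart
// Assumptions: `engine` is an initialized RTCEngine; `EarMonitorMode.on` and
// `EarMonitorAudioFilter.none` are illustrative member names.
await engine.setEarMonitorMode(EarMonitorMode.on, EarMonitorAudioFilter.none);
await engine.setEarMonitorVolume(80); // monitoring volume: 80%
```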
setEncryptInfo(int aesType, String key) FutureOr<int>
@detail api @author wangjunlin.3182 @brief Sets the built-in encryption method used during transmission. @param aesType Encryption type. Valid values are 0, 1, 2, 3, and 4, with the following meanings:
0. Not encrypted.
1. AES-128-CBC
2. AES-256-CBC
3. AES-128-ECB
4. AES-256-ECB @param key Encryption key. The length is limited to 36 bits; any excess is truncated. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - This method is mutually exclusive with setCustomizeEncryptHandler{@link #RTCEngine#setCustomizeEncryptHandler}: whichever is called last takes effect. - This method must be called before joinRoom{@link #RTCRoom#joinRoom}. It can be called repeatedly, and the parameters of the last call take effect. - Due to an AES algorithm limitation, a key longer than 36 bits is truncated to its first 36 bits.
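A hedged Dart sketch of the ordering constraint (assuming an initialized RTCEngine `engine`; the key string is a placeholder):

```dart
// Assumption: `engine` is initialized and joinRoom has not been called yet.
// aesType 1 selects AES-128-CBC per the table above; the key is a placeholder.
final ret = await engine.setEncryptInfo(1, 'replace-with-your-key');
// Call RTCRoom.joinRoom afterwards; if setEncryptInfo is called again before
// joining, the most recent parameters take effect.
```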
setExternalVideoEncoderEventHandler(IExternalVideoEncoderEventHandler handler) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief Registers the event handler for pushing custom encoded video frames. @param handler Custom encoded frame event handler. See IExternalVideoEncoderEventHandler{@link #IExternalVideoEncoderEventHandler} @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - This method needs to be called before entering the room. - Unregister the handler before the engine is destroyed by calling this method with the parameter set to null.
setLocalProxy(List<LocalProxyConfiguration> configurations) FutureOr<int>
@detail api @author keshixing.rtc @brief Sets local proxy. @param configurations Local proxy configurations. Refer to LocalProxyConfiguration{@link #LocalProxyConfiguration}.
You can set both an HTTP tunnel and SOCKS5 as your local proxies, or set only one of them based on your needs. If you set both, media traffic and signaling are routed through the SOCKS5 proxy, and HTTP requests through the HTTP tunnel proxy. If you set only one, media traffic, signaling, and HTTP requests are all routed through the proxy you chose.
If you want to remove the existing local proxy configurations, you can call this API with the parameter set to null. @note - You must call this API before joining the room. - After calling this API, you will receive onLocalProxyStateChanged{@link #IRTCEngineEventHandler#onLocalProxyStateChanged} callback that informs you of the states of local proxy connection.
setLocalSimulcastMode(VideoSimulcastMode mode, Array<VideoEncoderConfig> streamConfig) FutureOr<int>
@valid since 3.60. @detail api @brief Enables the Simulcast feature and configures the lower-quality video stream settings. @param mode Whether to publish lower-quality streams and how many of them to publish. See VideoSimulcastMode{@link #VideoSimulcastMode}. By default, it is set to Single, where the publisher sends the video in a single profile. In the other modes, the low-quality stream defaults to a resolution of 160px × 90px with a bitrate of 50Kbps. @param streamConfig The specifications of the lower-quality streams. You can configure up to three low-quality streams for a video source. See VideoEncoderConfig{@link #VideoEncoderConfig}. The resolution of each lower-quality stream must be smaller than the standard stream set via setVideoEncoderConfig{@link #RTCEngine#setVideoEncoderConfig}. The specifications in the array must be arranged in ascending order of resolution. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details. @note - The default specification of the video stream is 640px × 360px @15fps. - The method applies to the camera video only. - Refer to Simulcasting for more information.
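A hedged Dart sketch of configuring simulcast (assuming an initialized RTCEngine `engine`; the VideoEncoderConfig named parameters and the VideoSimulcastMode member are illustrative assumptions):

```dart
// Assumptions: `engine` is initialized; field names on VideoEncoderConfig and
// the `VideoSimulcastMode.multiple` member are illustrative.
await engine.setVideoEncoderConfig(
  VideoEncoderConfig(width: 1280, height: 720, frameRate: 15),
  null,
);
// Lower-quality layers must ascend by resolution and stay below the main stream:
await engine.setLocalSimulcastMode(VideoSimulcastMode.multiple, [
  VideoEncoderConfig(width: 160, height: 90, frameRate: 15),
  VideoEncoderConfig(width: 640, height: 360, frameRate: 15),
]);
```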
setLocalVideoCanvas(VideoCanvas videoCanvas) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author sunhang.io @brief Sets the view to be used for local video rendering and the rendering mode. @param videoCanvas View information and rendering mode. See VideoCanvas{@link #VideoCanvas}. @return - 0: Success. - -2: Invalid parameter. - -12: This method is not available in the Audio SDK. @note - You should bind your stream to a view before joining the room. This setting will remain in effect after you leave the room. - If you need to unbind the local video stream from the current view, you can call this API and set the videoCanvas to null.
setLocalVideoMirrorType(MirrorType mirrorType) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangjunlin.3182 @brief Sets the mirror mode for the captured video stream. @param mirrorType Mirror type. See MirrorType{@link #MirrorType}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - Switching video streams does not affect the settings of the mirror type. - This API is not applicable to screen-sharing streams. - When using an external renderer, you can set mirrorType to 0 and 3, but you cannot set it to 1. - Before you call this API, the initial states of each video stream are as follows:
setLocalVideoSink(IVideoSink videoSink, int requiredFormat) FutureOr<int>
@valid since 3.57 @detail api @hiddensdk(audiosdk) @brief Binds the local video stream to a custom renderer. You can get video frame data at specified positions and in specified formats through parameter settings. @param videoSink Custom video renderer. See IVideoSink{@link #IVideoSink}. @param config Local video frame callback configuration. See LocalVideoSinkConfig{@link #LocalVideoSinkConfig}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - RTC SDK uses its own renderer (internal renderer) for video rendering by default. - Leaving the room clears the binding. - To unbind the video stream from the custom renderer, set videoSink to null. - Generally, after receiving the onFirstLocalVideoFrameCaptured{@link #IRTCEngineEventHandler#onFirstLocalVideoFrameCaptured} callback notifying that the first local video frame has been captured, call this method to bind a custom renderer to the video stream, and then join the room. @order 2
setLocalVoiceEqualization(VoiceEqualizationConfig voiceEqualizationConfig) FutureOr<int>
@detail api @author wangjunzheng @brief Sets the equalization effect for the locally captured audio. The audio includes both internally captured audio and externally captured voice, but not mixed audio files. @param voiceEqualizationConfig See VoiceEqualizationConfig{@link #VoiceEqualizationConfig}. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note Per the Nyquist sampling theorem, the audio sampling rate must be greater than twice the set center frequency; otherwise, the setting takes no effect.
setLocalVoicePitch(int pitch) FutureOr<int>
@detail api @author wangjunzheng @brief Changes the local voice to a different key, mostly used in karaoke scenarios.
You can raise or lower the pitch of the local voice with this method. @param pitch How much higher or lower than the original local voice, within the range [-12, 12]. The default value is 0, i.e., no adjustment.
Adjacent values within the range differ by one semitone: positive values raise the pitch, negative values lower it, and the larger the absolute value, the greater the change.
Out of the value range, the setting fails and triggers the onWarning{@link #IRTCEngineEventHandler#onWarning} callback, indicating WARNING_CODE_SET_SCREEN_STREAM_INVALID_VOICE_PITCH in WarningCode{@link #WarningCode} for the invalid value. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details
setLocalVoiceReverbParam(VoiceReverbConfig config) FutureOr<int>
@detail api @author wangjunzheng @brief Sets the reverb effect for the locally captured audio. The audio includes both internally captured audio and externally captured voice, but not mixed audio files. @param config See VoiceReverbConfig{@link #VoiceReverbConfig}. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note Call enableLocalVoiceReverb{@link #RTCEngine#enableLocalVoiceReverb} to enable the reverb effect.
setPlaybackVolume(int volume) FutureOr<int>
@detail api @author huangshouqin @brief Adjusts the local playback volume of the mixed audio of all remote users. You can call this API before or during playback. @param volume Ratio (%) of the playback volume to the original volume, in the range [0, 400], with overflow protection.
To ensure the audio quality, we recommend setting the volume to 100.
- 0: Mute - 100: Original volume - 400: Four times the original volume, with signal-clipping protection. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note Suppose a remote user A is always within the range of the adjustment. If you use both this method and setRemoteAudioPlaybackVolume{@link #RTCEngine#setRemoteAudioPlaybackVolume}/setRemoteRoomAudioPlaybackVolume{@link #RTCRoom#setRemoteRoomAudioPlaybackVolume}, the volume that the local user hears from user A is the overlay of both settings.
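The overlay behavior in the note can be sketched in Dart (assuming an initialized RTCEngine `engine`; the stream ID is a placeholder):

```dart
// Assumption: `engine` is initialized; 'stream-a' is a placeholder stream ID.
await engine.setPlaybackVolume(200);                       // all remote audio: 200%
await engine.setRemoteAudioPlaybackVolume('stream-a', 50); // stream-a only: 50%
// The volume the local user hears from stream-a is the overlay of both settings.
```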
setPublishFallbackOption(PublishFallbackOption option) FutureOr<int>
@detail api @author panjian.fishing @brief Sets the fallback option for published audio & video streams.
You can call this API to set whether to automatically lower the resolution of the published streams under limited network conditions. @param option Fallback option. See PublishFallbackOption{@link #PublishFallbackOption}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - This API only works after you call setLocalSimulcastMode{@link #RTCEngine#setLocalSimulcastMode} to enable publishing multiple streams. - You must call this API before entering the room. - After calling this method, if streams degrade or recover due to poor performance or network conditions, the local end receives early warnings through the onPerformanceAlarms{@link #IRTCEngineEventHandler#onPerformanceAlarms} callback so you can adjust the capture device. - After you allow the video stream to fall back, your stream subscribers receive onSimulcastSubscribeFallback{@link #IRTCEngineEventHandler#onSimulcastSubscribeFallback} when the resolution of your published stream is lowered or restored. - You can alternatively set fallback options delivered from the server side, which take higher priority.
setRemoteAudioPlaybackVolume(String streamId, int volume) FutureOr<int>
@detail api @author huanghao @brief Sets the playback volume of a received remote stream. You must join the room before calling this API. The validity of the setting is not tied to the publishing status of the stream. @param streamId Stream ID, used to specify the remote stream whose volume is to be adjusted. @param volume The ratio (%) of the playback volume to the original volume. The range is [0, 400], with overflow protection.
For better audio quality, we recommend setting the value within [0, 100]. @return result
- 0: Success. - <0: Failure. See ReturnStatus{@link #ReturnStatus}. @note Assume that a remote user A is always within the scope of the adjustment:
- When this API is used together with setRemoteRoomAudioPlaybackVolume{@link #RTCRoom#setRemoteRoomAudioPlaybackVolume}, the volume the local user hears from user A is the one set by whichever API was called later. - When this API is used together with setPlaybackVolume{@link #RTCEngine#setPlaybackVolume}, the volume the local user hears from user A is the superposition of the two settings. - If the remote user leaves the room after you set the volume with this API, the setting becomes invalid.
setRemoteUserPriority(String roomid, String uid, RemoteUserPriority priority) FutureOr<int>
@detail api @author panjian.fishing @brief Sets the priority of a remote user. @param roomid Room ID. @param uid The ID of the remote user. @param priority Priority of the remote user. See the enumeration type RemoteUserPriority{@link #RemoteUserPriority}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details. @note - This method is used with setSubscribeFallbackOption{@link #RTCEngine#setSubscribeFallbackOption}. - If the subscribed-stream fallback option is enabled, streams received by high-priority users are prioritized under weak connections or insufficient performance. - This method can be called before or after entering the room, and the priority of a remote user can be modified at any time.
setRemoteVideoCanvas(String streamId, VideoCanvas videoCanvas) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author sunhang.io @brief Sets the view to be used for remote video rendering and the rendering mode.
To unbind the canvas, set videoCanvas to Null. @param streamId Stream ID, used to specify the video stream for which the view and rendering mode need to be set. @param videoCanvas View information and rendering mode. See VideoCanvas{@link #VideoCanvas}. Starting from version 3.56, you can set the rotation angle of the remote video rendering using renderRotation. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note When the local user leaves the room, the setting will be invalid. The remote user leaving the room does not affect the setting.
setRemoteVideoMirrorType(String streamId, RemoteMirrorType mirrorType) FutureOr<int>
@detail api @hiddensdk(audiosdk) @valid since 3.57 @region Video Management @brief When using internal rendering, enable mirroring for the remote stream. @param streamId Stream ID, used to specify the video stream that needs to be mirrored. @param mirrorType The mirror type for the remote stream, see RemoteMirrorType{@link #RemoteMirrorType}. @return - 0: Successful call. - < 0: Call failed, see ReturnStatus{@link #ReturnStatus} for more error details.
setRemoteVideoSink(String streamId, IVideoSink videoSink, int requiredFormat) FutureOr<int>
@valid since 3.57 @detail api @hiddensdk(audiosdk) @brief Binds the remote video stream to a custom renderer. You can get video frame data at specified positions and in specified formats through parameter settings. @param streamId Stream ID, used to specify the video stream to be rendered. @param videoSink Custom video renderer. See IVideoSink{@link #IVideoSink}. @param config Remote video frame callback configuration. See RemoteVideoSinkConfig{@link #RemoteVideoSinkConfig}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - RTC SDK uses its own renderer (internal renderer) for video rendering by default. - This method can be called before or after entering the room. If you cannot obtain remote stream information in advance, call this method after joining the room and receiving the remote stream information through the onUserPublishStreamVideo{@link #IRTCRoomEventHandler#onUserPublishStreamVideo} callback. - Leaving the room clears the binding. - To unbind the video stream from the custom renderer, set videoSink to null. @order 2
setRemoteVideoSuperResolution(String streamId, VideoSuperResolutionMode mode) FutureOr<int>
@hidden for internal use only @detail api @hiddensdk(audiosdk) @author yinkaisheng @brief Sets the super resolution mode for a remote video stream. @param streamId Stream ID, used to specify the video stream for which the super resolution mode needs to be set. @param mode Super resolution mode. See VideoSuperResolutionMode{@link #VideoSuperResolutionMode}. @return
- 0: RETURN_STATUS_SUCCESS. It does not indicate the actual status of the super resolution mode, you should refer to onRemoteVideoSuperResolutionModeChanged{@link #IRTCEngineEventHandler#onRemoteVideoSuperResolutionModeChanged} callback. - -1: RETURN_STATUS_NATIVE_IN_VALID. Native library is not loaded. - -2: RETURN_STATUS_PARAMETER_ERR. Invalid parameter. - -9: RETURN_STATUS_SCREEN_NOT_SUPPORT. Failure. Screen stream is not supported. See ReturnStatus{@link #ReturnStatus} for more return value indications. @note - Call this API after joining room. - The original resolution of the remote video stream should not exceed 640 × 360 pixels. - You can only turn on super-resolution mode for one stream.
setRtcVideoEventHandler(IRTCEngineEventHandler engineEventHandler) FutureOr<int>
@detail api @hidden for internal use only @author wangzhanqiang @brief Sets the engine event handler. The receiving class must inherit from IRTCEngineEventHandler{@link #IRTCEngineEventHandler}. @param engineEventHandler
Event handler interface class. See IRTCEngineEventHandler{@link #IRTCEngineEventHandler}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - The caller needs to implement a class that inherits from IRTCEngineEventHandler{@link #IRTCEngineEventHandler} and override the events that need attention. - The callbacks are asynchronous. - All event callbacks are triggered on a separate callback thread. When handling a callback, pay attention to the thread environment: do not directly perform operations that must run on the UI thread inside the callback implementation.
setRuntimeParameters(dynamic params) FutureOr<int>
@detail api @author panjian.fishing @brief Sets runtime parameters. @param params Preserved parameters. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note Call this API before joinRoom{@link #RTCRoom#joinRoom} and startAudioCapture{@link #RTCEngine#startAudioCapture}.
setScreenAudioSourceType(AudioSourceType sourceType) FutureOr<int>
@detail api @author liyi.000 @brief Sets the screen audio source type. (internal capture/custom capture) @param sourceType Screen audio source type. See AudioSourceType{@link #AudioSourceType}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - The default screen audio source type is RTC SDK internal capture. - You should call this API before calling publishScreenAudio. Otherwise, you will receive onWarning{@link #IRTCEngineEventHandler#onWarning} with 'WARNING_CODE_SET_SCREEN_AUDIO_SOURCE_TYPE_FAILED'. - When using internal capture, you need to call startScreenCapture to start capturing. After that, as you switch to an external source by calling this API, the internal capture will stop. - When using custom capture, you need to call pushScreenAudioFrame{@link #RTCEngine#pushScreenAudioFrame} to push the audio stream to the RTC SDK. - Whether you use internal capture or custom capture, you must call publishScreenAudio to publish the captured screen audio stream. @order 5
setScreenCaptureVolume(int volume) FutureOr<int>
@valid Available since 3.60. @detail api @author shiyayun @brief Adjusts the volume of audio captured during screen sharing.
This method only changes the volume of the audio data and does not affect the hardware volume of the local device. @param volume The ratio (%) of the capture volume to the original volume, in the range [0, 400], with built-in overflow protection.
To ensure better call quality, we recommend setting the volume within [0, 100].
- 0: Mute - 100: Original volume - 400: Four times the original volume, with signal-clipping protection. @return - 0: Success. - < 0: Fail. See ReturnStatus{@link #ReturnStatus} for more details. @note You can call this API to set the capture volume before or after enabling screen audio capture.
setServerParams(String signature, String url) FutureOr<int>
@detail api @author hanchenchen.c @brief Sets application server parameters.
Before calling sendServerMessage{@link #RTCEngine#sendServerMessage} or sendServerBinaryMessage{@link #RTCEngine#sendServerBinaryMessage} to send a message to the application server, you must set a valid signature and application server address. @param signature Dynamic signature. The App server may use the signature to verify the source of messages.
You need to define the signature yourself. It can be any non-empty string. It is recommended to encode information such as UID into the signature.
The signature will be sent to the address set through the "url" parameter in the form of a POST request. @param url The address of the application server. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - You must call login{@link #RTCEngine#login} to log in before calling this interface. - After you call this interface, the SDK returns the corresponding result via onServerParamsSetResult{@link #IRTCEngineEventHandler#onServerParamsSetResult}.
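A hedged Dart sketch of the call order (assuming an initialized RTCEngine `engine` on which login has already succeeded; the signature and URL are placeholders):

```dart
// Assumptions: `engine` is initialized and login has already succeeded;
// the signature and URL are placeholders.
final ret = await engine.setServerParams(
  'app-defined-signature',
  'https://example.com/rtc/server-message',
);
// The actual outcome is delivered asynchronously via
// IRTCEngineEventHandler.onServerParamsSetResult.
```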
setSubscribeFallbackOption(SubscribeFallbackOptions option) FutureOr<int>
@detail api @author panjian.fishing @brief Sets the fallback option for subscribed RTC streams.
You can call this API to set whether to lower the resolution of currently subscribed streams under limited network conditions. @param option Fallback option. See SubscribeFallbackOptions{@link #SubscribeFallbackOptions} for more details. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details. @note - You must call this API before entering the room. - After you enable the fallback, you will receive onSimulcastSubscribeFallback{@link #IRTCEngineEventHandler#onSimulcastSubscribeFallback} and onRemoteVideoSizeChanged{@link #IRTCEngineEventHandler#onRemoteVideoSizeChanged} when the resolution of your subscribed stream is lowered or restored. - You can alternatively set fallback options delivered from the server side, which take higher priority.
setVideoCaptureConfig(VideoCaptureConfig videoCaptureConfig) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author liuyangyang @brief Set the video capture parameters for internal capture of the RTC SDK.
If your project uses the SDK internal capture module, you can specify the video capture parameters including preference, resolution and frame rate through this interface. @param videoCaptureConfig Video capture parameters. See: VideoCaptureConfig{@link #VideoCaptureConfig}. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note
setVideoCaptureRotation(VideoRotation rotation) FutureOr<int>
@detail api @hiddensdk(audiosdk) @brief Set the rotation of the video images captured from the local device.
Call this API to rotate the videos when the camera is fixed upside down or tilted. For rotating videos on a phone, we recommend using setVideoRotationMode{@link #RTCEngine#setVideoRotationMode} instead. @param rotation Defaults to VIDEO_ROTATION_0(0), which means no rotation. Refer to VideoRotation{@link #VideoRotation}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - For videos captured by the internal module, this rotation is combined with that set by calling setVideoRotationMode{@link #RTCEngine#setVideoRotationMode}. - This API also affects external-sourced videos: the final rotation is the original rotation angle plus the rotation set by this API. - Elements added during the video pre-processing stage, such as video stickers and backgrounds applied via enableVirtualBackground{@link #IVideoEffect#enableVirtualBackground}, are also rotated by this API. - The rotation applies both to locally rendered videos and to those sent out. However, to rotate only a video intended for pushing to CDN, use setVideoOrientation{@link #RTCEngine#setVideoOrientation}.
setVideoDecoderConfig(String streamId, VideoDecoderConfig config) FutureOr<int>
@detail api @brief Sets the decoding method for a remote video stream. Call this API before subscribing to the stream. @param streamId The remote stream ID, specifying which video stream to decode. @param config Video decoding method. See VideoDecoderConfig{@link #VideoDecoderConfig}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - To custom-decode a remote stream, call registerRemoteEncodedVideoFrameObserver{@link #RTCEngine#registerRemoteEncodedVideoFrameObserver} to register the remote video stream observer, and then call this API to set the decoding method to custom decoding. The observed video data is called back through onRemoteEncodedVideoFrame{@link #IRemoteEncodedVideoFrameObserver#onRemoteEncodedVideoFrame}. - Since version 3.56, for automatic subscription, you can set the RoomId and UserId of the key to null. In this case, the decoding settings set by calling this API apply to all remote main streams or screen-sharing streams, based on the StreamIndex value of the key.
setVideoDenoiser(VideoDenoiseMode mode) FutureOr<int>
@hidden for internal use only @detail api @hiddensdk(audiosdk) @author Yujianli @brief Sets the video noise reduction mode. @param mode Video noise reduction mode. Refer to VideoDenoiseMode{@link #VideoDenoiseMode} for more details. @return - 0: Success. Please refer to onVideoDenoiseModeChanged{@link #IRTCEngineEventHandler#onVideoDenoiseModeChanged} callback for the actual state of video noise reduction mode. - < 0: Failure.
setVideoDigitalZoomConfig(ZoomConfigType type, float size) FutureOr<int>
@valid since 3.51 @detail api @hiddensdk(audiosdk) @author likai.666 @brief Sets the step size for each digital zoom control applied to the local videos. @param type Required. Identifies which setting the size refers to. Refer to ZoomConfigType{@link #ZoomConfigType}. @param size Required. Rounded to three decimal places. It defaults to 0.
The meaning and range vary by type. If the scale or moving distance exceeds the range, it is clamped to the limit.
- kZoomFocusOffset: Increase or decrease to the scaling factor. Range: [0, 7]. For example, when it is set to 0.5 and setVideoDigitalZoomControl{@link #RTCEngine#setVideoDigitalZoomControl} is called to zoom in, the scale increases by 0.5. The scale ranges within [1, 8] and defaults to 1, which means the original size. - kZoomMoveOffset: Ratio of the moving distance to the border of the video image. It ranges within [0, 0.5] and defaults to 0, which means no offset. When you call setVideoDigitalZoomControl{@link #RTCEngine#setVideoDigitalZoomControl} and choose CAMERA_MOVE_LEFT, the moving distance is size × original width; for CAMERA_MOVE_UP, it is size × original height. Suppose a video spans 1080 px and the size is set to 0.5; the distance would be 0.5 × 1080 px = 540 px. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - Only one size can be set per call. To set multiple sizes, call this API once for each. - As the default size is 0, you must call this API before performing any digital zoom control via setVideoDigitalZoomControl{@link #RTCEngine#setVideoDigitalZoomControl} or startVideoDigitalZoomControl{@link #RTCEngine#startVideoDigitalZoomControl}.
setVideoDigitalZoomControl(ZoomDirectionType direction) FutureOr<int>
@valid since 3.51 @detail api @hiddensdk(audiosdk) @author likai.666 @brief Digitally zooms or moves the local video image once. This action affects both the local video preview and the published stream. @param direction Action of the digital zoom control. Refer to ZoomDirectionType{@link #ZoomDirectionType}. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - As the default offset is 0, you must call setVideoDigitalZoomConfig{@link #RTCEngine#setVideoDigitalZoomConfig} before this API. - You can only move video images after they have been magnified via this API or startVideoDigitalZoomControl{@link #RTCEngine#startVideoDigitalZoomControl}. - If you request an out-of-range scale or movement, the SDK clamps it to the limit: for example, an image already moved to the border cannot be moved further, and one magnified to 8x cannot be zoomed in further. - Call startVideoDigitalZoomControl{@link #RTCEngine#startVideoDigitalZoomControl} for continuous, repeated digital zoom control. - Refer to setCameraZoomRatio{@link #RTCEngine#setCameraZoomRatio} if you intend to apply optical zoom control to the camera.
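The config-then-control order can be sketched in Dart (assuming an initialized RTCEngine `engine`; this reference only documents the type names, so the enum member names below are illustrative assumptions):

```dart
// Assumptions: `engine` is initialized; `ZoomConfigType.kZoomFocusOffset` and
// `ZoomDirectionType.cameraZoomIn` are illustrative member names.
// Each zoom step will change the scaling factor by 0.5:
await engine.setVideoDigitalZoomConfig(ZoomConfigType.kZoomFocusOffset, 0.5);
// One-shot control: the scale goes from the default 1 to 1.5.
await engine.setVideoDigitalZoomControl(ZoomDirectionType.cameraZoomIn);
```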
setVideoEncoderConfig(VideoEncoderConfig encoderConfig, dynamic parameters) FutureOr<int>
@detail api @hiddensdk(audiosdk) @brief Sets the expected quality of the video stream by specifying the resolution, frame rate, bitrate, and the fallback strategy when the network is poor. @param encoderConfig See VideoEncoderConfig{@link #VideoEncoderConfig}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details. @note - Since V3.61, this method can only set a single profile for the video stream. If you intend to publish the stream in multiple qualities, use setLocalSimulcastMode{@link #RTCEngine#setLocalSimulcastMode}. - Without calling this method, only one stream will be sent with a profile of 640px × 360px @15fps. The default encoding preference is frame rate-first. - If you use an external video source, you can also use this method to set the encoding parameters.
setVideoOrientation(VideoOrientation orientation) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangjunlin.3182 @brief Sets the orientation of the video frame before custom video processing and encoding. The default value is Adaptive.
You should set the orientation to Portrait when using video effects or custom processing.
You should set the orientation to Portrait or Landscape when pushing a single stream to the CDN. @param orientation Orientation of the video frame. See VideoOrientation{@link #VideoOrientation}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - The orientation setting is only applicable to internally captured video sources. For custom captured video sources, setting the video frame orientation may result in errors, such as swapped width and height. Screen sources do not support video frame orientation settings. - We recommend setting the orientation before joining the room. The updates of encoding configurations and of the orientation are asynchronous, so changing the orientation after joining the room can cause a brief glitch in the preview.
setVideoRotationMode(VideoRotationMode rotationMode) FutureOr<int>
@detail api @hiddensdk(audiosdk) @brief Set the orientation of the video capture. By default, the App direction is used as the orientation reference.
During rendering, the receiving client rotates the video in the same way as the sending client did. @param rotationMode The rotation reference can be the orientation of the App or gravity. Refer to VideoRotationMode{@link #VideoRotationMode} for details. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - The orientation setting is effective for internal video capture only. That is, the orientation setting is not effective for the custom video source or the screen-sharing stream. - If the video capture is on, the setting will be effective once you call this API. If the video capture is off, the setting will be effective once capture starts.
setVideoSourceType(VideoSourceType type) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author liuyangyang @brief Sets the video source, including screen recordings.
The internal video capture is the default, which refers to capturing video using the built-in module. @param type Video source type. Refer to VideoSourceType{@link #VideoSourceType} for more details. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - You can call this API whether the user is in a room or not. - Calling this API to switch to the custom video source will stop the enabled internal video capture. - To switch to internal video capture, call this API to stop custom capture and then call startVideoCapture{@link #RTCEngine#startVideoCapture} to enable internal video capture. - To push custom encoded video frames to the SDK, call this API to switch VideoSourceType to VIDEO_SOURCE_TYPE_ENCODED_WITH_SIMULCAST(2) or VIDEO_SOURCE_TYPE_ENCODED_WITHOUT_SIMULCAST(3).
setVideoWatermark(String imagePath, RTCWatermarkConfig watermarkConfig) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author zhushufan.ref @brief Adds a watermark to the designated video stream. @param imagePath File path of the watermark image. You can use the absolute path, the asset path (/assets/xx.png), or the URI path (content://). The path should be less than 512 bytes.
The watermark image should be in PNG or JPG format. @param watermarkConfig Watermark configurations. See RTCWatermarkConfig{@link #RTCWatermarkConfig}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - Call clearVideoWatermark{@link #RTCEngine#clearVideoWatermark} to remove the watermark on the designated video stream. - You can only add one watermark to one video stream. The newly added watermark replaces the previous one. You can call this API multiple times to add watermarks to different streams. - You can call this API before and after joining room. - If you mirror the preview, or the preview and the published stream, the watermark will also be mirrored locally, but the published watermark will not be mirrored. - When you enable simulcast mode, the watermark will be added to all video streams, and it will scale down to smaller encoding configurations accordingly.
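A hedged sketch of adding and later removing a watermark. The `RTCWatermarkConfig` construction is left empty because its fields are not shown here; see the class reference:

```dart
// Hypothetical sketch: add a PNG watermark, then remove it when done.
final watermarkConfig = RTCWatermarkConfig(); // fields omitted; see class reference
var ret = await engine.setVideoWatermark('/assets/logo.png', watermarkConfig);
if (ret == 0) {
  // ... publish streams; in simulcast mode the watermark is applied to
  // all layers and scaled down for smaller encoding configurations ...
  ret = await engine.clearVideoWatermark(); // remove the watermark
}
```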
setVoiceChangerType(VoiceChangerType voiceChanger) FutureOr<int>
@valid since 3.32 @detail api @author wangjunzheng @brief Set the sound change effect type @param voiceChanger The sound change effect type. See VoiceChangerType{@link #VoiceChangerType} @return API call result:
- 0: Success. - <0: Failure. See ReturnStatus{@link #ReturnStatus} for specific reasons. @note - To use this feature, you need to integrate the SAMI library. See On-Demand Plugin Integration. - You can call it before and after entering the room. - Effective for both internal and external audio source. - Only valid for mono-channel audio. - Mutually exclusive with setVoiceReverbType{@link #RTCEngine#setVoiceReverbType}, and the effects set later will override the effects set first.
setVoiceReverbType(VoiceReverbType voiceReverb) FutureOr<int>
@valid since 3.32 @detail api @author wangjunzheng @brief Set the reverb effect type @param voiceReverb Reverb effect type. See VoiceReverbType{@link #VoiceReverbType} @return API call result:
- 0: Success. - <0: Failure. See ReturnStatus{@link #ReturnStatus} for specific reasons. @note - You can call it before and after entering the room. - Effective for both internal and external audio source. - Only valid for mono-channel audio. - Mutually exclusive with setVoiceChangerType{@link #RTCEngine#setVoiceChangerType}, and the effects set later will override the effects set first.
startAudioCapture() FutureOr<int>
@detail api @author dixing @brief Start internal audio capture. The default is off.
Internal audio capture refers to: capturing audio using the built-in module.
The local client will be informed via onAudioDeviceStateChanged{@link #IRTCEngineEventHandler#onAudioDeviceStateChanged} after starting audio capture by calling this API.
The remote clients in the room will be informed of the state change via onUserStartAudioCapture{@link #IRTCEngineEventHandler#onUserStartAudioCapture} after the visible user starts audio capture by calling this API. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - Enabling the microphone without the user's permission will trigger onWarning{@link #IRTCEngineEventHandler#onWarning}. - Call stopAudioCapture{@link #RTCEngine#stopAudioCapture} to stop the internal audio capture. Otherwise, the internal audio capture will continue until you destroy the engine instance. - To mute and unmute the microphone, we recommend using publishStreamAudio{@link #RTCRoom#publishStreamAudio} rather than stopAudioCapture{@link #RTCEngine#stopAudioCapture} and this API, because starting and stopping capture devices takes time waiting for the device to respond, which may lead to a short silence during the communication. - To switch from custom to internal audio capture, stop publishing before disabling the custom audio capture module and then call this API to enable the internal audio capture.
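As the note above suggests, use publish control for mute/unmute and reserve capture start/stop for lifecycle changes. A hedged sketch, assuming `engine` and `room` instances already exist and that `publishStreamAudio` takes a boolean (its actual parameters are not shown here):

```dart
// Start the built-in microphone capture; remote users are notified via
// onUserStartAudioCapture if this user is visible.
final ret = await engine.startAudioCapture();
if (ret < 0) {
  print('startAudioCapture failed: $ret'); // see ReturnStatus
}

// To "mute", keep capture running and stop publishing instead --
// restarting the device would cause a short silence.
// The boolean parameter of publishStreamAudio is an assumption here.
await room.publishStreamAudio(false); // mute
await room.publishStreamAudio(true); // unmute

// Stop capture only when audio is no longer needed at all.
await engine.stopAudioCapture();
```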
startAudioRecording(AudioRecordingConfig config) FutureOr<int>
@detail api @author huangshouqin @brief Starts recording the audio communication and generates a local file.
If you call this API before or after joining the room without internal audio capture, the recording task can still begin but no data will be recorded in the local file. Only after you call startAudioCapture{@link #RTCEngine#startAudioCapture} to enable internal audio capture will the data be recorded in the local file. @param config See AudioRecordingConfig{@link #AudioRecordingConfig}. @return - 0: Success - -2: Invalid parameters - -3: Not valid in this SDK. Please contact the technical support. @note - All audio effects are included in the file; audio files played via audio mixing are not. - Call stopAudioRecording{@link #RTCEngine#stopAudioRecording} to stop recording. - You can call this API before and after joining the room. If this API is called before you join the room, you need to call stopAudioRecording{@link #RTCEngine#stopAudioRecording} to stop recording. If this API is called after you join the room, the recording task ends automatically. If you join multiple rooms, audio from all rooms is recorded in one file. - After calling the API, you'll receive onAudioRecordingStateUpdate{@link #IRTCEngineEventHandler#onAudioRecordingStateUpdate}.
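A hedged sketch of a recording session. The `AudioRecordingConfig` fields (e.g. file path and format) are not shown here and are left to the class reference:

```dart
// Hypothetical sketch: record the call audio to a local file.
await engine.startAudioCapture(); // required for data to reach the file

final recConfig = AudioRecordingConfig(); // fields omitted; see class reference
final ret = await engine.startAudioRecording(recConfig);
// ret: 0 success, -2 invalid parameters, -3 unsupported in this SDK build.

// ... progress and state arrive via onAudioRecordingStateUpdate ...
await engine.stopAudioRecording();
```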
startClientMixedStream(String taskId, MixedStreamConfig mixedConfig, ClientMixedStreamConfig extraConfig) FutureOr<int>
startCloudProxy(List<CloudProxyInfo> cloudProxiesInfo) FutureOr<int>
@detail api @author daining.nemo @brief Start cloud proxy @param cloudProxiesInfo Cloud proxy information list. See CloudProxyInfo{@link #CloudProxyInfo}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - Call this API before joining the room. - Start pre-call network detection after starting the cloud proxy. - After the cloud proxy is started and connects to the cloud proxy server successfully, you will receive onCloudProxyConnected{@link #IRTCEngineEventHandler#onCloudProxyConnected}. - To stop the cloud proxy, call stopCloudProxy{@link #RTCEngine#stopCloudProxy}.
startEchoTest(EchoTestConfig config, int delayTime) FutureOr<int>
@detail api @author qipengxiang @brief Starts a call test.
Before entering the room, you can call this API to test whether your local audio/video equipment as well as the upstream and downstream networks are working correctly.
Once the test starts, the SDK will record your sound or video. If you receive the playback within the delay range you set, the test is considered normal. @param config Test configurations, see EchoTestConfig{@link #EchoTestConfig}. @param delayTime Delayed audio/video playback time specifying how long you expect to receive the playback after starting the test. The range of the value is [2,10] in seconds and the default value is 2. @return API call result:
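A hedged sketch of a pre-join loopback test. The `EchoTestConfig` fields are omitted (see the class reference):

```dart
// Hypothetical sketch: run an echo test with a 5-second expected delay
// before joining a room.
final testConfig = EchoTestConfig(); // device/stream options; see class reference
final ret = await engine.startEchoTest(testConfig, 5); // playback expected within 5 s
if (ret == 0) {
  // Speak or move in front of the camera; receiving the playback within the
  // delay means the devices and the up/downlink network are working.
  await Future<void>.delayed(const Duration(seconds: 30));
  await engine.stopEchoTest(); // always stop to restore devices and streams
}
```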
startFileRecording(RecordingConfig config, RecordingType recordingType) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief This method records the audio & video data during the call to a local file. @param config Local recording parameter configuration. See RecordingConfig{@link #RecordingConfig} @param recordingType Locally recorded media type, see RecordingType{@link #RecordingType}
Note: The screen stream only supports recording video (RECORD_VIDEO_ONLY); the main stream supports recording all types. @return 0: Normal
-1: Parameter setting exception
-2: The current version of the SDK does not support this feature, please contact technical support staff @note - You must join a room before calling this method. - When you use this method, you get an onRecordingStateUpdate{@link #IRTCEngineEventHandler#onRecordingStateUpdate} callback. - If the recording is normal, the system will notify you of the recording progress through the onRecordingProgressUpdate{@link #IRTCEngineEventHandler#onRecordingProgressUpdate} callback every second.
startHardwareEchoDetection(String testAudioFilePath) FutureOr<int>
@detail api @author zhangcaining @brief Start echo detection before joining a room. @param testAudioFilePath Absolute path of the music file for the detection. It is expected to encode with UTF-8. The following files are supported: mp3, aac, m4a, 3gp, wav.
We recommend to assign a music file whose duration is between 10 to 20 seconds.
Do not pass a silent file. @return Method call result:
- 0: Success. - -1: Failure due to the ongoing process of the previous detection. Call stopHardwareEchoDetection{@link #RTCEngine#stopHardwareEchoDetection} to stop it before calling this API again. - -2: Failure due to an invalid file path or file format. @note - You can use this feature only when ChannelProfile{@link #ChannelProfile} is set to CHANNEL_PROFILE_MEETING or CHANNEL_PROFILE_MEETING_ROOM. - Before calling this API, ask the user for the permissions to access the local audio devices. - Before calling this API, make sure the audio devices are active and keep the capture volume and the playback volume within a reasonable range. - The detection result is passed as the argument of onHardwareEchoDetectionResult. - During the detection, the SDK is not able to respond to other testing APIs, such as startEchoTest{@link #RTCEngine#startEchoTest}, startAudioDeviceRecordTest{@link #IRTCAudioDeviceManager#startAudioDeviceRecordTest} or startAudioPlaybackDeviceTest{@link #IRTCAudioDeviceManager#startAudioPlaybackDeviceTest}. - Call stopHardwareEchoDetection{@link #RTCEngine#stopHardwareEchoDetection} to stop the detection and release the audio devices.
startNetworkDetection(bool isTestUplink, int expectedUplinkBitrate, bool isTestDownlink, int expectedDownlinkBitrate) FutureOr<int>
@detail api @author hanchenchen.c @brief Enable pre-call network detection @param isTestUplink Whether to detect uplink bandwidth @param expectedUplinkBitrate Expected uplink bandwidth, unit: kbps
Range: {0, [100-10000]}, 0: Auto, i.e. RTC will set the highest bitrate. @param isTestDownlink Whether to detect downlink bandwidth @param expectedDownlinkBitrate Expected downlink bandwidth, unit: kbps
Range: {0, [100-10000]}, 0: Auto, i.e. RTC will set the highest bitrate. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - After successfully calling this interface, you will receive onNetworkDetectionResult{@link #IRTCEngineEventHandler#onNetworkDetectionResult} within 3 s and every 2 s thereafter notifying you of the detection results; - If the detection stops, you will receive onNetworkDetectionStopped{@link #IRTCEngineEventHandler#onNetworkDetectionStopped} notifying you that the detection has stopped.
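A hedged sketch of a pre-call probe, following the parameter order in the signature above:

```dart
// Hypothetical sketch: probe both directions with automatic bitrate (0 = Auto).
final ret = await engine.startNetworkDetection(true, 0, true, 0);
if (ret == 0) {
  // First result via onNetworkDetectionResult within ~3 s, then every 2 s.
  // Stop the probe before joining the room:
  await engine.stopNetworkDetection(); // triggers onNetworkDetectionStopped
}
```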
startPushMixedStream(String taskId, MixedStreamPushTargetConfig pushTargetConfig, MixedStreamConfig mixedConfig) FutureOr<int>
@hidden(Linux) @valid since 3.60. Since version 3.60, this interface replaces the startPushMixedStreamToCDN and startPushPublicStream methods for the functions described below. If you have upgraded to version 3.60 or later and are still using these two methods, please migrate to this interface. @detail api @hiddensdk(audiosdk) @author lizheng @brief Specifies the streams to be mixed and initiates the task to push the mixed stream to CDN or WTN. @param taskId Task ID. The length should not exceed 127 bytes.
You may want to push more than one mixed stream to CDN from the same room. When you do that, use different ID for corresponding tasks; if you will start only one task, use an empty string.
When PushTargetType = 1 (WTN stream), this parameter is invalid. Pass an empty string. @param pushTargetConfig Push target config, such as the push URL and WTN stream ID. See MixedStreamPushTargetConfig{@link #MixedStreamPushTargetConfig}. @param mixedConfig Configurations to be set when pushing streams to CDN or WTN. See MixedStreamConfig{@link #MixedStreamConfig}. @return - 0: Success. You can get notified the result of the task and the events in the process of pushing the stream to CDN via onMixedStreamEvent{@link #IRTCEngineEventHandler#onMixedStreamEvent}. - !0: Fail. See ReturnStatus{@link #ReturnStatus} for more details. @note - Subscribe to the Push-to-CDN and the WTN stream notifications in the console to receive notifications about task status changes. When calling this API repeatedly, subsequent calls to this API will trigger both TranscodeStarted and TranscodeUpdated callbacks. - Call stopPushMixedStream{@link #RTCEngine#stopPushMixedStream} to stop pushing streams to CDN. - Call updatePushMixedStream{@link #RTCEngine#updatePushMixedStream} to update part of the configurations of the task. - Call startPushSingleStream{@link #RTCEngine#startPushSingleStream} to push a single stream to CDN.
startPushSingleStream(String taskId, PushSingleStreamParam param) FutureOr<int>
@hidden(Linux) @valid since 3.60. @detail api @hiddensdk(audiosdk) @brief Pushes a single media stream to CDN or RTC room. @param taskId Task ID.
You may want to start more than one task to push streams to CDN. When you do that, use different IDs for corresponding tasks; if you will start only one task, use an empty string. @param param Configurations for pushing a single stream to CDN. See PushSingleStreamParam{@link #PushSingleStreamParam}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details. @note - After calling this API, you will be informed of the result and errors during the pushing process with onSingleStreamEvent{@link #IRTCEngineEventHandler#onSingleStreamEvent}. - Subscribe to the Push-to-CDN and the WTN stream notifications in the console to receive notifications about task status changes. When calling this API repeatedly, subsequent calls to this API will trigger both TranscodeStarted and TranscodeUpdated callbacks. - Call stopPushSingleStream{@link #RTCEngine#stopPushSingleStream} to stop the task. - Since this API does not perform encoding and decoding, the video stream pushed to RTMP will change according to the resolution, encoding method, and turning off the camera of the end of pushing streams.
startScreenCapture(ScreenMediaType type, dynamic mediaProjectionResultData) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangqianqian.1104 @brief The RTC SDK starts capturing the screen audio and/or video stream internally. @param type Media type. See ScreenMediaType{@link #ScreenMediaType} @param mediaProjectionResultData The Intent obtained after applying for screen sharing permission from the Android device. See getMediaProjection. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - The call of this API takes effect only when you are using the RTC SDK to record the screen. You will get a warning by onVideoDeviceWarning{@link #IRTCEngineEventHandler#onVideoDeviceWarning} or onAudioDeviceWarning{@link #IRTCEngineEventHandler#onAudioDeviceWarning} after calling this API when the source is set to an external recorder. - After capturing, you need to call publishStreamAudio{@link #RTCRoom#publishStreamAudio} and/or publishStreamVideo{@link #RTCRoom#publishStreamVideo} to push to the remote end. - You will receive onVideoDeviceStateChanged{@link #IRTCEngineEventHandler#onVideoDeviceStateChanged} and onAudioDeviceStateChanged{@link #IRTCEngineEventHandler#onAudioDeviceStateChanged} when the capturing is started. - To stop capturing, call stopScreenCapture{@link #RTCEngine#stopScreenCapture}.
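A hedged Android-side sketch. Obtaining the MediaProjection Intent is platform code and only outlined here; the `ScreenMediaType` member name is an assumption for illustration:

```dart
// Hypothetical sketch: start internal screen capture with both audio and video.
// `resultIntent` is the Intent returned by the Android screen-share permission
// dialog, bridged into Dart by your own platform-channel code (not shown).
Future<void> shareScreen(RTCEngine engine, dynamic resultIntent) async {
  final ret = await engine.startScreenCapture(
    ScreenMediaType.videoAndAudio, // enum member name assumed for illustration
    resultIntent,
  );
  if (ret == 0) {
    // Publish via publishStreamVideo/publishStreamAudio on the RTCRoom,
    // then call stopScreenCapture() when done.
  }
}
```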
startVideoCapture() FutureOr<int>
@detail api @hiddensdk(audiosdk) @author zhangzhenyu.samuel @brief Enable internal video capture immediately. The default setting is off.
Internal video capture refers to: capturing video using the built-in module.
The local client will be informed via onVideoDeviceStateChanged{@link #IRTCEngineEventHandler#onVideoDeviceStateChanged} after starting video capture by calling this API.
The remote clients in the room will be informed of the state change via onUserStartVideoCapture{@link #IRTCEngineEventHandler#onUserStartVideoCapture} after the visible client starts video capture by calling this API. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - Since the upgrade in v3.37.0, you need to add Kotlin plugin to Gradle in the project to use this API. - Call stopVideoCapture{@link #RTCEngine#stopVideoCapture} to stop the internal video capture. Otherwise, the internal video capture will sustain until you destroy the engine instance. - Once you create the engine instance, you can start internal video capture regardless of the video publishing state. The video stream will start publishing only after the video capture starts. - To switch from custom to internal video capture, stop publishing before disabling the custom video capture module and then call this API to enable the internal video capture. - Call switchCamera{@link #RTCEngine#switchCamera} to switch the camera used by the internal video capture module. - If the default video format can not meet your requirement, contact our technical specialist to help you with Cloud Config. After that, you can push and apply these configurations to Android clients at any time.
startVideoDigitalZoomControl(ZoomDirectionType direction) FutureOr<int>
@valid since 3.51 @detail api @hiddensdk(audiosdk) @author likai.666 @brief Continuous and repeated digital zoom control. This action affects both the local video preview and the published stream. @param direction Action of the digital zoom control. Refer to ZoomDirectionType{@link #ZoomDirectionType}. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - As the default offset is 0, you must call setVideoDigitalZoomConfig{@link #RTCEngine#setVideoDigitalZoomConfig} before this API. - You can only move video images after they are magnified via this API or setVideoDigitalZoomControl{@link #RTCEngine#setVideoDigitalZoomControl}. - The control process stops when the scale reaches the limit, or the images have been moved to the border. If the next action exceeds the scale or movement range, the SDK will execute it within the limits. - Call stopVideoDigitalZoomControl{@link #RTCEngine#stopVideoDigitalZoomControl} to stop the ongoing zoom control. - Call setVideoDigitalZoomControl{@link #RTCEngine#setVideoDigitalZoomControl} to have a one-time digital zoom control. - Refer to setCameraZoomRatio{@link #RTCEngine#setCameraZoomRatio} if you intend to have an optical zoom control to the camera.
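A hedged sketch of continuous digital zoom. The `ZoomDirectionType` member name is an assumption for illustration:

```dart
// Hypothetical sketch: zoom in continuously for two seconds, then stop.
// setVideoDigitalZoomConfig must be called first, as the default offset is 0.
await engine.startVideoDigitalZoomControl(ZoomDirectionType.zoomIn); // name assumed
await Future<void>.delayed(const Duration(seconds: 2));
await engine.stopVideoDigitalZoomControl(); // halt the ongoing zoom
```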
stopAudioCapture() FutureOr<int>
@detail api @author dixing @brief Stop internal audio capture. The default is off.
Internal audio capture refers to: capturing audio using the built-in module.
The local client will be informed via onAudioDeviceStateChanged{@link #IRTCEngineEventHandler#onAudioDeviceStateChanged} after stopping audio capture by calling this API.
The remote clients in the room will be informed of the state change via onUserStopAudioCapture{@link #IRTCEngineEventHandler#onUserStopAudioCapture} after the visible client stops audio capture by calling this API. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - Call startAudioCapture{@link #RTCEngine#startAudioCapture} to enable the internal audio capture. - Without calling this API, the internal audio capture will continue until you destroy the engine instance.
stopAudioRecording() FutureOr<int>
@detail api @author huangshouqin @brief Stop audio recording. @return - 0: Success - <0: Failure @note Call startAudioRecording{@link #RTCEngine#startAudioRecording} to start the recording task.
stopChorusCacheSync() FutureOr<int>
@hidden internal use only @detail api @hiddensdk(audiosdk) @brief Stop aligning RTC data by cache. @return See ReturnStatus{@link #ReturnStatus}.
stopClientMixedStream(String taskId) FutureOr<int>
@hidden for internal use only @hiddensdk(audiosdk)
stopCloudProxy() FutureOr<int>
@detail api @author daining.nemo @brief Stop cloud proxy @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note To start cloud proxy, call startCloudProxy{@link #RTCEngine#startCloudProxy}.
stopEchoTest() FutureOr<int>
@detail api @author qipengxiang @brief Stop the current call test.
After calling startEchoTest{@link #RTCEngine#startEchoTest}, you must call this API to stop the test. @return API call result:
- 0: Success. - -3: Failure, no test is in progress. @note After stopping the test with this API, all the system devices and streams are restored to the state they were in before the test.
stopFileRecording() FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief Stop local recording @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - After starting local recording with startFileRecording{@link #RTCEngine#startFileRecording}, you must call this method to stop recording. - After calling this method, you will receive an onRecordingStateUpdate{@link #IRTCEngineEventHandler#onRecordingStateUpdate} callback reporting the recording result.
stopHardwareEchoDetection() FutureOr<int>
@detail api @author zhangcaining @brief Stop the echo detection before joining a room. @return Method call result:
- 0: Success. - -1: Failure. @note - Refer to startHardwareEchoDetection{@link #RTCEngine#startHardwareEchoDetection} for information on how to start an echo detection. - We recommend calling this API to stop the detection once you get the detection result from onHardwareEchoDetectionResult{@link #IRTCEngineEventHandler#onHardwareEchoDetectionResult}. - You must stop the echo detection to release the audio devices before the user joins a room. Otherwise, the detection may interfere with the call.
stopNetworkDetection() FutureOr<int>
@detail api @author hanchenchen.c @brief Stop pre-call network detection @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - After calling this interface, you will receive an onNetworkDetectionStopped{@link #IRTCEngineEventHandler#onNetworkDetectionStopped} callback notifying you that the detection has stopped.
stopPushMixedStream(String taskId, MixedStreamPushTargetType targetType) FutureOr<int>
@valid since 3.60. Since version 3.60, this interface replaces the stopPushStreamToCDN method for stopping the push of mixed streams to CDN. If you have upgraded to version 3.60 or later and are still using this method, please migrate to this interface. @detail api @hiddensdk(audiosdk) @brief Stops the task started via startPushMixedStream{@link #RTCEngine#startPushMixedStream}. @param taskId Task ID. Specifies the task you want to stop. @param targetType See MixedStreamPushTargetType{@link #MixedStreamPushTargetType}. @return - 0: Success - !0: Fail. See ReturnStatus{@link #ReturnStatus} for more details.
stopPushSingleStream(String taskId) FutureOr<int>
@valid since 3.60. Since version 3.60, this interface replaces the stopPushStreamToCDN method for stopping the push of single media streams to CDN. If you have upgraded to version 3.60 or later and are still using this method, please migrate to this interface. @detail api @hiddensdk(audiosdk) @author liujingchao @brief Stops the task of pushing a single media stream to CDN started via startPushSingleStream{@link #RTCEngine#startPushSingleStream}. @param taskId Task ID. Specifies the task you want to stop. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details
stopScreenCapture() FutureOr<int>
stopVideoCapture() FutureOr<int>
@detail api @hiddensdk(audiosdk) @author zhangzhenyu.samuel @brief Disable internal video capture immediately. The default is off.
Internal video capture refers to: capturing video using the built-in module.
The local client will be informed via onVideoDeviceStateChanged{@link #IRTCEngineEventHandler#onVideoDeviceStateChanged} after stopping video capture by calling this API.
The remote clients in the room will be informed of the state change via onUserStopVideoCapture{@link #IRTCEngineEventHandler#onUserStopVideoCapture} after the visible client stops video capture by calling this API. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - Call startVideoCapture{@link #RTCEngine#startVideoCapture} to enable the internal video capture. - Without calling this API, the internal video capture will continue until you destroy the engine instance.
stopVideoDigitalZoomControl() FutureOr<int>
@valid since 3.51 @detail api @hiddensdk(audiosdk) @author likai.666 @brief Stop the ongoing digital zoom control instantly. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note Refer to startVideoDigitalZoomControl{@link #RTCEngine#startVideoDigitalZoomControl} for starting digital zooming.
switchCamera(CameraId cameraId) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author zhangzhenyu.samuel @brief Switch to the front-facing/back-facing camera used in the internal video capture.
The local client will be informed via onVideoDeviceStateChanged{@link #IRTCEngineEventHandler#onVideoDeviceStateChanged} after calling this API. @param cameraId Camera ID. Refer to CameraId{@link #CameraId} for more details. @return - 0: Success - < 0: Failure @note - Front-facing camera is the default camera. - If the internal video capturing is on, the switch is effective once you call this API. If the internal video capturing is off, the setting will be effective when capture starts.
takeLocalSnapshot(ISnapshotResultCallback callback) FutureOr<long>
@detail api @hiddensdk(audiosdk) @author wangfujun.911 @brief Take a snapshot of the local video. @param callback See ISnapshotResultCallback{@link #ISnapshotResultCallback}. @return The index of the local snapshot task, starting from 1. @note - The snapshot is taken with all video effects on, like rotation, and mirroring. - You can take the snapshot either using SDK internal video capture or customized capture.
takeLocalSnapshotToFile(String filePath) FutureOr<long>
@detail api @author wangfujun.911 @brief Takes a snapshot of the local video stream and saves it as a JPG file at the specified local path.
After calling this method, the SDK triggers onLocalSnapshotTakenToFile{@link #IRTCEngineEventHandler#onLocalSnapshotTakenToFile} to report whether the snapshot is taken successfully and provide details of the snapshot image. @param filePath The absolute file path where the snapshot JPG file will be saved. The file extension must be .jpg. Ensure that the directory exists and is writable. Example: /sdcard/Pictures/snapshot.jpg. @return The index of the local snapshot task, starting from 1. The index can be used to track the task status or perform other management operations.
takeRemoteSnapshot(String streamId, ISnapshotResultCallback callback) FutureOr<long>
@detail api @hiddensdk(audiosdk) @author wangfujun.911 @brief Takes a snapshot of the remote video. @param streamId The streamId of the remote user. @param callback See ISnapshotResultCallback{@link #ISnapshotResultCallback}. @return The index of the remote snapshot task, starting from 1. @note - The snapshot is taken with all video effects on, like rotation and mirroring. - You can take the snapshot either using SDK internal video capture or customized capture.
takeRemoteSnapshotToFile(String streamId, String filePath) FutureOr<long>
@detail api @author wangfujun.911 @brief Takes snapshot of the remote video stream and save it as a JPG file at the specified local path.
After calling this method, the SDK triggers onRemoteSnapshotTakenToFile{@link #IRTCEngineEventHandler#onRemoteSnapshotTakenToFile} to report whether the snapshot is taken successfully and provide details of the snapshot image. @param streamId ID of the remote video stream. @param filePath The absolute file path where the snapshot JPG file will be saved. The file extension must be .jpg. Ensure that the directory exists and is writable. Example: /sdcard/Pictures/snapshot.jpg. @return The index of the remote snapshot task, starting from 1. The index can be used to track the task status or perform other management operations.
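A hedged sketch covering both file-based snapshot methods. The stream ID and file paths below are placeholders:

```dart
// Hypothetical sketch: save local and remote snapshots as JPG files.
// 'remote_user_stream' is a placeholder stream ID.
final localTask = await engine.takeLocalSnapshotToFile(
  '/sdcard/Pictures/local_snapshot.jpg',
); // result reported via onLocalSnapshotTakenToFile

final remoteTask = await engine.takeRemoteSnapshotToFile(
  'remote_user_stream',
  '/sdcard/Pictures/remote_snapshot.jpg',
); // result reported via onRemoteSnapshotTakenToFile
```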
toString() String
A string representation of this object.
inherited
updateClientMixedStream(String taskId, MixedStreamConfig mixedConfig, ClientMixedStreamConfig extraConfig) FutureOr<int>
@hidden for internal use only @hiddensdk(audiosdk)
updateLocalVideoCanvas(int renderMode, int backgroundColor) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangfujun.911 @brief Update the render mode and background color of local video rendering @param renderMode See VideoCanvas{@link #VideoCanvas}.renderMode @param backgroundColor See VideoCanvas{@link #VideoCanvas}.backgroundColor @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details. @note Calling this API during local video rendering will be effective immediately.
updateLoginToken(String token) FutureOr<int>
@detail api @author hanchenchen.c @brief Update the Token
The Token used for login has a limited validity period. When the Token expires, you need to call this method to update it.
When calling the login{@link #RTCEngine#login} method with an expired token, the login will fail and you will receive an onLoginResult{@link #IRTCEngineEventHandler#onLoginResult} callback with the error code 'LOGIN_ERROR_CODE_INVALID_TOKEN'. You need to reacquire a token and call this method to update it. @param token
The updated dynamic key @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - If the login failed because of an invalid token, the SDK will automatically log in again after you update the token with this method; you do not need to call login{@link #RTCEngine#login} yourself. - If you have already logged in successfully, Token expiration does not affect the current session. The expired-Token error is reported the next time you log in with the expired Token, or when you log in again after a disconnection caused by poor local network conditions.
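A hedged sketch of refreshing the login Token when it expires. `fetchTokenFromAppServer` is a placeholder for your own token service, not part of the SDK:

```dart
// Hypothetical sketch: react to an invalid-token login failure.
// fetchTokenFromAppServer() is your own backend call (placeholder).
Future<void> onLoginFailedWithInvalidToken(RTCEngine engine) async {
  final newToken = await fetchTokenFromAppServer();
  // After updating, the SDK retries the login automatically;
  // no extra login() call is needed.
  final ret = await engine.updateLoginToken(newToken);
  if (ret != 0) {
    print('updateLoginToken failed: $ret'); // see ReturnStatus
  }
}
```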
updatePushMixedStream(String taskId, MixedStreamPushTargetConfig pushTargetConfig, MixedStreamConfig mixedConfig) FutureOr<int>
@valid since 3.60. Since version 3.60, this interface replaces the updatePushMixedStreamToCDN and updatePublicStreamParam methods for the functions described below. If you have upgraded to version 3.60 or later and are still using those two methods, please migrate to this interface. @detail api @hiddensdk(audiosdk) @author lizheng @brief Updates the configuration of a task started via startPushMixedStream{@link #RTCEngine#startPushMixedStream}. You are informed of the change via the onMixedStreamEvent{@link #IRTCEngineEventHandler#onMixedStreamEvent} callback. @param taskId Task ID. Specifies the task to be updated. When PushTargetType in MixedStreamConfig{@link #MixedStreamConfig} is set to 0, this ID identifies the task. @param pushTargetConfig Push target configuration, such as the push URL and WTN stream ID. See MixedStreamPushTargetConfig{@link #MixedStreamPushTargetConfig}. @param mixedConfig Configurations that you want to update. See MixedStreamConfig{@link #MixedStreamConfig} for specific indications. You can update any property of the task unless it is specified as unavailable for updates.
Properties left blank are reset to their default values. @return - 0: Success. - !0: Fail. See ReturnStatus{@link #ReturnStatus} for more details.
updateRemoteStreamVideoCanvas(String streamId, int renderMode, int backgroundColor) FutureOr<int>
@valid since 3.56 @detail api @hiddensdk(audiosdk) @author zhongshenyou @brief Modifies remote video frame rendering settings, including the render mode and background color, while using the internal rendering of the SDK. @param streamId Stream ID, used to specify the video stream whose rendering settings need to be modified. @param renderMode See RemoteVideoRenderConfig{@link #RemoteVideoRenderConfig}.renderMode. @param backgroundColor See RemoteVideoRenderConfig{@link #RemoteVideoRenderConfig}.backgroundColor. @return - 0: Success. - < 0 : Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - After setting the rendering configuration for the remote video frame with setRemoteVideoCanvas{@link #RTCEngine#setRemoteVideoCanvas}, you can call this API to update the render mode and background color. - Calls made while the remote video is rendering take effect immediately.
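Following the Dart signature above, a hedged usage sketch; the concrete render-mode value and color encoding are assumptions:

```dart
// Sketch only: see RemoteVideoRenderConfig for the real renderMode values.
Future<void> restyleRemoteVideo(RTCEngine engine, String remoteStreamId) async {
  final ret = await engine.updateRemoteStreamVideoCanvas(
    remoteStreamId, // stream whose rendering settings you want to change
    1,              // assumed render-mode value
    0xFF000000,     // assumed ARGB opaque-black background
  );
  if (ret < 0) {
    // Consult ReturnStatus for the failure reason.
  }
}
```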
updateResource(NativeResource resource) → void
inherited
updateScreenCapture(ScreenMediaType type) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangqianqian.1104 @brief Updates the media type of the internal screen capture. @param type Media type. See ScreenMediaType{@link #ScreenMediaType}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note Call this API after calling startScreenCapture{@link #RTCEngine#startScreenCapture}.
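For example, switching an ongoing internal screen capture to a different media type might look like the sketch below; the ScreenMediaType member name is an assumption:

```dart
// Sketch only: the ScreenMediaType member name is assumed, not confirmed here.
Future<void> addAudioToScreenShare(RTCEngine engine) async {
  // startScreenCapture must have been called before this point.
  final ret = await engine.updateScreenCapture(ScreenMediaType.videoAndAudio);
  if (ret < 0) {
    // Consult ReturnStatus for the failure reason.
  }
}
```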

Operators

operator ==(Object other) bool
The equality operator.
inherited

Static Properties

codegen_$namespace → dynamic
no setter

Static Methods

createRTCEngine(EngineConfig config, IRTCEngineEventHandler handler) FutureOr<RTCEngine>
@detail api @author wangzhanqiang @brief Creates an engine instance.
This is the first API you must call before using any RTC capabilities.
If no engine instance exists in the current process, calling this API creates one. If an engine instance has already been created, calling this API again returns the existing instance. @param config SDK engine configuration. See EngineConfig{@link #EngineConfig}. @param handler Handler for events sent from the SDK to the app. See IRTCEngineEventHandler{@link #IRTCEngineEventHandler}. @return - RTCEngine: A successfully created engine instance. - Null: Creation failed because EngineConfig is invalid (see EngineConfig{@link #EngineConfig}) or the native library (.so file) failed to load. @note The lifecycle of the handler must be longer than that of the RTCEngine: create the handler before calling createRTCEngine{@link #RTCEngine#createRTCEngine} and destroy it after calling destroyRTCEngine{@link #RTCEngine#destroyRTCEngine}.
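A minimal creation sketch, assuming EngineConfig takes an app ID (the field name is an assumption) and that MyEngineEventHandler is your own IRTCEngineEventHandler implementation:

```dart
// Sketch only: EngineConfig field names are assumptions; check EngineConfig.
final handler = MyEngineEventHandler(); // must outlive the engine instance
final engine = await RTCEngine.createRTCEngine(
  EngineConfig(appId: 'your_app_id'),
  handler,
);
if (engine == null) {
  // Creation failed: invalid EngineConfig, or the native (.so) library
  // could not be loaded.
}
```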
destroyRTCEngine() FutureOr<void>
@detail api @author wangzhanqiang @brief Destroys the engine instance created by createRTCEngine{@link #RTCEngine#createRTCEngine} and releases all related resources. @note - Call this API after all business scenarios related to the engine instance are destroyed. - When this API is called, the RTC SDK destroys all memory associated with the engine instance and stops any interaction with the media server. - Calling this API starts the SDK exit logic. The engine thread is held until the exit logic completes, so do not call this API from a callback thread, or a deadlock will occur. This function takes a long time to execute, so calling it on the main thread is not recommended, as it may block the main thread.
getSDKVersion() FutureOr<String>
@detail api @author wangzhanqiang @brief Get the current version number of the SDK. @return The current SDK version number.
setLogConfig(RTCLogConfig logConfig) FutureOr<int>
@detail api @author caofanglu @brief Configures the local log parameters of the RTC SDK, including the logging level, directory, the limit on total log file size, and the log file name prefix. @param logConfig Local log parameters. See RTCLogConfig{@link #RTCLogConfig}. @return - 0: Success. - -1: Failure. This API must be called before creating the engine. - -2: Failure. Invalid parameters. @note This API must be called before createRTCEngine{@link #RTCEngine#createRTCEngine}.
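Since setLogConfig must precede engine creation, a typical startup order looks like this sketch; the RTCLogConfig field names are assumptions:

```dart
// Sketch only: RTCLogConfig field names are assumed; see RTCLogConfig.
final logRet = await RTCEngine.setLogConfig(
  RTCLogConfig(logPath: '/sdcard/rtc_logs', logLevel: 1),
);
// -1 indicates this was called too late, i.e. after engine creation.
final engine = await RTCEngine.createRTCEngine(config, handler);
```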