RTCEngine class

Engine class

Inheritance

Constructors

RTCEngine()
constructor

Properties

$instance → dynamic
no setter inherited
audioEffectPlayer → AudioEffectPlayer
Get audio effect player interface
no setter
delegate → FutureOr<id<ByteRTCEngineDelegate>?>
@platform ios @detail callback
getter/setter pair inherited
hashCode → int
The hash code for this object.
no setter inherited
monitorDelegate → FutureOr<id<ByteRTCMonitorDelegate>?>
@platform ios @hidden @deprecated
getter/setter pair inherited
runtimeType → Type
A representation of the runtime type of the object.
no setter inherited
videoEffectInterface → VideoEffect
Get video effect interface
no setter
wtnStream → WTNStream
Get public stream interface
no setter

Methods

$createAudioEffectPlayer() → Future
ignored, inner method.
$createInstance(List args) → dynamic
Factory method for creating instances
override
$createRTCVideoEffect() → dynamic
ignored, inner method.
$createWTNStream() → Future
ignored, inner method.
$destroy() → void
inherited
$init(List args) → void
inherited
android_setRtcVideoEventHandler(IRTCEngineEventHandler engineEventHandler) → Future<int?>
@platform android @detail api @hidden for internal use only @author wangzhanqiang @brief Sets the engine event callbacks. The receiving class must inherit from IRTCEngineEventHandler{@link #IRTCEngineEventHandler}. @param engineEventHandler
Event handler interface class. See IRTCEngineEventHandler{@link #IRTCEngineEventHandler}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - The caller needs to implement a class that inherits from IRTCEngineEventHandler{@link #IRTCEngineEventHandler} and override the events that need attention. - The callbacks are asynchronous. - All event callbacks are triggered in a separate callback thread. When receiving callback events, pay attention to operations that depend on the thread running environment, such as operations that must be performed on the UI thread; do not perform them directly inside the callback implementation.
inherited
clearVideoWatermark() → Future<int?>
@detail api @hiddensdk(audiosdk) @author zhushufan.ref @brief Removes video watermark from designated video stream. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details
inherited
createGameRoom(string roomId, GameRoomConfig config) → Future<IGameRoom?>
@detail api @author luomingkang @brief Create a game room instance.
This API only returns a room instance. You still need to call joinRoom{@link #RTCRoom#joinRoom} to actually create/join the room.
Each call of this API creates one RTCRoom{@link #RTCRoom} instance. Call this API as many times as the number of rooms you need, and then call joinRoom{@link #RTCRoom#joinRoom} of each RTCRoom instance to join multiple rooms at the same time.
In multi-room mode, a user can subscribe to media streams in the joined rooms at the same time. @param roomId The string matches the regular expression: [a-zA-Z0-9_\@\\-\\.]{1,128}. @param config The game room configuration. See GameRoomConfig{@link #GameRoomConfig}. @return RTCRoom{@link #RTCRoom} instance. If you get NULL instead of an RTCRoom instance, make sure the roomId is valid and that the specified room has not already been created. @note - If the room that you wish to join already exists, you still need to call this API first to create the RTCRoom instance, and then call joinRoom{@link #RTCRoom#joinRoom}. - Do not create multiple rooms with the same roomId, otherwise the newly created room instance will replace the old one. - To forward streams to the other rooms, call startForwardStreamToRooms{@link #RTCRoom#startForwardStreamToRooms} instead of enabling Multi-room mode.
inherited
createRTCRoom(String roomId, {bool autoInitRangeAudio = false, bool autoInitSpatialAudio = false}) → Future<RTCRoom?>
@brief Create a room instance @param roomId Room ID @param autoInitRangeAudio Whether to automatically create a range audio object; not created by default @param autoInitSpatialAudio Whether to automatically create a spatial audio object; not created by default @return Room instance
override
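
A minimal sketch of the multi-room flow described above, assuming an existing `engine` instance. The room IDs are placeholders, and the `joinRoom` step is only noted in a comment because it is documented on the RTCRoom page rather than here.

```dart
// Sketch only: `engine` is an already created RTCEngine instance; the import
// path of this binding package is not shown on this page and is omitted here.
Future<void> createTwoRooms(RTCEngine engine) async {
  // Each call returns an independent RTCRoom instance; the room is not
  // actually created/joined until joinRoom is called on that instance.
  final roomA = await engine.createRTCRoom('room_A');
  final roomB = await engine.createRTCRoom(
    'room_B',
    autoInitSpatialAudio: true, // also create a spatial audio object
  );
  if (roomA == null || roomB == null) {
    // A null instance usually means the roomId is invalid.
    return;
  }
  // Next, call joinRoom on each instance (its parameters are documented on
  // the RTCRoom page, not here).
}
```
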
destroy() → void
Destroy engine instance
disableAlphaChannelVideoEncode() → Future<int?>
@valid since 3.58 @detail api @hiddensdk(audiosdk) @region Video Management @brief Disables the Alpha channel encoding feature for externally captured video frames. @return Method call result:
- 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note This API must be called after stopping the publish of the video stream.
inherited
disableAudioFrameCallback(AudioFrameCallbackMethod method) → Future<int?>
@detail api @author gongzhengduo @brief Disables audio data callback. @param method Audio data callback method. See AudioFrameCallbackMethod{@link #AudioFrameCallbackMethod}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note Call this API after calling enableAudioFrameCallback{@link #RTCEngine#enableAudioFrameCallback}.
inherited
enableAlphaChannelVideoEncode({required AlphaLayout alphaLayout}) → Future<int?>
@valid since 3.58 @detail api @hiddensdk(audiosdk) @region Video Management @brief Enables the Alpha channel encoding feature for custom captured video frames.
Suitable for scenarios where the video subject and background need to be separated at the push stream end, and the background can be custom rendered at the pull stream end. @param alphaLayout The relative position of the separated Alpha channel to the RGB channel information. Currently, only AlphaLayout.TOP is supported, which means it is placed above the RGB channel information. @return Method call result:
- 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - This API only applies to custom captured video frames that use the RGBA color model, including VideoPixelFormat.TEXTURE_2D, VideoPixelFormat.TEXTURE_OES, VideoPixelFormat.RGBA. - This API must be called before publishing the video stream. - After calling this API to enable Alpha channel encoding, you must call pushExternalVideoFrame{@link #RTCEngine#pushExternalVideoFrame} to push the custom captured video frames to the RTC SDK. If a video frame format that is not supported is pushed, calling pushExternalVideoFrame{@link #RTCEngine#pushExternalVideoFrame} will return the error code ReturnStatus.RETURN_STATUS_PARAMETER_ERR.
inherited
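
A hedged sketch of the call order for Alpha channel encoding described above: enable before publishing, then push RGBA-model frames. The `pushExternalVideoFrame` parameters are not listed on this page, so that call is left as a commented placeholder.

```dart
// Sketch: `engine` is an existing RTCEngine instance.
Future<void> enableAlphaPublishing(RTCEngine engine) async {
  // Must be called before the video stream is published.
  final ret = await engine.enableAlphaChannelVideoEncode(
    alphaLayout: AlphaLayout.TOP, // only TOP is currently supported
  );
  if (ret != 0) return;
  // Afterwards, push custom captured RGBA-model frames
  // (TEXTURE_2D / TEXTURE_OES / RGBA) with pushExternalVideoFrame; its exact
  // signature is documented elsewhere, so the call is illustrative only:
  // await engine.pushExternalVideoFrame(rgbaFrame);

  // Call disableAlphaChannelVideoEncode() after the stream stops publishing.
}
```
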
enableAudioAEDReport(int interval) → Future<int?>
@hidden 3.60 for internal use only @detail api @author gengjunjie @brief Enables AED detection. After that, you will receive onAudioAEDStateUpdate{@link #IRTCEngineEventHandler#onAudioAEDStateUpdate}. @param interval Callback interval, in milliseconds.
+ <= 0: Disable AED detection. + [100, 3000]: Enable AED detection and set the callback interval to this value. It is recommended to set it to 2000. + Invalid interval value: If the value is less than 100, it is set to 100. If the value is greater than 3000, it is set to 3000. @return + 0: Success. + <0: Fail. See ReturnStatus{@link #ReturnStatus} for more details.
inherited
enableAudioDecoding(bool enable) → Future
@hidden for internal use only @region Custom audio capture and rendering @brief Whether to use SDK audio decoding. @param enable Whether to use audio decoding.
- true: Audio decoding is turned on. (Default) - false: Audio decoding is turned off. @note - Use before registerRemoteEncodedAudioFrameObserver.
inherited
enableAudioEncoding(bool enable) → Future
@hidden for internal use only @region Custom audio capture and rendering @brief Whether to use SDK audio encoding. @param enable Whether to use audio encoding.
- true: Audio encoding is turned on. (Default) - false: Audio encoding is turned off. @note - Use before pushExternalEncodedAudioFrame{@link #RTCEngine#pushExternalEncodedAudioFrame}.
inherited
enableAudioFrameCallback({required AudioFrameCallbackMethod method, required AudioFormat format}) → Future<int?>
@detail api @author gongzhengduo @brief Enable audio frames callback and set the format for the specified type of audio frames. @param method Audio data callback method. See AudioFrameCallbackMethod{@link #AudioFrameCallbackMethod}.
If method is set as AUDIO_FRAME_CALLBACK_RECORD(0), AUDIO_FRAME_CALLBACK_PLAYBACK(1), AUDIO_FRAME_CALLBACK_MIXED(2), or AUDIO_FRAME_CALLBACK_CAPTURE_MIXED(5), set format to the accurate value listed in the audio parameters format.
If method is set as AUDIO_FRAME_CALLBACK_REMOTE_USER(3), set format to auto. @param format Audio parameters format. See AudioFormat{@link #AudioFormat}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note After calling this API and registerAudioFrameObserver{@link #RTCEngine#registerAudioFrameObserver}, IAudioFrameObserver{@link #IAudioFrameObserver} will receive the corresponding audio data callback. However, these two APIs are independent of each other and the calling order is not restricted.
inherited
enableAudioPropertiesReport(AudioPropertiesConfig config) → Future<int?>
@detail api @author wangjunzheng @brief Enable audio information prompts. After that, you will receive onLocalAudioPropertiesReport{@link #IRTCEngineEventHandler#onLocalAudioPropertiesReport}, onRemoteAudioPropertiesReport{@link #IRTCEngineEventHandler#onRemoteAudioPropertiesReport}, and onActiveSpeaker{@link #IRTCEngineEventHandler#onActiveSpeaker}. @param config See AudioPropertiesConfig{@link #AudioPropertiesConfig} @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details
inherited
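
A small sketch of enabling the volume reports described above. The fields of `AudioPropertiesConfig` are not listed on this page, so the config object is passed in rather than constructed here.

```dart
// Sketch: enables periodic audio information callbacks.
Future<void> enableVolumeReports(
    RTCEngine engine, AudioPropertiesConfig config) async {
  final ret = await engine.enableAudioPropertiesReport(config);
  if (ret != 0) {
    // < 0 means failure; see ReturnStatus for details.
    return;
  }
  // From now on the registered event handler receives
  // onLocalAudioPropertiesReport / onRemoteAudioPropertiesReport /
  // onActiveSpeaker callbacks.
}
```
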
enableAudioVADReport(int interval) → Future<int?>
@hidden 3.60 for internal use only @detail api @author gengjunjie @brief Enables VAD detection. After that, you will receive onAudioVADStateUpdate{@link #IRTCEngineEventHandler#onAudioVADStateUpdate}. @param interval Callback interval, in milliseconds.
+ <= 0: Disable VAD detection. + [100, 3000]: Enable VAD detection and set the callback interval to this value. + Invalid interval value: If the value is less than 100, it is set to 100. If the value is greater than 3000, it is set to 3000. @return + 0: Success. + <0: Fail. See ReturnStatus{@link #ReturnStatus} for more details.
inherited
enableCameraAutoExposureFaceMode(bool enable) → Future<int?>
@valid since 3.53 @detail api @author yinkaisheng @brief Enable or disable face auto exposure mode during internal video capture. This mode fixes the problem that the face is too dark under strong backlight, but it may also make the area outside the ROI region too bright or too dark. @param enable Whether to enable the mode. True by default for iOS, False by default for Android. @return - 0: Success. - < 0: Failure. @note You must call this API before calling startVideoCapture{@link #RTCEngine#startVideoCapture} to enable internal capture to make the setting valid.
inherited
enableEffectBeauty(bool enable) → Future<int?>
@detail api @hiddensdk(audiosdk) @author wangjunlin.3182 @brief Enables/Disables basic beauty effects. @param enable Whether to enable basic beauty effects.
- true: Enables basic beauty effects. - false: (Default) Disables basic beauty effects. @return - 0: Success. - -1001: This method is not available for your current RTC SDK. - -12: This method is not available in the Audio SDK. - <0: Failure. Effect SDK internal error. For specific error code, see Error Code Table. @note - You cannot use the basic beauty effects and the advanced effect features at the same time. See how to use advanced effect features for more information. - You need to integrate Effect SDK before calling this API. Effect SDK v4.4.2+ is recommended. - Call setBeautyIntensity{@link #RTCEngine#setBeautyIntensity} to set the beauty effect intensity. If you do not set the intensity before calling this API, the default intensity will be enabled. The default values for the intensity of each beauty mode are as follows: 0.7 for brightening, 0.8 for smoothing, 0.5 for sharpening, and 0.7 for clarity. - This API is not applicable to screen capturing.
inherited
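
A sketch of enabling basic beauty as described above; it assumes the Effect SDK is integrated. The `setBeautyIntensity` parameters are not documented on this page, so that call is shown only as a commented assumption.

```dart
Future<void> turnOnBasicBeauty(RTCEngine engine) async {
  final ret = await engine.enableEffectBeauty(true);
  if (ret == -12) {
    // Not available in the Audio SDK.
    return;
  }
  if (ret != 0) return; // -1001 or an Effect SDK internal error
  // Optionally adjust the per-mode intensity afterwards; the parameters shown
  // here are hypothetical:
  // await engine.setBeautyIntensity(beautyMode, 0.8);
}
```
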
enableExternalSoundCard(bool enable) → Future<int?>
@detail api @author zhangyuanyuan.0101 @brief Enable the audio process mode for external sound card. @param enable
- true: enable - false: disable (by default) @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - When you use external sound card for audio capture, enable this mode for better audio quality. - When using the mode, you can only use earphones. If you need to use internal or external speaker, disable this mode.
inherited
enableLocalVoiceReverb(bool enable) → Future<int?>
@detail api @author wangjunzheng @brief Enable the reverb effect for the local captured voice. @param enable Whether to enable. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note Call setLocalVoiceReverbParam{@link #RTCEngine#setLocalVoiceReverbParam} to set the reverb effect.
inherited
enablePlaybackDucking(bool enable) → Future<int?>
@detail api @author majun.lvhiei @brief Enables/disables the playback ducking function. This function is usually used in scenarios where short videos or music will be played simultaneously during RTC calls.
With the function on, if remote voice is detected, the local media volume of RTC will be lowered to ensure the clarity of the remote voice. If remote voice disappears, the local media volume of RTC restores. @param enable Whether to enable playback ducking:
- true: Yes - false: No @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details
inherited
enableVocalInstrumentBalance(bool enable) → Future<int?>
@detail api @author majun.lvhiei @brief Enables/disables the loudness equalization function.
If you call this API with the parameter set to True, the loudness of the user's voice will be adjusted to -16 LUFS. If then you also call setAudioMixingLoudness and import the original loudness of the audio data used in audio mixing, the loudness will be adjusted to -20 LUFS when the audio data starts to play. @param enable Whether to enable loudness equalization function:
- true: Yes - false: No @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note You must call this API before starting to play the audio file with start{@link #IAudioEffectPlayer#start}.
inherited
feedback({required List<ProblemFeedbackOption> types, ProblemFeedbackInfo? info}) → Future<int>
Feedback, used for problem reporting.
findOverrideIndices(List args, List<List<int>> indicesList) → List<int>
Finds the index list of the overload parameters to use. @desc Android constructors can be overloaded; this method compares the actual arguments passed in against the set of constructor parameter lists to determine which parameter list should actually be used.
inherited
fn2AndroidClass(Function callback, dynamic nativeClass(), String methodName) → dynamic
Same functionality as fn2AndroidClass in the TS runtime: converts a Dart function into an Android callback class instance for use on the Android side.
inherited
getAudioDeviceManager() → Future<AudioDeviceManager?>
Get audio device manager
override
getAudioEffectPlayer() → Future<AudioEffectPlayer?>
Get audio effect manager; it can also be set to be created automatically after the engine is created.
getAudioRoute() → Future<AudioRoute?>
Get audio route
override
getCameraZoomMaxRatio() → Future<float?>
@detail api @author zhangzhenyu.samuel @brief Get the maximum zoom factor of the currently used camera (front/rear) @return Maximum zoom factor @note You must have called startVideoCapture{@link #RTCEngine#startVideoCapture} to start video capture using the SDK internal capture module before the maximum zoom factor of the camera can be detected.
inherited
getMediaPlayer(int playerId) → Future<MediaPlayer?>
Get media player
getNetworkTimeInfo() → Future<NetworkTimeInfo?>
@detail api @author songxiaomeng.19 @brief Obtain the synchronization network time information. @return See NetworkTimeInfo{@link #NetworkTimeInfo}. @note - When you call this API for the first time, you start synchronizing the network time information and receive the return value 0. After the synchronization finishes, you will receive onNetworkTimeSynchronized{@link #IRTCEngineEventHandler#onNetworkTimeSynchronized}. After that, calling this API will get you the correct network time. - Under chorus scenario, participants shall start audio mixing at the same network time.
inherited
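
A sketch of the chorus use case mentioned in the note: all participants agree on a network-time start point and each one delays locally until then. The `NetworkTimeInfo` field name used below (`timestamp`, in milliseconds) is an assumption.

```dart
Future<void> startAtNetworkTime(RTCEngine engine, int agreedStartMs) async {
  final info = await engine.getNetworkTimeInfo();
  if (info == null) return; // synchronization may still be in progress
  final delayMs = agreedStartMs - info.timestamp; // `timestamp` is assumed
  if (delayMs <= 0) return; // the agreed moment has already passed
  await Future.delayed(Duration(milliseconds: delayMs));
  // Start audio mixing here so that every participant begins simultaneously.
}
```
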
getPeerOnlineStatus(string peerUserID) → Future<int?>
@detail api @author hanchenchen.c @brief Query the login status of a remote user or the local user @param peerUserID The user ID to be queried @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - You must call login{@link #RTCEngine#login} to log in before calling this interface. - After calling this interface, the SDK notifies the query result using the onGetPeerOnlineStatus{@link #IRTCEngineEventHandler#onGetPeerOnlineStatus} callback. - Before sending an out-of-room message, you can use this interface to know whether the peer user is logged in and decide whether to send the message. You can also check your own login status through this interface.
inherited
getVideoDeviceManager() → Future<IVideoDeviceManager?>
@valid since 3.56 @detail api @author likai.666 @brief Create a video device management instance @return Video device management instance. See IVideoDeviceManager{@link #IVideoDeviceManager}
inherited
getVideoEffectInterface() → Future<IVideoEffect?>
@detail api @author zhushufan.ref @brief Gets video effect interfaces. @return Video effect interfaces. See IVideoEffect{@link #IVideoEffect}.
inherited
getVideoEffectPlayer() → Future<VideoEffect?>
Get video effect manager; it can also be set to be created automatically after the engine is created.
getWTNStream() → Future<WTNStream?>
Get public stream interface; it can also be set to be created automatically after the engine is created.
override
ios_enableAGC(BOOL enable) → Future<int?>
@platform ios @hidden(iOS) @valid since 3.51 @detail api @author liuchuang @brief Turns on/off AGC (Analog Automatic Gain Control).
After AGC is enabled, SDK can automatically adjust the microphone pickup volume to keep the output volume at a steady level. @param enable Whether to turn on AGC.
- true: AGC is turned on. - false: AGC is turned off, with DAGC (Digital Automatic Gain Control) still on. @return - 0: Success. - -1: Failure. @note You can call this method before and after joining the room. To turn on AGC before joining the room, you need to contact the technical support to get a private parameter to set ByteRTCRoomProfile{@link #ByteRTCRoomProfile}.
To enable AGC after joining the room, you must set ByteRTCRoomProfile{@link #ByteRTCRoomProfile} to ByteRTCRoomProfileMeeting, ByteRTCRoomProfileMeetingRoom or ByteRTCRoomProfileClassroom.
It is not recommended to call setAudioCaptureDeviceVolume: to adjust the microphone pickup volume with AGC on.
inherited
ios_getScreenCaptureSourceList() → Future<ByteRTCScreenCaptureSourceInfo?>
@platform ios @hidden(iOS) @detail api @author liyi.000 @brief Get the list of shared objects (application windows and screens). @return The list of shared objects. See ByteRTCScreenCaptureSourceInfo{@link #ByteRTCScreenCaptureSourceInfo}.
The enumerated value can be used for startScreenVideoCapture:captureParameters:{@link #ByteRTCEngine#startScreenVideoCapture:captureParameters}. @note Only valid for PC and macOS.
inherited
ios_getThumbnail(ByteRTCScreenCaptureSourceType sourceType, intptr_t sourceId, int maxWidth, int maxHeight) → Future
@platform ios @hidden(iOS) @detail api @author liyi.000 @brief Get the thumbnail of the screen @param sourceType Type of the screen capture object. See ByteRTCScreenCaptureSourceType{@link #ByteRTCScreenCaptureSourceType}. @param sourceId ID of the screen-shared object. You can get the ID from ByteRTCScreenCaptureSourceInfo returned by calling getScreenCaptureSourceList{@link #ByteRTCEngine#getScreenCaptureSourceList}. @param maxWidth Maximum width. RTC will scale the thumbnail to fit the given size while maintaining the original aspect ratio. If the aspect ratio of the given size does not match the sharing object, the thumbnail will have blank borders. @param maxHeight Maximum height. Refer to the note for maxWidth. @return The thumbnail of the sharing object.
The thumbnail has the same width-height ratio as the shared object. The size of the thumbnail is no larger than the specified size.
inherited
ios_getWindowAppIcon(intptr_t sourceId, int width, int height) → Future
@platform ios @hidden(iOS) @brief Gets application window preview thumbnail for screen sharing. @region Screen Sharing @author liyi.000 @param sourceId ID of the screen-sharing object. You can get the ID from ByteRTCScreenCaptureSourceInfo returned by calling getScreenCaptureSourceList{@link #ByteRTCEngine#getScreenCaptureSourceList}. @param width Maximum width of the App icon. The width is always equal to the height. SDK will set the height and width to the smaller value if the given values are unequal. RTC will return nullptr if you set the value with a number out of the valid range of [32, 256]. The default size is 100 x 100. @param height Maximum height of the app icon. Refer to the note for width. @return Application icon thumbnail. You can call this API when the item to be shared is an application. If not, the return value will be nullptr.
inherited
ios_registerRemoteEncodedAudioFrameObserver(id<ByteRTCRemoteEncodedAudioFrameObserver> observer) → Future
@platform ios @detail api @hidden for internal use only @brief Register the remote audio frame monitor.
After calling this method, every time the SDK detects a remote audio frame, it will call back the audio frame information to the user through onRemoteEncodedAudioFrame. @param observer Remote audio frame observer. See IRemoteEncodedAudioFrameObserver. @note - This method is recommended to be called before entering the room. - Setting the parameter to nullptr cancels the registration. - Before calling this API, call enableAudioDecoding{@link #ByteRTCEngine#enableAudioDecoding} to turn off audio decoding.
inherited
ios_sendScreenCaptureExtensionMessage(NSData messsage) → Future<int?>
@platform ios @hidden(macOS) @detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief Sends message to screen capture Extension @param messsage Message sent to the Extension @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - Call this API after calling startScreenCapture:bundleId:{@link #ByteRTCEngine#startScreenCapture:bundleId}. - The extension will receive onReceiveMessageFromApp:{@link #ByteRtcScreenCapturerExtDelegate#onReceiveMessageFromApp} when the message is sent.
inherited
ios_setBluetoothMode(ByteRTCBluetoothMode mode) → Future<int?>
@platform ios @hidden(macOS) @detail api @author dixing @brief On iOS, you can change the Bluetooth profile when the media volume is set in all scenarios. @param mode The Bluetooth profiles. See ByteRTCBluetoothMode{@link #ByteRTCBluetoothMode}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note You will receive rtcEngine:onAudioDeviceWarning:deviceType:deviceWarning:{@link #ByteRTCEngineDelegate#rtcEngine:onAudioDeviceWarning:deviceType:deviceWarning} in the following scenarios: 1) You cannot change the Bluetooth profile to HFP. 2) The media volume is not set in all scenarios. We suggest that you call setAudioScenario:{@link #ByteRTCEngine#setAudioScenario} to set the media volume scenario before calling this API.
inherited
ios_setCustomizeEncryptHandler(id<ByteRTCEncryptHandler> handler) → Future<int?>
@platform ios @detail api @author wangjunlin.3182 @brief Sets custom encryption and decryption methods. @param handler Custom encryption handler, which needs to implement the encryption and decryption methods. See ByteRTCEncryptHandler{@link #ByteRTCEncryptHandler}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - This method and setEncryptInfo:key:{@link #ByteRTCEngine#setEncryptInfo:key} are mutually exclusive; whichever is called last takes effect. - This method must be called before calling joinRoom:userInfo:userVisibility:roomConfig:{@link #ByteRTCRoom#joinRoom:userInfo:userVisibility:roomConfig}, and it can be called repeatedly, with the last set parameter taking effect. - Whether encrypting or decrypting, the length of the modified data needs to be kept under 180% of the original. That is, if the input data is 100 bytes, the processed data must be less than 180 bytes. If the encryption or decryption result exceeds the limit, the audio & video frame may be discarded. - Data encryption/decryption is performed serially, so depending on the implementation, this method may affect the final rendering efficiency. Evaluate carefully whether you need to use this method.
inherited
ios_setLowLightAdjusted(ByteRTCVideoEnhancementMode mode) → Future<int?>
@platform ios @hidden(iOS) @valid since 3.57 @detail api @hiddensdk(audiosdk) @author zhoubohui @brief Sets the video lowlight enhancement mode.
It can significantly improve image quality in scenarios with insufficient light, strong light-dark contrast, or backlight. @param mode It defaults to Disable. Refer to ByteRTCVideoEnhancementMode{@link #ByteRTCVideoEnhancementMode} for more details. @return - 0: Success. After you call this method, it takes effect immediately, but it may require some time for downloads and detection processes before you can see the enhancement. - < 0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - Turning on this mode will impact device performance. This feature should be activated only when required and the device performance is adequate. - The functionality applies to videos captured by the internal module as well as custom captured videos.
inherited
ios_setScreenAudioChannel(ByteRTCAudioChannel channel) → Future<int?>
@platform ios @hidden(iOS) @detail api @author zhangcaining @brief Set the audio channel of the screen-sharing audio stream @param channel The number of Audio channels. See ByteRTCAudioChannel{@link #ByteRTCAudioChannel}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note When you call setScreenAudioStreamIndex: to mix the microphone audio stream and the screen-sharing audio stream, the audio channel is set by setAudioProfile:{@link #ByteRTCEngine#setAudioProfile} rather than this API.
inherited
ios_startChorusCacheSync(ByteRTCChorusCacheSyncConfig config, id<ByteRTCChorusCacheSyncObserver> observer) → Future<int?>
@platform ios @hidden internal use only @detail api @hiddensdk(audiosdk) @brief Start aligning RTC data by cache. Received RTC data from different sources will be cached, and aligned based on the included timestamps. This feature compromises the real-time nature of RTC data consumption. @param config See ByteRTCChorusCacheSyncConfig{@link #ByteRTCChorusCacheSyncConfig}. @param observer Event and data observer. See ByteRTCChorusCacheSyncObserver{@link #ByteRTCChorusCacheSyncObserver}. @return See ByteRTCReturnStatus{@link #ByteRTCReturnStatus}. @note To disable the feature, call stopChorusCacheSync{@link #ByteRTCEngine#stopChorusCacheSync}.
inherited
ios_startScreenAudioCapture(string deviceId) → Future<int?>
@platform ios @hidden(iOS) @detail api @author yezijian.me @brief Starts using RTC SDK internal capture to capture screen audio during screen sharing. @param deviceId ID of the virtual device @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - The call of this API takes effects only when you are using RTC SDK to record screen. You will get a warning by rtcEngine:onAudioDeviceWarning:deviceType:deviceWarning:{@link #ByteRTCEngineDelegate#rtcEngine:onAudioDeviceWarning:deviceType:deviceWarning} after calling this API when the video source is set to an external recorder. - You also need to call publishScreenAudio: to publish the captured screen audio. - To disable screen audio internal capture, call stopScreenAudioCapture{@link #ByteRTCEngine#stopScreenAudioCapture}.
inherited
ios_startScreenVideoCapture(ByteRTCScreenCaptureSourceInfo sourceInfo, ByteRTCScreenCaptureParam captureParameters) → Future<int?>
@platform ios @hidden(iOS) @detail api @hiddensdk(audiosdk) @author liyi.000 @brief Capture screen video stream for sharing. Screen video stream includes: content displayed on the screen, or content in the application window. @param sourceInfo Screen capture source information. See ByteRTCScreenCaptureSourceInfo{@link #ByteRTCScreenCaptureSourceInfo}.
Call getScreenCaptureSourceList{@link #ByteRTCEngine#getScreenCaptureSourceList} to get all the screen sources that can be shared. @param captureParameters Screen capture parameters. See ByteRTCScreenCaptureParam{@link #ByteRTCScreenCaptureParam}. @return - 0: Success; - -1: Failure; @note - The call of this API takes effect only when you are using RTC SDK to record screen. You will get a warning by rtcEngine:onVideoDeviceWarning:deviceType:deviceWarning:{@link #ByteRTCEngineDelegate#rtcEngine:onVideoDeviceWarning:deviceType:deviceWarning} after calling this API when the video source is set to an external recorder. - This API only starts screen capturing but does not publish the captured video. Call publishScreenVideo:{@link #ByteRTCRoom#publishScreenVideo} to publish the captured video. - To turn off screen video capture, call stopScreenVideoCapture{@link #ByteRTCEngine#stopScreenVideoCapture}. - Local users will receive rtcEngine:onVideoDeviceStateChanged:device_type:device_state:device_error:{@link #ByteRTCEngineDelegate#rtcEngine:onVideoDeviceStateChanged:device_type:device_state:device_error} on the state of screen capturing such as start, pause, resume, and error. - After successfully calling this API, local users will receive rtcEngine:onFirstLocalVideoFrameCaptured:withFrameInfo:{@link #ByteRTCEngineDelegate#rtcEngine:onFirstLocalVideoFrameCaptured:withFrameInfo}. - Before calling this API, you can call setScreenVideoEncoderConfig:{@link #ByteRTCEngine#setScreenVideoEncoderConfig} to set the frame rate and encoding resolution of the screen video stream. - After receiving rtcEngine:onFirstLocalVideoFrameCaptured:withFrameInfo:{@link #ByteRTCEngineDelegate#rtcEngine:onFirstLocalVideoFrameCaptured:withFrameInfo}, you can set the local screen sharing view by calling setLocalVideoCanvas:withCanvas:{@link #ByteRTCEngine#setLocalVideoCanvas:withCanvas} or setLocalVideoSink:withSink:withPixelFormat:{@link #ByteRTCEngine#setLocalVideoSink:withSink:withPixelFormat}. - After you start capturing screen video stream for sharing, you can call updateScreenCaptureHighlightConfig:{@link #ByteRTCEngine#updateScreenCaptureHighlightConfig} to update border highlighting settings, updateScreenCaptureMouseCursor:{@link #ByteRTCEngine#updateScreenCaptureMouseCursor} to update the processing settings for the mouse, and updateScreenCaptureFilterConfig:{@link #ByteRTCEngine#updateScreenCaptureFilterConfig} to set the window that needs to be filtered on PC clients.
inherited
ios_stopScreenAudioCapture() → Future<int?>
@platform ios @hidden(iOS) @detail api @author liyi.000 @brief Stops using RTC SDK internal capture to capture screen audio during screen sharing. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - The call of this API takes effects only when you are using RTC SDK to record screen. You will get a warning by rtcEngine:onAudioDeviceWarning:deviceType:deviceWarning:{@link #ByteRTCEngineDelegate#rtcEngine:onAudioDeviceWarning:deviceType:deviceWarning} after calling this API when the video source is set to an external recorder. - This API can only stop the screen capture by the RTC SDK. If the video source has been set to external recorder, the call of this API will fail with a warning message. You need to stop it in the external recorder. - To enable the screen audio internal capture, call startScreenAudioCapture:{@link #ByteRTCEngine#startScreenAudioCapture}.
inherited
ios_stopScreenVideoCapture() → Future<int?>
@platform ios @hidden(iOS) @detail api @hiddensdk(audiosdk) @author liyi.000 @brief Stops capturing screen video stream. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - The call of this API takes effects only when you are using RTC SDK to record screen. You will get a warning by rtcEngine:onVideoDeviceWarning:deviceType:deviceWarning:{@link #ByteRTCEngineDelegate#rtcEngine:onVideoDeviceWarning:deviceType:deviceWarning} after calling this API when the video source is set to an external recorder. - To enable screen video stream capture, calling startScreenVideoCapture:captureParameters:{@link #ByteRTCEngine#startScreenVideoCapture:captureParameters}. - You will receive rtcEngine:onVideoDeviceStateChanged:device_type:device_state:device_error:{@link #ByteRTCEngineDelegate#rtcEngine:onVideoDeviceStateChanged:device_type:device_state:device_error} after calling this API. - This API has no effect on screen video stream publishing.
inherited
ios_updateScreenCaptureFilterConfig(NSArray<NSNumber> excludedWindowList) → Future<int?>
@platform ios @hidden(iOS) @detail api @hiddensdk(audiosdk) @author liyi.000 @brief When capturing screen video streams through the capture module provided by the RTC SDK, set the windows to be filtered out. @param excludedWindowList The windows to be filtered out. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - Before calling this API, you must have turned on internal screen capture by calling startScreenVideoCapture:captureParameters:{@link #ByteRTCEngine#startScreenVideoCapture:captureParameters}. - This function only works when the screen source is a screen rather than an application window. See: ByteRTCScreenCaptureSourceType{@link #ByteRTCScreenCaptureSourceType}. - When you call this API to exclude specific windows, the frame rate of the shared-screen stream will be lower than 30 fps.
inherited
ios_updateScreenCaptureHighlightConfig(ByteRTCHighlightConfig config) → Future<int?>
@platform ios @hidden(iOS) @detail api @hiddensdk(audiosdk) @author liyi.000 @brief Update border highlighting settings when capturing screen video streams through the internal capture module. The border is shown by default. @param config Border highlighting settings. See ByteRTCHighlightConfig{@link #ByteRTCHighlightConfig}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note Before calling this API, you must have turned on internal screen capture by calling startScreenVideoCapture:captureParameters:{@link #ByteRTCEngine#startScreenVideoCapture:captureParameters}.
inherited
ios_updateScreenCaptureMouseCursor(ByteRTCMouseCursorCaptureState mouseCursorCaptureState) → Future<int?>
@platform ios @hidden(iOS) @detail api @hiddensdk(audiosdk) @author liyi.000 @brief Update the processing settings for the mouse when capturing screen video streams through the capture module provided by the RTC SDK. The mouse is shown by default. @param mouseCursorCaptureState See ByteRTCMouseCursorCaptureState{@link #ByteRTCMouseCursorCaptureState}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note Before calling this API, you must have turned on internal screen capture by calling startScreenVideoCapture:captureParameters:{@link #ByteRTCEngine#startScreenVideoCapture:captureParameters}.
inherited
ios_updateScreenCaptureRegion(dynamic regionRect) → Future<int?>
@platform ios @hidden(iOS) @detail api @hiddensdk(audiosdk) @author liyi.000 @brief Update the capture area when capturing screen video streams through the internal capture module . @param regionRect The relative capture area to the area set by startScreenVideoCapture:captureParameters:{@link #ByteRTCEngine#startScreenVideoCapture:captureParameters}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note Before calling this API, you must call startScreenVideoCapture:captureParameters:{@link #ByteRTCEngine#startScreenVideoCapture:captureParameters} to start internal screen stream capture.
inherited
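
A hedged sketch of the iOS/macOS screen-sharing call order documented in the `ios_*` methods above. The `ByteRTCScreenCaptureParam` fields are not listed on this page, so the parameter object is passed in rather than constructed; publishing the captured stream happens on the room object and is only noted in a comment.

```dart
Future<void> shareScreen(
    RTCEngine engine, ByteRTCScreenCaptureParam params) async {
  // 1. Pick a shareable source (a screen or an application window).
  final source = await engine.ios_getScreenCaptureSourceList();
  if (source == null) return;
  // 2. Start internal capture of that source.
  final ret = await engine.ios_startScreenVideoCapture(source, params);
  if (ret != 0) return;
  // 3. Publishing is a separate step: call publishScreenVideo on the RTCRoom.
  // 4. Stop capturing when sharing ends.
  await engine.ios_stopScreenVideoCapture();
}
```
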
isCameraExposurePositionSupported() → Future<bool?>
@detail api @author zhangzhenyu.samuel @brief Checks if manual exposure setting is available for the currently used camera. @return - true: Available. - false: Unavailable. @note You must call startVideoCapture{@link #RTCEngine#startVideoCapture} to start SDK internal video capturing before calling this API.
inherited
isCameraFocusPositionSupported() → Future<bool?>
@detail api @author zhangzhenyu.samuel @brief Checks if manual focus is available for the currently used camera. @return - true: Available. - false: Unavailable. @note You must call startVideoCapture{@link #RTCEngine#startVideoCapture} to start SDK internal video capturing before calling this API.
inherited
isCameraTorchSupported() → Future<bool?>
@detail api @author zhangzhenyu.samuel @brief Detect whether the currently used camera (front/rear) supports flash. @return - true: Support - false: Not supported @note You must have called startVideoCapture{@link #RTCEngine#startVideoCapture} to start video capture using the SDK internal capture module before the flash capability can be detected.
inherited
isCameraZoomSupported() → Future<bool?>
@detail api @author zhangzhenyu.samuel @brief Detect whether the currently used camera (front/rear) supports zoom (digital/optical). @return - true: Support - false: Not supported @note Camera zoom capability can only be detected after startVideoCapture{@link #RTCEngine#startVideoCapture} has been called to start video capture using the SDK internal capture module.
inherited
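
A sketch combining the camera capability checks above. They only return meaningful results after internal capture has started; the `startVideoCapture` parameter list is not shown on this page, so a no-argument call is assumed.

```dart
Future<void> probeCameraCapabilities(RTCEngine engine) async {
  await engine.startVideoCapture(); // assumed no-argument form
  if (await engine.isCameraZoomSupported() ?? false) {
    final maxZoom = await engine.getCameraZoomMaxRatio();
    // Use maxZoom as the upper bound of a pinch-to-zoom control.
  }
  final torch = await engine.isCameraTorchSupported() ?? false;
  final focus = await engine.isCameraFocusPositionSupported() ?? false;
  final exposure = await engine.isCameraExposurePositionSupported() ?? false;
  // torch/focus/exposure decide which manual camera controls the UI exposes.
}
```
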
login({required string token, required string uid}) → Future<int?>
@detail api @author hanchenchen.c @brief Log in to call sendUserMessageOutsideRoom{@link #RTCEngine#sendUserMessageOutsideRoom} and sendServerMessage{@link #RTCEngine#sendServerMessage} to send P2P messages or send messages to a server without joining the RTC room.
To log out, call logout{@link #RTCEngine#logout}. @param token
Token is required during login for authentication.
This Token is different from that required by calling joinRoom. You can assign any value, even null, to roomId to generate a login token. During development and testing, you can use temporary tokens generated on the console. Deploy the token generating application on your server. @param uid
User ID
User ID is unique within one appid. @return - 0: Success. - <0: Failure. See ReturnStatus{@link #ReturnStatus} for specific reasons. @note You will receive onLoginResult{@link #IRTCEngineEventHandler#onLoginResult} after calling this API and log in successfully. But remote users will not receive notification about that.
inherited
logout() → Future<int?>
@detail api @author hanchenchen.c @brief Call this method to log out. After logging out, you can no longer call methods related to out-of-room messages and end-to-server messages or receive the related callbacks. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - To log back in after logging out, call login{@link #RTCEngine#login}. - After local users call this method to log out, they will receive the onLogout{@link #IRTCEngineEventHandler#onLogout} callback notification, and remote users will not receive the notification.
inherited
muteAudioCapture({required bool mute}) → Future<int?>
@valid since 3.58.1 @detail api @author shiyayun @brief Set whether to mute the recording signal (without changing the local hardware). @param mute Whether to mute audio capture.
- True: Mute (disable microphone) - False: (Default) Enable microphone @return - 0: Success. - < 0 : Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - Calling this API does not affect the status of SDK audio stream publishing. - Adjusting the volume by calling setCaptureVolume{@link #RTCEngine#setCaptureVolume} after muting will not cancel the mute state. The volume state will be retained until unmuted. - You can use this interface to set the capture volume before or after calling startAudioCapture{@link #RTCEngine#startAudioCapture} to enable audio capture.
inherited
muteScreenAudioCapture(bool mute) → Future<int?>
@valid since 3.60. @detail api @author shiyayun @brief Mutes/unmutes the audio captured when screen sharing.
Calling this method will send muted data instead of the screen audio data, and it does not affect the local audio device capture status and the SDK audio stream publishing status. @param mute Whether to mute the audio capture when screen sharing.
- True: Mute the audio capture when screen sharing.
- False: (Default) Unmute the audio capture when screen sharing. @return - 0: Success. - < 0 : Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - Adjusting the volume by calling setCaptureVolume{@link #RTCEngine#setCaptureVolume} after muting will not cancel the mute state. The volume state will be retained until unmuted. - You can use this interface to set the capture volume before or after calling startAudioCapture{@link #RTCEngine#startAudioCapture} to enable audio capture.
inherited
noSuchMethod(Invocation invocation) → dynamic
Invoked when a nonexistent method or property is accessed.
inherited
pullExternalAudioFrame(AudioFrame audioFrame) → Future<int?>
@detail api @author gongzhengduo @brief Pulls audio data for external playback.
After calling this method, the SDK will actively fetch the audio data to play, including the decoded and mixed audio data from the remote source, for external playback. @param audioFrame Audio data frame. See AudioFrame{@link #AudioFrame} @return Method call result
- 0: Setup succeeded - < 0: Setup failed @note - Before pulling external audio data, setAudioRenderType{@link #RTCEngine#setAudioRenderType} must be called to enable custom audio rendering. - You should pull audio data every 10 milliseconds since the duration of an RTC SDK audio frame is 10 milliseconds. Samples × call frequency = audioFrame's sample rate. Assume that the sampling rate is set to 48000 and this API is called every 10 ms, then 480 sampling points should be pulled each time. - The audio sampling format is S16. The data format in the audio buffer is PCM data, and its capacity size is audioFrame.samples × audioFrame.channel × 2.
inherited
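
A worked example of the sizing rule in the note above: with a 48000 Hz sample rate and a pull every 10 ms, each pull fetches 480 samples, and the S16 PCM buffer holds samples × channels × 2 bytes. The `AudioFrame` construction and field names are not documented on this page, so the pre-configured frame is passed in.

```dart
const sampleRate = 48000;
const channels = 2;
const samplesPer10ms = sampleRate ~/ 100;          // 480 samples per pull
const bufferBytes = samplesPer10ms * channels * 2; // 1920 bytes of S16 PCM

// Sketch: pull one frame (call this every 10 ms) and hand it to the renderer.
Future<void> pullOnce(RTCEngine engine, AudioFrame frame) async {
  final ret = await engine.pullExternalAudioFrame(frame);
  if (ret != 0) {
    // setAudioRenderType must have enabled custom audio rendering first.
  }
}
```
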
pushClientMixedStreamExternalVideoFrame(string uid, VideoFrameData frame) → Future<int?>
inherited
pushReferenceAudioPCMData(AudioFrame audioFrame) → Future<int?>
@detail api @region Custom Audio AEC Reference @author cuiyao @brief Push custom AEC reference audio data to the RTC SDK. @param audioFrame Audio data frame. See AudioFrame{@link #AudioFrame} @return Method call result
+ 0: Success
+ < 0: Failure
@note
+ You should send audio data every 10 milliseconds since the duration of an RTC SDK audio frame is 10 milliseconds. Samples × call frequency = audioFrame's sample rate. Assume that the sampling rate is set to 48000 and this API is called every 10 ms, then 480 sampling points should be pushed each time.
+ The audio sampling format is S16. The data format in the audio buffer is PCM data, and its capacity size is audioFrame.samples × audioFrame.channel × 2.
inherited
registerAudioFrameObserver(IAudioFrameObserver observer) → Future<int?>
@detail api @author gongzhengduo @brief Register an audio frame observer. @param observer Audio data callback observer. See IAudioFrameObserver{@link #IAudioFrameObserver}. Use null to cancel the registration. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note After calling this API and enableAudioFrameCallback{@link #RTCEngine#enableAudioFrameCallback}, IAudioFrameObserver{@link #IAudioFrameObserver} receives the corresponding audio data callback. You can retrieve the audio data and perform processing on it without affecting the audio that RTC SDK uses to encode or render.
inherited
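
A sketch of the two-step setup described above: register the observer and enable the callback for one frame type. The `IAudioFrameObserver` implementation is passed in because its callback method names are not listed on this page, and the enum member used for the mixed callback is an assumption.

```dart
Future<void> observeMixedAudio(RTCEngine engine, IAudioFrameObserver observer,
    AudioFormat format) async {
  // The two calls are independent and may be made in either order.
  await engine.registerAudioFrameObserver(observer);
  await engine.enableAudioFrameCallback(
    method: AudioFrameCallbackMethod.AUDIO_FRAME_CALLBACK_MIXED, // assumed name
    format: format, // must follow the format rules documented above
  );
}
```
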
registerAudioProcessor(IAudioFrameProcessor processor) → Future<int?>
@detail api @author gongzhengduo @brief Register a custom audio preprocessor.
After that, you can call enableAudioProcessor{@link #RTCEngine#enableAudioProcessor} to process the audio streams that are either captured locally or received from the remote side. RTC SDK then encodes or renders the processed data. @param processor Custom audio processor. See IAudioFrameProcessor{@link #IAudioFrameProcessor}.
SDK only holds a weak reference to the processor, so you must guarantee its lifetime. To cancel the registration, set the parameter to null. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details.
inherited
registerLocalEncodedVideoFrameObserver(ILocalEncodedVideoFrameObserver observer) → Future<int?>
@detail api @hiddensdk(audiosdk) @author liuyangyang @brief Register a local video frame observer.
This method applies to both internal capturing and custom capturing.
After calling this API, SDK triggers onLocalEncodedVideoFrame{@link #ILocalEncodedVideoFrameObserver#onLocalEncodedVideoFrame} whenever a video frame is captured. @param observer Local video frame observer. See ILocalEncodedVideoFrameObserver{@link #ILocalEncodedVideoFrameObserver}. You can cancel the registration by setting it to null. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details. @note You can call this API before or after entering the RTC room. Calling this API before entering the room ensures that video frames are monitored and callbacks are triggered as early as possible.
inherited
registerLocalVideoProcessor(VideoProcessor processor, VideoPreprocessorConfig config) → Future<int?>
@detail api @hiddensdk(audiosdk) @author zhushufan.ref @brief Set up a custom video preprocessor.
Using this video preprocessor, you can call processVideoFrame{@link #IVideoProcessor#processVideoFrame} to preprocess the video frames collected by the RTC SDK, and use the processed video frames for RTC audio & video communication. @param processor Custom video processor. See IVideoProcessor{@link #IVideoProcessor}. If null is passed in, the video frames captured by the RTC SDK are not preprocessed.
SDK only holds a weak reference to the processor, so you must guarantee its lifetime. @param config Settings applicable to the custom video preprocessor. See VideoPreprocessorConfig{@link #VideoPreprocessorConfig}.
Currently, the required_pixel_format in config only supports: I420, TEXTURE_2D, and Unknown:
- When set to Unknown, the RTC SDK gives the processor video frames in the format in which they were captured. You can get the actual captured video frame format through pixelFormat{@link #IVideoFrame#pixelFormat}. The supported formats are: I420, TEXTURE_2D and TEXTURE_OES.
- When set to I420 or TEXTURE_2D, the RTC SDK converts the captured video into the corresponding format for pre-processing. - The method call fails when it is set to any other value. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note After preprocessing, the video frame format returned to the RTC SDK only supports I420 and TEXTURE_2D.
inherited
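
A sketch of installing the preprocessor described above. The `IVideoProcessor` callback signature and the `VideoPreprocessorConfig` fields are not shown on this page, so both objects are passed in pre-built; the comments capture the documented constraints.

```dart
Future<void> installPreprocessor(RTCEngine engine, VideoProcessor processor,
    VideoPreprocessorConfig config) async {
  // Keep your own reference to `processor`: the SDK only holds a weak one.
  final ret = await engine.registerLocalVideoProcessor(processor, config);
  if (ret != 0) {
    // Fails if the required pixel format is not I420 / TEXTURE_2D / Unknown.
  }
  // Frames returned by the processor must be I420 or TEXTURE_2D.
}
```
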
registerRemoteEncodedVideoFrameObserver(IRemoteEncodedVideoFrameObserver observer) → Future<int?>
@detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief Registers a remote encoded video frame observer.
After registration, when the SDK detects a remote encoded video frame, it triggers the onRemoteEncodedVideoFrame{@link #IRemoteEncodedVideoFrameObserver#onRemoteEncodedVideoFrame} callback @param observer Remote encoded video frame observer. See IRemoteEncodedVideoFrameObserver{@link #IRemoteEncodedVideoFrameObserver} @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - See Custom Video Encoding and Decoding for more details about custom video decoding. - This method applies to manual subscription mode and can be called either before or after entering the room. It is recommended to call it before entering the room. - Unregister the observer before the engine is destroyed by calling this method with the parameter set to null.
inherited
removeLocalVideo() → Future<int>
Remove local video
removePublicStreamVideo(String publicStreamId) → Future<int?>
Remove public stream video
removeRemoteVideo({required String streamId}) → Future<int>
Remove remote video
requestRemoteVideoKeyFrame(string streamId) → Future<int?>
@detail api @brief Requests a keyframe after subscribing to the remote video stream. @param streamId Remote stream ID. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - This method is only suitable for manual subscription mode and is used after successfully subscribing to the remote stream. - This method applies when custom decoding has been enabled by calling setVideoDecoderConfig{@link #RTCEngine#setVideoDecoderConfig} and the custom decoding fails.
inherited
sendPublicStreamSEIMessage(int channelId, ArrayBuffer message, int repeatCount, SEICountPerFrame mode) → Future<int?>
@hidden for internal use only @valid since 3.56 @detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief <span id="IRTCEngine-sendseimessage-2"></span> WTN stream sends SEI data. @param channelId SEI message channel id. The value range is 0 - 255. With this parameter, you can set different ChannelIDs for different recipients. In this way, different recipients can choose the SEI information based on the ChannelID received in the callback. @param message SEI data. @param repeatCount Number of times a message is sent repeatedly. The value range is [0, max{29, %{video frame rate}-1}]. Recommended range: [2, 4].
After calling this API, the SEI data will be added to a consecutive repeatCount+1 number of video frames starting from the current frame. @param mode SEI sending mode. See SEICountPerFrame{@link #SEICountPerFrame}. @return - < 0: Failure - = 0: You are unable to send SEI as the current send queue is full. - > 0: Success, and the value represents the amount of sent SEI. @note - We recommend the number of SEI messages per second should not exceed the current video frame rate. - In a video call, the custom captured video frame can also be used for sending SEI data if the original video frame contains no SEI data, otherwise calling this method will not take effect. - Each video frame carries only the SEI data received within 2s before and after. In a voice call scenario, if no SEI data is sent within 1min after calling this API, SDK will automatically cancel publishing black frames. - After the message is sent successfully, the remote user who subscribed your video stream will receive onWTNSEIMessageReceived{@link #IWTNStreamEventHandler#onWTNSEIMessageReceived}. - When the call fails, neither the local nor the remote side will receive a callback.
inherited
sendScreenCaptureExtensionMessage(Uint8List message) → Future<int>
Only iOS
sendSEIMessage(ArrayBuffer message, int repeatCount, SEICountPerFrame mode) → Future<int?>
@detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief <span id="RTCEngine-sendseimessage-2"></span>Sends SEI data.
In a video call scenario, SEI is sent with the video frame, while in a voice call scenario, SDK will automatically publish a black frame with a resolution of 16 × 16 pixels to carry SEI data. @param message SEI data. No more than 4 KB SEI data per frame is recommended. @param repeatCount Number of times a message is sent repeatedly. The value range is [0, max{29, %{video frame rate}-1}]. Recommended range: [2, 4].
After calling this API, the SEI data will be added to a consecutive %{repeatCount}+1 number of video frames starting from the current frame. @param mode SEI sending mode. See SEICountPerFrame{@link #SEICountPerFrame}. @return - >= 0: The number of SEIs to be added to the video frame - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - We recommend the number of SEI messages per second should not exceed the current video frame rate. In a voice call, the blank-frame rate is 15 fps. - In a voice call, this API can be called to send SEI data only in internal capture mode. - In a video call, the custom captured video frame can also be used for sending SEI data if the original video frame contains no SEI data, otherwise calling this method will not take effect. - Each video frame carries only the SEI data received within 2s before and after. In a voice call scenario, if no SEI data is sent within 1min after calling this API, SDK will automatically cancel publishing black frames. - After the message is sent successfully, the remote user who subscribed your video stream will receive onSEIMessageReceived{@link #IRTCEngineEventHandler#onSEIMessageReceived}. - When you switch from a voice call to a video call, SEI data will automatically start to be sent with normally captured video frames instead of black frames.
inherited
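
A sketch of sending SEI alongside the video stream as described above. The `ArrayBuffer` construction and the `SEICountPerFrame` members are not documented on this page, so both are passed in; `repeatCount` stays within the recommended [2, 4] range.

```dart
Future<void> sendSei(
    RTCEngine engine, ArrayBuffer payload, SEICountPerFrame mode) async {
  final ret = await engine.sendSEIMessage(payload, 3, mode);
  if (ret != null && ret < 0) {
    // Failure; see ReturnStatus. A value >= 0 is the number of SEI messages
    // that will be attached to upcoming video frames.
  }
}
```
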
sendServerBinaryMessage(ArrayBuffer buffer) → Future<int?>
@detail api @author hanchenchen.c @brief Client side sends binary messages to the application server (P2Server) @param buffer
Binary message content sent
Message does not exceed 46 KB. @return - > 0: Sent successfully, returns the number of the sent message, incrementing from 1. - -1: Sending failed due to empty message. @note - Before sending a binary message to the application server, you must first call login{@link #RTCEngine#login} to complete the login, and then call setServerParams{@link #RTCEngine#setServerParams} to set up the application server. - After calling this interface, you will receive an onServerMessageSendResult{@link #IRTCEngineEventHandler#onServerMessageSendResult} callback informing the message sender whether the sending succeeded or failed. - If the binary message is sent successfully, the application server previously set by setServerParams{@link #RTCEngine#setServerParams} will receive the message.
inherited
sendServerMessage(string message) → Future<int?>
@detail api @author hanchenchen.c @brief The client side sends a text message to the application server (P2Server) @param message
The content of the text message sent
The message does not exceed 64 KB. @return - > 0: Sent successfully, returns the number of the sent message, incrementing from 1. @note - Before sending a text message to the application server, you must first call login{@link #RTCEngine#login} to complete the login, and then call setServerParams{@link #RTCEngine#setServerParams} to set up the application server. - After calling this interface, you will receive an onServerMessageSendResult{@link #IRTCEngineEventHandler#onServerMessageSendResult} callback informing the message sender whether the message was sent successfully. - If the text message is sent successfully, the application server previously set by setServerParams{@link #RTCEngine#setServerParams} will receive the message.
inherited
sendStreamSyncInfo({required ArrayBuffer data, required StreamSyncInfoConfig config}) → Future<int?>
@detail api @author wangjunzheng @brief Send audio stream synchronization information. The message is sent to the remote end through the audio stream and synchronized with the audio stream. After the interface is successfully called, the remote user will receive a onStreamSyncInfoReceived{@link #IRTCEngineEventHandler#onStreamSyncInfoReceived} callback. @param data Message content. @param config Configuration related to audio stream synchronization information. See StreamSyncInfoConfig{@link #StreamSyncInfoConfig}. @return - >= 0: Message sent successfully. Returns the number of successful sends. - -1: Message sending failed. Message length greater than 16 bytes. - -2: Message sending failed. The content of the incoming message is empty. - -3: Message sending failed. This screen stream was not published when the message was synchronized through the screen stream. - -4: Message sending failed. This audio stream is not yet published when you synchronize messages with an audio stream captured by a microphone or custom device, as described in ErrorCode{@link #ErrorCode}.
inherited
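
A sketch of broadcasting a small piece of sync data (for example, a lyric-line index) with the audio stream. The `ArrayBuffer` and `StreamSyncInfoConfig` construction is not documented on this page, so both are passed in pre-built; payloads over 16 bytes are rejected with -1.

```dart
Future<void> sendLyricIndex(RTCEngine engine, ArrayBuffer data,
    StreamSyncInfoConfig config) async {
  final ret = await engine.sendStreamSyncInfo(data: data, config: config);
  if (ret != null && ret < 0) {
    // -1: payload too long, -2: empty payload,
    // -3/-4: the matching screen/audio stream is not being published yet.
  }
  // The remote side receives the data via onStreamSyncInfoReceived.
}
```
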
sendUserBinaryMessageOutsideRoom({required string uid, required ArrayBuffer message, required MessageConfig config}) → Future<int?>
@detail api @author hanchenchen.c @brief Sends binary messages (P2P) to a specified user outside the room @param uid User ID of the message receiver @param message
Binary message content sent
Message does not exceed 46 KB. @param config Message type, see MessageConfig{@link #MessageConfig}. @return - > 0: Sent successfully, returns the number of the sent message, incrementing from 1. - -1: Sending failed due to empty message. @note - Before sending out-of-room binary messages, you should call login{@link #RTCEngine#login} first. - After calling this interface to send a binary message, you will receive an onUserMessageSendResultOutsideRoom{@link #IRTCEngineEventHandler#onUserMessageSendResultOutsideRoom} callback notifying whether the message was sent successfully. - If the binary message is sent successfully, the user specified by uid will receive the message through the onUserBinaryMessageReceivedOutsideRoom{@link #IRTCEngineEventHandler#onUserBinaryMessageReceivedOutsideRoom} callback.
inherited
sendUserMessageOutsideRoom({required string uid, required string message, required MessageConfig config}) Future<int?>
@detail api @author hanchenchen.c @brief Send a text message (P2P) to a specified user outside the room @param uid User ID of the message receiver @param message
Text message content sent.
The message must not exceed 64 KB. @param config Message type. See MessageConfig{@link #MessageConfig}. @return - > 0: Sent successfully; returns the number of the sent message, incrementing from 1. @note - Before sending an out-of-room text message, you must call login{@link #RTCEngine#login} to log in. - After calling this interface, the sender will receive an onUserMessageSendResultOutsideRoom{@link #IRTCEngineEventHandler#onUserMessageSendResultOutsideRoom} callback indicating whether the message was sent successfully. - If the text message is sent successfully, the user specified by uid receives the message via the onUserMessageReceivedOutsideRoom{@link #IRTCEngineEventHandler#onUserMessageReceivedOutsideRoom} callback.
inherited
setAnsMode(AnsMode ansMode) Future<int?>
@valid since 3.52 @detail api @author liuchuang @brief Sets the Active Noise Cancellation (ANC) mode during audio and video communications. @param ansMode ANC mode. See AnsMode{@link #AnsMode}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - You can call this API before or after entering a room. When you call it repeatedly, only the last call takes effect.
- The noise reduction algorithm includes both traditional noise reduction and AI noise reduction. Traditional noise reduction is primarily aimed at suppressing steady noises, such as the hum of air conditioners and the whir of fans. AI noise reduction, on the other hand, is mainly designed to suppress non-stationary noises, like the tapping of keyboards and the clattering of tables and chairs.
- The AI noise reduction can only be enabled through this interface when the following ChannelProfile{@link #ChannelProfile} scenarios are engaged:
- Gaming voice mode: CHANNEL_PROFILE_GAME(2) - High-fidelity gaming mode: CHANNEL_PROFILE_GAME_HD(8) - Cloud gaming mode: CHANNEL_PROFILE_CLOUD_GAME(3) - 1 vs 1 audio/video call: CHANNEL_PROFILE_CHAT(5) - Multi-client synchronized audio/video playback: CHANNEL_PROFILE_LW_TOGETHER(7) - Personal devices in cloud meetings: CHANNEL_PROFILE_MEETING - Meeting room terminals in cloud meetings: CHANNEL_PROFILE_MEETING_ROOM(17) - Classroom interaction mode: CHANNEL_PROFILE_CLASSROOM(18)
inherited
setAudioAlignmentProperty({required string streamId, required AudioAlignmentMode mode}) Future<int?>
@detail api @hidden internal use only @author majun.lvhiei @brief On the listener side, aligns all subscribed audio streams precisely in time. @param streamId Stream ID of the remote audio stream used as the benchmark during time alignment. We recommend using the audio stream from the lead singer.
You must call this API after receiving onUserPublishStreamAudio{@link #IRTCRoomEventHandler#onUserPublishStreamAudio}. @param mode Whether to enable the alignment. Disabled by default. See AudioAlignmentMode{@link #AudioAlignmentMode}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - Use this function only when all participants set ChannelProfile{@link #ChannelProfile} to CHANNEL_PROFILE_CHORUS when joining the room. - All remote participants must call startAudioMixing to play background music and set syncProgressToRecordFrame of AudioMixingConfig to true. - If the subscribed audio stream is delayed too much, it may not be precisely aligned. - The chorus participants must not enable the alignment. If you wish to change the role from listener to participant, disable the alignment first.
inherited
setAudioProfile(AudioProfileType audioProfile) Future<int?>
@detail api @author zhangyuanyuan.0101 @brief Sets the sound quality. Call this API to change the sound quality if the audio settings in the current ChannelProfile{@link #ChannelProfile} cannot meet your requirements. @param audioProfile Sound quality. See AudioProfileType{@link #AudioProfileType}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - You can call this method before or after entering the room. - The sound quality can be switched dynamically during a call.
inherited
setAudioRenderType(AudioRenderType type) Future<int?>
@detail api @author gongzhengduo @brief Switch the audio render type. @param type Audio output source type. See AudioRenderType{@link #AudioRenderType}.
Use internal audio render by default. The audio capture type and the audio render type may be different from each other. @return Method call result:
- =0: Success. - <0: Failure. @note - You can call this API before or after joining the room. - After calling this API to enable custom audio rendering, call pullExternalAudioFrame{@link #RTCEngine#pullExternalAudioFrame} for audio data.
inherited
setAudioRoute(AudioRoute audioRoute) Future<int?>
@detail api @author dixing @brief Set the current audio playback route. The default device is set via setDefaultAudioRoute{@link #RTCEngine#setDefaultAudioRoute}.
When the audio playback route changes, you will receive onAudioRouteChanged{@link #IRTCEngineEventHandler#onAudioRouteChanged}. @param audioRoute Audio route. Refer to AudioRoute{@link #AudioRoute}.
For Android device, the valid audio playback devices may vary due to different audio device connection status. See Set the Audio Route. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - You can implement most scenarios by calling setDefaultAudioRoute{@link #RTCEngine#setDefaultAudioRoute} and the default audio route switching strategy of the RTC SDK. For details about the strategy, see Set the Audio Route. You should use this API in a few exceptional scenarios like manually switching audio route with external audio device connected. - This interface is only supported in communication mode. - For the volume type in different audio scenarios, refer to AudioScenarioType{@link #AudioScenarioType}.
inherited
setAudioScenario(AudioScenarioType audioScenario) Future<int?>
@hidden(macOS,Windows,Linux) @valid since 3.60. @detail api @author gongzhengduo @brief Sets the audio scenarios.
After selecting the audio scenario, SDK will automatically switch to the proper volume modes (the call/media volume) according to the scenarios and the best audio configurations under such scenarios.
This API should not be used at the same time with the old one. @param audioScenario Audio scenarios. See AudioScenarioType{@link #AudioScenarioType}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - You can use this API both before and after joining the room. - Call volume is more suitable for calls, meetings and other scenarios that demand information accuracy. Call volume will activate the system hardware signal processor, making the sound clearer. The volume cannot be reduced to 0. - Media volume is more suitable for entertainment scenarios, which require musical expression. The volume can be reduced to 0.
inherited
setAudioSourceType(AudioSourceType type) Future<int?>
@detail api @author gongzhengduo @brief Switch the audio capture type. @param type Audio input source type. See AudioSourceType{@link #AudioSourceType}
Use internal audio capture by default. The audio capture type and the audio render type may be different from each other. @return Method call result:
- =0: Success. - <0: Failure. @note - You can call this API before or after joining the room. - If you call this API to switch from internal audio capture to custom capture, the internal audio capture is automatically disabled. You must call pushExternalAudioFrame{@link #RTCEngine#pushExternalAudioFrame} to push custom captured audio data to RTC SDK for transmission. - If you call this API to switch from custom capture to internal capture, you must then call startAudioCapture{@link #RTCEngine#startAudioCapture} to enable internal capture.
inherited
setBeautyIntensity({required EffectBeautyMode beautyMode, required float intensity}) Future<int?>
@detail api @hiddensdk(audiosdk) @author wangjunlin.3182 @brief Sets the beauty effect intensity. @param beautyMode Basic beauty effect. See EffectBeautyMode{@link #EffectBeautyMode}. @param intensity Beauty effect intensity in the range of [0, 1]. When you set it to 0, the beauty effect is turned off.
The default intensity of each beauty mode is as follows: 0.7 for brightening, 0.8 for smoothing, 0.5 for sharpening, and 0.7 for clarity. @return - 0: Success. - –2: intensity is out of range. - –1001: This API is not available for your current RTC SDK. - <0: Failure. Effect SDK internal error. For the specific error code, see the error codes. @note - If you call this API before calling enableEffectBeauty{@link #RTCEngine#enableEffectBeauty}, the default beauty effect intensity will adjust accordingly. - If you destroy the engine, the beauty effect settings become invalid.
inherited
setBluetoothMode(BluetoothMode mode) Future<int?>
Set bluetooth mode, only available on iOS.
setBusinessId(string businessId) Future<int?>
@detail api @author wangzhanqiang @brief Sets the business ID
You can use businessId to distinguish different business scenarios. You can customize your businessId to serve as a sub AppId, which can share and refine the function of the AppId, but it does not need authentication. @param businessId
Your customized businessId
BusinessId is a tag, and you can customize its granularity. @return - 0: Success. - -2: The input is invalid. Legal characters are lowercase letters, uppercase letters, numbers, and the four symbols '.', '-', '_', and '@'. @note - You must call this API before entering the room; otherwise it will not take effect.
inherited
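A hedged example of tagging a session with a business ID before joining a room; `engine` is assumed to be an existing RTCEngine instance and the ID below is a made-up placeholder that uses only legal characters:

  Future<void> tagBusinessScenario(RTCEngine engine) async {
    // Must be called before entering the room; only letters, digits, '.', '-', '_' and '@' are allowed.
    final int? ret = await engine.setBusinessId('live_pk-2024.demo@room');
    if (ret == -2) {
      // The businessId contained illegal characters.
    }
  }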
setCameraAdaptiveMinimumFrameRate(int framerate) Future<int?>
@hidden(macOS) @valid since 3.53 @detail api @brief Sets the minimum frame rate of the dynamic framerate mode during internal video capture. @param framerate The minimum value in fps. The default value is 7.
The maximum value of the dynamic framerate mode is set by calling setVideoCaptureConfig{@link #RTCEngine#setVideoCaptureConfig}. When the minimum value exceeds the maximum value, the frame rate is fixed at the maximum value; otherwise, dynamic framerate mode is enabled. @return - 0: Success. - !0: Failure. @note - You must call this API before calling startVideoCapture{@link #RTCEngine#startVideoCapture} to enable internal capture; otherwise the setting does not take effect. - If the maximum frame rate changes due to performance degradation, static adaptation, etc., the set minimum frame rate will be re-compared with the new maximum value, and a change in the comparison result may switch between fixed and dynamic frame rate modes. - On Android, dynamic framerate mode is enabled. - On iOS, dynamic framerate mode is disabled.
inherited
setCameraExposureCompensation(float val) Future<int?>
@detail api @author zhangzhenyu.samuel @brief Sets the exposure compensation for the currently used camera. @param val Exposure compensation in the range of [-1, 1]. Defaults to 0, which means no exposure compensation. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - You must call startVideoCapture{@link #RTCEngine#startVideoCapture} to start SDK internal video capturing, and use SDK internal rendering, before calling this API. - The camera exposure compensation setting becomes invalid after calling stopVideoCapture{@link #RTCEngine#stopVideoCapture} to stop internal capturing.
inherited
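A short sketch of adjusting exposure while internal capture and internal rendering are in use (as required above); the compensation value is an arbitrary example:

  Future<void> brightenCameraImage(RTCEngine engine) async {
    await engine.startVideoCapture();                  // SDK internal capture must be running
    await engine.setCameraExposureCompensation(0.3);   // range [-1, 1]; 0 means no compensation
    // The setting is reset once internal capture is stopped via stopVideoCapture.
  }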
setCameraExposurePosition(Offset position) Future<int>
Set camera exposure position
setCameraFocusPosition(Offset position) Future<int>
Set camera focus position
setCameraTorch(TorchState torchState) Future<int?>
@detail api @author zhangzhenyu.samuel @brief Turns the flash of the currently used camera (front/rear) on or off. @param torchState Flash state. Refer to TorchState{@link #TorchState}. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - The flash can only be set if you have called startVideoCapture{@link #RTCEngine#startVideoCapture} for video capture using the SDK internal capture module. - The setting becomes invalid after calling stopVideoCapture{@link #RTCEngine#stopVideoCapture} to stop internal capture.
inherited
setCameraZoomRatio(float zoom) Future<int?>
@detail api @author zhangzhenyu.samuel @brief Changes the optical zoom magnification. @param zoom Zoom magnification of the currently used camera (front/rear). The value range is [1, maximum zoom ratio].
The maximum zoom ratio can be obtained by calling getCameraZoomMaxRatio{@link #RTCEngine#getCameraZoomMaxRatio}. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - The camera zoom factor can only be set when startVideoCapture{@link #RTCEngine#startVideoCapture} is called for video capture using the SDK internal capture module. - The setting becomes invalid after calling stopVideoCapture{@link #RTCEngine#stopVideoCapture} to stop internal capture. - Call setVideoDigitalZoomConfig{@link #RTCEngine#setVideoDigitalZoomConfig} to set digital zoom. Call setVideoDigitalZoomControl{@link #RTCEngine#setVideoDigitalZoomControl} to perform digital zoom.
inherited
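A hedged sketch of applying a 2x optical zoom during internal capture; the target value is an example and must stay within the device maximum (queried via getCameraZoomMaxRatio, whose exact return type is not shown in this section):

  Future<void> zoomInOptically(RTCEngine engine) async {
    await engine.startVideoCapture();   // zoom can only be set while internal capture is running
    // Keep the value within [1, maximum zoom ratio]; query the maximum via getCameraZoomMaxRatio().
    await engine.setCameraZoomRatio(2.0);
  }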
setCaptureVolume({required int volume}) Future<int?>
@detail api @author huangshouqin @brief Adjust the volume of the audio capture @param volume Ratio of capture volume to original volume.
This changes the volume property of the audio data other than the hardware volume.
Range: [0, 400]. Unit: %.
To ensure the audio quality, we recommend keeping the volume within [0, 100].
- 0: Mute - 100: Original volume - 400: Four times the original volume with signal-clipping protection. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note You can call this API to set the capture volume before or during audio capture.
inherited
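A minimal example of doubling the capture volume, assuming internal audio capture is (or will be) running:

  Future<void> boostCaptureVolume(RTCEngine engine) async {
    await engine.startAudioCapture();
    // 100 = original volume; the full range is [0, 400] with clipping protection.
    await engine.setCaptureVolume(volume: 200);
  }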
setCellularEnhancement(MediaTypeEnhancementConfig config) Future<int?>
@detail api @hiddensdk(audiosdk) @brief Enable cellular network assisted communication to improve call quality. @param config See MediaTypeEnhancementConfig{@link #MediaTypeEnhancementConfig}. @return Method call result:
- 0: Success. - -1: Failure, internal error. - -2: Failure, invalid parameters. @note The function is off by default.
inherited
setClientMixedStreamObserver(IClientMixedStreamObserver observer) Future<int?>
inherited
setDefaultAudioRoute(AudioRoute route) Future<int?>
@detail api @author dixing @brief Set the speaker or earpiece as the default audio playback device. @param route Audio playback device. Refer to AudioRoute{@link #AudioRoute}. You can only use earpiece and speakerphone. @return - 0: Success. - < 0: failure. It fails when the device designated is neither a speaker nor an earpiece. @note For the default audio route switching strategy of the RTC SDK, see Set the Audio Route.
inherited
setDummyCaptureImagePath(string filePath) Future<int?>
@detail api @hiddensdk(audiosdk) @author liuyangyang @brief Set an alternative image when the local internal video capture is not enabled.
When you call stopVideoCapture, an alternative image will be pushed. You can set the path to null or open the camera to stop publishing the image.
You can repeatedly call this API to update the image. @param filePath Set the path of the static image.
You can use the absolute path (file://xxx) or the asset directory path (/assets/xx.png). The maximum size for the path is 512 bytes.
You can upload a .JPG, .JPEG, .PNG, or .BMP file.
When the aspect ratio of the image is inconsistent with the video encoder configuration, the image will be proportionally resized, with the remaining pixels rendered black. The framerate and the bitrate are consistent with the video encoder configuration. @return - 0: Success. - -2: Failure. Ensure that the filePath is valid. - -12: This method is not available in the Audio SDK. @note - The API is only effective when publishing an internally captured video. - You cannot locally preview the image. - You can call this API before and after joining an RTC room. In the multi-room mode, the image can be only displayed in the room you publish the stream. - You cannot apply effects like filters and mirroring to the image, while you can watermark the image. - The image is not effective for a screen-sharing stream. - When you enable the simulcast mode, the image will be added to all video streams, and it will be proportionally scaled down to smaller encoding configurations.
inherited
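A hedged sketch of publishing a placeholder image while the camera is off; the asset path is a placeholder that must point to a real .png/.jpg/.jpeg/.bmp file in your app, and stopVideoCapture is assumed to take no arguments, matching how it is referenced above:

  Future<void> publishStandbyImage(RTCEngine engine) async {
    // The path must be at most 512 bytes; asset or absolute (file://...) paths are accepted.
    await engine.setDummyCaptureImagePath('/assets/standby.png');
    await engine.stopVideoCapture();   // the image is pushed instead of camera frames
    // Restart the camera (or clear the path) later to stop publishing the image.
  }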
setEarMonitorMode(EarMonitorMode mode, EarMonitorAudioFilter filter) Future<int?>
@detail api @author majun.lvhiei @brief Enables/disables in-ear monitoring. @param mode Whether to enable in-ear monitoring. See EarMonitorMode{@link #EarMonitorMode}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details. @note - In-ear monitoring is effective for audios captured by the RTC SDK. - We recommend that you use wired earbuds/headphones for a low-latency experience. - The RTC SDK supports both hardware-level and SDK-level in-ear monitoring. Hardware-level monitoring typically offers lower latency and better audio quality. If your App is in the manufacturer's trusted list for this feature and the environment meets the required conditions, the RTC SDK will automatically default to hardware-level in-ear monitoring when enabled.
inherited
setEarMonitorVolume(int volume) Future<int?>
@detail api @author majun.lvhiei @brief Set the monitoring volume. @param volume The monitoring volume with the adjustment range between 0% and 100%. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - Call setEarMonitorMode{@link #RTCEngine#setEarMonitorMode} before setting the volume.
inherited
setEncryptInfo({required EncryptType aesType, required String key}) Future<int?>
Set encrypt info
setExtensionConfig({required String groupId, required String bundleId}) Future<int?>
@note Only available on iOS, call when using screen sharing. You can also set it through the iOS ByteRTCHelper setExtensionConfig:bundleId: method.
setExternalVideoEncoderEventHandler(IExternalVideoEncoderEventHandler handler) Future<int?>
@detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief Registers the callback for custom encoded frame push events. @param handler Custom encoded frame callback class. See IExternalVideoEncoderEventHandler{@link #IExternalVideoEncoderEventHandler}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - This method needs to be called before entering the room. - Unregister the handler before the engine is destroyed by calling this method with the parameter set to "null".
inherited
setLocalProxy(List<LocalProxyConfiguration> configurations) Future<int?>
@detail api @author keshixing.rtc @brief Sets local proxy. @param configurations Local proxy configurations. Refer to LocalProxyConfiguration{@link #LocalProxyConfiguration}.
You can set both Http tunnel and Socks5 as your local proxies, or only set one of them based on your needs. If you set both Http tunnel and Socks5 as your local proxies, then media traffic and signaling are routed through Socks5 proxy and Http requests through Http tunnel proxy. If you set either Http tunnel or Socks5 as your local proxy, then media traffic, signaling and Http requests are all routed through the proxy you chose.
If you want to remove the existing local proxy configurations, you can call this API with the parameter set to null. @note - You must call this API before joining the room. - After calling this API, you will receive onLocalProxyStateChanged{@link #IRTCEngineEventHandler#onLocalProxyStateChanged} callback that informs you of the states of local proxy connection.
inherited
setLocalSimulcastMode(VideoSimulcastMode mode, List<VideoEncoderConfig> streamConfig) Future<int?>
@valid since 3.60. @detail api @brief Enables the Simulcast feature and configures the settings of the lower-quality video streams. @param mode Whether to publish lower-quality streams and how many of them to be published. See VideoSimulcastMode{@link #VideoSimulcastMode}. By default, it is set to Single, where the publisher sends the video in a single profile. In the other modes, the low-quality stream is set to a default resolution of 160px × 90px with a bitrate of 50Kbps. @param streamConfig The specification of the lower quality stream. You can configure up to three low-quality streams for a video source. See VideoEncoderConfig{@link #VideoEncoderConfig}. The resolution of the lower quality stream must be smaller than the standard stream set via setVideoEncoderConfig{@link #RTCEngine#setVideoEncoderConfig}. The specifications in the array must be arranged in ascending order based on resolution. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details. @note - The default specification of the video stream is 640px × 360px @15fps. - The method applies to the camera video only. - Refer to Simulcasting for more information.
inherited
setLocalVideoCanvas(VideoCanvas videoCanvas) Future<int?>
@detail api @hiddensdk(audiosdk) @author sunhang.io @brief Sets the view to be used for local video rendering and the rendering mode. @param videoCanvas View information and rendering mode. See VideoCanvas{@link #VideoCanvas}. @return - 0: Success. - -2: Invalid parameter. - -12: This method is not available in the Audio SDK. @note - You should bind your stream to a view before joining the room. This setting will remain in effect after you leave the room. - If you need to unbind the local video stream from the current view, you can call this API and set the videoCanvas to null.
inherited
setLocalVideoMirrorType(MirrorType mirrorType) Future<int?>
@detail api @hiddensdk(audiosdk) @author wangjunlin.3182 @brief Sets the mirror mode for the captured video stream. @param mirrorType Mirror type. See MirrorType{@link #MirrorType}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - Switching video streams does not affect the settings of the mirror type. - This API is not applicable to screen-sharing streams. - When using an external renderer, you can set mirrorType to 0 and 3, but you cannot set it to 1. - Before you call this API, the initial states of each video stream are as follows:
inherited
setLocalVoiceEqualization(VoiceEqualizationConfig voiceEqualizationConfig) Future<int?>
@detail api @author wangjunzheng @brief Set the equalization effect for the local captured audio. The audio includes both internal captured audio and external captured voice, but not the mixing audio file. @param voiceEqualizationConfig See VoiceEqualizationConfig{@link #VoiceEqualizationConfig}. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note According to the Nyquist acquisition rate, the audio acquisition rate must be greater than twice the set center frequency. Otherwise, the setting will not be effective.
inherited
setLocalVoicePitch(int pitch) Future<int?>
@detail api @author wangjunzheng @brief Change local voice to a different key, mostly used in Karaoke scenarios.
You can raise or lower the pitch of the local voice with this method. @param pitch The amount by which the pitch is raised or lowered relative to the original local voice, within a range from -12 to 12. The default value is 0, i.e., no adjustment is made.
The difference in pitch between two adjacent values within the value range is a semitone, with positive values indicating an ascending tone and negative values indicating a descending tone, and the larger the absolute value set, the more the pitch is raised or lowered.
Out of the value range, the setting fails and triggers onWarning{@link #IRTCEngineEventHandler#onWarning} callback, indicating WARNING_CODE_SET_SCREEN_STREAM_INVALID_VOICE_PITCH for invalid value setting with WarningCode{@link #WarningCode}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details
inherited
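A one-line example of raising the local voice by two semitones, e.g. for a karaoke scene:

  Future<void> raiseVoicePitch(RTCEngine engine) async {
    // The valid range is [-12, 12]; out-of-range values only trigger onWarning.
    await engine.setLocalVoicePitch(2);
  }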
setLocalVoiceReverbParam(VoiceReverbConfig config) Future<int?>
@detail api @author wangjunzheng @brief Set the reverb effect for the local captured audio. The audio includes both internal captured audio and external captured voice, but not the mixing audio file. @param config See VoiceReverbConfig{@link #VoiceReverbConfig}. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note Call enableLocalVoiceReverb{@link #RTCEngine#enableLocalVoiceReverb} to enable the reverb effect.
inherited
setPlaybackVolume(int volume) Future<int?>
@detail api @author huangshouqin @brief Adjusts the local playback volume of the mixed audio of all remote users. You can call this API before or during playback. @param volume Ratio (%) of the playback volume to the original volume, in the range [0, 400], with overflow protection.
To ensure the audio quality, we recommend setting the volume to 100.
- 0: mute - 100: original volume - 400: Four times the original volume with signal-clipping protection. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note Suppose a remote user A is always within the range of the target user whose playback volume will be adjusted, if you use both this method and setRemoteAudioPlaybackVolume{@link #RTCEngine#setRemoteAudioPlaybackVolume}/setRemoteRoomAudioPlaybackVolume{@link #RTCRoom#setRemoteRoomAudioPlaybackVolume}, the volume that the local user hears from user A is the overlay of both settings.
inherited
setPublishFallbackOption(PublishFallbackOption option) Future<int?>
@detail api @author panjian.fishing @brief Sets the fallback option for published audio & video streams.
You can call this API to set whether to automatically lower the resolution of the published streams under limited network conditions. @param option Fallback option. See PublishFallbackOption{@link #PublishFallbackOption}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - This API only works after you call setLocalSimulcastMode{@link #RTCEngine#setLocalSimulcastMode} to enable publishing multiple streams. - You must call this API before entering the room. - After calling this method, if performance degrades or recovers due to poor device performance or network conditions, the local end will receive early warnings through the onPerformanceAlarms{@link #IRTCEngineEventHandler#onPerformanceAlarms} callback so that you can adjust the capture device. - After you allow the video stream to fall back, your stream subscribers will receive onSimulcastSubscribeFallback{@link #IRTCEngineEventHandler#onSimulcastSubscribeFallback} when the resolution of your published stream is lowered or restored. - You can alternatively set fallback options via configuration delivered from the server side, which has higher priority.
inherited
setRemoteAudioPlaybackVolume({required string streamId, required int volume}) Future<int?>
@detail api @author huanghao @brief Sets the playback volume of a received remote stream. You must join the room before calling the API. The validity of the setting is not associated with the publishing status of the stream. @param streamId Stream ID, used to specify the remote stream whose volume is to be adjusted. @param volume The ratio of the playback volume to the original volume. The range is [0, 400] with overflow protection. The unit is %.
For better audio quality, we recommend setting the value within [0, 100]. @return result
- 0: Success. - <0: Failure. See ReturnStatus{@link #ReturnStatus}. @note Assume that a remote user A is always within the scope of the adjusted target users:
- When this API is used together with setRemoteRoomAudioPlaybackVolume{@link #RTCRoom#setRemoteRoomAudioPlaybackVolume}, the volume of local listening user A is the volume set by the API called later; - When this API is used together with the setPlaybackVolume{@link #RTCEngine#setPlaybackVolume}, the volume of local listening user A will be the superposition of the two set volume effects. - When you call this API to set the remote stream volume, if the remote user leaves the room, the setting will be invalid.
inherited
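A hedged example of halving the playback volume of a single subscribed remote stream; the stream ID is a placeholder:

  Future<void> duckRemoteStream(RTCEngine engine) async {
    await engine.setRemoteAudioPlaybackVolume(
      streamId: 'remote-stream-id',  // placeholder: use the ID of the stream you subscribed to
      volume: 50,                    // % of the original volume, range [0, 400]
    );
  }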
setRemoteUserPriority({required String roomId, required String uid, required RemoteUserPriority priority}) Future<int>
Set remote user priority
setRemoteVideoCanvas(string streamId, VideoCanvas videoCanvas) Future<int?>
@detail api @hiddensdk(audiosdk) @author sunhang.io @brief Sets the view to be used for remote video rendering and the rendering mode.
To unbind the canvas, set videoCanvas to Null. @param streamId Stream ID, used to specify the video stream for which the view and rendering mode need to be set. @param videoCanvas View information and rendering mode. See VideoCanvas{@link #VideoCanvas}. Starting from version 3.56, you can set the rotation angle of the remote video rendering using renderRotation. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note When the local user leaves the room, the setting will be invalid. The remote user leaving the room does not affect the setting.
inherited
setRemoteVideoMirrorType(string streamId, RemoteMirrorType mirrorType) Future<int?>
@detail api @hiddensdk(audiosdk) @valid since 3.57 @region Video Management @brief When using internal rendering, enable mirroring for the remote stream. @param streamId Stream ID, used to specify the video stream that needs to be mirrored. @param mirrorType The mirror type for the remote stream, see RemoteMirrorType{@link #RemoteMirrorType}. @return - 0: Successful call. - < 0: Call failed, see ReturnStatus{@link #ReturnStatus} for more error details.
inherited
setRemoteVideoSuperResolution({required string streamId, required VideoSuperResolutionMode mode}) Future<int?>
@hidden for internal use only @detail api @hiddensdk(audiosdk) @author yinkaisheng @brief Sets the super resolution mode for remote video stream. @param streamId Stream ID, used to specify the video stream for which the super resolution mode needs to be set. @param mode Super resolution mode. See VideoSuperResolutionMode{@link #VideoSuperResolutionMode}. @return.
- 0: RETURN_STATUS_SUCCESS. It does not indicate the actual status of the super resolution mode, you should refer to onRemoteVideoSuperResolutionModeChanged{@link #IRTCEngineEventHandler#onRemoteVideoSuperResolutionModeChanged} callback. - -1: RETURN_STATUS_NATIVE_IN_VALID. Native library is not loaded. - -2: RETURN_STATUS_PARAMETER_ERR. Invalid parameter. - -9: RETURN_STATUS_SCREEN_NOT_SUPPORT. Failure. Screen stream is not supported. See ReturnStatus{@link #ReturnStatus} for more return value indications. @note - Call this API after joining room. - The original resolution of the remote video stream should not exceed 640 × 360 pixels. - You can only turn on super-resolution mode for one stream.
inherited
setRTCEngineEventHandler(IRTCEngineEventHandler handler) Future<int>
Set RTC Engine event handler
setRuntimeParameters(dynamic params) Future<int?>
@detail api @author panjian.fishing @brief Sets runtime parameters. @param params Reserved parameters. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note Call this API before joinRoom{@link #RTCRoom#joinRoom} and startAudioCapture{@link #RTCEngine#startAudioCapture}.
inherited
setScreenAudioSourceType(AudioSourceType sourceType) Future<int?>
@detail api @author liyi.000 @brief Sets the screen audio source type. (internal capture/custom capture) @param sourceType Screen audio source type. See AudioSourceType{@link #AudioSourceType}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - The default screen audio source type is RTC SDK internal capture. - You should call this API before calling publishScreenAudio. Otherwise, you will receive onWarning{@link #IRTCEngineEventHandler#onWarning} with 'WARNING_CODE_SET_SCREEN_AUDIO_SOURCE_TYPE_FAILED'. - When using internal capture, you need to call startScreenCapture to start capturing. After that, as you switch to an external source by calling this API, the internal capture will stop. - When using custom capture, you need to call pushScreenAudioFrame{@link #RTCEngine#pushScreenAudioFrame} to push the audio stream to the RTC SDK. - Whether you use internal capture or custom capture, you must call publishScreenAudio to publish the captured screen audio stream. @order 5
inherited
setScreenCaptureVolume(int volume) Future<int?>
@valid Available since 3.60. @detail api @author shiyayun @brief Adjusts the volume of audio captured during screen sharing.
This method only changes the volume of the audio data and does not affect the hardware volume of the local device. @param volume The ratio of the capture volume to the original volume, in the range of [0, 400], in %, with built-in overflow protection.
To ensure better call quality, we recommend keeping the volume within [0, 100].
- 0: Mute - 100: Original volume - 400: Four times the original volume with signal-clipping protection. @return - 0: Success. - < 0: Fail. See ReturnStatus{@link #ReturnStatus} for more details. @note You can use this interface to set the capture volume before or after enabling screen audio capture.
inherited
setServerParams({required string signature, required string url}) Future<int?>
@detail api @author hanchenchen.c @brief Set application server parameters
Before the client calls sendServerMessage{@link #RTCEngine#sendServerMessage} or sendServerBinaryMessage{@link #RTCEngine#sendServerBinaryMessage} to send a message to the application server, you must set a valid signature and application server address. @param signature Dynamic signature. The application server may use the signature to verify the source of messages.
You need to define the signature yourself. It can be any non-empty string. It is recommended to encode information such as UID into the signature.
The signature will be sent to the address set through the "url" parameter in the form of a POST request. @param url The address of the application server @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - The user must call login{@link #RTCEngine#login} to log in before calling this interface. - After calling this interface, the SDK will use onServerParamsSetResult{@link #IRTCEngineEventHandler#onServerParamsSetResult} to return the corresponding result.
inherited
setSubscribeFallbackOption(SubscribeFallbackOptions option) Future<int?>
@detail api @author panjian.fishing @brief Sets the fallback option for subscribed RTC streams.
You can call this API to set whether to lower the resolution of the currently subscribed stream under limited network conditions. @param option Fallback option. See SubscribeFallbackOptions{@link #SubscribeFallbackOptions} for more details. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details. @note - You must call this API before entering the room. - After you enable the fallback, you will receive onSimulcastSubscribeFallback{@link #IRTCEngineEventHandler#onSimulcastSubscribeFallback} and onRemoteVideoSizeChanged{@link #IRTCEngineEventHandler#onRemoteVideoSizeChanged} when the resolution of your subscribed stream is lowered or restored. - You can alternatively set fallback options via configuration delivered from the server side, which has higher priority.
inherited
setVideoCaptureConfig(VideoCaptureConfig videoCaptureConfig) Future<int?>
Set video capture config
override
setVideoCaptureRotation(VideoRotation rotation) Future<int?>
@detail api @hiddensdk(audiosdk) @brief Set the rotation of the video images captured from the local device.
Call this API to rotate the videos when the camera is fixed upside down or tilted. For rotating videos on a phone, we recommend using setVideoRotationMode{@link #RTCEngine#setVideoRotationMode}. @param rotation It defaults to VIDEO_ROTATION_0(0), which means no rotation. Refer to VideoRotation{@link #VideoRotation}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - For videos captured by the internal module, the rotation will be combined with that set by calling setVideoRotationMode{@link #RTCEngine#setVideoRotationMode}. - This API also affects external-sourced videos: the final rotation is the original rotation angle plus the rotation set by calling this API. - The elements added during the video pre-processing stage, such as video stickers and the background applied using enableVirtualBackground{@link #IVideoEffect#enableVirtualBackground}, will also be rotated by this API. - The rotation is applied to both locally rendered videos and those sent out. However, if you need to rotate only the video intended for pushing to CDN, use setVideoOrientation{@link #RTCEngine#setVideoOrientation}.
inherited
setVideoDecoderConfig({required string streamId, required VideoDecoderConfig config}) Future<int?>
@detail api @brief Before subscribing to the remote video stream, set the remote video data decoding method @param streamId The remote stream ID specifies which video stream to decode. @param config Video decoding method. See VideoDecoderConfig{@link #VideoDecoderConfig}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - When you want to custom decode a remote stream, you need to call registerRemoteEncodedVideoFrameObserver{@link #RTCEngine#registerRemoteEncodedVideoFrameObserver} to register the remote video stream monitor, and then call the interface to set the decoding method to custom decoding. The monitored video data will be called back through onRemoteEncodedVideoFrame{@link #IRemoteEncodedVideoFrameObserver#onRemoteEncodedVideoFrame}. - Since version 3.56, for automatic subscription, you can set the RoomId and UserId of key as nullptr. In this case, the decoding settings set by calling the API applies to all remote main streams or screen sharing streams based on the StreamIndex value of key.
inherited
setVideoDenoiser({required VideoDenoiseMode mode}) Future<int?>
@hidden for internal use only @detail api @hiddensdk(audiosdk) @author Yujianli @brief Sets the video noise reduction mode. @param mode Video noise reduction mode. Refer to VideoDenoiseMode{@link #VideoDenoiseMode} for more details. @return - 0: Success. Please refer to onVideoDenoiseModeChanged{@link #IRTCEngineEventHandler#onVideoDenoiseModeChanged} callback for the actual state of video noise reduction mode. - < 0: Failure.
inherited
setVideoDigitalZoomConfig({required ZoomConfigType type, required float size}) Future<int?>
@valid since 3.51 @detail api @hiddensdk(audiosdk) @author likai.666 @brief Set the step size for each digital zooming control to the local videos. @param type Required. Identifying which type the size is referring to. Refer to ZoomConfigType{@link #ZoomConfigType}. @param size Required. Reserved to three decimal places. It defaults to 0.
The meaning and range vary by type. If the scale or moving distance exceeds the range, the limit is taken as the result.
- kZoomFocusOffset: Increment or decrement of the scaling factor. Range: [0, 7]. For example, when it is set to 0.5 and setVideoDigitalZoomControl{@link #RTCEngine#setVideoDigitalZoomControl} is called to zoom in, the scale will increase by 0.5. The scale ranges within [1, 8] and defaults to 1, which means the original size. - kZoomMoveOffset: Ratio of the moving distance to the border of the video image. It ranges within [0, 0.5] and defaults to 0, which means no offset. When you call setVideoDigitalZoomControl{@link #RTCEngine#setVideoDigitalZoomControl} and choose CAMERA_MOVE_LEFT, the moving distance is size x original width; for CAMERA_MOVE_UP, the moving distance is size x original height. Suppose that a video spans 1080 px and the size is set to 0.5; the distance would be 0.5 x 1080 px = 540 px. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - Only one size can be set in a single call. You must call this API once for each size if you intend to set multiple sizes. - As the default size is 0, you must call this API before performing any digital zoom control via setVideoDigitalZoomControl{@link #RTCEngine#setVideoDigitalZoomControl} or startVideoDigitalZoomControl{@link #RTCEngine#startVideoDigitalZoomControl}.
inherited
setVideoDigitalZoomControl(ZoomDirectionType direction) Future<int?>
@valid since 3.51 @detail api @hiddensdk(audiosdk) @author likai.666 @brief Digitally zooms or moves the local video image once. This action affects both the local video preview and the published stream. @param direction Action of the digital zoom control. Refer to ZoomDirectionType{@link #ZoomDirectionType}. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - As the default offset is 0, you must call setVideoDigitalZoomConfig{@link #RTCEngine#setVideoDigitalZoomConfig} before this API. - You can only move video images after they are magnified via this API or startVideoDigitalZoomControl{@link #RTCEngine#startVideoDigitalZoomControl}. - When you request an out-of-range scale or movement, the SDK applies it within the limits; for example, once the image has been moved to the border, zoomed out to the original size, or magnified to 8x, further movement or zooming in that direction has no effect. - Call startVideoDigitalZoomControl{@link #RTCEngine#startVideoDigitalZoomControl} for continuous and repeated digital zoom control. - Refer to setCameraZoomRatio{@link #RTCEngine#setCameraZoomRatio} if you intend to apply optical zoom to the camera.
inherited
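A sketch of the two-step digital zoom flow described above: configure a step size first, then issue a control action. The enum member spellings below (ZoomConfigType.kZoomFocusOffset, ZoomDirectionType.cameraZoomIn) are assumptions based on the constants named in the docs; check the actual ZoomConfigType/ZoomDirectionType definitions in your SDK version:

  Future<void> digitalZoomInOnce(RTCEngine engine) async {
    // Each subsequent zoom action changes the scale by 0.5 (scale range [1, 8]).
    await engine.setVideoDigitalZoomConfig(
      type: ZoomConfigType.kZoomFocusOffset,   // assumed member name
      size: 0.5,
    );
    // Zoom in once; this affects both the local preview and the published stream.
    await engine.setVideoDigitalZoomControl(ZoomDirectionType.cameraZoomIn);   // assumed member name
  }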
setVideoEncoderConfig(VideoEncoderConfig encoderConfig) Future<int?>
@detail api @brief Sets the expected quality of the video stream by specifying the resolution, frame rate, bitrate, and the fallback strategy when the network is poor. @param encoderConfig See VideoEncoderConfig{@link #VideoEncoderConfig}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details. @note - Since V3.61, this method can only set a single profile for the video stream. If you intend to publish the stream in multiple qualities, use setLocalSimulcastMode{@link #RTCEngine#setLocalSimulcastMode}. - Without calling this method, only one stream will be sent with a profile of 640px × 360px @15fps. The default encoding preference is frame rate-first. - If you use an external video source, you can also use this method to set the encoding parameters.
setVideoOrientation(VideoOrientation orientation) Future<int?>
@detail api @hiddensdk(audiosdk) @author wangjunlin.3182 @brief Sets the orientation of the video frame before custom video processing and encoding. The default value is Adaptive.
You should set the orientation to Portrait when using video effects or custom processing.
You should set the orientation to Portrait or Landscape when pushing a single stream to the CDN. @param orientation Orientation of the video frame. See VideoOrientation{@link #VideoOrientation}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - The orientation setting is only applicable to internal captured video sources. For custom captured video sources, setting the video frame orientation may result in errors, such as swapping width and height. Screen sources do not support video frame orientation setting. - We recommend setting the orientation before joining room. The updates of encoding configurations and the orientation are asynchronous, therefore can cause a brief malfunction in preview if you change the orientation after joining room.
inherited
setVideoRotationMode(VideoRotationMode rotationMode) Future<int?>
@detail api @hiddensdk(audiosdk) @brief Set the orientation of the video capture. By default, the App direction is used as the orientation reference.
During rendering, the receiving client rotates the video in the same way as the sending client did. @param rotationMode The rotation reference can be the orientation of the App or gravity. Refer to VideoRotationMode{@link #VideoRotationMode} for details. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - The orientation setting is effective for internal video capture only. That is, it does not apply to the custom video source or the screen-sharing stream. - If video capture is on, the setting takes effect as soon as you call this API. If video capture is off, the setting takes effect when capture starts.
inherited
setVideoSourceType({required VideoSourceType type}) Future<int?>
@detail api @hiddensdk(audiosdk) @author liuyangyang @brief Set the video source, including the screen recordings.
The internal video capture is the default, which refers to capturing video using the built-in module. @param type Video source type. Refer to VideoSourceType{@link #VideoSourceType} for more details. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - You can call this API whether the user is in a room or not. - Calling this API to switch to the custom video source will stop the enabled internal video capture. - To switch to internal video capture, call this API to stop custom capture and then call startVideoCapture{@link #RTCEngine#startVideoCapture} to enable internal video capture. - To push custom encoded video frames to the SDK, call this API to switch VideoSourceType to VIDEO_SOURCE_TYPE_ENCODED_WITH_SIMULCAST(2) or VIDEO_SOURCE_TYPE_ENCODED_WITHOUT_SIMULCAST(3).
inherited
setVideoWatermark(string imagePath, WatermarkConfig watermarkConfig) Future<int?>
@detail api @hiddensdk(audiosdk) @author zhushufan.ref @brief Adds watermark to designated video stream. @param imagePath File path of the watermark image. You can use the absolute path, the asset path(/assets/xx.png), or the URI path(content://). The path should be less than 512 bytes.
The watermark image should be in PNG or JPG format. @param watermarkConfig Watermark configurations. See WatermarkConfig{@link #WatermarkConfig}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - Call clearVideoWatermark{@link #RTCEngine#clearVideoWatermark} to remove the watermark from the designated video stream. - You can only add one watermark to one video stream. The newly added watermark replaces the previous one. You can call this API multiple times to add watermarks to different streams. - You can call this API before or after joining the room. - If you mirror the preview, or the preview and the published stream, the watermark will also be mirrored locally, but the published watermark will not be mirrored. - When you enable simulcast mode, the watermark will be added to all video streams, and it will be scaled down to smaller encoding configurations accordingly.
inherited
setVoiceChangerType(VoiceChangerType voiceChanger) Future<int?>
@valid since 3.32 @detail api @author wangjunzheng @brief Set the sound change effect type @param voiceChanger The sound change effect type. See VoiceChangerType{@link #VoiceChangerType} @return API call result:
- 0: Success. - <0: Failure. See ReturnStatus{@link #ReturnStatus} for specific reasons. @note - To use this feature, you need to integrate the SAMI library. See On-Demand Plugin Integration. - You can call it before and after entering the room. - Effective for both internal and external audio source. - Only valid for mono-channel audio. - Mutually exclusive with setVoiceReverbType{@link #RTCEngine#setVoiceReverbType}, and the effects set later will override the effects set first.
inherited
setVoiceReverbType(VoiceReverbType voiceReverb) Future<int?>
@valid since 3.32 @detail api @author wangjunzheng @brief Set the reverb effect type @param voiceReverb Reverb effect type. See VoiceReverbType{@link #VoiceReverbType} @return API call result:
- 0: Success. - <0: Failure. See ReturnStatus{@link #ReturnStatus} for specific reasons. @note - You can call it before and after entering the room. - Effective for both internal and external audio source. - Only valid for mono-channel audio. - Mutually exclusive with setVoiceChangerType{@link #RTCEngine#setVoiceChangerType}, and the effects set later will override the effects set first.
inherited
startAudioCapture() Future<int?>
@detail api @author dixing @brief Start internal audio capture. The default is off.
Internal audio capture refers to: capturing audio using the built-in module.
The local client will be informed via onAudioDeviceStateChanged{@link #IRTCEngineEventHandler#onAudioDeviceStateChanged} after starting audio capture by calling this API.
The remote clients in the room will be informed of the state change via onUserStartAudioCapture{@link #IRTCEngineEventHandler#onUserStartAudioCapture} after the visible user starts audio capture by calling this API. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - Enabling a microphone without the user's permission will trigger onWarning{@link #IRTCEngineEventHandler#onWarning}. - Call stopAudioCapture{@link #RTCEngine#stopAudioCapture} to stop the internal audio capture. Otherwise, the internal audio capture will continue until you destroy the engine instance. - To mute and unmute microphones, we recommend using publishStreamAudio{@link #RTCRoom#publishStreamAudio} rather than stopAudioCapture{@link #RTCEngine#stopAudioCapture} and this API, because starting and stopping capture devices often takes some time waiting for the device to respond, which may lead to a short silence during the communication. - To switch from custom to internal audio capture, stop publishing before disabling the custom audio capture module, and then call this API to enable the internal audio capture.
inherited
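A minimal capture lifecycle sketch; stopAudioCapture is assumed to take no arguments, matching how it is referenced above:

  Future<void> runMicCapture(RTCEngine engine) async {
    await engine.startAudioCapture();   // the local side gets onAudioDeviceStateChanged
    // ... publish and communicate ...
    await engine.stopAudioCapture();    // otherwise capture lasts until the engine is destroyed
  }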
startAudioRecording(AudioRecordingConfig config) Future<int?>
@detail api @author huangshouqin @brief Starts recording audio communication and generates a local file.
If you call this API before or after joining the room without internal audio capture, the recording task can still begin but no data will be recorded in the local file. Only after you call startAudioCapture{@link #RTCEngine#startAudioCapture} to enable internal audio capture will the data be recorded in the local file. @param config See AudioRecordingConfig{@link #AudioRecordingConfig}. @return - 0: Success - -2: Invalid parameters - -3: Not valid in this SDK. Please contact the technical support. @note - The recording includes all audio effects, but not the mixed audio file. - Call stopAudioRecording{@link #RTCEngine#stopAudioRecording} to stop recording. - You can call this API before or after joining the room. If this API is called before you join the room, you need to call stopAudioRecording{@link #RTCEngine#stopAudioRecording} to stop recording. If this API is called after you join the room, the recording task ends automatically. If you join multiple rooms, audio from all rooms is recorded in one file. - After calling the API, you'll receive onAudioRecordingStateUpdate{@link #IRTCEngineEventHandler#onAudioRecordingStateUpdate}.
inherited
startCloudProxy(List<CloudProxyInfo> cloudProxiesInfo) Future<int?>
@detail api @author daining.nemo @brief Starts cloud proxy. @param cloudProxiesInfo Cloud proxy information list. See CloudProxyInfo{@link #CloudProxyInfo}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - Call this API before joining the room. - Start pre-call network detection after starting cloud proxy. - After the cloud proxy is started and the cloud proxy server is connected successfully, you receive onCloudProxyConnected{@link #IRTCEngineEventHandler#onCloudProxyConnected}. - To stop cloud proxy, call stopCloudProxy{@link #RTCEngine#stopCloudProxy}.
inherited
startEchoTest({required EchoTestConfig config, required int delayTime}) Future<int?>
Not supported yet.
startFileRecording({required RecordingConfig config, required RecordingType recordingType}) Future<int?>
@detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief This method records the audio & video data during the call to a local file. @param config Local recording parameter configuration. See RecordingConfig{@link #RecordingConfig} @param recordingType Locally recorded media type, see RecordingType{@link #RecordingType}
Note: The screen stream only supports recording video (RECORD_VIDEO_ONLY); the main stream supports recording all types. @return 0: Normal
-1: Parameter setting exception
-2: The current version of the SDK does not support this feature; please contact technical support. @note - You must join a room before calling this method. - When you call this method, you get an onRecordingStateUpdate{@link #IRTCEngineEventHandler#onRecordingStateUpdate} callback. - If the recording is normal, the system notifies the recording progress through the onRecordingProgressUpdate{@link #IRTCEngineEventHandler#onRecordingProgressUpdate} callback every second.
inherited
startHardwareEchoDetection(string testAudioFilePath) Future<int?>
@detail api @author zhangcaining @brief Starts echo detection before joining a room. @param testAudioFilePath Absolute path of the music file for the detection, encoded in UTF-8. The following formats are supported: mp3, aac, m4a, 3gp, wav.
We recommend using a music file whose duration is between 10 and 20 seconds.
Do not pass a silent file. @return Method call result:
- 0: Success. - -1: Failure due to the ongoing process of the previous detection. Call stopHardwareEchoDetection{@link #RTCEngine#stopHardwareEchoDetection} to stop it before calling this API again. - -2: Failure due to an invalid file path or file format. @note - You can use this feature only when ChannelProfile{@link #ChannelProfile} is set to CHANNEL_PROFILE_MEETING or CHANNEL_PROFILE_MEETING_ROOM. - Before calling this API, ask the user for permission to access the local audio devices. - Before calling this API, make sure the audio devices are activated, and keep the capture volume and the playback volume within a reasonable range. - The detection result is passed as the argument of onHardwareEchoDetectionResult. - During the detection, the SDK cannot respond to other testing APIs, such as startEchoTest{@link #RTCEngine#startEchoTest}, startAudioDeviceRecordTest{@link #IRTCAudioDeviceManager#startAudioDeviceRecordTest} or startAudioPlaybackDeviceTest{@link #IRTCAudioDeviceManager#startAudioPlaybackDeviceTest}. - Call stopHardwareEchoDetection{@link #RTCEngine#stopHardwareEchoDetection} to stop the detection and release the audio devices.
inherited
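A hedged sketch of running the pre-call echo detection; the audio file path is a placeholder, and the channel profile and permission prerequisites listed above still apply:

  Future<void> detectEcho(RTCEngine engine) async {
    final int? ret = await engine.startHardwareEchoDetection('/sdcard/Music/echo_test.m4a');
    if (ret == -1) {
      // A previous detection is still running; stop it via stopHardwareEchoDetection first.
    }
    // The result arrives via onHardwareEchoDetectionResult; call stopHardwareEchoDetection
    // afterwards to release the audio devices.
  }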
startNetworkDetection({required bool isTestUplink, required int expectedUplinkBitrate, required bool isTestDownlink, required int expectedDownlinkBitrate}) Future<int?>
@detail api @author hanchenchen.c @brief Enables pre-call network detection. @param isTestUplink Whether to detect uplink bandwidth. @param expectedUplinkBitrate Expected uplink bandwidth. Unit: kbps.
Range: {0, [100-10000]}; 0 means Auto, i.e. RTC will set the highest bitrate. @param isTestDownlink Whether to detect downlink bandwidth. @param expectedDownlinkBitrate Expected downlink bandwidth. Unit: kbps.
Range: {0, [100-10000]}; 0 means Auto, i.e. RTC will set the highest bitrate. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - After successfully calling this interface, you will receive onNetworkDetectionResult{@link #IRTCEngineEventHandler#onNetworkDetectionResult} within 3 s and every 2 s thereafter, notifying you of the probe results. - If the probe stops, you will receive onNetworkDetectionStopped{@link #IRTCEngineEventHandler#onNetworkDetectionStopped} notifying you that the probe has stopped.
inherited
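A minimal example of probing both directions before a call; the expected bitrates are arbitrary sample targets:

  Future<void> probeNetwork(RTCEngine engine) async {
    await engine.startNetworkDetection(
      isTestUplink: true,
      expectedUplinkBitrate: 2000,     // kbps; 0 means auto (highest bitrate)
      isTestDownlink: true,
      expectedDownlinkBitrate: 5000,   // kbps; 0 means auto (highest bitrate)
    );
    // Results arrive via onNetworkDetectionResult (first within 3 s, then every 2 s);
    // onNetworkDetectionStopped fires when the probe stops.
  }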
startPushMixedStream({required String taskId, required MixedStreamPushTargetConfig pushTargetConfig, required MixedStreamConfig mixedConfig}) Future<int?>
@hidden(Linux) @valid since 3.60. Since version 3.60, this interface replaces the startPushMixedStreamToCDN and startPushPublicStream methods for the functions described below. If you have upgraded to version 3.60 or later and are still using these two methods, please migrate to this interface. @detail api @author lizheng @brief Specifies the streams to be mixed and initiates the task that pushes the mixed stream to CDN or WTN. @param taskId Task ID. The length should not exceed 127 bytes.
You may want to push more than one mixed stream to CDN from the same room. In that case, use a different ID for each task; if you start only one task, you can use an empty string.
When PushTargetType = 1 (WTN stream), this parameter is invalid. Pass an empty string. @param pushTargetConfig Push target config, such as the push URL and WTN stream ID. See MixedStreamPushTargetConfig{@link #MixedStreamPushTargetConfig}. @param mixedConfig Configurations to be set when pushing streams to CDN or WTN. See MixedStreamConfig{@link #MixedStreamConfig}. @return - 0: Success. You will be notified of the task result and of events in the process of pushing the stream to CDN via onMixedStreamEvent{@link #IRTCEngineEventHandler#onMixedStreamEvent}. - !0: Fail. See ReturnStatus{@link #ReturnStatus} for more details. @note - Subscribe to the Push-to-CDN and the WTN stream notifications in the console to receive notifications about task status changes. When you call this API repeatedly, each subsequent call triggers both the TranscodeStarted and TranscodeUpdated callbacks. - Call stopPushMixedStream{@link #RTCEngine#stopPushMixedStream} to stop pushing streams to CDN. - Call updatePushMixedStream{@link #RTCEngine#updatePushMixedStream} to update part of the configurations of the task. - Call startPushSingleStream{@link #RTCEngine#startPushSingleStream} to push a single stream to CDN. See the usage sketch after this entry.
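A hedged sketch of starting a mixed-stream pushing task. The construction of MixedStreamPushTargetConfig and MixedStreamConfig is SDK-specific and not documented on this page, so both are taken as parameters that the caller builds elsewhere; the task ID is a placeholder.

```dart
// Sketch: start pushing a mixed stream to CDN/WTN with a caller-built config.
Future<void> startMixedPush(
  RTCEngine engine,
  MixedStreamPushTargetConfig pushTarget, // e.g. CDN URL or WTN stream ID
  MixedStreamConfig mixedConfig, // layout / audio / video settings
) async {
  final ret = await engine.startPushMixedStream(
    taskId: 'mixed_task_1', // unique per task; '' if you start only one task
    pushTargetConfig: pushTarget,
    mixedConfig: mixedConfig,
  );
  if (ret == 0) {
    // Task results and events are reported via onMixedStreamEvent.
    // Stop later with stopPushMixedStream('mixed_task_1', <MixedStreamPushTargetType>).
  } else {
    print('startPushMixedStream failed: $ret');
  }
}
```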
startPushSingleStream({required string taskId, required PushSingleStreamParam param}) Future<int?>
@hidden(Linux) @valid since 3.60. @detail api @hiddensdk(audiosdk) @brief Pushes a single media stream to CDN or an RTC room. @param taskId Task ID.
You may want to start more than one task to push streams to CDN. In that case, use a different ID for each task; if you start only one task, you can use an empty string. @param param Configurations for pushing a single stream to CDN. See PushSingleStreamParam{@link #PushSingleStreamParam}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details. @note - After calling this API, you will be informed of the result and of errors during the pushing process with onSingleStreamEvent{@link #IRTCEngineEventHandler#onSingleStreamEvent}. - Subscribe to the Push-to-CDN and the WTN stream notifications in the console to receive notifications about task status changes. When you call this API repeatedly, each subsequent call triggers both the TranscodeStarted and TranscodeUpdated callbacks. - Call stopPushSingleStream{@link #RTCEngine#stopPushSingleStream} to stop the task. - Because this API does not transcode the stream, the video stream pushed to RTMP changes with the publisher's resolution, encoding settings, and camera on/off state. A usage sketch follows this entry.
inherited
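A hedged sketch of the single-stream pushing task. PushSingleStreamParam construction is not documented on this page, so the parameter is built by the caller; the task ID is a placeholder.

```dart
// Sketch: push one media stream to CDN and stop it later.
Future<void> startSinglePush(RTCEngine engine, PushSingleStreamParam param) async {
  final ret = await engine.startPushSingleStream(
    taskId: 'single_task_1', // unique per task; '' if you start only one task
    param: param,
  );
  if (ret != 0) {
    print('startPushSingleStream failed: $ret');
    return;
  }

  // Progress and errors are reported via onSingleStreamEvent.
  // When the task is no longer needed:
  await engine.stopPushSingleStream('single_task_1');
}
```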
startScreenCapture(ScreenMediaType type) Future<int>
Start screen capture
startVideoCapture() Future<int?>
@detail api @hiddensdk(audiosdk) @author zhangzhenyu.samuel @brief Enable internal video capture immediately. The default setting is off.
Internal video capture refers to: capturing video using the built-in module.
The local client will be informed via onVideoDeviceStateChanged{@link #IRTCEngineEventHandler#onVideoDeviceStateChanged} after starting video capture by calling this API.
The remote clients in the room will be informed of the state change via onUserStartVideoCapture{@link #IRTCEngineEventHandler#onUserStartVideoCapture} after the visible client starts video capture by calling this API. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - Since the upgrade in v3.37.0, you need to add the Kotlin plugin to Gradle in your project to use this API. - Call stopVideoCapture{@link #RTCEngine#stopVideoCapture} to stop the internal video capture. Otherwise, the internal video capture continues until you destroy the engine instance. - Once you create the engine instance, you can start internal video capture regardless of the video publishing state. The video stream starts publishing only after the video capture starts. - To switch from custom to internal video capture, stop publishing before disabling the custom video capture module, and then call this API to enable the internal video capture. - Call switchCamera{@link #RTCEngine#switchCamera} to switch the camera used by the internal video capture module. - If the default video format cannot meet your requirements, contact our technical specialists for help with Cloud Config. After that, you can push and apply these configurations to Android clients at any time. A usage sketch follows this entry.
inherited
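A minimal sketch of toggling the internal (built-in) video capture, using only the two calls documented on this page. Event handler wiring is assumed to exist elsewhere.

```dart
// Sketch: start or stop internal video capture. State changes are reported
// locally via onVideoDeviceStateChanged and to remote users via
// onUserStartVideoCapture / onUserStopVideoCapture.
Future<void> setCameraCapture(RTCEngine engine, {required bool enable}) async {
  final action = enable ? 'start' : 'stop';
  final ret = enable
      ? await engine.startVideoCapture()
      : await engine.stopVideoCapture();
  if (ret != 0) {
    print('${action}VideoCapture failed: $ret');
  }
}
```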
startVideoDigitalZoomControl(ZoomDirectionType direction) Future<int?>
@valid since 3.51 @detail api @hiddensdk(audiosdk) @author likai.666 @brief Continuous, repeated digital zoom control. This action affects both the local video preview and the published stream. @param direction Action of the digital zoom control. Refer to ZoomDirectionType{@link #ZoomDirectionType}. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note - As the default offset is 0, you must call setVideoDigitalZoomConfig{@link #RTCEngine#setVideoDigitalZoomConfig} before this API. - You can only move video images after they are magnified via this API or setVideoDigitalZoomControl{@link #RTCEngine#setVideoDigitalZoomControl}. - The control process stops when the scale reaches the limit or the images have been moved to the border. If the next action would exceed the scale or movement range, the SDK executes it within those limits. - Call stopVideoDigitalZoomControl{@link #RTCEngine#stopVideoDigitalZoomControl} to stop the ongoing zoom control. - Call setVideoDigitalZoomControl{@link #RTCEngine#setVideoDigitalZoomControl} for a one-time digital zoom control. - Refer to setCameraZoomRatio{@link #RTCEngine#setCameraZoomRatio} if you intend to apply an optical zoom control to the camera. A usage sketch follows this entry.
inherited
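A hedged sketch of the continuous digital zoom pair. The ZoomDirectionType members are not listed on this page, so the direction is supplied by the caller, and the two-second hold is an arbitrary stand-in for a press-and-hold gesture; setVideoDigitalZoomConfig must have been called beforehand (its parameters are omitted here).

```dart
// Sketch: run a continuous digital zoom action while a control is held down.
Future<void> zoomWhileHeld(RTCEngine engine, ZoomDirectionType direction) async {
  // Start the repeated zoom/move action in the given direction.
  final ret = await engine.startVideoDigitalZoomControl(direction);
  if (ret != 0) {
    print('startVideoDigitalZoomControl failed: $ret');
    return;
  }

  // Placeholder for "user keeps the button pressed".
  await Future<void>.delayed(const Duration(seconds: 2));

  // Stop the ongoing zoom control when the control is released.
  await engine.stopVideoDigitalZoomControl();
}
```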
stopAudioCapture() Future<int?>
@detail api @author dixing @brief Stop internal audio capture. The default is off.
Internal audio capture refers to: capturing audio using the built-in module.
The local client will be informed via onAudioDeviceStateChanged{@link #IRTCEngineEventHandler#onAudioDeviceStateChanged} after stopping audio capture by calling this API.
The remote clients in the room will be informed of the state change via onUserStopAudioCapture{@link #IRTCEngineEventHandler#onUserStopAudioCapture} after the visible client stops audio capture by calling this API. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - Call startAudioCapture{@link #RTCEngine#startAudioCapture} to enable the internal audio capture. - If you do not call this API, the internal audio capture continues until you destroy the engine instance.
inherited
stopAudioRecording() Future<int?>
@detail api @author huangshouqin @brief Stop audio recording. @return - 0: Success - <0: Failure @note Call startAudioRecording{@link #RTCEngine#startAudioRecording} to start the recording task.
inherited
stopChorusCacheSync() Future<int?>
@hidden internal use only @detail api @hiddensdk(audiosdk) @brief Stop aligning RTC data by cache. @return See ReturnStatus{@link #ReturnStatus}.
inherited
stopClientMixedStream(string taskId) Future<int?>
@hidden for internal use only @hiddensdk(audiosdk)
inherited
stopCloudProxy() Future<int?>
@detail api @author daining.nemo @brief Stop cloud proxy @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note To start cloud proxy, call startCloudProxy{@link #RTCEngine#startCloudProxy}.
inherited
stopEchoTest() Future<int?>
@detail api @author qipengxiang @brief Stop the current call test.
After calling startEchoTest{@link #RTCEngine#startEchoTest}, you must call this API to stop the test. @return API call result:
- 0: Success. - -3: Failure, no test is in progress. @note After stopping the test with this API, all the system devices and streams are restored to the state they were in before the test.
inherited
stopFileRecording() Future<int?>
@detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief Stop local recording @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - After starting local recording by calling startFileRecording{@link #RTCEngine#startFileRecording}, you must call this method to stop recording. - After calling this method, you will receive an onRecordingStateUpdate{@link #IRTCEngineEventHandler#onRecordingStateUpdate} callback notifying you of the recording result.
inherited
stopHardwareEchoDetection() Future<int?>
@detail api @author zhangcaining @brief Stop the echo detection before joining a room. @return Method call result:
- 0: Success. - -1: Failure. @note - Refer to startHardwareEchoDetection{@link #RTCEngine#startHardwareEchoDetection} for information on how to start an echo detection. - We recommend calling this API to stop the detection once you get the detection result from onHardwareEchoDetectionResult{@link #IRTCEngineEventHandler#onHardwareEchoDetectionResult}. - You must stop the echo detection to release the audio devices before the user joins a room. Otherwise, the detection may interfere with the call.
inherited
stopNetworkDetection() Future<int?>
@detail api @author hanchenchen.c @brief Stop pre-call network detection @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - After calling this interface, you will receive an onNetworkDetectionStopped{@link #IRTCEngineEventHandler#onNetworkDetectionStopped} callback notifying you that the detection has stopped.
inherited
stopPushMixedStream(string taskId, MixedStreamPushTargetType targetType) Future<int?>
@valid since 3.60. Since version 3.60, this interface replaces the stopPushStreamToCDN method for stopping the push of mixed streams to CDN. If you have upgraded to version 3.60 or later and are still using that method, please migrate to this interface. @detail api @hiddensdk(audiosdk) @brief Stops the task started via startPushMixedStream{@link #RTCEngine#startPushMixedStream}. @param taskId Task ID. Specifies the pushing task you want to stop. @param targetType See MixedStreamPushTargetType{@link #MixedStreamPushTargetType}. @return - 0: Success - !0: Fail. See ReturnStatus{@link #ReturnStatus} for more details.
inherited
stopPushSingleStream(string taskId) Future<int?>
@valid since 3.60. Since version 3.60, this interface replaces the stopPushStreamToCDN method for stopping the push of single media streams to CDN. If you have upgraded to version 3.60 or later and are still using that method, please migrate to this interface. @detail api @hiddensdk(audiosdk) @author liujingchao @brief Stops the task of pushing a single media stream to CDN started via startPushSingleStream{@link #RTCEngine#startPushSingleStream}. @param taskId Task ID. Specifies the task you want to stop. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details
inherited
stopScreenCapture() Future<int?>
inherited
stopVideoCapture() Future<int?>
@detail api @hiddensdk(audiosdk) @author zhangzhenyu.samuel @brief Disable internal video capture immediately. The default is off.
Internal video capture refers to: capturing video using the built-in module.
The local client will be informed via onVideoDeviceStateChanged{@link #IRTCEngineEventHandler#onVideoDeviceStateChanged} after stopping video capture by calling this API.
The remote clients in the room will be informed of the state change via onUserStopVideoCapture{@link #IRTCEngineEventHandler#onUserStopVideoCapture} after the visible client stops video capture by calling this API. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - Call startVideoCapture{@link #RTCEngine#startVideoCapture} to enable the internal video capture. - If you do not call this API, the internal video capture continues until you destroy the engine instance.
inherited
stopVideoDigitalZoomControl() Future<int?>
@valid since 3.51 @detail api @hiddensdk(audiosdk) @author likai.666 @brief Stop the ongoing digital zoom control instantly. @return - 0: Success. - < 0: Failure. See ReturnStatus{@link #ReturnStatus} for more details. @note Refer to startVideoDigitalZoomControl{@link #RTCEngine#startVideoDigitalZoomControl} for starting digital zooming.
inherited
switchCamera(CameraId cameraId) Future<int?>
@detail api @hiddensdk(audiosdk) @author zhangzhenyu.samuel @brief Switch to the front-facing/back-facing camera used in the internal video capture
The local client will be informed via onVideoDeviceStateChanged{@link #IRTCEngineEventHandler#onVideoDeviceStateChanged} after calling this API. @param cameraId Camera ID. Refer to CameraId{@link #CameraId} for more details. @return - 0: Success - < 0: Failure @note - The front-facing camera is the default camera. - If internal video capture is on, the switch takes effect as soon as you call this API. If internal video capture is off, the setting takes effect when capture starts. See the usage sketch after this entry.
inherited
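A minimal sketch of switching the capture camera. The concrete CameraId members are not listed on this page, so the target camera is supplied by the caller.

```dart
// Sketch: switch the camera used by internal video capture.
Future<void> useCamera(RTCEngine engine, CameraId cameraId) async {
  final ret = await engine.switchCamera(cameraId);
  // Effective immediately if internal capture is on; otherwise applied when
  // capture starts. Confirmation arrives via onVideoDeviceStateChanged.
  if (ret != 0) {
    print('switchCamera failed: $ret');
  }
}
```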
takeLocalSnapshot(String filePath) → CancelableOperation<LocalSnapshot>
Take snapshot
takeLocalSnapshotToFile(string filePath) Future<int?>
@detail api @author wangfujun.911 @brief Takes a snapshot of the local video stream and saves it as a JPG file at the specified local path.
After calling this method, the SDK triggers onLocalSnapshotTakenToFile{@link #IRTCEngineEventHandler#onLocalSnapshotTakenToFile} to report whether the snapshot is taken successfully and provide details of the snapshot image. @param filePath The absolute file path where the snapshot JPG file will be saved. The file extension must be .jpg. Ensure that the directory exists and is writable. Example: /sdcard/Pictures/snapshot.jpg. @return The index of the local snapshot task, starting from 1. The index can be used to track the task status or perform other management operations. A usage sketch follows this entry.
inherited
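A minimal sketch of saving a local snapshot to a file. The path is a placeholder; the result details arrive via onLocalSnapshotTakenToFile.

```dart
// Sketch: save a JPG snapshot of the local video stream.
Future<void> snapshotLocal(RTCEngine engine) async {
  const path = '/sdcard/Pictures/snapshot.jpg'; // placeholder path, must be writable
  final taskIndex = await engine.takeLocalSnapshotToFile(path);
  // taskIndex starts from 1 and can be used to track this snapshot task;
  // success/failure is reported via onLocalSnapshotTakenToFile.
  print('local snapshot task index: $taskIndex');
}
```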
takeRemoteSnapshot(String streamId, String filePath) → CancelableOperation<RemoteSnapshot>
Take snapshot
takeRemoteSnapshotToFile(string streamId, string filePath) Future<int?>
@detail api @author wangfujun.911 @brief Takes a snapshot of the remote video stream and saves it as a JPG file at the specified local path.
After calling this method, the SDK triggers onRemoteSnapshotTakenToFile{@link #IRTCEngineEventHandler#onRemoteSnapshotTakenToFile} to report whether the snapshot is taken successfully and provide details of the snapshot image. @param streamId ID of the remote video stream. @param filePath The absolute file path where the snapshot JPG file will be saved. The file extension must be .jpg. Ensure that the directory exists and is writable. Example: /sdcard/Pictures/snapshot.jpg. @return The index of the remote snapshot task, starting from 1. The index can be used to track the task status or perform other management operations.
inherited
toString() String
A string representation of this object.
inherited
transformToPlatformConstructorArgs(List args, List<int> indices, Map<String, dynamic> typeMap, Map<String, dynamic> enumMap, Map<String, dynamic> classMap, String platformVar) List
Handles constructor arguments: converts packed enum/class values into the corresponding Android/iOS platform-side enum/class instances.
inherited
updateInstance(dynamic instance) → void
inherited
updateLocalVideoCanvas({required VideoRenderMode renderMode, required int backgroundColor}) Future<int?>
@detail api @hiddensdk(audiosdk) @author wangfujun.911 @brief Update the render mode and background color of local video rendering @param renderMode See VideoCanvas{@link #VideoCanvas}.renderMode @param backgroundColor See VideoCanvas{@link #VideoCanvas}.backgroundColor @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details. @note Calling this API during local video rendering takes effect immediately. A usage sketch follows this entry.
inherited
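A minimal sketch of updating the local render canvas. VideoRenderMode.hidden is used only because it appears as a default value elsewhere on this page; the background color is a placeholder.

```dart
// Sketch: change the local preview's render mode and background color
// while rendering is active.
Future<void> restyleLocalPreview(RTCEngine engine) async {
  final ret = await engine.updateLocalVideoCanvas(
    renderMode: VideoRenderMode.hidden, // member assumed from this page's defaults
    backgroundColor: 0x00000000, // placeholder background color
  );
  if (ret != 0) {
    print('updateLocalVideoCanvas failed: $ret');
  }
}
```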
updateLoginToken(string token) Future<int?>
@detail api @author hanchenchen.c @brief Update the login Token
The Token used for login has a limited validity period. When it expires, call this method to update the login Token.
If you call the login{@link #RTCEngine#login} method with an expired Token, the login fails and you receive an onLoginResult{@link #IRTCEngineEventHandler#onLoginResult} callback with the error code 'LOGIN_ERROR_CODE_INVALID_TOKEN'. In that case, reacquire the Token and call this method to update it. @param token
The updated dynamic key @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note - If the login failed because the Token was invalid, the SDK automatically logs in again after you update the Token with this method; you do not need to call the login{@link #RTCEngine#login} method yourself. - If you are already logged in when the Token expires, the current session is not affected. The expired-Token error is reported the next time you log in with that Token, or when you log in again after a disconnection caused by poor local network conditions. A usage sketch follows this entry.
inherited
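A hedged sketch of refreshing the login Token. fetchTokenFromAppServer() is a hypothetical helper standing in for your own token service; only updateLoginToken comes from this page.

```dart
// Hypothetical stand-in for your app server's token endpoint.
Future<String> fetchTokenFromAppServer() async => 'new-token-placeholder';

// Sketch: refresh the login token, e.g. after receiving an
// expired/invalid-token error in onLoginResult.
Future<void> refreshLoginToken(RTCEngine engine) async {
  final newToken = await fetchTokenFromAppServer();
  final ret = await engine.updateLoginToken(newToken);
  if (ret != 0) {
    print('updateLoginToken failed: $ret');
  }
  // If the previous login failed with LOGIN_ERROR_CODE_INVALID_TOKEN, the SDK
  // retries the login automatically once the token is updated.
}
```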
updatePushMixedStream({required String taskId, required MixedStreamPushTargetConfig pushTargetConfig, required MixedStreamConfig mixedConfig}) Future<int?>
Update push stream config
updateRemoteStreamVideoCanvas({required String streamId, VideoRotation rotation = VideoRotation.rotation0, VideoRenderMode renderMode = VideoRenderMode.hidden, int backgroundColor = 0x00000000}) Future<int?>
Update remote user view attributes
updateScreenCapture(ScreenMediaType type) Future<int?>
@detail api @hiddensdk(audiosdk) @author wangqianqian.1104 @brief Updates the media type of the internal screen capture. @param type Media type. See ScreenMediaType{@link #ScreenMediaType}. @return - 0: Success. - < 0 : Fail. See ReturnStatus{@link #ReturnStatus} for more details @note Call this API after calling startScreenCapture{@link #RTCEngine#startScreenCapture}.
inherited
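A minimal sketch of updating the media type of an ongoing internal screen capture. ScreenMediaType members are not listed on this page, so the value is supplied by the caller; startScreenCapture must have been called first.

```dart
// Sketch: change what the active screen-capture session carries
// (depending on the chosen ScreenMediaType).
Future<void> changeScreenCaptureType(RTCEngine engine, ScreenMediaType type) async {
  final ret = await engine.updateScreenCapture(type);
  if (ret != 0) {
    print('updateScreenCapture failed: $ret');
  }
}
```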

Operators

operator ==(Object other) bool
The equality operator.
inherited

Static Methods

createRTCEngine(RTCVideoContext context) Future<RTCEngine>
Create engine instance, return engine instance.
getSDKVersion() Future<String?>
Get SDK version
override
setLogConfig(RTCLogConfig logConfig) Future<int?>
Set log config
override
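A hedged bootstrap sketch using the three static methods above. The RTCVideoContext and RTCLogConfig instances are built by the caller (their fields are not documented on this page), and calling setLogConfig before engine creation is an assumption about ordering.

```dart
// Sketch: configure logging, create the engine, and read the SDK version.
Future<RTCEngine> bootstrapEngine(
  RTCVideoContext context,
  RTCLogConfig logConfig,
) async {
  // Assumed ordering: set the log config before creating the engine.
  await RTCEngine.setLogConfig(logConfig);

  final engine = await RTCEngine.createRTCEngine(context);

  final version = await RTCEngine.getSDKVersion();
  print('RTC SDK version: $version');

  return engine;
}
```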