ByteRTCEngine class

Inheritance
  • Object
  • NativeClass
  • ByteRTCEngine

Constructors

ByteRTCEngine([NativeClassOptions? options])

Properties

$resource → NativeResource
no setter, inherited
delegate → FutureOr<ByteRTCEngineDelegate?>
@detail callback
getter/setter pair
hashCode → int
The hash code for this object.
no setter, inherited
monitorDelegate → FutureOr<ByteRTCMonitorDelegate?>
@hidden @deprecated
getter/setter pair
ready → Future<void>
Completes when the instance is initialized.
no setter, inherited
runtimeType → Type
A representation of the runtime type of the object.
no setter, inherited

Methods

clearVideoWatermark() → FutureOr<int>
@detail api @hiddensdk(audiosdk) @author zhushufan.ref @brief Removes the video watermark from the designated video stream. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details
createGameRoom(NSString roomId, GameRoomConfig roomConfig) → FutureOr<ByteRTCGameRoom>
@detail api @author shenpengliang @brief Creates a game room instance.
This API only returns a room instance. You still need to call joinRoom:userInfo:{@link #ByteRTCGameRoom#joinRoom:userInfo} to actually create/join the room.
Each call of this API creates one ByteRTCGameRoom{@link #ByteRTCGameRoom} instance. Call this API as many times as the number of rooms you need, and then call joinRoom:userInfo:{@link #ByteRTCGameRoom#joinRoom:userInfo} on each ByteRTCGameRoom instance to join multiple rooms at the same time.
In multi-room mode, a user can subscribe to media streams in all joined rooms at the same time. @param roomId The string must match the regular expression: [a-zA-Z0-9_@\-\.]{1,128}. @param roomConfig The room configuration. See GameRoomConfig{@link #GameRoomConfig}. @return ByteRTCGameRoom{@link #ByteRTCGameRoom} instance. If you get NULL instead of a ByteRTCGameRoom instance, check that the roomId is valid and that a room with this roomId has not already been created. @note - If the room that you wish to join already exists, you still need to call this API first to create the ByteRTCGameRoom instance, and then call joinRoom:userInfo:{@link #ByteRTCGameRoom#joinRoom:userInfo}. - Do not create multiple rooms with the same roomId, otherwise the newly created room instance will replace the old one.
createRTCRoom(NSString roomId) → FutureOr<ByteRTCRoom>
@detail api @author shenpengliang @brief Creates an RTC room instance.
This API only returns a room instance. You still need to call joinRoom:userInfo:userVisibility:roomConfig:{@link #ByteRTCRoom#joinRoom:userInfo:userVisibility:roomConfig} to actually create/join the room.
Each call of this API creates one ByteRTCRoom{@link #ByteRTCRoom} instance. Call this API as many times as the number of rooms you need, and then call joinRoom:userInfo:userVisibility:roomConfig:{@link #ByteRTCRoom#joinRoom:userInfo:userVisibility:roomConfig} on each ByteRTCRoom instance to join multiple rooms at the same time.
In multi-room mode, a user can subscribe to media streams in all joined rooms at the same time. @param roomId The string must match the regular expression: [a-zA-Z0-9_@\-\.]{1,128}. @return ByteRTCRoom{@link #ByteRTCRoom} instance. If you get NULL instead of a ByteRTCRoom instance, check that the roomId is valid and that a room with this roomId has not already been created. @note - If the room that you wish to join already exists, you still need to call this API first to create the ByteRTCRoom instance, and then call joinRoom:userInfo:userVisibility:roomConfig:{@link #ByteRTCRoom#joinRoom:userInfo:userVisibility:roomConfig}. - Do not create multiple rooms with the same roomId, otherwise the newly created room instance will replace the old one. - To forward streams to other rooms, call startForwardStreamToRooms:{@link #ByteRTCRoom#startForwardStreamToRooms} instead of enabling multi-room mode.
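The multi-room flow above can be sketched in this Dart binding as follows. The joinRoom parameter shape and the UserInfo type are assumptions for illustration; check the ByteRTCRoom API of your SDK version.

```dart
// Minimal multi-room sketch, assuming a hypothetical joinRoom signature.
Future<void> joinTwoRooms(
    ByteRTCEngine engine, String tokenA, String tokenB, UserInfo info) async {
  // createRTCRoom only returns room instances; it does not join them.
  final ByteRTCRoom roomA = await engine.createRTCRoom('room_a');
  final ByteRTCRoom roomB = await engine.createRTCRoom('room_b');

  // Join each room to subscribe to streams in both at the same time.
  await roomA.joinRoom(tokenA, info); // hypothetical parameters
  await roomB.joinRoom(tokenB, info);
}
```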
createRTSRoom(NSString roomId) → FutureOr<ByteRTCRTSRoom>
@detail api @brief Creates an RTS room instance.
This API only returns an RTS room instance. You still need to call joinRTSRoom:userInfo:{@link #ByteRTCRTSRoom#joinRTSRoom:userInfo} to actually create/join the room.
Each call of this API creates one ByteRTCRTSRoom{@link #ByteRTCRTSRoom} instance. Call this API as many times as the number of rooms you need, and then call joinRTSRoom:userInfo:{@link #ByteRTCRTSRoom#joinRTSRoom:userInfo} on each ByteRTCRTSRoom instance to join multiple rooms at the same time.
In multi-room mode, a user can send and receive RTS messages in all joined rooms at the same time. @param roomId The string must match the regular expression: [a-zA-Z0-9_@\-\.]{1,128}. @return ByteRTCRTSRoom{@link #ByteRTCRTSRoom} instance. If you get NULL instead of a ByteRTCRTSRoom instance, check that the roomId is valid and that a room with this roomId has not already been created. @note - If the room that you wish to join already exists, you still need to call this API first to create the ByteRTCRTSRoom instance, and then call joinRTSRoom:userInfo:{@link #ByteRTCRTSRoom#joinRTSRoom:userInfo}. - Do not create multiple rooms with the same roomId, otherwise the newly created room instance will replace the old one.
destroy() → void
inherited
disableAlphaChannelVideoEncode() FutureOr<int>
@hidden(macOS) @valid since 3.58 @detail api @hiddensdk(audiosdk) @region Video Management @author zhuhongshuyu @brief Disables the Alpha channel encoding feature for custom captured video frames. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note This interface must be called after stopping publishing the video stream.
disableAudioFrameCallback(ByteRTCAudioFrameCallbackMethod method) FutureOr<int>
@detail api @author gongzhengduo @brief Disables audio data callback. @param method Audio data callback method. See ByteRTCAudioFrameCallbackMethod{@link #ByteRTCAudioFrameCallbackMethod}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note Call this API after calling enableAudioFrameCallback:format:{@link #ByteRTCEngine#enableAudioFrameCallback:format}.
disableAudioProcessor(ByteRTCAudioFrameMethod method) FutureOr<int>
@detail api @author gongzhengduo @brief Disable custom audio processing. @param method Audio Frame type. See ByteRTCAudioFrameMethod{@link #ByteRTCAudioFrameMethod}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details
enableAGC(BOOL enable) → FutureOr<int>
@hidden(iOS) @valid since 3.51 @detail api @author liuchuang @brief Turns on/off AGC (Analog Automatic Gain Control).
After AGC is enabled, the SDK can automatically adjust the microphone pickup volume to keep the output volume at a steady level. @param enable Whether to turn on AGC.
- true: AGC is turned on. - false: AGC is turned off, with DAGC (Digital Automatic Gain Control) still on. @return - 0: Success. - -1: Failure. @note You can call this method before and after joining the room. To turn on AGC before joining the room, you need to contact the technical support to get a private parameter to set ByteRTCRoomProfile{@link #ByteRTCRoomProfile}.
To enable AGC after joining the room, you must set ByteRTCRoomProfile{@link #ByteRTCRoomProfile} to ByteRTCRoomProfileMeeting, ByteRTCRoomProfileMeetingRoom, or ByteRTCRoomProfileClassroom.
It is not recommended to call setAudioCaptureDeviceVolume: to adjust the microphone pickup volume with AGC on.
enableAlphaChannelVideoEncode(ByteRTCAlphaLayout alphaLayout) FutureOr<int>
@hidden(macOS) @valid since 3.58 @detail api @hiddensdk(audiosdk) @region Video Management @author zhuhongshuyu @brief Enables the Alpha channel encoding feature for custom capture video frames.
Suitable for scenarios where the video subject needs to be separated from the background at the push-streaming end, and the background can be custom rendered at the receive-streaming end. @param alphaLayout The arrangement position of the separated Alpha channel relative to the RGB channel information. Currently, only ByteRTCAlphaLayout.ByteRTCAlphaLayoutTop is supported, which is positioned above the RGB channel information. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - This API only applies to custom capture video frames that use ByteRTCVideoPixelFormat.ByteRTCVideoPixelFormatCVPixelBuffer. - This API must be called before publishing the video stream. - After enabling Alpha channel encoding with this API, you need to call pushExternalVideoFrame:{@link #ByteRTCEngine#pushExternalVideoFrame} to push custom captured video frames to the RTC SDK. If a video frame format that is not supported is pushed, the call to pushExternalVideoFrame:{@link #ByteRTCEngine#pushExternalVideoFrame} will return the error code ByteRTCReturnStatus.ByteRTCReturnStatusParameterErr.
enableAudioAEDReport(NSInteger interval) → FutureOr<int>
@hidden 3.60 for internal use only @detail api @author gengjunjie @brief Enables AED (audio event detection). After that, you will receive rtcEngine:onAudioAEDStateUpdate. @param interval Callback interval, in milliseconds.
- <= 0: Disable the callback. - [100, 3000]: Enable the callback and set the reporting interval to this value. A value of 2000 is recommended. - Invalid values are clamped: values less than 100 are set to 100; values greater than 3000 are set to 3000. @return - 0: Success. - < 0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus}.
enableAudioDecoding(bool enable) → FutureOr<void>
@hidden for internal use only @region Custom audio capture and rendering @brief Sets whether to use SDK audio decoding. @param enable Whether to use audio decoding.
- true: Audio decoding is turned on. (Default) - false: Audio decoding is turned off. @note Call this API before registerRemoteEncodedAudioFrameObserver.
enableAudioEncoding(bool enable) → FutureOr<void>
@hidden for internal use only @region Custom audio capture and rendering @brief Sets whether to use SDK audio encoding. @param enable Whether to use audio encoding.
- true: Audio encoding is turned on. (Default) - false: Audio encoding is turned off. @note Call this API before pushExternalEncodedAudioFrame{@link #ByteRTCEngine#pushExternalEncodedAudioFrame}.
enableAudioFrameCallback(ByteRTCAudioFrameCallbackMethod method, ByteRTCAudioFormat format) FutureOr<int>
@detail api @author gongzhengduo @brief Enable audio frames callback and set the format for the specified type of audio frames. @param method Audio data callback method. See ByteRTCAudioFrameCallbackMethod{@link #ByteRTCAudioFrameCallbackMethod}.
If method is set as 0, 1, 2, or 5, set format to the accurate value listed in the audio parameters format.
If method is set as 3, set format to auto. @param format Audio parameters format. See ByteRTCAudioFormat{@link #ByteRTCAudioFormat}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note After calling this API and registerAudioFrameObserver:{@link #ByteRTCMediaPlayer#registerAudioFrameObserver}, ByteRTCAudioFrameObserver{@link #ByteRTCAudioFrameObserver} will receive the corresponding audio data callback. However, these two APIs are independent of each other and the calling order is not restricted.
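As a concrete illustration, a minimal Dart sketch that enables the callback for one method and format. The enum member name and the ByteRTCAudioFormat constructor shape are assumptions; check the actual definitions in your SDK version.

```dart
// Sketch: enable PCM callbacks for mixed playback audio at 48 kHz stereo.
Future<void> enableMixedPlaybackPcm(ByteRTCEngine engine) async {
  final int ret = await engine.enableAudioFrameCallback(
    ByteRTCAudioFrameCallbackMethod.mixedPlayback, // hypothetical member name
    ByteRTCAudioFormat(sampleRate: 48000, channel: 2), // hypothetical ctor
  );
  if (ret < 0) {
    // Inspect ByteRTCReturnStatus for the failure reason.
  }
}
```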
enableAudioProcessor(ByteRTCAudioFrameMethod method, ByteRTCAudioFormat format) FutureOr<int>
@detail api @author gongzhengduo @brief Enable audio frames callback for custom processing and set the format for the specified type of audio frames. @param method The types of audio frames. See ByteRTCAudioFrameMethod{@link #ByteRTCAudioFrameMethod}. Set this parameter to process multiple types of audio.
With different values, you will receive the corresponding callback:
- For locally captured audio, you will receive onProcessRecordAudioFrame:{@link #ByteRTCAudioFrameProcessor#onProcessRecordAudioFrame}. - For mixed remote audio, you will receive onProcessPlayBackAudioFrame:{@link #ByteRTCAudioFrameProcessor#onProcessPlayBackAudioFrame}. - For audio from remote users, you will receive onProcessRemoteUserAudioFrame:info:audioFrame:{@link #ByteRTCAudioFrameProcessor#onProcessRemoteUserAudioFrame:info:audioFrame}. - For SDK-level in-ear monitoring audio, you will receive onProcessEarMonitorAudioFrame:{@link #ByteRTCAudioFrameProcessor#onProcessEarMonitorAudioFrame} (Only on iOS). - For shared-screen audio, you will receive onProcessScreenAudioFrame:{@link #ByteRTCAudioFrameProcessor#onProcessScreenAudioFrame}. @param format The format of the returned audio frame. See ByteRTCAudioFormat{@link #ByteRTCAudioFormat}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - Before calling this API, call registerAudioProcessor:{@link #ByteRTCEngine#registerAudioProcessor} to register a processor. - To disable custom audio processing, call disableAudioProcessor:{@link #ByteRTCEngine#disableAudioProcessor}.
enableAudioPropertiesReport(ByteRTCAudioPropertiesConfig config) FutureOr<int>
@detail api @author wangjunzheng @brief Enables audio information prompts. After that, you will receive rtcEngine:onLocalAudioPropertiesReport:{@link #ByteRTCEngineDelegate#rtcEngine:onLocalAudioPropertiesReport}, rtcEngine:onRemoteAudioPropertiesReport:totalRemoteVolume:{@link #ByteRTCEngineDelegate#rtcEngine:onRemoteAudioPropertiesReport:totalRemoteVolume}, and rtcEngine:onActiveSpeaker:uid:{@link #ByteRTCEngineDelegate#rtcEngine:onActiveSpeaker:uid}. @param config See ByteRTCAudioPropertiesConfig{@link #ByteRTCAudioPropertiesConfig}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details
enableAudioVADReport(NSInteger interval) FutureOr<int>
@hidden 3.60 for internal use only @detail api @author gengjunjie @brief Enables audio voice detection. After that, you will receive rtcEngine:onAudioVADStateUpdate. @param interval Callback interval, in milliseconds.
- <= 0: Disable the callback. - [100, 3000]: Enable the callback and set the reporting interval to this value. - Invalid values are clamped: values less than 100 are set to 100; values greater than 3000 are set to 3000. @return - 0: Success. - < 0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus}.
enableCameraAutoExposureFaceMode(bool enable) FutureOr<int>
@hidden(macOS, Windows, Linux) @valid since 3.53 @detail api @author yinkaisheng @brief Enables or disables face auto exposure mode during internal video capture. This mode fixes the problem of the face being too dark under strong backlight, but it can also make the area outside the ROI region too bright or too dark. @param enable Whether to enable the mode. True by default for iOS, False by default for Android. @return - 0: Success. - !0: Failure. @note Calling this API takes effect immediately, whether before or after internal video capturing starts.
enableEffectBeauty(BOOL enable) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangjunlin.3182 @brief Enables/Disables basic beauty effects. @param enable Whether to enable basic beauty effects.
- YES: Enables basic beauty effects. - NO: (Default) Disables basic beauty effects. @return - 0: Success. - -1001: This method is not available for your current RTC SDK. - -12: This method is not available in the Audio SDK. - <0: Failure. Effect SDK internal error. For the specific error code, see the Error Code Table. @note - You cannot use the basic beauty effects and the advanced effect features at the same time. See how to use advanced effect features for more information. - You need to integrate the Effect SDK before calling this API. Effect SDK v4.4.2+ is recommended. - Call setBeautyIntensity:withIntensity:{@link #ByteRTCEngine#setBeautyIntensity:withIntensity} to set the beauty effect intensity. If you do not set the intensity before calling this API, the default intensity will be used. The default intensity values for each beauty mode are as follows: 0.7 for brightening, 0.8 for smoothing, 0.5 for sharpening, and 0.7 for clarity. - This API is not applicable to screen capturing.
enableExternalSoundCard(bool enable) FutureOr<int>
@detail api @author zhangyuanyuan.0101 @brief Enables the audio process mode for external sound card. @param enable
- true: enable - false: disable (by default) @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - When you use an external sound card for audio capture, enable this mode for better audio quality. - When using this mode, you can only use earphones. If you need to use the internal or an external speaker, disable this mode.
enableLocalVoiceReverb(bool enable) FutureOr<int>
@detail api @author wangjunzheng @brief Enable the reverb effect for the local captured voice. @param enable Whether to enable the reverb effect. @return - 0: Success. - < 0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note Call setLocalVoiceReverbParam:{@link #ByteRTCEngine#setLocalVoiceReverbParam} to set the reverb effect.
enablePlaybackDucking(BOOL enable) FutureOr<int>
@detail api @author majun.lvhiei @brief Enables/disables the playback ducking function. This function is usually used in scenarios where short videos or music will be played simultaneously during RTC calls.
With the function on, if remote voice is detected, the local media volume of RTC will be lowered to ensure the clarity of the remote voice. When the remote voice disappears, the local media volume of RTC is restored. @param enable Whether to enable playback ducking:
- YES: Yes - NO: No @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details
enableVocalInstrumentBalance(BOOL enable) FutureOr<int>
@detail api @author majun.lvhiei @brief Enables/disables the loudness equalization function.
If you call this API with the parameter set to True, the loudness of the user's voice will be adjusted to -16 LUFS. If you then also call setAudioMixingLoudness:loudness: and pass in the original loudness of the audio data used in audio mixing, the loudness will be adjusted to -20 LUFS when the audio data starts to play. @param enable Whether to enable the loudness equalization function:
- true: Yes - false: No @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note You must call this API before starting to play the audio file with startAudioMixing:filePath:config:.
feedback(ByteRTCProblemFeedbackOption types, ByteRTCProblemFeedbackInfo info) FutureOr<int>
@detail api @author wangzhanqiang @brief Reports user feedback to RTC after the call ends. @param types The list of preset problems. See ByteRTCProblemFeedbackOption{@link #ByteRTCProblemFeedbackOption} @param info Specific description of problems other than the preset ones, plus the room's information. See ByteRTCProblemFeedbackInfo{@link #ByteRTCProblemFeedbackInfo} @return - 0: Success. - -3: Failure. @note - If the user is in a room when reporting, the feedback is associated with the room(s) the user is currently in. - If the user is not in a room when reporting, the feedback is associated with the previously exited room.
getAudioDeviceManager() FutureOr<ByteRTCAudioDeviceManager>
@hidden(iOS) @detail api @author dixing @brief Gets the ByteRTCAudioDeviceManager{@link #ByteRTCAudioDeviceManager} instance. @return See ByteRTCAudioDeviceManager{@link #ByteRTCAudioDeviceManager}.
getAudioEffectPlayer() FutureOr<ByteRTCAudioEffectPlayer>
@valid since 3.53 @detail api @brief Create an instance for audio effect player. @return See ByteRTCAudioEffectPlayer{@link #ByteRTCAudioEffectPlayer}.
getAudioRoute() FutureOr<ByteRTCAudioRoute>
@hidden(macOS) @detail api @author dixing @brief Gets the information of currently-using playback route. @return See ByteRTCAudioRoute{@link #ByteRTCAudioRoute}. @note To set the audio playback route, see setAudioRoute:{@link #ByteRTCEngine#setAudioRoute}. For mobile only.
getCameraZoomMaxRatio() FutureOr<float>
@hidden(macOS) @detail api @brief Gets the maximum zoom magnification of the currently used camera (front/rear). @return The maximum zoom ratio that can be set.
getKTVManager() FutureOr<ByteRTCKTVManager>
@hidden currently not available @detail api @author lihuan.wuti2ha @brief Creates the KTV manager interfaces. @return KTV manager interfaces. See ByteRTCKTVManager{@link #ByteRTCKTVManager}.
getMediaPlayer(int playerId) FutureOr<ByteRTCMediaPlayer>
@valid since 3.53 @detail api @brief Create a media player instance. @param playerId Media player id. The range is [0, 3]. You can create up to 4 instances at the same time. If it exceeds the range, nullptr will be returned. @return Media player instance. See ByteRTCMediaPlayer{@link #ByteRTCMediaPlayer}.
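Since out-of-range ids are rejected, a small Dart guard like the following can make the [0, 3] constraint explicit; the null return on invalid ids follows the description above.

```dart
// Up to 4 media players may exist at once, with ids 0..3.
Future<ByteRTCMediaPlayer?> openPlayer(ByteRTCEngine engine, int playerId) async {
  if (playerId < 0 || playerId > 3) return null; // outside the documented range
  return engine.getMediaPlayer(playerId);
}
```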
getNativeHandle() FutureOr<void>
@detail api @brief Gets the IRTCEngine handle in the C++ layer. @return - > 0: Success. Returns the address of the IRTCEngine instance in the C++ layer. - NULL: Failure. @note In some scenarios, getting and working with IRTCEngine in the C++ layer has much higher execution efficiency than going through the OC encapsulation layer. Typical scenarios include custom processing of video/audio frames, encryption of audio and video calls, etc.
getNetworkTimeInfo() FutureOr<ByteRTCNetworkTimeInfo>
@detail api @author songxiaomeng.19 @brief Obtains the synchronized network time information. @return See ByteRTCNetworkTimeInfo{@link #ByteRTCNetworkTimeInfo}. @note - When you call this API for the first time, the SDK starts synchronizing the network time information and returns 0. After the synchronization finishes, you will receive rtcEngineOnNetworkTimeSynchronized:{@link #ByteRTCEngineDelegate#rtcEngineOnNetworkTimeSynchronized}. After that, calling this API will get you the correct network time. - In chorus scenarios, participants should start audio mixing at the same network time.
getPeerOnlineStatus(NSString peerUserId) FutureOr<int>
@detail api @author hanchenchen.c @brief Queries the login status of a remote user or the local user. @param peerUserId The user ID to query. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - You must log in by calling login:uid:{@link #ByteRTCEngine#login:uid} before calling this interface. - After calling this interface, the SDK notifies you of the query result via the rtcEngine:onGetPeerOnlineStatus:status:{@link #ByteRTCEngineDelegate#rtcEngine:onGetPeerOnlineStatus:status} callback. - Before sending an out-of-room message, you can use this interface to check whether the peer user is logged in, and so decide whether to send the message. You can also check your own login status through this interface.
getScreenCaptureSourceList() FutureOr<ByteRTCScreenCaptureSourceInfo>
@hidden(iOS) @detail api @author liyi.000 @brief Gets the list of shareable objects (application windows and screens). @return The list of shareable objects. See ByteRTCScreenCaptureSourceInfo{@link #ByteRTCScreenCaptureSourceInfo}.
The enumerated value can be used for startScreenVideoCapture:captureParameters:{@link #ByteRTCEngine#startScreenVideoCapture:captureParameters}. @note Only valid for PC and macOS.
getSingScoringManager() FutureOr<ByteRTCSingScoringManager>
@detail api @author wangjunzheng @brief Creates a karaoke scoring management interface. @return Karaoke scoring management interface. See ByteRTCSingScoringManager{@link #ByteRTCSingScoringManager}. @note To use the karaoke scoring feature, i.e., to call this method and all methods in the ByteRTCSingScoringManager class, you need to integrate the SAMI dynamic library. For details, see Integrate Plugins on Demand.
getThumbnail(ByteRTCScreenCaptureSourceType sourceType, intptr_t sourceId, int maxWidth, int maxHeight) FutureOr
@hidden(iOS) @detail api @author liyi.000 @brief Get the thumbnail of the screen @param sourceType Type of the screen capture object. See ByteRTCScreenCaptureSourceType{@link #ByteRTCScreenCaptureSourceType}. @param sourceId ID of the screen-shared object. You can get the ID from ByteRTCScreenCaptureSourceInfo returned by calling getScreenCaptureSourceList{@link #ByteRTCEngine#getScreenCaptureSourceList}. @param maxWidth Maximum width. RTC will scale the thumbnail to fit the given size while maintaining the original aspect ratio. If the aspect ratio of the given size does not match the sharing object, the thumbnail will have blank borders. @param maxHeight Maximum height. Refer to the note for maxWidth. @return The thumbnail of the sharing object.
The thumbnail has the same aspect ratio as the shared object. The size of the thumbnail is no larger than the specified size.
getVideoDeviceManager() FutureOr<ByteRTCVideoDeviceManager>
@hidden(iOS) @detail api @author zhangzhenyu.samuel @brief Get ByteRTCVideoDeviceManager{@link #ByteRTCVideoDeviceManager} @return ByteRTCVideoDeviceManager{@link #ByteRTCVideoDeviceManager}
getVideoEffectInterface() FutureOr<ByteRTCVideoEffect>
@detail api @author zhushufan.ref @brief Gets video effect interfaces. @return Video effect interfaces. See ByteRTCVideoEffect{@link #ByteRTCVideoEffect}.
getWindowAppIcon(intptr_t sourceId, int width, int height) FutureOr
@hidden(iOS) @brief Gets the application window preview thumbnail for screen sharing. @region Screen Sharing @author liyi.000 @param sourceId ID of the screen-sharing object. You can get the ID from ByteRTCScreenCaptureSourceInfo returned by calling getScreenCaptureSourceList{@link #ByteRTCEngine#getScreenCaptureSourceList}. @param width Maximum width of the app icon. The width is always equal to the height. SDK will set the height and width to the smaller value if the given values are unequal. RTC will return nullptr if you set the value outside the valid range of [32, 256]. The default size is 100 x 100. @param height Maximum height of the app icon. Refer to the note for width. @return Application icon thumbnail. You can call this API when the item to be shared is an application. If not, the return value will be nullptr.
getWTNStream() FutureOr<ByteRTCWTNStream>
isCameraExposurePositionSupported() FutureOr<bool>
@hidden(macOS) @detail api @author zhangzhenyu.samuel @brief Checks if manual exposure setting is available for the currently used camera. @return - true: Available. - false: Unavailable. @note You must call startVideoCapture{@link #ByteRTCEngine#startVideoCapture} to start SDK internal video capturing before calling this API.
isCameraFocusPositionSupported() FutureOr<bool>
@hidden(macOS) @detail api @author zhangzhenyu.samuel @brief Checks if manual focus is available for the currently used camera. @return - true: Available. - false: Unavailable. @note You must call startVideoCapture{@link #ByteRTCEngine#startVideoCapture} to start SDK internal video capturing before calling this API.
isCameraTorchSupported() FutureOr<bool>
@hidden(macOS) @detail api @brief Detects whether the currently used camera supports flash (fill light). @return - true: Supported. - false: Not supported.
isCameraZoomSupported() FutureOr<bool>
@hidden(macOS) @detail api @brief Detects whether the currently used camera supports zoom (digital/optical). @return - true: Supported. - false: Not supported.
login(NSString token, NSString uid) FutureOr<int>
@detail api @author hanchenchen.c @brief Log in to call sendUserMessageOutsideRoom:message:config:{@link #ByteRTCEngine#sendUserMessageOutsideRoom:message:config} and sendServerMessage:{@link #ByteRTCEngine#sendServerMessage} to send P2P messages or send messages to a server without joining the RTC room.
To log out, call logout{@link #ByteRTCEngine#logout}. @param token The token is required during login for authentication.
This token is different from the one required by joinRoom. When generating a login token, you can set roomId to any value, including null. During development and testing, you can use temporary tokens generated on the console. Deploy the token-generating application on your server. @param uid User ID, which must be unique within one appid. @return - 0: Success. - <0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for specific reasons. @note The local user will receive rtcEngine:onLoginResult:errorCode:elapsed:{@link #ByteRTCEngineDelegate#rtcEngine:onLoginResult:errorCode:elapsed} after this API is called successfully. Remote users will not receive any notification.
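The login flow can be sketched in Dart as below. The sendUserMessageOutsideRoom call shape is an assumption, and in a real app you would wait for the rtcEngine:onLoginResult:errorCode:elapsed: delegate callback before messaging.

```dart
// Sketch: log in with a token, then send an out-of-room P2P message.
Future<void> loginAndGreet(ByteRTCEngine engine, String loginToken) async {
  final int ret = await engine.login(loginToken, 'user_123');
  if (ret != 0) return; // see ByteRTCReturnStatus for the reason
  // Once the delegate reports a successful login:
  await engine.sendUserMessageOutsideRoom('peer_456', 'hello'); // hypothetical parameters
}
```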
logout() FutureOr<int>
@detail api @author hanchenchen.c @brief Logs out of the RTS server.
After logging out with this interface, you can no longer call the methods related to out-of-room messages and messages to the application server, or receive the related callbacks. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - You must have logged in by calling login:uid:{@link #ByteRTCEngine#login:uid} before calling this interface to log out. - After the local user calls this method to log out, they will receive the rtcEngine:onLogout:{@link #ByteRTCEngineDelegate#rtcEngine:onLogout} callback with the result; remote users will not receive any notification.
muteAudioCapture(bool mute) FutureOr<int>
@valid since 3.58.1 @detail api @author shiyayun @brief Set whether to mute the recording signal (without changing the local hardware). @param mute Whether to mute audio capture.
- True: Mute (disable microphone) - False: (Default) Enable microphone @return - 0: Success. - < 0 : Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - Calling this API does not affect the status of SDK audio stream publishing. - Adjusting the volume by calling setCaptureVolume:{@link #ByteRTCEngine#setCaptureVolume} after muting will not cancel the mute state. The volume state will be retained until unmuted. - You can use this interface to set the capture volume before or after calling startAudioCapture{@link #ByteRTCEngine#startAudioCapture} to enable audio capture.
muteScreenAudioCapture(bool mute) FutureOr<int>
@valid since 3.60 @detail api @author shiyayun @brief Mutes or unmutes the audio captured when screen sharing.
Calling this method will send muted data instead of the screen audio data, and it does not affect the local audio device capture status and the SDK audio stream publishing status. @param mute Whether to mute the audio capture when screen sharing.
- True: Mute the audio capture when screen sharing.
- False: (Default) Unmute the audio capture when screen sharing. @return - 0: Success. - < 0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - Adjusting the volume by calling setCaptureVolume:{@link #ByteRTCEngine#setCaptureVolume} after muting will not cancel the mute state. The volume state will be retained until unmuted. - You can use this interface to set the capture volume before or after calling startAudioCapture{@link #ByteRTCEngine#startAudioCapture} to enable audio capture.
nativeCall<T>(String method, [List? args, NativeMethodMeta? meta]) Future<T>
Call instance method
inherited
noSuchMethod(Invocation invocation) → dynamic
Invoked when a nonexistent method or property is accessed.
inherited
pullExternalAudioFrame(ByteRTCAudioFrame audioFrame) FutureOr<int>
@detail api @author huangshouqin @brief Pull remote audio data. You can use the data for audio processing or custom rendering.
After calling this method, the SDK will actively pull the audio data that is ready to be played, including the decoded and mixed audio data from the remote end, for external playback. @param audioFrame Audio data frame. See ByteRTCAudioFrame{@link #ByteRTCAudioFrame} @return - 0: Success - < 0: Failure @note - Before pulling custom audio data, you must call setAudioRenderType:{@link #ByteRTCEngine#setAudioRenderType} to enable custom audio capture and rendering. - You should pull audio data every 10 milliseconds, since the duration of an RTC SDK audio frame is 10 milliseconds. Samples × call frequency = audioFrame's sample rate. For example, if the sample rate is set to 48000 and you call this API every 10 ms, 480 sampling points should be pulled each time. - The audio sampling format is S16. The data format in the audio buffer is PCM, and its capacity is audioFrame.samples × audioFrame.channel × 2.
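The 10 ms pull cadence and S16 buffer sizing described above can be sketched in Dart as follows; the ByteRTCAudioFrame constructor shape is an assumption.

```dart
import 'dart:typed_data';

// Sketch: pull decoded and mixed remote audio every 10 ms for external rendering.
Future<void> renderLoop(ByteRTCEngine engine) async {
  const sampleRate = 48000, channels = 2;
  const samples = sampleRate ~/ 100; // 480 sampling points per 10 ms frame
  final frame = ByteRTCAudioFrame( // hypothetical constructor
    sampleRate: sampleRate,
    channel: channels,
    samples: samples,
    buffer: Uint8List(samples * channels * 2), // S16 PCM: 2 bytes per sample
  );
  while (true) {
    await engine.pullExternalAudioFrame(frame);
    // Hand frame.buffer to your renderer here.
    await Future<void>.delayed(const Duration(milliseconds: 10));
  }
}
```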
pushClientMixedStreamExternalVideoFrame(NSString uid, ByteRTCVideoFrameData frame) FutureOr<int>
pushExternalAudioFrame(ByteRTCAudioFrame audioFrame) FutureOr<int>
@detail api @author huangshouqin @brief Push custom captured audio data to the RTC SDK. @param audioFrame Audio data frame. See ByteRTCAudioFrame{@link #ByteRTCAudioFrame}. - The audio sampling format must be S16. The data format within the audio buffer must be PCM, and its capacity should be audioFrame.samples × audioFrame.channel × 2. - A specific sample rate and number of channels must be specified; setting them to automatic is not supported. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - Before pushing external audio data, you must call setAudioSourceType:{@link #ByteRTCEngine#setAudioSourceType} to enable custom audio capture. - You must push custom captured audio data every 10 milliseconds. The samples (number of audio sampling points) of a single push should be audioFrame.sampleRate/100. For example, when the sampling rate is set to 48000, data of 480 sampling points should be pushed each time.
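The push counterpart can be sketched the same way. Again, the `ByteRTCAudioFrame` field names are assumptions about the binding, and the 10 ms chunks are supplied by a hypothetical capture stream:

```dart
import 'dart:typed_data';

// Sketch: push one 10 ms frame of custom-captured S16 PCM per chunk.
// Assumes custom capture was enabled via setAudioSourceType: beforehand.
// ByteRTCAudioFrame field names are assumptions; adapt to the real class.
void startPushing(ByteRTCEngine engine, Stream<Uint8List> pcm10msChunks) {
  const sampleRate = 48000;
  const channels = 1;
  const samplesPer10ms = sampleRate ~/ 100; // 480 samples per push
  pcm10msChunks.listen((chunk) async {
    assert(chunk.length == samplesPer10ms * channels * 2); // S16 PCM
    final frame = ByteRTCAudioFrame()
      ..sampleRate = sampleRate // a fixed rate: "automatic" is not supported
      ..channel = channels
      ..samples = samplesPer10ms
      ..buffer = chunk;
    final ret = await engine.pushExternalAudioFrame(frame);
    if (ret < 0) {
      // see ByteRTCReturnStatus for the failure reason
    }
  });
}
```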
pushExternalEncodedAudioFrame(ByteRTCEncodedAudioFrameData audioFrame) FutureOr<int>
@hidden for internal use only @region custom audio capture and rendering @brief Push custom encoded audio data to the SDK. @param audioFrame The encoded audio data. See ByteRTCEncodedAudioFrameData{@link #ByteRTCEncodedAudioFrameData}. @return API call result:
- 0: Success. - <0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for specific reasons. @note - Before pushing the audio data, call enableAudioEncoding{@link #ByteRTCEngine#enableAudioEncoding} to turn off internal audio encoding.
pushExternalEncodedVideoFrame(NSInteger videoIndex, ByteRTCEncodedVideoFrame videoFrame) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief Pushes a custom encoded video stream. @param videoIndex Index of the corresponding encoded stream, starting from 0. If you have called setVideoEncoderConfig:{@link #ByteRTCEngine#setVideoEncoderConfig} to enable multiple streams, the index must be consistent with the number of configured streams. @param videoFrame Encoded video frame information. See ByteRTCEncodedVideoFrame{@link #ByteRTCEncodedVideoFrame}. @return API call result:
- 0: Success. - <0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for specific reasons. @note - Currently, only video frames in H.264 and ByteVC1 formats are supported, and the video stream must be in Annex B format. - This function runs on the calling thread. - Before pushing a custom encoded video frame, you must call setVideoSourceType:{@link #ByteRTCEngine#setVideoSourceType} to switch the video input source to the custom encoded video source.
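A minimal sketch of pushing a pre-encoded frame, assuming the video source was already switched to the custom encoded source via setVideoSourceType:. How the `ByteRTCEncodedVideoFrame` is filled (codec type, Annex B payload, timestamps) is not shown here and depends on the real class:

```dart
// Sketch: push an externally encoded H.264 (Annex B) frame on stream 0.
// The caller is responsible for populating the ByteRTCEncodedVideoFrame.
Future<void> pushEncodedFrame(
    ByteRTCEngine engine, ByteRTCEncodedVideoFrame frame) async {
  const videoIndex = 0; // must match the stream count you configured
  final ret = await engine.pushExternalEncodedVideoFrame(videoIndex, frame);
  if (ret < 0) {
    // see ByteRTCReturnStatus for the failure reason
  }
}
```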
pushExternalVideoFrame(ByteRTCVideoFrameData frame) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author liuyangyang @brief Pushes external video frames, encapsulated with ByteRTCVideoFrame. @param frame This video frame contains video data to be encoded by the SDK. Refer to ByteRTCVideoFrame{@link #ByteRTCVideoFrame}. @return API call result:
- 0: Success. - <0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for specific reasons. @note Before pushing an external video frame, you must call setVideoSourceType:{@link #ByteRTCEngine#setVideoSourceType} to enable external video source capture.
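A sketch of feeding externally captured frames to the SDK for encoding. The frame stream is a placeholder for your capture pipeline; constructing `ByteRTCVideoFrameData` (format, dimensions, plane buffers) is not shown and depends on the real class:

```dart
// Sketch: forward externally captured frames to the SDK.
// Assumes setVideoSourceType: has switched input to the external source.
void pushFrames(ByteRTCEngine engine, Stream<ByteRTCVideoFrameData> frames) {
  frames.listen((frame) async {
    final ret = await engine.pushExternalVideoFrame(frame);
    if (ret < 0) {
      // see ByteRTCReturnStatus for the failure reason
    }
  });
}
```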
pushReferenceAudioPCMData(ByteRTCAudioFrame audioFrame) FutureOr<int>
pushScreenAudioFrame(ByteRTCAudioFrame audioFrame) FutureOr<int>
@detail api @author liyi.000 @brief When capturing screen audio with a custom capture method during screen sharing, push the audio frames to the RTC SDK for encoding and other processing. @param audioFrame Audio data frame. See ByteRTCAudioFrame{@link #ByteRTCAudioFrame}. - The audio sampling format must be S16. The data format within the audio buffer must be PCM, and its capacity should be samples × frame.channel × 2. - A specific sample rate and number of channels must be specified; setting them to automatic is not supported. @return Method call result
- 0: Setup succeeded. - < 0: Setup failed. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - You must call this API after calling setScreenAudioSourceType:{@link #ByteRTCEngine#setScreenAudioSourceType} to enable custom capture of the screen audio. - You should call this method every 10 milliseconds to push a custom captured audio frame. Each pushed frame should contain frame.sampleRate/100 samples. For example, if the sampling rate is 48000 Hz, 480 samples should be pushed each time. - After calling this interface to push the custom captured audio frames to the RTC SDK, you must call publishScreenAudio: to publish the captured screen audio to the remote end. Audio frames pushed to the RTC SDK before publishScreenAudio: is called are discarded.
registerAudioFrameObserver(id<ByteRTCAudioFrameObserver> audioFrameObserver) FutureOr<int>
@detail api @author gongzhengduo @brief Register an audio frame observer. @param audioFrameObserver Audio data callback observer. See ByteRTCAudioFrameObserver{@link #ByteRTCAudioFrameObserver}. Use null to cancel the registration. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note After calling this API and enableAudioFrameCallback:format:{@link #ByteRTCEngine#enableAudioFrameCallback:format}, ByteRTCAudioFrameObserver{@link #ByteRTCAudioFrameObserver} receives the corresponding audio data callback. You can retrieve the audio data and perform processing on it without affecting the audio that RTC SDK uses to encode or render.
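The register/enable/unregister sequence described above can be sketched as follows. The exact arguments of enableAudioFrameCallback:format: are not shown because they depend on the binding's signature:

```dart
// Sketch: register an observer, enable the data callback, later unregister.
// Without enableAudioFrameCallback:format:, no audio data is delivered
// to the observer.
Future<void> observeAudio(
    ByteRTCEngine engine, ByteRTCAudioFrameObserver observer) async {
  await engine.registerAudioFrameObserver(observer);
  // ... call engine.enableAudioFrameCallback(...) here (arguments depend
  //     on the binding) so the observer starts receiving audio data ...

  // When done, cancel the registration as documented:
  await engine.registerAudioFrameObserver(null);
}
```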
registerAudioProcessor(id<ByteRTCAudioFrameProcessor> processor) FutureOr<int>
@detail api @author gongzhengduo @brief Register a custom audio preprocessor.
After that, you can call enableAudioProcessor:audioFormat:{@link #ByteRTCEngine#enableAudioProcessor:audioFormat} to process the audio streams that are either captured locally or received from the remote side. The RTC SDK then encodes or renders the processed data. @param processor Custom audio processor. See ByteRTCAudioFrameProcessor{@link #ByteRTCAudioFrameProcessor}.
The SDK holds only a weak reference to the processor; you must guarantee its lifetime. To cancel the registration, set the parameter to nullptr. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details.
registerLocalEncodedVideoFrameObserver(id<ByteRTCLocalEncodedVideoFrameObserver> frameObserver) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author liuyangyang @brief Register a local video frame observer.
This method applies to both internal capturing and custom capturing.
After calling this API, SDK triggers onLocalEncodedVideoFrame:Frame:{@link #ByteRTCLocalEncodedVideoFrameObserver#onLocalEncodedVideoFrame:Frame} whenever a video frame is captured. @param frameObserver Local video frame observer. See ByteRTCLocalEncodedVideoFrameObserver{@link #ByteRTCLocalEncodedVideoFrameObserver}. You can cancel the registration by setting it to nullptr. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note You can call this API before or after entering the RTC room. Calling this API before entering the room ensures that video frames are monitored and callbacks are triggered as early as possible.
registerLocalVideoProcessor(id<ByteRTCVideoProcessorDelegate> processor, ByteRTCVideoPreprocessorConfig config) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author zhushufan.ref @brief Set up a custom video preprocessor.
Using this video preprocessor, you can call processVideoFrame:{@link #ByteRTCVideoProcessorDelegate#processVideoFrame} to preprocess the video frames captured by the RTC SDK and use the processed video frames for RTC audio & video communication. @param processor Custom video processor. See ByteRTCVideoProcessorDelegate{@link #ByteRTCVideoProcessorDelegate}. If null is passed in, the video frames captured by the RTC SDK are not preprocessed.
The SDK holds only a weak reference to the processor; you must guarantee its lifetime.
When implementing the processor, obtain the video frame data from the textureBuf field of ByteRTCVideoFrame{@link #ByteRTCVideoFrame};
the processed video frame data must be returned in a format listed in ByteRTCVideoPixelFormat{@link #ByteRTCVideoPixelFormat} and stored in the textureBuf field of the returned frame. @param config Settings for the custom video preprocessor. See ByteRTCVideoPreprocessorConfig{@link #ByteRTCVideoPreprocessorConfig}.
Currently, required_pixel_format in config supports only ByteRTCVideoPixelFormatI420 and ByteRTCVideoPixelFormatUnknown:
- When set to ByteRTCVideoPixelFormatUnknown, the RTC SDK passes video frames to the processor in the capture format. - When set to ByteRTCVideoPixelFormatI420, the RTC SDK converts the captured video into that format before preprocessing. - When set to any other value, this method call fails. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - When this interface is called repeatedly, only the last call takes effect; the effects do not stack. - On iOS, setting requiredPixelFormat in ByteRTCVideoPreprocessorConfig{@link #ByteRTCVideoPreprocessorConfig} to kVideoPixelFormatUnknown brings some performance optimization by avoiding format conversion.
registerRemoteEncodedAudioFrameObserver(id<ByteRTCRemoteEncodedAudioFrameObserver> observer) FutureOr<void>
@detail api @hidden for internal use only @brief Registers the remote encoded audio frame observer.
After calling this method, every time the SDK detects a remote encoded audio frame, it calls back the audio frame information through onRemoteEncodedAudioFrame. @param observer The remote encoded audio frame observer. See IRemoteEncodedAudioFrameObserver. @note - We recommend calling this method before entering the room. - Setting the parameter to nullptr cancels the registration. - Before calling this API, call enableAudioDecoding{@link #ByteRTCEngine#enableAudioDecoding} to turn off internal audio decoding.
registerRemoteEncodedVideoFrameObserver(id<ByteRTCRemoteEncodedVideoFrameObserver> observer) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief Registers an observer for remote encoded video data.
After registration, when the SDK detects a remote encoded video frame, it triggers the onRemoteEncodedVideoFrame:info:withEncodedVideoFrame:{@link #ByteRTCRemoteEncodedVideoFrameObserver#onRemoteEncodedVideoFrame:info:withEncodedVideoFrame} callback. @param observer The remote encoded video data observer. See ByteRTCRemoteEncodedVideoFrameObserver{@link #ByteRTCRemoteEncodedVideoFrameObserver}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - See Custom Video Encoding and Decoding for more details about custom video decoding. - This method applies to manual subscription mode and can be called either before or after entering the room. We recommend calling it before entering the room. - Unregister the observer before the engine is destroyed by calling this method with the parameter set to nullptr.
requestRemoteVideoKeyFrame(NSString streamId) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author liuyangyang @brief Requests a keyframe after subscribing to the remote video stream. @param streamId ID of the remote stream. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - This method is only suitable for manual subscription mode and should be used after successfully subscribing to the remote stream. - This method is intended for use after calling setVideoDecoderConfig:withVideoDecoderConfig:{@link #ByteRTCEngine#setVideoDecoderConfig:withVideoDecoderConfig} to enable custom decoding, when custom decoding fails.
sendInstanceGet<T>(String property) Future<T>
Get instance property
inherited
sendInstancePropertiesGet(dynamic nativeClass) Future<Map<String, dynamic>>
Get instance properties
inherited
sendInstanceSet(String property, dynamic value) Future<void>
Set instance property
inherited
sendPublicStreamSEIMessage(int channelId, NSData message, int repeatCount, ByteRTCSEICountPerFrame mode) FutureOr<int>
@hidden for internal use only @valid since 3.56 @detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief <span id="IRTCVideo-sendseimessage-2"></span> Sends SEI data over the WTN stream. @param channelId SEI message channel ID. The value range is [0, 255]. With this parameter, you can set different channel IDs for different recipients, so that each recipient can filter SEI messages by the channel ID received in the callback. @param message SEI data. @param repeatCount Number of times a message is sent repeatedly. The value range is [0, max{29, video frame rate - 1}]. Recommended range: [2, 4].
After calling this API, the SEI data will be added to a consecutive repeatCount+1 number of video frames starting from the current frame. @param mode SEI sending mode. See ByteRTCSEICountPerFrame{@link #ByteRTCSEICountPerFrame}. @return - < 0: Failure. - = 0: Failure because the SEI sending queue was full. - > 0: Success. The value indicates the number of SEI messages sent. @note - We recommend that the number of SEI messages per second not exceed the current video frame rate. - In a video call, a custom captured video frame can also be used for sending SEI data, provided the original video frame contains no SEI data; otherwise calling this method will not take effect. - Each video frame carries only the SEI data received within 2 s before and after it. In a voice call scenario, if no SEI data is sent within 1 min after calling this API, the SDK automatically stops publishing black frames. - After the message is sent successfully, the remote user who subscribed to your video stream will receive rtcEngine:onPublicStreamSEIMessageReceivedWithChannel:andChannelId:andMessage:. - When the call fails, neither the local nor the remote side receives a callback.
sendScreenCaptureExtensionMessage(NSData messsage) FutureOr<int>
@hidden(macOS) @detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief Sends message to screen capture Extension @param messsage Message sent to the Extension @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - Call this API after calling startScreenCapture:bundleId:{@link #ByteRTCEngine#startScreenCapture:bundleId}. - The extension will receive onReceiveMessageFromApp:{@link #ByteRtcScreenCapturerExtDelegate#onReceiveMessageFromApp} when the message is sent.
sendSEIMessage(NSData message, int repeatCount, ByteRTCSEICountPerFrame mode) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief Sends SEI data.
In a video call scenario, SEI is sent with the video frame, while in a voice call scenario, the SDK automatically publishes a black frame with a resolution of 16 × 16 pixels to carry the SEI data. @param message SEI data. No more than 4 KB of SEI data per frame is recommended. @param repeatCount Number of times a message is sent repeatedly. The value range is [0, max{29, video frame rate - 1}]. Recommended range: [2, 4].
After calling this API, the SEI data will be added to a consecutive repeatCount+1 number of video frames starting from the current frame. @param mode SEI sending mode. See ByteRTCSEICountPerFrame{@link #ByteRTCSEICountPerFrame}. @return - >= 0: The number of SEI messages to be added to the video frames. - < 0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - We recommend that the number of SEI messages per second not exceed the current video frame rate. In a voice call, the black-frame rate is 15 fps. - In a voice call, this API can be called to send SEI data only in internal capture mode. - In a video call, a custom captured video frame can also be used for sending SEI data, provided the original video frame contains no SEI data; otherwise calling this method will not take effect. - Each video frame carries only the SEI data received within 2 s before and after it. In a voice call scenario, if no SEI data is sent within 1 min after calling this API, the SDK automatically stops publishing black frames. - After the message is sent successfully, the remote user who subscribed to your video stream will receive rtcEngine:onSEIMessageReceived:info:andMessage:{@link #ByteRTCEngineDelegate#rtcEngine:onSEIMessageReceived:info:andMessage}. - When you switch from a voice call to a video call, SEI data automatically starts to be sent with normally captured video frames instead of black frames.
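A sketch of attaching a small payload as SEI. The `toNSData` helper and the `ByteRTCSEICountPerFrame.single` enum member are hypothetical names for illustration — the actual Uint8List-to-NSData conversion and enum values depend on the binding:

```dart
import 'dart:convert';

// Sketch: send a short UTF-8 payload as SEI with the outgoing video.
Future<void> sendLyricsCue(ByteRTCEngine engine, String cue) async {
  final payload = utf8.encode(cue); // keep the payload ≤ 4 KB per frame
  final ret = await engine.sendSEIMessage(
    toNSData(payload),              // hypothetical Uint8List → NSData helper
    2,                              // repeatCount: recommended range [2, 4]
    ByteRTCSEICountPerFrame.single, // enum member name is an assumption
  );
  if (ret < 0) {
    // failure; see ByteRTCReturnStatus for details
  }
}
```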
sendServerBinaryMessage(NSData messageStr) FutureOr<NSInteger>
@detail api @author hanchenchen.c @brief The client sends a binary message to the application server (P2Server). @param messageStr
The binary message content to send.
The message does not exceed 46 KB. @return - > 0: Sent successfully; returns the number of the sent message, incrementing from 1. - -1: Sending failed because the message is empty. @note - Before sending a binary message to the application server, call login:uid:{@link #ByteRTCEngine#login:uid} to complete the login, then call setServerParams:url:{@link #ByteRTCEngine#setServerParams:url} to set up the application server. - After calling this interface, you will receive an rtcEngine:onServerMessageSendResult:error:message:{@link #ByteRTCEngineDelegate#rtcEngine:onServerMessageSendResult:error:message} callback informing the sender whether the sending succeeded. - If the binary message is sent successfully, the application server previously set via setServerParams:url:{@link #ByteRTCEngine#setServerParams:url} will receive the message.
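The documented order of calls (login, then setServerParams, then send) can be sketched as below. The login and server-setup calls are shown as comments because their argument shapes are not documented here:

```dart
// Sketch: send a binary payload to the application server.
Future<void> reportToServer(ByteRTCEngine engine, NSData payload) async {
  // 1. Complete the login first (argument shapes depend on the binding):
  //    await engine.login(token, uid);
  // 2. Point the SDK at your application server:
  //    await engine.setServerParams(signature, url);
  // 3. Send; a positive return is the message number, starting from 1.
  final msgId = await engine.sendServerBinaryMessage(payload); // ≤ 46 KB
  if (msgId == -1) {
    // sending failed because the message was empty
  }
  // The delivery outcome arrives via the
  // rtcEngine:onServerMessageSendResult:error:message: callback.
}
```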
sendServerMessage(NSString messageStr) FutureOr<NSInteger>
@detail api @author hanchenchen.c @brief The client sends a text message to the application server (P2Server). @param messageStr
The text message content to send.
The message does not exceed 64 KB. @return - > 0: Sent successfully; returns the number of the sent message, incrementing from 1. @note - Before sending a text message to the application server, you must first call login:uid:{@link #ByteRTCEngine#login:uid} to complete the login, and then call setServerParams:url:{@link #ByteRTCEngine#setServerParams:url} to set up the application server. - After calling this interface, you will receive an rtcEngine:onServerMessageSendResult:error:message:{@link #ByteRTCEngineDelegate#rtcEngine:onServerMessageSendResult:error:message} callback informing the sender whether the message was sent successfully. - If the text message is sent successfully, the application server previously set via setServerParams:url:{@link #ByteRTCEngine#setServerParams:url} will receive the message.
sendStreamSyncInfo(NSData data, ByteRTCStreamSyncInfoConfig config) FutureOr<int>
@detail api @author wangjunzheng @brief Sends audio stream synchronization information. The message is sent to the remote end through the audio stream and synchronized with it. After the interface is successfully called, the remote user will receive rtcEngine:onStreamSyncInfoReceived:info:streamType:data:{@link #ByteRTCEngineDelegate#rtcEngine:onStreamSyncInfoReceived:info:streamType:data}. @param data Message content. @param config Configuration of media stream information synchronization. See ByteRTCStreamSyncInfoConfig{@link #ByteRTCStreamSyncInfoConfig}. @return - >= 0: Message sent successfully. Returns the number of successful sends. - -1: Message sending failed. The message length exceeds 16 bytes. - -2: Message sending failed. The content of the message is empty. - -3: Message sending failed. The screen stream was not published when the message was synchronized through the screen stream. - -4: Message sending failed. The audio stream was not yet published when the message was synchronized through an audio stream captured by a microphone or custom device. See ByteRTCErrorCode{@link #ByteRTCErrorCode} for error codes.
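The return codes above can be handled as in this sketch. Constructing the NSData payload and the sync config is omitted; only the documented code handling is shown:

```dart
// Sketch: send sync info with the audio stream and map the return codes.
Future<void> syncPlaybackPosition(ByteRTCEngine engine, NSData data,
    ByteRTCStreamSyncInfoConfig config) async {
  final ret = await engine.sendStreamSyncInfo(data, config);
  switch (ret) {
    case -1: // message longer than 16 bytes
    case -2: // message content is empty
    case -3: // screen stream not published yet
    case -4: // audio stream not published yet
      // handle the failure; see ByteRTCErrorCode for details
      break;
    default:
      assert(ret >= 0); // number of successful sends
  }
}
```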
sendUserBinaryMessageOutsideRoom(NSString userId, NSData messageStr, ByteRTCMessageConfig config) FutureOr<NSInteger>
@detail api @author hanchenchen.c @brief Sends a binary message (P2P) to a specified user outside the room. @param userId
The receiving user's ID. @param messageStr
The binary message content to send.
The message does not exceed 46 KB. @param config Message type. See ByteRTCMessageConfig{@link #ByteRTCMessageConfig}. @return - > 0: Sent successfully; returns the number of the sent message, incrementing from 1. - -1: Sending failed because the message is empty. @note - Before sending an out-of-room binary message, you must call login:uid:{@link #ByteRTCEngine#login:uid} to complete the login. - After calling this interface to send a binary message, you will receive an rtcEngine:onUserMessageSendResultOutsideRoom:error:{@link #ByteRTCEngineDelegate#rtcEngine:onUserMessageSendResultOutsideRoom:error} callback notifying whether the message was sent successfully. - If the binary message is sent successfully, the user specified by userId will receive the message through the rtcEngine:onUserBinaryMessageReceivedOutsideRoom:message:{@link #ByteRTCEngineDelegate#rtcEngine:onUserBinaryMessageReceivedOutsideRoom:message} callback.
sendUserMessageOutsideRoom(NSString userId, NSString messageStr, ByteRTCMessageConfig config) FutureOr<NSInteger>
@detail api @author hanchenchen.c @brief Sends a text message (P2P) to a specified user outside the room. @param userId
The receiving user's ID. @param messageStr
The text message content to send.
The message does not exceed 64 KB. @param config Message type. See ByteRTCMessageConfig{@link #ByteRTCMessageConfig}. @return - > 0: Sent successfully; returns the number of the sent message, incrementing from 1. @note - Before sending an out-of-room text message, you should call login:uid:{@link #ByteRTCEngine#login:uid} to complete the login. - After calling this interface to send a text message, you will receive an rtcEngine:onUserMessageSendResultOutsideRoom:error:{@link #ByteRTCEngineDelegate#rtcEngine:onUserMessageSendResultOutsideRoom:error} callback indicating whether the message was sent successfully. - If the text message is sent successfully, the user specified by userId will receive the message through the rtcEngine:onUserMessageReceivedOutsideRoom:message:{@link #ByteRTCEngineDelegate#rtcEngine:onUserMessageReceivedOutsideRoom:message} callback.
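A sketch of the out-of-room P2P text flow, assuming login:uid: has already completed. The NSString values are provided by the caller; how Dart strings convert to NSString depends on the binding:

```dart
// Sketch: send an out-of-room text message and check the message number.
Future<void> pingUser(ByteRTCEngine engine, NSString userId,
    NSString text, ByteRTCMessageConfig config) async {
  final msgId = await engine.sendUserMessageOutsideRoom(userId, text, config);
  if (msgId > 0) {
    // Queued for sending; delivery is reported via the
    // rtcEngine:onUserMessageSendResultOutsideRoom:error: callback.
  }
}
```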
setAnsMode(ByteRTCAnsMode ansMode) FutureOr<int>
@valid since 3.52 @detail api @author liuchuang @brief Set the Active Noise Cancellation (ANC) mode during audio and video communications. @param ansMode ANC modes. See ByteRTCAnsMode{@link #ByteRTCAnsMode}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - You can call this API before or after entering a room. When you call it repeatedly, only the last call takes effect.
The AI noise cancellation can be enabled by calling this interface only in the following ByteRTCRoomProfile{@link #ByteRTCRoomProfile} scenarios. - Game Voice: ByteRTCRoomProfileGame - High-Quality Game: ByteRTCRoomProfileGameHD - Cloud Gaming: ByteRTCRoomProfileCloudGame - 1 vs 1 Audio and Video Call: ByteRTCRoomProfileChat - Multi-End Synchronized Audio and Video Playback: ByteRTCRoomProfileLwTogether - Personal Device in Cloud Meeting: ByteRTCRoomProfileMeeting - Classroom Interaction: ByteRTCRoomProfileClassroom - Conference Room Terminals in Cloud Meetings: ByteRTCRoomProfileMeetingRoom
setAudioAlignmentProperty(NSString streamId, ByteRTCAudioAlignmentMode mode) FutureOr<int>
@detail api @hidden internal use only @author majun.lvhiei @brief On the listener side, sets all subscribed audio streams to be precisely time-aligned. @param streamId ID of the remote audio stream used as the benchmark during time alignment.
We recommend using the audio stream from the lead singer.
You must call this API after receiving rtcRoom:onUserPublishStreamAudio:info:isPublish:{@link #ByteRTCRoomDelegate#rtcRoom:onUserPublishStreamAudio:info:isPublish}. @param mode Whether to enable the alignment. Disabled by default. See ByteRTCAudioAlignmentMode{@link #ByteRTCAudioAlignmentMode}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - You must use the function when all participants set ByteRTCRoomProfile{@link #ByteRTCRoomProfile} to ByteRTCRoomProfileChorus when joining the room. - All remote participants must call startAudioMixing:filePath:config: to play background music and set syncProgressToRecordFrame of ByteRTCAudioMixingConfig to true. - If the subscribed audio stream is delayed too much, it may not be precisely aligned. - The chorus participants must not enable the alignment. If you wish to change the role from listener to participant, you should disable the alignment.
setAudioProfile(ByteRTCAudioProfileType audioProfile) FutureOr<int>
@detail api @author dixing @brief Sets the sound quality. Choose the appropriate sound quality according to the needs of your business scenario. @param audioProfile Sound quality. See ByteRTCAudioProfileType{@link #ByteRTCAudioProfileType} @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - This method can be called before or after entering the room. - The sound quality can be switched dynamically during a call.
setAudioRenderType(ByteRTCAudioRenderType type) FutureOr<int>
@detail api @author huangshouqin @brief Switches the audio render type. @param type Audio output source type. See ByteRTCAudioRenderType{@link #ByteRTCAudioRenderType}.
Internal audio rendering is used by default. The audio capture type and the audio render type may differ from each other. @return Method call result:
- =0: Success. - <0: Failure. @note - You can call this API before or after joining the room. - After calling this API to enable custom audio rendering, call pullExternalAudioFrame:{@link #ByteRTCEngine#pullExternalAudioFrame} for audio data.
setAudioRoute(ByteRTCAudioRoute audioRoute) FutureOr<int>
@hidden(macOS) @detail api @author yezijian.me @brief Set the current audio playback route. The default device is set via setDefaultAudioRoute:{@link #ByteRTCEngine#setDefaultAudioRoute}.
When the audio playback route changes, you will receive rtcEngine:onAudioRouteChanged:{@link #ByteRTCEngineDelegate#rtcEngine:onAudioRouteChanged}. @param audioRoute Audio route. Refer to ByteRTCAudioRoute{@link #ByteRTCAudioRoute}. You can only use the built-in speaker or the default route. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - You can implement most scenarios by calling setDefaultAudioRoute:{@link #ByteRTCEngine#setDefaultAudioRoute} and relying on the default audio route switching strategy of the RTC SDK. For details about the strategy, see Set the Audio Route. You should use this API only in a few exceptional scenarios, such as manually switching the audio route with an external audio device connected. - This interface is only supported in the ByteRTCAudioScenarioCommunication audio scenario. Call setAudioScenario:{@link #ByteRTCEngine#setAudioScenario} to switch between different audio scenarios. - For the volume type in different audio scenarios, refer to ByteRTCAudioScenarioType{@link #ByteRTCAudioScenarioType}.
setAudioScenario(ByteRTCAudioScenarioType audioScenario) FutureOr<int>
@hidden(macOS) @valid since 3.60. @detail api @author gongzhengduo @brief Sets the audio scenario.
After selecting the audio scenario, SDK will automatically switch to the proper volume modes (the call/media volume) according to the scenarios and the best audio configurations under such scenarios.
This API should not be used together with the older version of this API. @param audioScenario Audio scenario. See ByteRTCAudioScenarioType{@link #ByteRTCAudioScenarioType}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - You can use this API both before and after joining the room. - Call volume is more suitable for calls, meetings, and other scenarios that demand information accuracy. Call volume activates the system hardware signal processor, making the sound clearer; the volume cannot be reduced to 0. - Media volume is more suitable for entertainment scenarios that require musical expression; the volume can be reduced to 0.
setAudioSourceType(ByteRTCAudioSourceType type) FutureOr<int>
@detail api @author huangshouqin @brief Switches the audio capture type. @param type Audio input source type. See ByteRTCAudioSourceType{@link #ByteRTCAudioSourceType}
Internal audio capture is used by default. The audio capture type and the audio render type may differ from each other. @return Method call result:
- =0: Success. - <0: Failure. @note - You can call this API before or after joining the room. - If you call this API to switch from internal audio capture to custom capture, the internal audio capture is automatically disabled. You must call pushExternalAudioFrame:{@link #ByteRTCEngine#pushExternalAudioFrame} to push custom captured audio data to RTC SDK for transmission. - If you call this API to switch from custom capture to internal capture, you must then call startAudioCapture{@link #ByteRTCEngine#startAudioCapture} to enable internal capture.
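The switch-over rules in the notes above can be sketched as follows. The `ByteRTCAudioSourceType` enum member names (`external`, `internal`) are assumptions for illustration; check the actual enum values:

```dart
// Sketch: switch to custom capture, and later back to internal capture.
Future<void> useCustomCapture(ByteRTCEngine engine) async {
  // Switching to custom capture disables internal capture automatically;
  // afterwards you must feed audio via pushExternalAudioFrame:.
  await engine.setAudioSourceType(ByteRTCAudioSourceType.external);
}

Future<void> backToInternalCapture(ByteRTCEngine engine) async {
  await engine.setAudioSourceType(ByteRTCAudioSourceType.internal);
  // Switching back does NOT start capture; start it explicitly:
  await engine.startAudioCapture();
}
```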
setBeautyIntensity(ByteRTCEffectBeautyMode beautyMode, float intensity) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangjunlin.3182 @brief Sets the beauty effect intensity. @param beautyMode Basic beauty effect. See ByteRTCEffectBeautyMode{@link #ByteRTCEffectBeautyMode}. @param intensity Beauty effect intensity in the range of [0, 1]. When set to 0, the beauty effect is turned off.
The default intensity values for each beauty mode are as follows: 0.7 for brightening, 0.8 for smoothing, 0.5 for sharpening, and 0.7 for clarity. @return - 0: Success. - –2: intensity is out of range. - –1001: This API is not available for your current RTC SDK. - <0: Failure. Effect SDK internal error. For the specific error code, see the Error Code Table. @note - If you call this API before calling enableVideoEffect{@link #ByteRTCVideoEffect#enableVideoEffect}, the default beauty intensity settings adjust accordingly. - If you destroy the engine, the beauty effect settings become invalid.
setBluetoothMode(ByteRTCBluetoothMode mode) FutureOr<int>
@hidden(macOS) @detail api @author dixing @brief On iOS, you can change the Bluetooth profile when the media volume is set in all scenarios. @param mode The Bluetooth profile. See ByteRTCBluetoothMode{@link #ByteRTCBluetoothMode}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note You will receive rtcEngine:onAudioDeviceWarning:deviceType:deviceWarning:{@link #ByteRTCEngineDelegate#rtcEngine:onAudioDeviceWarning:deviceType:deviceWarning} in the following scenarios: 1) You cannot change the Bluetooth profile to HFP; 2) The media volume is not set in all scenarios. We suggest that you call setAudioScenario:{@link #ByteRTCEngine#setAudioScenario} to set the media-volume scenario before calling this API.
setBusinessId(NSString businessId) FutureOr<int>
@detail api @author wangzhanqiang @brief Sets the business ID
You can use businessId to distinguish different business scenarios. You can customize your businessId to serve as a sub-AppId, which shares and refines the function of the AppId but does not require authentication. @param businessId
Your customized businessId.
businessId is a tag, and you can customize its granularity. @return - 0: Success. - -2: The input is invalid. Legal characters include all lowercase letters, uppercase letters, numbers, and the four symbols '.', '-', '_', and '@'. @note - You must call this API before the joinRoom:userInfo:userVisibility:roomConfig:{@link #ByteRTCRoom#joinRoom:userInfo:userVisibility:roomConfig} API; otherwise it will be invalid.
setCameraAdaptiveMinimumFrameRate(int framerate) FutureOr<int>
@hidden(macOS) @valid since 353 @detail api @brief Sets the minimum frame rate of the dynamic framerate mode during internal video capture. @param framerate The minimum value in fps. The default value is 7.
The maximum value of the dynamic framerate mode is set by calling setVideoCaptureConfig:{@link #ByteRTCEngine#setVideoCaptureConfig}. When the minimum value exceeds the maximum, the frame rate is fixed at the maximum value; otherwise, dynamic framerate mode is enabled. @return - 0: Success. - !0: Failure. @note - You must call this API before calling startVideoCapture{@link #ByteRTCEngine#startVideoCapture} to enable internal capture for the setting to take effect. - If the maximum frame rate changes due to performance degradation, static adaptation, etc., the set minimum value is re-compared with the new maximum; a change in the comparison result may switch between fixed and dynamic frame rate modes. - On Android, dynamic framerate mode is enabled by default. - On iOS, dynamic framerate mode is disabled by default.
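The fixed-versus-dynamic rule above can be summarized as a small decision helper. This is an illustrative sketch of the documented behavior, not SDK code; the function name is hypothetical.

```python
def resolve_capture_framerate(min_fps: int, max_fps: int):
    """Mirror the documented rule: if the minimum frame rate exceeds the
    maximum, capture runs at a fixed rate equal to the maximum; otherwise
    dynamic framerate mode is enabled over [min_fps, max_fps]."""
    if min_fps > max_fps:
        return ("fixed", max_fps)
    return ("dynamic", (min_fps, max_fps))
```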
setCameraExposureCompensation(float val) FutureOr<int>
@hidden(macOS) @detail api @author zhangzhenyu.samuel @brief Sets the exposure compensation for the currently used camera. @param val Exposure compensation in the range of [-1, 1]. Default to 0, which means no exposure compensation. @return - 0: Success. - < 0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - You must call startVideoCapture{@link #ByteRTCEngine#startVideoCapture} to start SDK internal video capturing, and use SDK internal rendering before calling this API. - The camera exposure compensation setting will be invalid after calling stopVideoCapture{@link #ByteRTCEngine#stopVideoCapture} to stop internal capturing.
setCameraExposurePosition(dynamic position) FutureOr<int>
@hidden(macOS) @detail api @author zhangzhenyu.samuel @brief Sets the manual exposure position for the currently used camera. @param position The position of the exposure point. Setting the upper-left corner of the canvas as the origin, the x in position means the x-coordinate of the exposure point in the range of [0, 1], and the y in position means the y-coordinate of the exposure point in the range of [0, 1]. @return - 0: Success. - < 0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - You must call startVideoCapture{@link #ByteRTCEngine#startVideoCapture} to start SDK internal video capturing, and use SDK internal rendering before calling this API. - When you set the exposure point at the center of the canvas, the exposure point setting will be canceled. - The camera exposure point setting will be invalid after calling stopVideoCapture{@link #ByteRTCEngine#stopVideoCapture} to stop internal capturing.
setCameraFocusPosition(dynamic position) FutureOr<int>
@hidden(macOS) @detail api @author zhangzhenyu.samuel @brief Sets the manual focus position for the currently used camera. @param position The position of the focus point. Setting the upper-left corner of the canvas as the origin, the x in position means the x-coordinate of the focus point in the range of [0, 1], and the y in position means the y-coordinate of the focus point in the range of [0, 1]. @return - 0: Success. - < 0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - You must call startVideoCapture{@link #ByteRTCEngine#startVideoCapture} to start SDK internal video capturing, and use SDK internal rendering before calling this API. - When you set the focus point at the center of the canvas, the focus point setting will be canceled. - The camera focus point setting will be invalid after calling stopVideoCapture{@link #ByteRTCEngine#stopVideoCapture} to stop internal capturing.
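Both the exposure position and the focus position take normalized coordinates in the range 0 to 1, with the upper-left corner of the canvas as the origin. Converting a tap point on a view of known size to that coordinate system is a simple division; the helper below is an illustrative sketch, not SDK code.

```python
def normalize_tap_point(tap_x: float, tap_y: float,
                        view_width: float, view_height: float):
    """Convert a tap position in view pixels to the normalized [0, 1]
    coordinates expected for exposure/focus positions (top-left origin).
    Values outside the view are clamped into range."""
    x = min(max(tap_x / view_width, 0.0), 1.0)
    y = min(max(tap_y / view_height, 0.0), 1.0)
    return (x, y)
```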
setCameraTorch(ByteRTCTorchState torchState) FutureOr<int>
@hidden(macOS) @detail api @brief Turns on/off the flash of the currently used camera. @param torchState
Fill light status. See ByteRTCTorchState{@link #ByteRTCTorchState} @return - 0: Success - -1: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details.
setCameraZoomRatio(float zoomRatio) FutureOr<int>
@hidden(macOS) @detail api @brief Sets the zoom magnification of the currently used camera (front/rear). @param zoomRatio Camera zoom ratio. 1.0 means the original image; the maximum settable value is obtained through the getCameraZoomMaxRatio{@link #ByteRTCEngine#getCameraZoomMaxRatio} method. @return - 0: Success. - -1: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - The camera zoom factor can only be set after calling startVideoCapture{@link #ByteRTCEngine#startVideoCapture} to start video capture with the SDK internal capture module. - The setting becomes invalid after calling stopVideoCapture{@link #ByteRTCEngine#stopVideoCapture} to stop internal capture. - Call setVideoDigitalZoomConfig:size:{@link #ByteRTCEngine#setVideoDigitalZoomConfig:size} to set digital zoom. Call setVideoDigitalZoomControl:{@link #ByteRTCEngine#setVideoDigitalZoomControl} to perform digital zoom.
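Since the valid zoom range runs from 1.0 (original image) up to the device maximum reported by getCameraZoomMaxRatio, a caller can clamp user input before passing it in. This is a hypothetical helper illustrating the documented range, not SDK code.

```python
def clamp_zoom_ratio(requested: float, max_ratio: float) -> float:
    """Keep the zoom ratio within the documented valid range:
    1.0 (original image) up to the device maximum."""
    return min(max(requested, 1.0), max_ratio)
```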
setCaptureVolume(int volume) FutureOr<int>
@detail api @author huangshouqin @brief Adjusts the audio capture volume. @param volume Ratio of capture volume to original volume, in the range [0, 400]. Unit: %
- 0: Mute - 100: Original volume. To ensure the audio quality, we recommend keeping the value within [0, 100]. - 400: Four times the original volume with signal-clipping protection. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note Call this API to set the volume of the audio capture before or during the audio capture.
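The volume parameter is a percentage of the original signal, so 0 mutes, 100 is unity, and 400 quadruples the level (with clipping protection). Assuming a linear mapping, the percentage corresponds to a gain factor as sketched below; this is an illustration of the documented scale, not SDK code.

```python
def capture_gain(volume_percent: int) -> float:
    """Map the documented volume percentage to a linear gain factor:
    0 -> mute, 100 -> original volume, 400 -> four times the original."""
    if not 0 <= volume_percent <= 400:
        raise ValueError("volume must be within [0, 400]")
    return volume_percent / 100.0
```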
setCellularEnhancement(ByteRTCMediaTypeEnhancementConfig config) FutureOr<int>
@hidden(macOS) @detail api @hiddensdk(audiosdk) @brief Enable cellular network assisted communication to improve call quality. @param config See ByteRTCMediaTypeEnhancementConfig{@link #ByteRTCMediaTypeEnhancementConfig}. @return Method call result:
- 0: Success. - -1: Failure, internal error. - -2: Failure, invalid parameters. @note The function is off by default.
setClientMixedStreamObserver(id<ByteRTCClientMixedStreamDelegate> observer) FutureOr<int>
setCustomizeEncryptHandler(id<ByteRTCEncryptHandler> handler) FutureOr<int>
@detail api @author wangjunlin.3182 @brief Sets custom encryption and decryption methods. @param handler Custom encryption handler, which needs to implement the encryption and decryption methods. See ByteRTCEncryptHandler{@link #ByteRTCEncryptHandler}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - This method and setEncryptInfo:key:{@link #ByteRTCEngine#setEncryptInfo:key} are mutually exclusive; whichever is called last takes effect. - This method must be called before calling joinRoom:userInfo:userVisibility:roomConfig:{@link #ByteRTCRoom#joinRoom:userInfo:userVisibility:roomConfig}. It can be called repeatedly, with the last set of parameters taking effect. - Whether encrypting or decrypting, the processed data must stay under 180% of the original length. That is, if the input data is 100 bytes, the processed data must be less than 180 bytes. If the encryption or decryption result exceeds the limit, the audio & video frame may be discarded. - Data encryption/decryption is performed serially, so depending on your implementation, this method may affect the final rendering efficiency. Evaluate carefully before using this method.
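A custom handler can verify its own output against the 180% size limit above before returning a frame; frames exceeding the limit may be discarded by the SDK. The check below is an illustrative sketch, not part of the SDK.

```python
def within_encrypt_limit(input_len: int, output_len: int) -> bool:
    """Return True when the processed frame stays under 180% of the
    original size, per the documented custom-encryption constraint."""
    return output_len < input_len * 1.8
```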
setDefaultAudioRoute(ByteRTCAudioRoute audioRoute) FutureOr<int>
@hidden(macOS) @detail api @author yezijian.me @brief Set the speaker or earpiece as the default audio playback device. @param audioRoute Audio playback device. Refer to ByteRTCAudioRoute{@link #ByteRTCAudioRoute}. You can only use earpiece and speakerphone. @return - 0: Success. - < 0: Failure. The call fails if the designated device is neither the speakerphone nor the earpiece. @note For the default audio route switching strategy of the RTC SDK, see Set the Audio Route.
setDummyCaptureImagePath(NSString filePath) FutureOr<int>
@hidden(macOS) @detail api @hiddensdk(audiosdk) @author liuyangyang @brief Set an alternative image when the local internal video capture is not enabled.
When you call stopVideoCapture, an alternative image will be pushed. You can set the path to null or open the camera to stop publishing the image.
You can repeatedly call this API to update the image. @param filePath Set the path of the static image.
You can use the absolute path (file://xxx). The maximum size for the path is 512 bytes.
You can upload a .JPG, .JPEG, .PNG, or .BMP file.
When the aspect ratio of the image is inconsistent with the video encoder configuration, the image will be proportionally resized, with the remaining pixels rendered black. The framerate and the bitrate are consistent with the video encoder configuration. @return - 0: Success. - -2: Failure. Ensure that the filePath is valid. - -12: This method is not available in the Audio SDK. @note - The API is only effective when publishing an internally captured video. - You cannot locally preview the image. - You can call this API before and after joining an RTC room. In the multi-room mode, the image can be only displayed in the room you publish the stream. - You cannot apply effects like filters and mirroring to the image, while you can watermark the image. - The image is not effective for a screen-sharing stream. - When you enable the simulcast mode, the image will be added to all video streams, and it will be proportionally scaled down to smaller encoding configurations.
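The proportional-resize rule above (scale the image to fit the encoder resolution and render the remaining pixels black) can be computed directly. The helper below is an illustrative sketch of that letterbox/pillarbox arithmetic, not SDK code.

```python
def fit_image_in_encoder(img_w: int, img_h: int, enc_w: int, enc_h: int):
    """Proportionally resize an alternative image to fit the encoder
    resolution; returns the fitted size and the leftover (black) padding
    as (pad_w, pad_h)."""
    scale = min(enc_w / img_w, enc_h / img_h)
    fit_w, fit_h = round(img_w * scale), round(img_h * scale)
    return fit_w, fit_h, (enc_w - fit_w, enc_h - fit_h)
```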
setEarMonitorMode(ByteRTCEarMonitorMode mode, ByteRTCEarMonitorAudioFilter filter) FutureOr<int>
@detail api @valid since 3.60. @brief Enables/Disables in-ear monitoring. @param mode Whether or not in-ear monitoring is enabled. See ByteRTCEarMonitorMode{@link #ByteRTCEarMonitorMode}. It defaults to off. @param filter Whether to include the local audio filters. See ByteRTCEarMonitorAudioFilter{@link #ByteRTCEarMonitorAudioFilter}. It defaults to no audio processing. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - In-ear monitoring is effective for audio captured by the RTC SDK. - We recommend that you use wired earbuds/headphones for a low-latency, high-resolution audio experience. - For iOS, you can only use SDK-level in-ear monitoring. - For macOS, ensure the users use earphones directly connected to the device by 3.5mm audio jack, USB, or Bluetooth. Earphones connected through an intermediary device cannot access the in-ear monitoring feature, for example, earphones connected to the device through a monitor via an HDMI or USB-C interface, or through an OTG sound card.
setEarMonitorVolume(NSInteger volume) FutureOr<int>
@detail api @author majun.lvhiei @brief Sets the in-ear monitoring volume. @param volume The monitoring volume with the adjustment range between 0% and 100%. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note Call setEarMonitorMode:{@link #ByteRTCEngine#setEarMonitorMode} before setting the volume.
setEncryptInfo(ByteRTCEncryptType encrypt_type, NSString key) FutureOr<int>
@detail api @author wangjunlin.3182 @brief Sets the built-in encryption method for transmission. @param encrypt_type Built-in encryption algorithm. See ByteRTCEncryptType{@link #ByteRTCEncryptType} @param key Encryption key, limited to 36 bits in length; anything beyond that is truncated @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - Use this method to enable built-in encryption for transmission; if you need custom transmission encryption, see onEncryptData:{@link #ByteRTCEncryptHandler#onEncryptData}. Built-in encryption and custom encryption are mutually exclusive; the method called last determines the transmission encryption scheme.
- This method must be called before entering the room, and can be called repeatedly, taking the last called parameter as the effective parameter.
setExtensionConfig(NSString groupId) FutureOr<int>
@hidden(macOS) @detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief Set Extension configuration. It should be set before capturing screen internally. @param groupId Your app and Extension should belong to the same App Group. You need to put in their Group ID here. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note You must call this API immediately after calling createRTCEngine:delegate:{@link #ByteRTCEngine#createRTCEngine:delegate}. You only need to call this API once in the life cycle of the engine instance.
setExternalVideoEncoderEventHandler(id<ByteRTCExternalVideoEncoderEventHandler> handler) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief Registers the custom encoded frame push event callback @param handler Custom encoded frame callback class. See ByteRTCExternalVideoEncoderEventHandler{@link #ByteRTCExternalVideoEncoderEventHandler} @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - This method needs to be called before entering the room. - Unregister the handler before the engine is destroyed by calling this method with the parameter set to nullptr.
setLocalProxy(NSArray<ByteRTCLocalProxyInfo> configurations) FutureOr<int>
@detail api @author keshixing.rtc @brief Sets local proxy. @param configurations Local proxy configurations. Refer to ByteRTCLocalProxyInfo{@link #ByteRTCLocalProxyInfo} for details.
You can set both Http tunnel and Socks5 as your local proxies, or only set one of them based on your needs. If you set both Http tunnel and Socks5 as your local proxies, then media traffic and signaling are routed through Socks5 proxy and Http requests through Http tunnel proxy. If you set either Http tunnel or Socks5 as your local proxy, then media traffic, signaling and Http requests are all routed through the proxy you chose.
If you want to remove the existing local proxy configurations, you can call this API with the parameter set to null. @note - You must call this API before joining the room. - After calling this API, you will receive rtcEngine:onLocalProxyStateChanged:withProxyState:withProxyError:{@link #ByteRTCEngineDelegate#rtcEngine:onLocalProxyStateChanged:withProxyState:withProxyError} callback that informs you of the states of local proxy connection.
setLocalSimulcastMode(ByteRTCVideoSimulcastMode mode, NSArray<ByteRTCVideoEncoderConfig> streamConfig) FutureOr<int>
@valid since 3.60. @detail api @brief Enables the Simulcast feature and configures the lower-quality video stream settings. @param mode Whether to publish lower-quality streams and how many of them to publish. See ByteRTCVideoSimulcastMode{@link #ByteRTCVideoSimulcastMode}. By default, it is set to Single, where the publisher sends the video in a single profile. In the other modes, the low-quality stream is set to a default resolution of 160px × 90px with a bitrate of 50Kbps. @param streamConfig The specification of the lower-quality streams. You can configure up to three low-quality streams for a video source. See ByteRTCVideoEncoderConfig{@link #ByteRTCVideoEncoderConfig}. The resolution of each lower-quality stream must be smaller than the standard stream set via setVideoEncoderConfig:withParameters:{@link #ByteRTCEngine#setVideoEncoderConfig:withParameters}. The specifications in the array must be arranged in ascending order based on resolution. @return - 0: Success. - < 0 : Fail. @note - The default specification of the video stream is 640px × 360px @15fps. - The method applies to the camera video only. - Refer to Simulcasting for more information.
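The constraints on streamConfig (at most three entries, each smaller than the main stream, arranged in ascending order of resolution) can be checked before calling the API. The sketch below uses pixel count as a proxy for "resolution", which is an assumption; it is an illustrative helper, not SDK code.

```python
def validate_simulcast_configs(configs, main):
    """Check the documented streamConfig constraints. Each entry and `main`
    is a (width, height) tuple; resolution is compared by pixel count
    (an assumption for illustration)."""
    if len(configs) > 3:
        return False
    pixels = [w * h for w, h in configs]
    # Every low-quality stream must be smaller than the main stream.
    if any(p >= main[0] * main[1] for p in pixels):
        return False
    # Entries must ascend by resolution.
    return pixels == sorted(pixels)
```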
setLocalVideoCanvas(ByteRTCVideoCanvas canvas) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author sunhang.io @brief Sets the view to be used for local video rendering and the rendering mode. @param canvas View information and rendering mode. See ByteRTCVideoCanvas{@link #ByteRTCVideoCanvas}. @return - 0: Success. - -2: Invalid parameter. - -12: This method is not available in the Audio SDK. @note - You should bind your stream to a view before joining the room. This setting will remain in effect after you leave the room. - If you need to unbind the local video stream from the current view, you can call this API and set the videoCanvas to null.
setLocalVideoMirrorType(ByteRTCMirrorType mirrorType) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangjunlin.3182 @brief Sets the mirror mode for the captured video stream. @param mirrorType Mirror type. See ByteRTCMirrorType{@link #ByteRTCMirrorType}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - Switching video streams does not affect the settings of the mirror type. - This API is not applicable to screen-sharing streams. - When using an external renderer, you can set mirrorType to 0 and 3, but you cannot set it to 1. - Before you call this API, the initial states of each video stream are as follows:
setLocalVideoSink(id<ByteRTCVideoSinkDelegate> videoSink, ByteRTCVideoSinkPixelFormat requiredFormat) FutureOr<int>
@detail api @hiddensdk(audiosdk) @deprecated since 3.57, use setLocalVideoSink:withLocalRenderConfig:{@link #ByteRTCEngine#setLocalVideoSink:withLocalRenderConfig} instead. @region Custom Video Capturing & Rendering @author sunhang.io @brief Binds the local video stream to a custom renderer. @param videoSink Custom video renderer. See ByteRTCVideoSinkDelegate{@link #ByteRTCVideoSinkDelegate}. @param requiredFormat Video frame encoding format that applies to custom rendering. See ByteRTCVideoSinkPixelFormat{@link #ByteRTCVideoSinkPixelFormat}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - RTC SDK uses its own renderer (internal renderer) for video rendering by default. - If you need to unbind the video stream from the custom renderer, you must set videoSink to null. The binding status will be cleared when you leave the room. - You should call this API before joining the room, and after receiving rtcEngine:onFirstLocalVideoFrameCaptured:withFrameInfo:{@link #ByteRTCEngineDelegate#rtcEngine:onFirstLocalVideoFrameCaptured:withFrameInfo} which reports that the first local video frame has been successfully captured. - This method gets video frames that have undergone preprocessing. If you need to obtain video frames from other positions, such as after capture, you should call setLocalVideoSink:withLocalRenderConfig:{@link #ByteRTCEngine#setLocalVideoSink:withLocalRenderConfig} instead.
setLocalVoiceEqualization(ByteRTCVoiceEqualizationConfig config) FutureOr<int>
@detail api @author wangjunzheng @brief Set the equalization effect for the local captured audio. The audio includes both internally captured audio and externally captured voice, but not the mixing audio file. @param config See ByteRTCVoiceEqualizationConfig{@link #ByteRTCVoiceEqualizationConfig}. @return - 0: Success. - < 0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note According to the Nyquist sampling theorem, the audio sampling rate must be greater than twice the set center frequency. Otherwise, the setting will not take effect.
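The Nyquist constraint above is easy to check before configuring a band: the sampling rate must exceed twice the band's center frequency for the setting to take effect. A minimal sketch, not SDK code:

```python
def eq_band_effective(sample_rate_hz: int, center_freq_hz: int) -> bool:
    """Per the Nyquist criterion stated in the docs, an equalizer band only
    takes effect when the sampling rate exceeds twice its center frequency."""
    return sample_rate_hz > 2 * center_freq_hz
```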
setLocalVoicePitch(NSInteger pitch) FutureOr<int>
@detail api @author wangjunzheng @brief Changes local voice to a different key, mostly used in Karaoke scenarios.
You can adjust the pitch of local voice such as ascending or descending with this method. @param pitch The value that is higher or lower than the original local voice within a range from -12 to 12. The default value is 0, i.e. no adjustment is made.
The difference in pitch between two adjacent values within the value range is a semitone, with positive values indicating an ascending tone and negative values indicating a descending tone, and the larger the absolute value set, the more the pitch is raised or lowered.
If the value is out of range, the setting fails and triggers the rtcEngine:onWarning:{@link #ByteRTCEngineDelegate#rtcEngine:onWarning} callback with the warning code WARNING_CODE_SET_SCREEN_STREAM_INVALID_VOICE_PITCH, indicating an invalid value. See ByteRTCWarningCode{@link #ByteRTCWarningCode}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details
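Since each step in the -12 to 12 range is one semitone, the corresponding frequency ratio follows twelve-tone equal temperament: 2^(n/12). The helper below illustrates that relationship; it is not SDK code.

```python
def pitch_ratio(semitones: int) -> float:
    """Frequency ratio for a pitch shift of n semitones in twelve-tone
    equal temperament: 2 ** (n / 12). Valid range per the docs: [-12, 12]."""
    if not -12 <= semitones <= 12:
        raise ValueError("pitch must be within [-12, 12]")
    return 2.0 ** (semitones / 12.0)
```

For example, +12 semitones doubles the frequency (one octave up) and -12 halves it.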
setLocalVoiceReverbParam(ByteRTCVoiceReverbConfig param) FutureOr<int>
@detail api @author wangjunzheng @brief Set the reverb effect for the local captured audio. The audio includes both internal captured audio and external captured voice, but not the mixing audio file. @param param See ByteRTCVoiceReverbConfig{@link #ByteRTCVoiceReverbConfig}. @return - 0: Success. - < 0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note Call enableLocalVoiceReverb:{@link #ByteRTCEngine#enableLocalVoiceReverb} to enable the reverb effect.
setLowLightAdjusted(ByteRTCVideoEnhancementMode mode) FutureOr<int>
@hidden(iOS) @valid since 3.57 @detail api @hiddensdk(audiosdk) @author zhoubohui @brief Sets the video lowlight enhancement mode.
It can significantly improve image quality in scenarios with insufficient light, contrast lighting, or backlit situations. @param mode It defaults to Disable. Refer to ByteRTCVideoEnhancementMode{@link #ByteRTCVideoEnhancementMode} for more details. @return - 0: Success. The setting takes effect immediately, but downloads and detection may take some time before the enhancement is visible. - < 0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - Turning on this mode will impact device performance. Activate this feature only when required and the device performance is adequate. - The functionality applies to videos captured by the internal module as well as those from custom capture sources.
setPlaybackVolume(NSInteger volume) FutureOr<int>
@detail api @author huangshouqin @brief Adjusts the playback volume of the mixed remote audio. You can call this API before or during the playback. @param volume Ratio(%) of playback volume to original volume, in the range [0, 400], with overflow protection.
To ensure the audio quality, we recommend setting the volume to 100.
- 0: mute - 100: original volume - 400: Four times the original volume with signal-clipping protection. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note Suppose a remote user A is always within the range of the target user whose playback volume will be adjusted, if you use both this method and setRemoteAudioPlaybackVolume:volume:{@link #ByteRTCEngine#setRemoteAudioPlaybackVolume:volume}/setRemoteRoomAudioPlaybackVolume:{@link #ByteRTCRoom#setRemoteRoomAudioPlaybackVolume}, the volume that the local user hears from user A is the overlay of both settings.
setPublishFallbackOption(ByteRTCPublishFallbackOption option) FutureOr<int>
@detail api @author panjian.fishing @brief Sets the fallback option for published audio & video streams.
You can call this API to set whether to automatically lower the resolution you set of the published streams under limited network conditions. @param option Fallback option, see ByteRTCPublishFallbackOption{@link #ByteRTCPublishFallbackOption}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - This API only works after you call setLocalSimulcastMode:{@link #ByteRTCEngine#setLocalSimulcastMode} to enable the mode of publishing multiple streams. - You must call this API before the user enters the room. - After calling this method, if there is a performance degradation or recovery due to poor performance or network conditions, the local end will receive early warnings through the rtcEngine:onPerformanceAlarms:info:mode:reason:sourceWantedData:{@link #ByteRTCEngineDelegate#rtcEngine:onPerformanceAlarms:info:mode:reason:sourceWantedData} callback to adjust the capture device. - After setting the fallback option, the user subscribed to the audio/video stream will receive rtcEngine:onSimulcastSubscribeFallback:info:event:{@link #ByteRTCEngineDelegate#rtcEngine:onSimulcastSubscribeFallback:info:event} when the audio/video stream published by the local user falls back or resumes from the fallback. - You can alternatively set fallback options in the console, which is of higher priority.
setRemoteAudioPlaybackVolume(NSString streamId, int volume) FutureOr<int>
@detail api @author huanghao @brief Sets the audio volume for playing the received remote stream. You must join the room before calling this API. The validity of the setting is not associated with the publishing status of the stream. @param streamId ID of the stream. @param volume The ratio of the playing volume to the original volume. The range is [0, 400] with overflow protection. The unit is %.
For better audio quality, we recommend setting the value within [0, 100]. @return result
- 0: Success. - <0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus}. @note Assume that a remote user A is always within the scope of the adjusted target users:
- When this API is used together with setRemoteRoomAudioPlaybackVolume:{@link #ByteRTCRoom#setRemoteRoomAudioPlaybackVolume}, the volume that the local user hears from user A is determined by the API called later; - When this API is used together with setPlaybackVolume:{@link #ByteRTCEngine#setPlaybackVolume}, the volume that the local user hears from user A is the superposition of the two settings. - If the remote user leaves the room, the volume setting for that user becomes invalid.
setRemoteUserPriority(ByteRTCRemoteUserPriority priority, NSString roomId, NSString uid) FutureOr<int>
@detail api @author panjian.fishing @brief Sets user priority @param priority Priority of the remote user. See enumeration type ByteRTCRemoteUserPriority{@link #ByteRTCRemoteUserPriority} @param roomId Room ID @param uid ID of the remote user @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - This method is used with setSubscribeFallbackOption:{@link #ByteRTCEngine#setSubscribeFallbackOption}. - If the subscribed-stream fallback option is enabled, the quality of streams received by high-priority users is prioritized under weak network conditions or insufficient performance. - You can call this method before or after entering the room to modify the remote user's priority.
setRemoteVideoCanvas(NSString streamId, ByteRTCVideoCanvas canvas) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author sunhang.io @brief Sets the view and rendering mode to use when rendering a video stream from a specified remote user uid.
If you need to unbind the video view, set canvas to empty. @param streamId ID of Remote stream. @param canvas View information and rendering mode. See ByteRTCVideoCanvas{@link #ByteRTCVideoCanvas}. Starting from version 3.56, you can set the rotation angle of the remote video rendering using renderRotation. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note When the local user leaves the room, the setting will be invalid. The remote user leaving the room does not affect the setting.
setRemoteVideoMirrorType(NSString streamId, ByteRTCRemoteMirrorType mirrorType) FutureOr<int>
@detail api @hiddensdk(audiosdk) @valid since 3.57 @region Video Management @brief When using internal rendering, enable mirroring for the remote stream. @param streamId ID of Remote stream. @param mirrorType The mirror type for the remote stream, see ByteRTCRemoteMirrorType{@link #ByteRTCRemoteMirrorType}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @order 3
setRemoteVideoSink(NSString streamId, id<ByteRTCVideoSinkDelegate> videoSink, ByteRTCVideoSinkPixelFormat requiredFormat) FutureOr<int>
@detail api @hiddensdk(audiosdk) @deprecated since 3.57, use setRemoteVideoSink:withSink:withRemoteRenderConfig:{@link #ByteRTCEngine#setRemoteVideoSink:withSink:withRemoteRenderConfig} instead. @region Custom Video Capturing & Rendering @author sunhang.io @brief Binds the remote video stream to a custom renderer. @param streamId ID of Remote stream. @param videoSink Custom video renderer. See ByteRTCVideoSinkDelegate{@link #ByteRTCVideoSinkDelegate}. @param requiredFormat Encoding format that applies to the custom renderer. See ByteRTCVideoSinkPixelFormat{@link #ByteRTCVideoSinkPixelFormat}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - RTC SDK uses its own renderer (internal renderer) for video rendering by default. - Joining or leaving the room will not affect the binding state. - This API can be called before and after entering the room. To call before entering the room, you need to get the remote stream information before joining the room; if you cannot get the remote stream information in advance, you can call the API after joining the room and getting the remote stream information through rtcRoom:onUserPublishStreamVideo:info:isPublish:{@link #ByteRTCRoomDelegate#rtcRoom:onUserPublishStreamVideo:info:isPublish}. - If you need to unbind the remote stream from the renderer, you must set videoSink to null. - This method gets video frames that have undergone preprocessing.
setRemoteVideoSuperResolution(NSString streamId, ByteRTCVideoSuperResolutionMode mode) FutureOr<int>
@hidden not available @detail api @hiddensdk(audiosdk) @author yinkaisheng @brief Sets the super resolution mode for remote video stream. @param streamId ID of Remote stream. @param mode Super resolution mode. See ByteRTCVideoSuperResolutionMode{@link #ByteRTCVideoSuperResolutionMode}. @return
- 0: ByteRTCReturnStatusSuccess. It does not indicate the actual status of the super resolution mode, you should refer to rtcEngine:onRemoteVideoSuperResolutionModeChanged:info:withMode:withReason:{@link #ByteRTCEngineDelegate#rtcEngine:onRemoteVideoSuperResolutionModeChanged:info:withMode:withReason} callback. - -1: ByteRTCReturnStatusNativeInValid. Native library is not loaded. - -2: ByteRTCReturnStatusParameterErr. Invalid parameter. - -9: ByteRTCReturnStatusScreenNotSupport. Failure. Screen stream is not supported. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more return value indications. @note - Call this API after joining room. - The original resolution of the remote video stream should not exceed 640 × 360 pixels. - You can only turn on super-resolution mode for one stream.
setRuntimeParameters(NSDictionary parameters) FutureOr<int>
@detail api @author panjian.fishing @brief Sets runtime parameters @param parameters Reserved parameters @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note Call this API before joinRoom:userInfo:userVisibility:roomConfig:{@link #ByteRTCRoom#joinRoom:userInfo:userVisibility:roomConfig} and startAudioCapture{@link #ByteRTCEngine#startAudioCapture}.
setScreenAudioChannel(ByteRTCAudioChannel channel) FutureOr<int>
@hidden(iOS) @detail api @author zhangcaining @brief Set the audio channel of the screen-sharing audio stream @param channel The number of audio channels. See ByteRTCAudioChannel{@link #ByteRTCAudioChannel}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note When you call setScreenAudioStreamIndex: to mix the microphone audio stream and the screen-sharing audio stream, the audio channel is set by setAudioProfile:{@link #ByteRTCEngine#setAudioProfile} rather than this API.
setScreenAudioSourceType(ByteRTCAudioSourceType sourceType) FutureOr<int>
@detail api @author liyi.000 @brief Sets the screen audio source type. (internal capture/custom capture) @param sourceType Screen audio source type. See ByteRTCAudioSourceType{@link #ByteRTCAudioSourceType}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - The default screen audio source type is RTC SDK internal capture. - You should call this API before calling publishScreenAudio:. Otherwise, you will receive rtcEngine:onWarning:{@link #ByteRTCEngineDelegate#rtcEngine:onWarning} with 'ByteRTCWarningSetScreenAudioSourceTypeFailed'. - When using internal capture, you need to restart screen capture. - When using custom capture, you need to call pushScreenAudioFrame:{@link #ByteRTCEngine#pushScreenAudioFrame} to push the audio stream to the RTC SDK. - Whether you use internal capture or custom capture, you must call publishScreenAudio: to publish the captured screen audio stream.
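A sketch of the custom screen-audio flow in the notes above; the publish and push calls are shown as comments because they live on other objects or have signatures not listed in this section:

```dart
// Sketch: switch screen audio to custom capture *before* publishing,
// otherwise ByteRTCWarningSetScreenAudioSourceTypeFailed is raised.
// `customSource` is assumed to be the custom-capture member of
// ByteRTCAudioSourceType.
Future<void> useCustomScreenAudio(
    ByteRTCEngine engine, ByteRTCAudioSourceType customSource) async {
  final ret = await engine.setScreenAudioSourceType(customSource);
  if (ret != 0) return;
  // Then, per the notes above:
  //   1. publishScreenAudio:              (publish the screen audio stream)
  //   2. engine.pushScreenAudioFrame(...) repeatedly to feed audio data
}
```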
setScreenCaptureVolume(int volume) FutureOr<int>
@detail api @author wangjunzheng @brief Adjusts the volume of audio captured during screen sharing.
This method only changes the volume of the audio data and does not affect the hardware volume of the local device. @param volume Ratio (%) of the capture volume to the original volume, in the range [0, 400], with overflow protection.
To ensure better call quality, we recommend setting the volume within [0, 100].
+ 0: Mute + 100: Original volume + 400: Four times the original volume with signal-clipping protection @return + 0: Success.
+ <0: Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note You can call this method to set the capture volume before or after enabling screen audio capture. Unlike setCaptureVolume:{@link #ByteRTCEngine#setCaptureVolume}, this method only adjusts the audio capture volume during screen sharing.
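For instance, a minimal sketch of the volume scale, assuming an initialized `engine`:

```dart
// Sketch: the volume argument is a percentage of the original volume,
// clamped by the SDK to [0, 400].
Future<void> adjustScreenShareVolume(ByteRTCEngine engine) async {
  await engine.setScreenCaptureVolume(50);  // half the original volume
  await engine.setScreenCaptureVolume(100); // back to the original volume
  await engine.setScreenCaptureVolume(0);   // mute the screen-share audio
}
```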
setServerParams(NSString signature, NSString url) FutureOr<int>
@detail api @author hanchenchen.c @brief Sets application server parameters
Before the client calls sendServerMessage:{@link #ByteRTCEngine#sendServerMessage} or sendServerBinaryMessage:{@link #ByteRTCEngine#sendServerBinaryMessage} to send a message to the application server, you must set a valid signature and application server address. @param signature Dynamic signature. The App server may use the signature to verify the source of messages.
You need to define the signature yourself. It can be any non-empty string. It is recommended to encode information such as UID into the signature.
The signature will be sent to the address set through the "url" parameter in the form of a POST request. @param url Address of the application server @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - You must call login:uid:{@link #ByteRTCEngine#login:uid} to log in before calling this interface. - After calling this interface, the SDK will use rtcEngine:onServerParamsSetResult:{@link #ByteRTCEngineDelegate#rtcEngine:onServerParamsSetResult} to return the corresponding result.
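A sketch of the required ordering (log in first, then set the server parameters); the login call is shown as a comment since its Dart signature is not listed in this section:

```dart
// Sketch: configure the application server before sending server messages.
Future<void> prepareServerMessaging(
    ByteRTCEngine engine, NSString signature, NSString url) async {
  // 1. Log in first via login:uid: (required before this call).
  // 2. Then set the signature and server address:
  final ret = await engine.setServerParams(signature, url);
  if (ret != 0) {
    print('setServerParams failed: $ret');
  }
  // 3. The outcome arrives on the delegate via
  //    rtcEngine:onServerParamsSetResult:.
}
```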
setSubscribeFallbackOption(ByteRTCSubscribeFallbackOption option) FutureOr<int>
@detail api @author panjian.fishing @brief Sets the fallback option for subscribed RTC streams.
You can call this API to set whether to lower the resolution of the currently subscribed stream under limited network conditions. @param option Fallback option, see ByteRTCSubscribeFallbackOption{@link #ByteRTCSubscribeFallbackOption} for more details. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - You must call this API before entering the room. - When the fallback option is set, the local user will receive rtcEngine:onSimulcastSubscribeFallback:info:event:{@link #ByteRTCEngineDelegate#rtcEngine:onSimulcastSubscribeFallback:info:event} and rtcEngine:onRemoteVideoSizeChanged:info:withFrameInfo:{@link #ByteRTCEngineDelegate#rtcEngine:onRemoteVideoSizeChanged:info:withFrameInfo} when the subscribed audio/video stream falls back or resumes from a fallback. - You can alternatively set fallback options in the console, which is of higher priority.
setVideoCaptureConfig(ByteRTCVideoCaptureConfig captureConfig) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author liuyangyang @brief Sets the video capture parameters for internal capture of the RTC SDK.
If your project uses the SDK internal capture module, you can specify the video capture parameters including preference, resolution and frame rate through this interface. @param captureConfig Video capture parameters. See: ByteRTCVideoCaptureConfig{@link #ByteRTCVideoCaptureConfig}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note
setVideoCaptureRotation(ByteRTCVideoRotation rotation) FutureOr<int>
@detail api @hiddensdk(audiosdk) @brief Set the rotation of the video images captured from the local device.
Call this API to rotate the videos when the camera is fixed upside down or tilted. For rotating videos on a phone, we recommend using setVideoRotationMode:{@link #ByteRTCEngine#setVideoRotationMode}. @param rotation It defaults to ByteRTCVideoRotation0, which means not to rotate. Refer to ByteRTCVideoRotation{@link #ByteRTCVideoRotation}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - For the videos captured by the internal module, the rotation will be combined with that set by calling setVideoRotationMode:{@link #ByteRTCEngine#setVideoRotationMode}. - This API also affects external-sourced videos. The final rotation is the original rotation angle plus the rotation set by calling this API. - The elements added during the video pre-processing stage, such as video stickers and backgrounds applied using enableVirtualBackground{@link #ByteRTCVideoEffect#enableVirtualBackground:withSource}, will also be rotated by this API. - The rotation is applied both to locally rendered videos and to those sent out. However, if you need to rotate a video intended for pushing to CDN individually, use setVideoOrientation:{@link #ByteRTCEngine#setVideoOrientation}.
setVideoDecoderConfig(NSString streamId, ByteRTCVideoDecoderConfig config) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief Sets the decoding method of the remote video data before subscribing to the remote video stream. @param streamId ID of the remote stream. @param config Video decoding method. See ByteRTCVideoDecoderConfig{@link #ByteRTCVideoDecoderConfig}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - When you want to custom decode a remote stream, you need to call registerRemoteEncodedVideoFrameObserver:{@link #ByteRTCEngine#registerRemoteEncodedVideoFrameObserver} to register the remote encoded video frame observer, and then call this interface to set the decoding method to custom decoding. The observed video data is called back through onRemoteEncodedVideoFrame:info:withEncodedVideoFrame:{@link #ByteRTCRemoteEncodedVideoFrameObserver#onRemoteEncodedVideoFrame:info:withEncodedVideoFrame}. - Since version 3.56, for automatic subscription, you can set streamId to a specific value (if there is corresponding logic). In this case, the decoding settings set by calling this API apply to all remote main streams or screen sharing streams based on the relevant logic of streamId.
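The custom-decoding setup described in the note can be sketched as follows; the observer registration is shown as a comment because its Dart signature is not listed here, and `customConfig` is assumed to select custom decoding:

```dart
// Sketch: register the encoded-frame observer first, then switch the
// target remote stream to custom decoding.
Future<void> enableCustomDecoding(ByteRTCEngine engine, NSString streamId,
    ByteRTCVideoDecoderConfig customConfig) async {
  // 1. registerRemoteEncodedVideoFrameObserver: -- frames are then
  //    delivered via onRemoteEncodedVideoFrame:info:withEncodedVideoFrame:.
  // 2. Set the decoding method for the stream:
  final ret = await engine.setVideoDecoderConfig(streamId, customConfig);
  if (ret != 0) {
    print('setVideoDecoderConfig failed: $ret');
  }
}
```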
setVideoDenoiser(ByteRTCVideoDenoiseMode mode) FutureOr<int>
@hidden not available @detail api @hiddensdk(audiosdk) @author Yujianli @brief Sets the video noise reduction mode. @param mode Video noise reduction mode. Refer to ByteRTCVideoDenoiseMode{@link #ByteRTCVideoDenoiseMode} for more details. @return - 0: Success. Please refer to rtcEngine:onVideoDenoiseModeChanged:withReason:{@link #ByteRTCEngineDelegate#rtcEngine:onVideoDenoiseModeChanged:withReason} callback for the actual state of video noise reduction mode. - < 0: Failure.
setVideoDigitalZoomConfig(ByteRTCZoomConfigType type, float size) FutureOr<int>
@valid since 3.51 @detail api @hiddensdk(audiosdk) @author likai.666 @brief Sets the step size for each digital zoom control of the local videos. @param type Required. Identifies which type the size refers to. Refer to ByteRTCZoomConfigType{@link #ByteRTCZoomConfigType}. @param size Required. Reserved to three decimal places. It defaults to 0.
The meaning and range vary with the type. If the scale or moving distance exceeds the range, the limit is taken as the result.
- ByteRTCZoomConfigTypeFocusOffset: Increment or decrement to the scaling factor. Range: [0, 7]. For example, when it is set to 0.5 and setVideoDigitalZoomControl:{@link #ByteRTCEngine#setVideoDigitalZoomControl} is called to zoom in, the scale will increase by 0.5. The scale ranges within [1, 8] and defaults to 1, which means the original size. - ByteRTCZoomConfigTypeMoveOffset: Ratio of the distance to the border of video images. It ranges within [0, 0.5] and defaults to 0, which means no offset. When you call setVideoDigitalZoomControl:{@link #ByteRTCEngine#setVideoDigitalZoomControl} and choose CAMERA_MOVE_LEFT, the moving distance is size x original width. For CAMERA_MOVE_UP, the moving distance is size x original height. Suppose that a video spans 1080 px and the size is set to 0.5, so that the distance would be 0.5 x 1080 px = 540 px. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - Only one size can be set per call. You must call this API repeatedly if you intend to set multiple sizes. - As the default size is 0, you must call this API before performing any digital zoom control by calling setVideoDigitalZoomControl:{@link #ByteRTCEngine#setVideoDigitalZoomControl} or startVideoDigitalZoomControl:{@link #ByteRTCEngine#startVideoDigitalZoomControl}.
setVideoDigitalZoomControl(ByteRTCZoomDirectionType direction) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author likai.666 @brief Digitally zooms or moves the local video image once. This action affects both the local video preview and the published stream. @param direction Action of the digital zoom control. Refer to ByteRTCZoomDirectionType{@link #ByteRTCZoomDirectionType}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - As the default offset is 0, you must call setVideoDigitalZoomConfig:size:{@link #ByteRTCEngine#setVideoDigitalZoomConfig:size} before this API. - You can only move video images after they are magnified via this API or startVideoDigitalZoomControl:{@link #ByteRTCEngine#startVideoDigitalZoomControl}. - When you request an out-of-range scale or movement, the SDK executes it within the limits, for example, when the image has already been moved to the border or magnified to 8x. - Call startVideoDigitalZoomControl:{@link #ByteRTCEngine#startVideoDigitalZoomControl} for continuous and repeated digital zoom control. - Mobile devices can control the optical zoom of the camera, see setCameraZoomRatio:.
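The config-then-control order can be sketched as below; the enum members are passed in by the caller because their exact Dart names are not shown in this reference:

```dart
// Sketch: step sizes default to 0, so configure them before issuing
// any zoom or move action.
Future<void> zoomInThenPan(
    ByteRTCEngine engine,
    ByteRTCZoomConfigType focusOffset, // ByteRTCZoomConfigTypeFocusOffset
    ByteRTCZoomConfigType moveOffset,  // ByteRTCZoomConfigTypeMoveOffset
    ByteRTCZoomDirectionType zoomIn,
    ByteRTCZoomDirectionType moveLeft) async {
  await engine.setVideoDigitalZoomConfig(focusOffset, 0.5); // +0.5 per zoom
  await engine.setVideoDigitalZoomConfig(moveOffset, 0.2);  // 20% per move
  await engine.setVideoDigitalZoomControl(zoomIn);   // scale 1.0 -> 1.5
  await engine.setVideoDigitalZoomControl(moveLeft); // move 0.2 x width
}
```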
setVideoEncoderConfig(ByteRTCVideoEncoderConfig encoderConfig, NSDictionary parameters) FutureOr<int>
@hidden currently not available
setVideoOrientation(ByteRTCVideoOrientation orientation) FutureOr<int>
@hidden(macOS) @detail api @hiddensdk(audiosdk) @author wangjunlin.3182 @brief Sets the orientation of the video frame before custom video processing and encoding. The default value is Adaptive.
You should set the orientation to Portrait when using video effects or custom processing.
You should set the orientation to Portrait or Landscape when pushing a single stream to the CDN. @param orientation Orientation of the video frame. See ByteRTCVideoOrientation{@link #ByteRTCVideoOrientation}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - The orientation setting is only applicable to internally captured video sources. For custom captured video sources, setting the video frame orientation may result in errors, such as swapped width and height. Screen sources do not support video frame orientation settings. - We recommend setting the orientation before joining the room. The updates of encoding configurations and the orientation are asynchronous, and can therefore cause a brief malfunction in the preview if you change the orientation after joining the room.
setVideoRotationMode(ByteRTCVideoRotationMode rotationMode) FutureOr<int>
@hidden(macOS) @detail api @hiddensdk(audiosdk) @brief Sets the orientation of the video capture. By default, the App direction is used as the orientation reference.
During rendering, the receiving client rotates the video in the same way as the sending client does. @param rotationMode Rotation reference, which can be the orientation of the App or gravity. Refer to ByteRTCVideoRotationMode{@link #ByteRTCVideoRotationMode} for details. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - The orientation setting is effective for internal video capture only. That is, it is not effective for a custom video source or the screen-sharing stream. - If the video capture is on, the setting will be effective once you call this API. If the video capture is off, the setting will take effect when capture starts.
setVideoSourceType(ByteRTCVideoSourceType type) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author liuyangyang @brief Sets the video source, including screen recordings.
The internal video capture is the default, which refers to capturing video using the built-in module. @param type Video source type. Refer to ByteRTCVideoSourceType{@link #ByteRTCVideoSourceType} for more details. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - You can call this API whether the user is in a room or not. - Calling this API to switch to the custom video source will stop the enabled internal video capture. - To switch to internal video capture, call this API to stop custom capture and then call startVideoCapture{@link #ByteRTCEngine#startVideoCapture} to enable internal video capture. - To push custom encoded video frames to the SDK, call this API to switch ByteRTCVideoSourceType to ByteRTCVideoSourceTypeEncodedManualSimulcast or ByteRTCVideoSourceTypeEncodedAutoSimulcast.
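Switching back to internal capture, per the notes above, can be sketched as:

```dart
// Sketch: switching the source type stops any enabled custom capture;
// internal capture must then be started explicitly. `internalSource` is
// assumed to be the internal-capture member of ByteRTCVideoSourceType.
Future<void> switchToInternalCapture(
    ByteRTCEngine engine, ByteRTCVideoSourceType internalSource) async {
  await engine.setVideoSourceType(internalSource);
  await engine.startVideoCapture();
}
```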
setVideoWatermark(NSString imagePath, ByteRTCVideoWatermarkConfig rtcWatermarkConfig) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author zhushufan.ref @brief Adds watermark to designated video stream. @param imagePath The absolute path of the watermark image. The path should be less than 512 bytes.
The watermark image should be in PNG or JPG format. @param rtcWatermarkConfig Watermark configurations. See ByteRTCVideoWatermarkConfig{@link #ByteRTCVideoWatermarkConfig}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - Call clearVideoWatermark{@link #ByteRTCEngine#clearVideoWatermark} to remove the watermark on the designated video stream. - You can only add one watermark to one video stream. The newly added watermark replaces the previous one. You can call this API multiple times to add watermarks to different streams. - You can call this API before and after joining room. - If you mirror the preview, or the preview and the published stream, the watermark will also be mirrored locally, but the published watermark will not be mirrored. - When you enable simulcast mode, the watermark will be added to all video streams, and it will scale down to smaller encoding configurations accordingly.
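A sketch of adding and later removing a watermark, with the image path and config supplied by the caller:

```dart
// Sketch: one watermark per stream; a new call replaces the old one.
Future<void> toggleWatermark(ByteRTCEngine engine, NSString imagePath,
    ByteRTCVideoWatermarkConfig config) async {
  final ret = await engine.setVideoWatermark(imagePath, config);
  if (ret != 0) return;
  // ... later, remove the watermark from the same stream:
  await engine.clearVideoWatermark();
}
```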
setVoiceChangerType(ByteRTCVoiceChangerType voiceChanger) FutureOr<int>
@valid since 3.32 @detail api @author luomingkang @brief Sets the type of voice change effect. @param voiceChanger The sound change effect type. See ByteRTCVoiceChangerType{@link #ByteRTCVoiceChangerType}. @return API call result:
- 0: Success. - <0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for specific reasons. @note - To use this feature, you need to integrate the SAMI dynamic library. See Integrate Plugins on Demand. - You can call it before and after entering the room. - Effective for both internal and external audio source. - Only valid for mono-channel audio. - Mutually exclusive with setVoiceReverbType:{@link #ByteRTCEngine#setVoiceReverbType}, and the effects set later will override the effects set first.
setVoiceReverbType(ByteRTCVoiceReverbType voiceReverb) FutureOr<int>
@valid since 3.32 @detail api @author wangjunzheng @brief Sets the reverb effect type @param voiceReverb Reverb effect type. See ByteRTCVoiceReverbType{@link #ByteRTCVoiceReverbType}. @return API call result:
- 0: Success. - <0: Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for specific reasons. @note - You can call it before and after entering the room. - Effective for both internal and external audio source. - Only valid for mono-channel audio. - Mutually exclusive with setVoiceChangerType:{@link #ByteRTCEngine#setVoiceChangerType}, and the effects set later will override the effects set first.
startAudioCapture() FutureOr<int>
@detail api @author dixing @brief Starts internal audio capturing. The default is off.
Internal capture refers to audio capture using the built-in capture module of the SDK.
After this API is called, the local user will receive rtcEngine:onAudioDeviceStateChanged:device_type:device_state:device_error:{@link #ByteRTCEngineDelegate#rtcEngine:onAudioDeviceStateChanged:device_type:device_state:device_error}.
If this API is called by a visible user, the other users in the room will receive rtcEngine:onUserStartAudioCapture:info:{@link #ByteRTCEngineDelegate#rtcEngine:onUserStartAudioCapture:info}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - Calling this API without obtaining permission to use the microphone of the current device will trigger rtcEngine:onWarning:{@link #ByteRTCEngineDelegate#rtcEngine:onWarning}. - Call stopAudioCapture{@link #ByteRTCEngine#stopAudioCapture} to stop the internal audio capture. Otherwise, the internal audio capture will continue until you destroy the engine instance. - To mute and unmute microphones, we recommend using publishStreamAudio:{@link #ByteRTCRoom#publishStreamAudio} rather than stopAudioCapture{@link #ByteRTCEngine#stopAudioCapture} and this API, because starting and stopping the capture device often takes time waiting for the device to respond, which may lead to a brief silence during the communication. - Once you create the engine instance, you can start internal audio capture regardless of the audio publishing state. The audio stream will start publishing only after the audio capture starts. - To switch from custom to internal audio capture, stop publishing before disabling the custom audio capture module by calling setAudioSourceType:{@link #ByteRTCEngine#setAudioSourceType}, and then call this API to enable the internal audio capture.
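The capture lifecycle above can be sketched as follows (for mute/unmute, prefer publish control as the note recommends):

```dart
// Sketch: start internal audio capture, then stop it explicitly;
// otherwise capture continues until the engine instance is destroyed.
Future<void> audioCaptureLifecycle(ByteRTCEngine engine) async {
  final ret = await engine.startAudioCapture();
  if (ret != 0) {
    // e.g. missing microphone permission; rtcEngine:onWarning: also fires.
    return;
  }
  // ... during the session, mute/unmute via publishStreamAudio: on the
  // room object rather than toggling the capture device ...
  await engine.stopAudioCapture();
}
```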
startAudioRecording(ByteRTCAudioRecordingConfig recordingConfig) FutureOr<int>
@detail api @author huangshouqin @brief Starts recording the audio communication and generates a local file.
If you call this API before or after joining the room without internal audio capture enabled, the recording task still starts, but no data is recorded in the local file. Data is recorded in the local file only after you call startAudioCapture{@link #ByteRTCEngine#startAudioCapture} to enable internal audio capture. @param recordingConfig See ByteRTCAudioRecordingConfig{@link #ByteRTCAudioRecordingConfig}. @return - 0: Success - -2: Invalid parameters - -3: Not valid in this SDK. Please contact the technical support. @note - The recorded file includes all audio effects; mixed audio files are not included. - Call stopAudioRecording{@link #ByteRTCEngine#stopAudioRecording} to stop recording. - You can call this API before and after joining the room. If this API is called before you join the room, you need to call stopAudioRecording{@link #ByteRTCEngine#stopAudioRecording} to stop recording. If this API is called after you join the room, the recording task ends automatically. If you join multiple rooms, audio from all rooms is recorded in one file. - After calling the API, you'll receive rtcEngine:onAudioRecordingStateUpdate:error_code:{@link #ByteRTCEngineDelegate#rtcEngine:onAudioRecordingStateUpdate:error_code}.
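As a sketch of the recording flow (the config object comes from the caller; state updates arrive on the delegate):

```dart
// Sketch: audio data is only written to the file while internal
// capture is running, so start capture first.
Future<void> recordCall(
    ByteRTCEngine engine, ByteRTCAudioRecordingConfig config) async {
  await engine.startAudioCapture();
  final ret = await engine.startAudioRecording(config);
  if (ret == -2) {
    print('invalid recording parameters');
    return;
  }
  // ... later:
  await engine.stopAudioRecording();
}
```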
startChorusCacheSync(ByteRTCChorusCacheSyncConfig config, id<ByteRTCChorusCacheSyncObserver> observer) FutureOr<int>
@hidden internal use only @detail api @hiddensdk(audiosdk) @brief Starts aligning RTC data by cache. Received RTC data from different sources will be cached and aligned based on the included timestamps. This feature compromises the real-time nature of RTC data consumption. @param config See ByteRTCChorusCacheSyncConfig{@link #ByteRTCChorusCacheSyncConfig}. @param observer Event and data observer. See ByteRTCChorusCacheSyncObserver{@link #ByteRTCChorusCacheSyncObserver}. @return See ByteRTCReturnStatus{@link #ByteRTCReturnStatus}. @note To disable the feature, call stopChorusCacheSync{@link #ByteRTCEngine#stopChorusCacheSync}.
startClientMixedStream(NSString taskId, ByteRTCMixedStreamConfig config, ByteRTCClientMixedStreamConfig extraConfig) FutureOr<int>
@hidden for internal use only @hiddensdk(audiosdk) @detail api
startCloudProxy(NSArray<ByteRTCCloudProxyInfo> cloudProxiesInfo) FutureOr<int>
@detail api @author daining.nemo @brief Starts the cloud proxy @param cloudProxiesInfo Cloud proxy information list. See ByteRTCCloudProxyInfo{@link #ByteRTCCloudProxyInfo}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - Call this API before joining the room. - Start pre-call network detection after starting the cloud proxy. - After the cloud proxy starts and connects to the cloud proxy server successfully, you will receive rtcEngine:onCloudProxyConnected:{@link #ByteRTCEngineDelegate#rtcEngine:onCloudProxyConnected}. - To stop the cloud proxy, call stopCloudProxy{@link #ByteRTCEngine#stopCloudProxy}.
startEchoTest(ByteRTCEchoTestConfig echoConfig, NSInteger delayTime) FutureOr<int>
@detail api @author qipengxiang @brief Starts a call test.
Before entering the room, you can call this API to test whether your local audio/video equipment as well as the upstream and downstream networks are working correctly.
Once the test starts, the SDK will record your sound or video. If you receive the playback within the delay range you set, the test is considered normal. @param echoConfig Test configurations, see ByteRTCEchoTestConfig{@link #ByteRTCEchoTestConfig}. @param delayTime Delayed audio/video playback time specifying how long you expect to receive the playback after starting the test. The range of the value is [2, 10] in seconds and the default value is 2. @return API call result:
startFileRecording(ByteRTCRecordingConfig recordingConfig, ByteRTCRecordingType recordingType) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief This method records the audio & video data during the call to a local file. @param recordingConfig Local recording parameter configuration. See ByteRTCRecordingConfig{@link #ByteRTCRecordingConfig} @param recordingType Locally recorded media type, see ByteRTCRecordingType{@link #ByteRTCRecordingType} @return - 0: normal - -1: Parameter setting exception - -2: The current version of the SDK does not support this feature, please contact technical support @note - You must join a room before calling this method. - After this API is called, the local user will receive rtcEngine:onRecordingStateUpdate:state:error_code:recording_info:{@link #ByteRTCEngineDelegate#rtcEngine:onRecordingStateUpdate:state:error_code:recording_info} callback. - If the recording is normal, the system will report the recording progress through rtcEngine:onRecordingProgressUpdate:process:recording_info:{@link #ByteRTCEngineDelegate#rtcEngine:onRecordingProgressUpdate:process:recording_info} callback every second.
startHardwareEchoDetection(NSString testAudioFilePath) FutureOr<int>
@detail api @brief Starts echo detection before joining a room. @param testAudioFilePath Absolute path of the music file for the detection, encoded in UTF-8. The following file formats are supported: mp3, aac, m4a, 3gp, wav.
We recommend a music file whose duration is between 10 and 20 seconds.
Do not pass a silent file. @return Method call result:
- 0: Success. - -1: Failure due to the ongoing process of the previous detection. Call stopHardwareEchoDetection{@link #ByteRTCEngine#stopHardwareEchoDetection} to stop it before calling this API again. - -2: Failure due to an invalid file path or file format. @note - You can use this feature only when ByteRTCRoomProfile{@link #ByteRTCRoomProfile} is set to ByteRTCRoomProfileMeeting or ByteRTCRoomProfileMeetingRoom. - Before calling this API, ask the user for the permissions to access the local audio devices. - Before calling this API, make sure the audio devices are activated and keep the capture volume and the playback volume within a reasonable range. - The detection result is passed as the argument of rtcEngine:onHardwareEchoDetectionResult:{@link #ByteRTCEngineDelegate#rtcEngine:onHardwareEchoDetectionResult}. - During the detection, the SDK is unable to respond to the other testing APIs, such as startEchoTest:playDelay:{@link #ByteRTCEngine#startEchoTest:playDelay}, startAudioDeviceRecordTest:{@link #ByteRTCAudioDeviceManager#startAudioDeviceRecordTest} or startAudioPlaybackDeviceTest:interval:{@link #ByteRTCAudioDeviceManager#startAudioPlaybackDeviceTest:interval}. - Call stopHardwareEchoDetection{@link #ByteRTCEngine#stopHardwareEchoDetection} to stop the detection and release the audio devices.
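A sketch of the pre-join detection flow; the music file path is supplied by the caller:

```dart
// Sketch: run hardware echo detection before joining the room; the
// result is delivered via rtcEngine:onHardwareEchoDetectionResult:.
Future<void> detectEcho(ByteRTCEngine engine, NSString musicFilePath) async {
  final ret = await engine.startHardwareEchoDetection(musicFilePath);
  if (ret == -1) {
    // A previous detection is still running; stop it and retry later.
    await engine.stopHardwareEchoDetection();
  } else if (ret == -2) {
    print('invalid file path or format');
  }
  // After receiving the result callback, release the audio devices:
  // await engine.stopHardwareEchoDetection();
}
```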
startNetworkDetection(bool isTestUplink, int expectedUplinkBitrate, bool isTestDownlink, int expectedDownlinkBitrate) FutureOr<int>
@detail api @author hanchenchen.c @brief Pre-call network detection @param isTestUplink Whether to detect uplink bandwidth @param expectedUplinkBitrate Expected uplink bandwidth, unit: kbps
Range: {0, [100, 10000]}. 0 means automatic, i.e., RTC sets the highest bit rate. @param isTestDownlink Whether to detect downlink bandwidth @param expectedDownlinkBitrate Expected downlink bandwidth, unit: kbps
Range: {0, [100, 10000]}. 0 means automatic, i.e., RTC sets the highest bit rate. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - After calling this interface, you will receive rtcEngine:onNetworkDetectionResult:quality:rtt:lostRate:bitrate:jitter:{@link #ByteRTCEngineDelegate#rtcEngine:onNetworkDetectionResult:quality:rtt:lostRate:bitrate:jitter} within 3 s and every 2 s thereafter, notifying the detection result; - If the detection stops, you will receive rtcEngine:onNetworkDetectionStopped:{@link #ByteRTCEngineDelegate#rtcEngine:onNetworkDetectionStopped} notifying that the detection has stopped.
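For example, probing both directions with an expected 1000 kbps each way (a sketch; results arrive on the delegate callbacks named above):

```dart
// Sketch: start a pre-call probe of uplink and downlink bandwidth.
Future<void> probeNetwork(ByteRTCEngine engine) async {
  final ret = await engine.startNetworkDetection(
      true, 1000, // detect uplink, expecting 1000 kbps
      true, 1000); // detect downlink, expecting 1000 kbps
  if (ret != 0) {
    print('startNetworkDetection failed: $ret');
  }
}
```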
startPushMixedStream(NSString taskId, ByteRTCMixedStreamPushTargetConfig pushTargetConfig, ByteRTCMixedStreamConfig config) FutureOr<int>
@valid since 3.60. Since version 3.60, this interface replaces the startPushMixedStreamToCDN:mixedConfig:observer: and startPushPublicStream:withLayout: methods for the functions described below. If you have upgraded to version 3.60 or later and are still using these two methods, please migrate to this interface. @detail api @hiddensdk(audiosdk) @author lizheng @brief Specifies the streams to be mixed and initiates the task to push the mixed stream to CDN or WTN. @param taskId Task ID. The length should not exceed 126 bytes.
You may want to push more than one mixed stream to CDN from the same room. When you do that, use different IDs for the corresponding tasks; if you will start only one task, use an empty string. @param pushTargetConfig Push target config, such as the push URL and WTN stream ID. See ByteRTCMixedStreamPushTargetConfig{@link #ByteRTCMixedStreamPushTargetConfig}. @param config Configurations to be set when pushing streams to CDN. See ByteRTCMixedStreamConfig{@link #ByteRTCMixedStreamConfig}. @return - 0: Success. - Non-zero: Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - After calling this API, you will be informed of the result and errors during the pushing process via the rtcEngine:onMixedStreamEvent:withMixedStreamInfo:withErrorCode:{@link #ByteRTCEngineDelegate#rtcEngine:onMixedStreamEvent:withMixedStreamInfo:withErrorCode} callback. - Subscribe to the Push-to-CDN and the WTN stream notifications in the console to receive notifications about task status changes. When calling this API repeatedly, subsequent calls will trigger both TranscodeStarted and TranscodeUpdated callbacks. - Call stopPushMixedStream:withPushTargetType:{@link #ByteRTCEngine#stopPushMixedStream:withPushTargetType} to stop pushing streams to CDN. - Call updatePushMixedStream:withPushTargetConfig:withMixedConfig:{@link #ByteRTCEngine#updatePushMixedStream:withPushTargetConfig:withMixedConfig} to update part of the configurations of the task. - Call startPushSingleStream:singleStream:{@link #ByteRTCEngine#startPushSingleStream:singleStream} to push a single stream to CDN. @order 0
startPushSingleStream(NSString taskId, ByteRTCPushSingleStreamParam singleStream) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author liujingchao @brief Creates a new task of pushing a single media stream to CDN. @param taskId Task ID.
You may want to start more than one task to push streams to CDN. When you do that, use different IDs for the corresponding tasks; if you will start only one task, use an empty string. @param singleStream Configurations for pushing a single stream to CDN. See ByteRTCPushSingleStreamParam{@link #ByteRTCPushSingleStreamParam}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - Before calling this API, you need to enable Push to CDN on the console. - After calling this API, you will be informed of the result and errors during the pushing process with rtcEngine:onSingleStreamEvent:withTaskId:withErrorCode:{@link #ByteRTCEngineDelegate#rtcEngine:onSingleStreamEvent:withTaskId:withErrorCode}. - Call stopPushSingleStream:{@link #ByteRTCEngine#stopPushSingleStream} to stop the task. - Since this API does not perform encoding or decoding, the video stream pushed to RTMP changes with the resolution, encoding method, and camera on/off status on the publishing end.
startScreenAudioCapture(NSString deviceId) FutureOr<int>
@hidden(iOS) @detail api @author yezijian.me @brief Starts using RTC SDK internal capture to capture screen audio during screen sharing. @param deviceId ID of the virtual device @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - The call of this API takes effect only when you are using the RTC SDK to record the screen. You will get a warning by rtcEngine:onAudioDeviceWarning:deviceType:deviceWarning:{@link #ByteRTCEngineDelegate#rtcEngine:onAudioDeviceWarning:deviceType:deviceWarning} after calling this API when the video source is set to an external recorder. - You also need to call publishScreenAudio: to publish the captured screen audio. - To disable screen audio internal capture, call stopScreenAudioCapture{@link #ByteRTCEngine#stopScreenAudioCapture}.
startScreenCapture(ByteRTCScreenMediaType type, NSString bundleId) FutureOr<int>
@hidden(macOS) @detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief Starts capturing the screen audio and/or video stream with the RTC SDK internal module. @param type Media type. See ByteRTCScreenMediaType{@link #ByteRTCScreenMediaType}. @param bundleId The bundle ID of the Extension, used to display only your Extension in your app. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - This API takes effect only when you are using the RTC SDK to record the screen. If the source is set to an external recorder, calling this API triggers a warning via rtcEngine:onVideoDeviceWarning:deviceType:deviceWarning:{@link #ByteRTCEngineDelegate#rtcEngine:onVideoDeviceWarning:deviceType:deviceWarning} or rtcEngine:onAudioDeviceWarning:deviceType:deviceWarning:{@link #ByteRTCEngineDelegate#rtcEngine:onAudioDeviceWarning:deviceType:deviceWarning}. - If you start the Extension from the iOS control center, this API does not need to be called. - After the streams are captured, you need to call publishScreenVideo:{@link #ByteRTCRoom#publishScreenVideo} and/or publishScreenAudio:{@link #ByteRTCRoom#publishScreenAudio} to push the streams to the remote end. - You will receive rtcEngine:onVideoDeviceStateChanged:device_type:device_state:device_error:{@link #ByteRTCEngineDelegate#rtcEngine:onVideoDeviceStateChanged:device_type:device_state:device_error} and rtcEngine:onAudioDeviceStateChanged:device_type:device_state:device_error:{@link #ByteRTCEngineDelegate#rtcEngine:onAudioDeviceStateChanged:device_type:device_state:device_error} when the capture starts.
startScreenVideoCapture(ByteRTCScreenCaptureSourceInfo sourceInfo, ByteRTCScreenCaptureParam captureParameters) FutureOr<int>
@hidden(iOS) @detail api @hiddensdk(audiosdk) @author liyi.000 @brief Captures a screen video stream for sharing. A screen video stream includes content displayed on the screen or content in an application window. @param sourceInfo Screen capture source information. See ByteRTCScreenCaptureSourceInfo{@link #ByteRTCScreenCaptureSourceInfo}.
Call getScreenCaptureSourceList{@link #ByteRTCEngine#getScreenCaptureSourceList} to get all the screen sources that can be shared. @param captureParameters Screen capture parameters. See ByteRTCScreenCaptureParam{@link #ByteRTCScreenCaptureParam}. @return - 0: Success. - -1: Failure. @note - This API takes effect only when you are using the RTC SDK to record the screen. If the video source is set to an external recorder, calling this API triggers a warning via rtcEngine:onVideoDeviceWarning:deviceType:deviceWarning:{@link #ByteRTCEngineDelegate#rtcEngine:onVideoDeviceWarning:deviceType:deviceWarning}. - This API only starts screen capturing but does not publish the captured video. Call publishScreenVideo:{@link #ByteRTCRoom#publishScreenVideo} to publish the captured video. - To turn off screen video capture, call stopScreenVideoCapture{@link #ByteRTCEngine#stopScreenVideoCapture}. - Local users will receive rtcEngine:onVideoDeviceStateChanged:device_type:device_state:device_error:{@link #ByteRTCEngineDelegate#rtcEngine:onVideoDeviceStateChanged:device_type:device_state:device_error} on the state of screen capturing, such as start, pause, resume, and error. - After successfully calling this API, local users will receive rtcEngine:onFirstLocalVideoFrameCaptured:withFrameInfo:{@link #ByteRTCEngineDelegate#rtcEngine:onFirstLocalVideoFrameCaptured:withFrameInfo}. - Before calling this API, you can call setScreenVideoEncoderConfig:{@link #ByteRTCEngine#setScreenVideoEncoderConfig} to set the frame rate and encoding resolution of the screen video stream. - After receiving rtcEngine:onFirstLocalVideoFrameCaptured:withFrameInfo:{@link #ByteRTCEngineDelegate#rtcEngine:onFirstLocalVideoFrameCaptured:withFrameInfo}, you can set the local screen sharing view by calling setLocalVideoCanvas:withCanvas:{@link #ByteRTCEngine#setLocalVideoCanvas:withCanvas} or setLocalVideoSink:withSink:withPixelFormat:{@link #ByteRTCEngine#setLocalVideoSink:withSink:withPixelFormat}.
- After you start capturing the screen video stream for sharing, you can call updateScreenCaptureHighlightConfig:{@link #ByteRTCEngine#updateScreenCaptureHighlightConfig} to update border highlighting settings, updateScreenCaptureMouseCursor:{@link #ByteRTCEngine#updateScreenCaptureMouseCursor} to update the mouse processing settings, and updateScreenCaptureFilterConfig:{@link #ByteRTCEngine#updateScreenCaptureFilterConfig} to set the windows that need to be filtered out on PC clients.
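The capture-then-publish flow above can be sketched in the Dart bindings. This is a minimal sketch, not a definitive implementation: the Dart-side shapes of getScreenCaptureSourceList, ByteRTCScreenCaptureParam's constructor, and ByteRTCRoom.publishScreenVideo are assumptions inferred from the Objective-C selectors referenced in the docs.

```dart
// Hypothetical sketch of the screen-sharing flow; call order follows
// the documentation above, Dart signatures are assumptions.
Future<void> startScreenShare(ByteRTCEngine engine, ByteRTCRoom room) async {
  // Pick a shareable source (a screen or an application window).
  final sources = await engine.getScreenCaptureSourceList();
  final source = sources.first;

  // Start internal capture with default parameters.
  final params = ByteRTCScreenCaptureParam();
  final ret = await engine.startScreenVideoCapture(source, params);
  if (ret != 0) return; // -1: failure

  // Capturing alone does not publish; push the stream explicitly.
  await room.publishScreenVideo();
}
```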
startVideoCapture() FutureOr<int>
@detail api @hiddensdk(audiosdk) @author zhangzhenyu.samuel @brief Enables internal video capture immediately. The default setting is off.
Internal video capture refers to capturing video with the SDK's built-in module.
The local client will be informed via rtcEngine:onVideoDeviceStateChanged:device_type:device_state:device_error:{@link #ByteRTCEngineDelegate#rtcEngine:onVideoDeviceStateChanged:device_type:device_state:device_error} after starting video capture by calling this API.
The remote clients in the room will be informed of the state change via rtcEngine:onUserStartVideoCapture:info:{@link #ByteRTCEngineDelegate#rtcEngine:onUserStartVideoCapture:info} after the visible client starts video capture by calling this API. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - Call stopVideoCapture{@link #ByteRTCEngine#stopVideoCapture} to stop internal video capture. Otherwise, internal video capture continues until you destroy the engine instance. - Once you create the engine instance, you can start internal video capture regardless of the video publishing state. The video stream starts publishing only after the video capture starts. - To switch from custom to internal video capture, stop publishing before disabling the custom video capture module, then call this API to enable internal video capture. - Call switchCamera:{@link #ByteRTCEngine#switchCamera} to switch the camera used by the internal video capture module. You cannot switch cameras on macOS. - Since v3.37.0, you must request the capture permission in your App before starting capture with this API.
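As a minimal usage sketch of the start/stop pair (assuming the camera permission has already been granted, per the v3.37.0 note above):

```dart
// Toggle the SDK's internal camera capture. Publishing state is
// unaffected; publishing starts only once capture is running.
Future<int> setCameraCapturing(ByteRTCEngine engine, bool enable) async {
  final ret = enable
      ? await engine.startVideoCapture()
      : await engine.stopVideoCapture();
  // 0 on success; < 0 maps to ByteRTCReturnStatus.
  return ret;
}
```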
startVideoDigitalZoomControl(ByteRTCZoomDirectionType direction) FutureOr<int>
@valid since 3.51 @detail api @hiddensdk(audiosdk) @author likai.666 @brief Starts continuous, repeated digital zoom control. This action affects both the local video preview and the published stream. @param direction Action of the digital zoom control. Refer to ByteRTCZoomDirectionType{@link #ByteRTCZoomDirectionType}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - As the default offset is 0, you must call setVideoDigitalZoomConfig:size:{@link #ByteRTCEngine#setVideoDigitalZoomConfig:size} before this API. - You can only move video images after they are magnified via this API or setVideoDigitalZoomControl:{@link #ByteRTCEngine#setVideoDigitalZoomControl}. - The control process stops when the scale reaches the limit, or the images have been moved to the border. If the next action exceeds the scale or movement range, the SDK executes it within the limits. - Call stopVideoDigitalZoomControl{@link #ByteRTCEngine#stopVideoDigitalZoomControl} to stop the ongoing zoom control. - Call setVideoDigitalZoomControl:{@link #ByteRTCEngine#setVideoDigitalZoomControl} for a one-time digital zoom control. - Refer to setCameraZoomRatio:{@link #ByteRTCEngine#setCameraZoomRatio} if you intend to apply optical zoom to the camera. For iOS only.
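A sketch of the digital zoom sequence described above. The enum member names (`zoomIn`, `cameraZoomIn`) and the Dart shape of setVideoDigitalZoomConfig:size: are assumptions, not documented API.

```dart
// Configure, start, and later stop a continuous digital zoom-in.
Future<void> digitalZoomDemo(ByteRTCEngine engine) async {
  // The default offset is 0, so this config call must come first
  // (assumed signature and assumed enum member name).
  await engine.setVideoDigitalZoomConfig(ByteRTCZoomConfigType.zoomIn, 0.2);

  // Continuous control; it stops by itself at the scale limit.
  await engine.startVideoDigitalZoomControl(
      ByteRTCZoomDirectionType.cameraZoomIn); // assumed member name

  // Stop the ongoing control explicitly when done.
  await engine.stopVideoDigitalZoomControl();
}
```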
stopAudioCapture() FutureOr<int>
@detail api @author dixing @brief Stops internal audio capture. The default is off.
Internal audio capture refers to capturing audio with the SDK's built-in module.
The local client will be informed via rtcEngine:onAudioDeviceStateChanged:device_type:device_state:device_error:{@link #ByteRTCEngineDelegate#rtcEngine:onAudioDeviceStateChanged:device_type:device_state:device_error} after stopping audio capture by calling this API.
The remote clients in the room will be informed of the state change via rtcEngine:onUserStopAudioCapture:info:{@link #ByteRTCEngineDelegate#rtcEngine:onUserStopAudioCapture:info} after the visible client stops audio capture by calling this API. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - Call startAudioCapture{@link #ByteRTCEngine#startAudioCapture} to enable internal audio capture. - If you do not call this API, internal audio capture continues until you destroy the engine instance.
stopAudioRecording() FutureOr<int>
@detail api @author huangshouqin @brief Stops audio recording. @return - 0: Success - <0: Failure @note Call startAudioRecording:{@link #ByteRTCEngine#startAudioRecording} to start the recording task.
stopChorusCacheSync() FutureOr<int>
@hidden internal use only @detail api @hiddensdk(audiosdk) @brief Stop aligning RTC data by cache. @return See ByteRTCReturnStatus{@link #ByteRTCReturnStatus}.
stopClientMixedStream(NSString taskId) FutureOr<int>
@hidden for internal use only @detail api @hiddensdk(audiosdk)
stopCloudProxy() FutureOr<int>
@detail api @author daining.nemo @brief Stops the cloud proxy. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note To start the cloud proxy, call startCloudProxy:{@link #ByteRTCEngine#startCloudProxy}.
stopEchoTest() FutureOr<int>
@detail api @author qipengxiang @brief Stops the current call test.
After calling startEchoTest:playDelay:{@link #ByteRTCEngine#startEchoTest:playDelay}, you must call this API to stop the test. @return API call result:
- 0: Success - -3: Failure, no test is in progress. @note After stopping the test with this API, all the system devices and streams are restored to the state they were in before the test.
stopFileRecording() FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief Stops local recording. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - After calling startFileRecording:type:{@link #ByteRTCEngine#startFileRecording:type} to start local recording, you must call this method to stop recording. - The recording result is reported via rtcEngine:onRecordingStateUpdate:state:error_code:recording_info:{@link #ByteRTCEngineDelegate#rtcEngine:onRecordingStateUpdate:state:error_code:recording_info}.
stopHardwareEchoDetection() FutureOr<int>
@detail api @author zhangcaining @brief Stops the echo detection before joining a room. @return Method call result:
- 0: Success. - -1: Failure. @note - Refer to startHardwareEchoDetection:{@link #ByteRTCEngine#startHardwareEchoDetection} for information on how to start an echo detection. - We recommend calling this API to stop the detection once you get the detection result from rtcEngine:onHardwareEchoDetectionResult:{@link #ByteRTCEngineDelegate#rtcEngine:onHardwareEchoDetectionResult}. - You must stop the echo detection to release the audio devices before the user joins a room. Otherwise, the detection may interfere with the call.
stopNetworkDetection() FutureOr<int>
@detail api @author hanchenchen.c @brief Stops the pre-call network probe. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - After calling this API, you will receive the rtcEngine:onNetworkDetectionStopped:{@link #ByteRTCEngineDelegate#rtcEngine:onNetworkDetectionStopped} callback notifying you that the probe has stopped.
stopPushMixedStream(NSString taskId, ByteRTCMixedStreamPushTargetType pushTargetType) FutureOr<int>
@valid since 3.60. Since version 3.60, this interface replaces the stopPushStreamToCDN: method for stopping the push of mixed streams to CDN. If you have upgraded to version 3.60 or later and are still using that method, please migrate to this interface. @author lizheng @detail api @hiddensdk(audiosdk) @brief Stops a task that pushes a mixed media stream to CDN or a WTN stream. @param taskId Task ID. Specifies the task you want to stop. @param pushTargetType See ByteRTCMixedStreamPushTargetType{@link #ByteRTCMixedStreamPushTargetType}. @return + 0: Success + !0: Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - To start pushing a mixed stream to CDN or a WTN stream, see startPushMixedStream:withPushTargetConfig:withMixedConfig:{@link #ByteRTCEngine#startPushMixedStream:withPushTargetConfig:withMixedConfig}. @order 3
stopPushSingleStream(NSString taskId) FutureOr<int>
@valid since 3.60. Since version 3.60, this interface replaces the stopPushStreamToCDN: method for stopping the push of single media streams to CDN. If you have upgraded to version 3.60 or later and are still using that method, please migrate to this interface. @detail api @author liujingchao @brief Stops the task that pushes a single media stream to CDN. @param taskId Task ID. Specifies the task you want to stop. @return - 0: Success - < 0 : Failure. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note - To start pushing a single stream to CDN, see startPushSingleStream:singleStream:{@link #ByteRTCEngine#startPushSingleStream:singleStream}. - To start pushing a mixed stream to CDN, see startPushMixedStream:withPushTargetConfig:withMixedConfig:{@link #ByteRTCEngine#startPushMixedStream:withPushTargetConfig:withMixedConfig}.
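A push-to-CDN task lifecycle might look like the following sketch; startPushSingleStream's Dart signature is an assumption inferred from the Objective-C selector startPushSingleStream:singleStream:.

```dart
// Start a single-stream CDN push under a task ID, then stop it with
// the same ID. Use '' as the ID if you only ever run one task.
Future<void> pushToCdn(
    ByteRTCEngine engine, ByteRTCPushSingleStreamParam param) async {
  const taskId = 'cdn-task-1'; // hypothetical task ID
  await engine.startPushSingleStream(taskId, param); // assumed signature
  // Progress and errors arrive via the onSingleStreamEvent callback.
  // ...
  await engine.stopPushSingleStream(taskId);
}
```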
stopScreenAudioCapture() FutureOr<int>
@hidden(iOS) @detail api @author liyi.000 @brief Stops using the RTC SDK internal capture module to capture screen audio during screen sharing. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - This API takes effect only when you are using the RTC SDK to record the screen. If the audio source is set to an external recorder, calling this API triggers a warning via rtcEngine:onAudioDeviceWarning:deviceType:deviceWarning:{@link #ByteRTCEngineDelegate#rtcEngine:onAudioDeviceWarning:deviceType:deviceWarning}. - This API can only stop screen capture performed by the RTC SDK. If the source has been set to an external recorder, the call fails with a warning message; stop the capture in the external recorder instead. - To enable internal screen audio capture, call startScreenAudioCapture:{@link #ByteRTCEngine#startScreenAudioCapture}.
stopScreenCapture() FutureOr<int>
@hidden(macOS) @detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief Stops internal screen capture. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - This API takes effect only when you are using the RTC SDK to record the screen. If the source is set to an external recorder, calling this API triggers a warning via rtcEngine:onVideoDeviceWarning:deviceType:deviceWarning:{@link #ByteRTCEngineDelegate#rtcEngine:onVideoDeviceWarning:deviceType:deviceWarning} or rtcEngine:onAudioDeviceWarning:deviceType:deviceWarning:{@link #ByteRTCEngineDelegate#rtcEngine:onAudioDeviceWarning:deviceType:deviceWarning}. - Calling this API changes the capturing status without affecting the publishing status. - You will receive rtcEngine:onVideoDeviceStateChanged:device_type:device_state:device_error:{@link #ByteRTCEngineDelegate#rtcEngine:onVideoDeviceStateChanged:device_type:device_state:device_error} and rtcEngine:onAudioDeviceStateChanged:device_type:device_state:device_error:{@link #ByteRTCEngineDelegate#rtcEngine:onAudioDeviceStateChanged:device_type:device_state:device_error}.
stopScreenVideoCapture() FutureOr<int>
@hidden(iOS) @detail api @hiddensdk(audiosdk) @author liyi.000 @brief Stops capturing the screen video stream. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - This API takes effect only when you are using the RTC SDK to record the screen. If the video source is set to an external recorder, calling this API triggers a warning via rtcEngine:onVideoDeviceWarning:deviceType:deviceWarning:{@link #ByteRTCEngineDelegate#rtcEngine:onVideoDeviceWarning:deviceType:deviceWarning}. - To enable screen video stream capture, call startScreenVideoCapture:captureParameters:{@link #ByteRTCEngine#startScreenVideoCapture:captureParameters}. - You will receive rtcEngine:onVideoDeviceStateChanged:device_type:device_state:device_error:{@link #ByteRTCEngineDelegate#rtcEngine:onVideoDeviceStateChanged:device_type:device_state:device_error} after calling this API. - This API has no effect on screen video stream publishing.
stopVideoCapture() FutureOr<int>
@detail api @hiddensdk(audiosdk) @author zhangzhenyu.samuel @brief Disables internal video capture immediately. The default is off.
Internal video capture refers to capturing video with the RTC SDK's built-in capture module.
The local client will be informed via rtcEngine:onVideoDeviceStateChanged:device_type:device_state:device_error:{@link #ByteRTCEngineDelegate#rtcEngine:onVideoDeviceStateChanged:device_type:device_state:device_error} after stopping video capture by calling this API.
The remote clients in the room will be informed of the state change via rtcEngine:onUserStopVideoCapture:info:{@link #ByteRTCEngineDelegate#rtcEngine:onUserStopVideoCapture:info} after the visible client stops video capture by calling this API. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - Call startVideoCapture{@link #ByteRTCEngine#startVideoCapture} to enable internal video capture. - If you do not call this API, internal video capture continues until you destroy the engine instance.
stopVideoDigitalZoomControl() FutureOr<int>
@detail api @hiddensdk(audiosdk) @author likai.666 @brief Stop the ongoing digital zoom control instantly. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note Refer to startVideoDigitalZoomControl:{@link #ByteRTCEngine#startVideoDigitalZoomControl} for starting digital zooming.
switchCamera(ByteRTCCameraID cameraId) FutureOr<int>
@hidden(macOS) @detail api @hiddensdk(audiosdk) @author zhangzhenyu.samuel @brief Switches between the front-facing and back-facing camera used for internal video capture.
The local client will be informed via rtcEngine:onVideoDeviceStateChanged:device_type:device_state:device_error:{@link #ByteRTCEngineDelegate#rtcEngine:onVideoDeviceStateChanged:device_type:device_state:device_error} after calling this API. @param cameraId Camera type. Refer to ByteRTCCameraID{@link #ByteRTCCameraID} for more details. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - Front-facing camera is the default camera. - If the internal video capturing is on, the switch is effective once you call this API. If the internal video capturing is off, the setting will be effective when capture starts.
takeLocalSnapshot(id<ByteRTCVideoSnapshotCallbackDelegate> callback) FutureOr<NSInteger>
@detail api @hiddensdk(audiosdk) @author wangfujun.911 @brief Takes a snapshot of the local video. @param callback See ByteRTCVideoSnapshotCallbackDelegate{@link #ByteRTCVideoSnapshotCallbackDelegate}. @return The index of the local snapshot task, starting from 1. @note - The snapshot is taken with all video effects applied, such as rotation and mirroring. - You can take the snapshot with either SDK internal video capture or customized capture.
takeLocalSnapshotToFile(NSString filePath) FutureOr<NSInteger>
@detail api @valid since 3.60. @author wangfujun.911 @brief Takes a snapshot of the local video stream and saves it as a JPG file at the specified local path.
After calling this method, the SDK triggers rtcEngine:onLocalSnapshotTakenToFile:filePath:width:height:errorCode:taskId:{@link #ByteRTCEngineDelegate#rtcEngine:onLocalSnapshotTakenToFile:filePath:width:height:errorCode:taskId} to report whether the snapshot was taken successfully and provide details of the snapshot. @param filePath The absolute file path where the snapshot JPG file will be saved. The file extension must be .jpg. Ensure that the directory exists and is writable. Example: /Users/YourName/Pictures/snapshot.jpg. @return The index of the local snapshot task, starting from 1. The index can be used to track the task status or perform other management operations.
takeRemoteSnapshot(NSString streamId, id<ByteRTCVideoSnapshotCallbackDelegate> callback) FutureOr<NSInteger>
@detail api @hiddensdk(audiosdk) @author wangfujun.911 @brief Takes a snapshot of the remote video stream. @param streamId ID of the remote video stream for taking the snapshot. @param callback See ByteRTCVideoSnapshotCallbackDelegate{@link #ByteRTCVideoSnapshotCallbackDelegate}. @return The index of the remote snapshot task, starting from 1.
takeRemoteSnapshotToFile(NSString streamId, NSString filePath) FutureOr<NSInteger>
@detail api @valid since 3.60. @author wangfujun.911 @brief Takes a snapshot of the remote video stream and saves it as a JPG file at the specified local path.
After calling this method, the SDK triggers rtcEngine:onRemoteSnapshotTakenToFile:info:filePath:width:height:errorCode:taskId:{@link #ByteRTCEngineDelegate#rtcEngine:onRemoteSnapshotTakenToFile:info:filePath:width:height:errorCode:taskId} to report whether the snapshot is taken successfully and provide details of the snapshot. @param streamId ID of the remote video stream for taking snapshot. @param filePath The absolute file path where the snapshot JPG file will be saved. The file extension must be .jpg. Ensure that the directory exists and is writable. Example: /Users/YourName/Pictures/snapshot.jpg. @return The index of the remote snapshot task, starting from 1. The index can be used to track the task status or perform other management operations.
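The two *ToFile variants can be driven as in this sketch; the paths are illustrative, and the target directory must exist and be writable.

```dart
// Snapshot the local stream and one remote stream to JPG files.
// Completion details arrive via onLocalSnapshotTakenToFile /
// onRemoteSnapshotTakenToFile, keyed by the returned task indices.
Future<void> captureSnapshots(ByteRTCEngine engine, String remoteId) async {
  final localTask = await engine.takeLocalSnapshotToFile(
      '/Users/YourName/Pictures/local.jpg');
  final remoteTask = await engine.takeRemoteSnapshotToFile(
      remoteId, '/Users/YourName/Pictures/remote.jpg');
  print('snapshot tasks: $localTask, $remoteTask'); // indices start from 1
}
```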
toString() String
A string representation of this object.
inherited
updateClientMixedStream(NSString taskId, ByteRTCMixedStreamConfig config, ByteRTCClientMixedStreamConfig extraConfig) FutureOr<int>
@hidden for internal use only @hiddensdk(audiosdk) @detail api
updateLocalVideoCanvas(ByteRTCRenderMode renderMode, NSUInteger backgroundColor) FutureOr<int>
@detail api @hiddensdk(audiosdk) @author wangfujun.911 @brief Updates the render mode and background color of local video rendering. @param renderMode See ByteRTCRenderMode{@link #ByteRTCRenderMode}. @param backgroundColor See ByteRTCVideoCanvas{@link #ByteRTCVideoCanvas}.backgroundColor. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @note Calling this API during local video rendering takes effect immediately.
updateLoginToken(NSString token) FutureOr<int>
@detail api @author hanchenchen.c @brief Updates the Token used by the user to log in.
The login Token has a limited validity period. When it expires, call this method to update it.
If login:uid:{@link #ByteRTCEngine#login:uid} is called with an expired Token, the login fails and the local user will receive rtcEngine:onLoginResult:errorCode:elapsed:{@link #ByteRTCEngineDelegate#rtcEngine:onLoginResult:errorCode:elapsed} with the corresponding error code. You then need to acquire a new Token and update it with this method. @param token
The updated dynamic key. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - If the Token is invalid and the login fails, call this method to update the Token; the SDK then automatically logs in again, so you do not need to call the login:uid:{@link #ByteRTCEngine#login:uid} method. - If you are already logged in when the Token expires, the current session is not affected. The expired-Token error is reported the next time you log in with that Token, or when you log in again after a disconnection caused by poor local network conditions.
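A token-refresh handler might look like the sketch below; fetchLoginToken is a hypothetical call to your own token service, not part of the SDK.

```dart
// Refresh the login token after an expired-token login failure.
// After updateLoginToken, the SDK retries the login automatically,
// so calling login again is unnecessary.
Future<void> refreshLoginToken(ByteRTCEngine engine, String uid) async {
  final token = await fetchLoginToken(uid); // hypothetical app-server call
  await engine.updateLoginToken(token);
}
```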
updatePushMixedStream(NSString taskId, ByteRTCMixedStreamPushTargetConfig pushTargetConfig, ByteRTCMixedStreamConfig config) FutureOr<int>
@valid since 3.60. Since version 3.60, this interface replaces the updatePushMixedStreamToCDN:mixedConfig: and updatePublicStreamParam:withLayout: methods for the functions described below. If you have upgraded to version 3.60 or later and are still using those two methods, please migrate to this interface. @hidden(Linux) @detail api @hiddensdk(audiosdk) @author lizheng @brief Updates the parameters needed when pushing mixed media streams to CDN. You will be informed of the change via the rtcEngine:onMixedStreamEvent:withMixedStreamInfo:withErrorCode:{@link #ByteRTCEngineDelegate#rtcEngine:onMixedStreamEvent:withMixedStreamInfo:withErrorCode} callback.
After calling startPushMixedStream:withPushTargetConfig:withMixedConfig:{@link #ByteRTCEngine#startPushMixedStream:withPushTargetConfig:withMixedConfig} to enable pushing streams to CDN, you can call this API to update the relevant configurations. @param taskId Task ID. Specifies the task whose parameters you want to update. @param pushTargetConfig Push target config, such as the push URL and WTN stream ID. See ByteRTCMixedStreamPushTargetConfig{@link #ByteRTCMixedStreamPushTargetConfig}. @param config Configurations that you want to update. See ByteRTCMixedStreamConfig{@link #ByteRTCMixedStreamConfig} for specific indications. You can update any property of the task unless it is specified as unavailable for updates.
Properties left blank are set to their default values. @return - 0: Success. - !0: Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details. @order 4
updateRemoteStreamVideoCanvas(NSString streamId, ByteRTCRenderMode renderMode, NSUInteger backgroundColor) FutureOr<int>
@deprecated since 3.56, and will be deleted in 3.62. Use updateRemoteStreamVideoCanvas:withRemoteVideoRenderConfig:{@link #ByteRTCEngine#updateRemoteStreamVideoCanvas:withRemoteVideoRenderConfig} instead. @detail api @hiddensdk(audiosdk) @author wangfujun.911 @brief Modifies remote video frame rendering settings, including the render mode and background color. @param streamId ID of the remote stream. @param renderMode See ByteRTCRenderMode{@link #ByteRTCRenderMode}. @param backgroundColor See ByteRTCVideoCanvas{@link #ByteRTCVideoCanvas}.backgroundColor. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note Calling this API during remote video rendering takes effect immediately.
updateResource(NativeResource resource) → void
inherited
updateScreenCapture(ByteRTCScreenMediaType type) FutureOr<int>
@hidden(macOS) @detail api @hiddensdk(audiosdk) @author wangzhanqiang @brief Updates the media type of the internal screen capture. @param type Media type. See ByteRTCScreenMediaType{@link #ByteRTCScreenMediaType}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - Call this API after starting screen capture. - You will receive rtcEngine:onVideoDeviceStateChanged:device_type:device_state:device_error:{@link #ByteRTCEngineDelegate#rtcEngine:onVideoDeviceStateChanged:device_type:device_state:device_error} or rtcEngine:onAudioDeviceStateChanged:device_type:device_state:device_error:{@link #ByteRTCEngineDelegate#rtcEngine:onAudioDeviceStateChanged:device_type:device_state:device_error}.
updateScreenCaptureFilterConfig(NSArray<NSNumber> excludedWindowList) FutureOr<int>
@hidden(iOS) @detail api @hiddensdk(audiosdk) @author liyi.000 @brief When capturing screen video streams through the capture module provided by the RTC SDK, sets the windows to be filtered out. @param excludedWindowList The windows to be filtered out. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note - Before calling this API, you must have turned on internal screen capture by calling startScreenVideoCapture:captureParameters:{@link #ByteRTCEngine#startScreenVideoCapture:captureParameters}. - This function only works when the screen source is a screen rather than an application window. See ByteRTCScreenCaptureSourceType{@link #ByteRTCScreenCaptureSourceType}. - When you call this API to exclude specific windows, the frame rate of the shared-screen stream will be lower than 30 fps.
updateScreenCaptureHighlightConfig(ByteRTCHighlightConfig config) FutureOr<int>
@hidden(iOS) @detail api @hiddensdk(audiosdk) @author liyi.000 @brief Updates the border highlighting settings when capturing screen video streams through the internal capture module. The border is shown by default. @param config Border highlighting settings. See ByteRTCHighlightConfig{@link #ByteRTCHighlightConfig}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note Before calling this API, you must have turned on internal screen capture by calling startScreenVideoCapture:captureParameters:{@link #ByteRTCEngine#startScreenVideoCapture:captureParameters}.
updateScreenCaptureMouseCursor(ByteRTCMouseCursorCaptureState mouseCursorCaptureState) FutureOr<int>
@hidden(iOS) @detail api @hiddensdk(audiosdk) @author liyi.000 @brief Updates the mouse processing settings when capturing screen video streams through the capture module provided by the RTC SDK. The mouse is shown by default. @param mouseCursorCaptureState See ByteRTCMouseCursorCaptureState{@link #ByteRTCMouseCursorCaptureState}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note Before calling this API, you must have turned on internal screen capture by calling startScreenVideoCapture:captureParameters:{@link #ByteRTCEngine#startScreenVideoCapture:captureParameters}.
updateScreenCaptureRegion(dynamic regionRect) FutureOr<int>
@hidden(iOS) @detail api @hiddensdk(audiosdk) @author liyi.000 @brief Updates the capture area when capturing screen video streams through the internal capture module. @param regionRect The capture area relative to the area set by startScreenVideoCapture:captureParameters:{@link #ByteRTCEngine#startScreenVideoCapture:captureParameters}. @return - 0: Success. - < 0 : Fail. See ByteRTCReturnStatus{@link #ByteRTCReturnStatus} for more details @note Before calling this API, you must call startScreenVideoCapture:captureParameters:{@link #ByteRTCEngine#startScreenVideoCapture:captureParameters} to start internal screen stream capture.

Operators

operator ==(Object other) bool
The equality operator.
inherited

Static Properties

codegen_$namespace → dynamic
no setter

Static Methods

createRTCEngine(ByteRTCEngineConfig config, id<ByteRTCEngineDelegate> delegate) FutureOr<ByteRTCEngine>
@detail api @author wangzhanqiang @brief Creates an engine instance.
This is the very first API that you must call if you want to use all the RTC capabilities.
If there is no engine instance in current process, calling this API will create one. If an engine instance has been created, calling this API again will have the created engine instance returned. @param config ByteRTCEngineConfig{@link #ByteRTCEngineConfig} @param delegate Delegate sent from SDK to App. See ByteRTCEngineDelegate{@link #ByteRTCEngineDelegate} @return A ByteRTCEngine{@link #ByteRTCEngine} instance
destroyRTCEngine() FutureOr<void>
@detail api @author wangzhanqiang @brief Destroys the engine instance created by createRTCEngine:delegate:{@link #ByteRTCEngine#createRTCEngine:delegate}, and releases all related resources. @note - Call this API after all business scenarios related to the engine instance are destroyed. - When this API is called, the RTC SDK destroys all memory associated with the engine instance and stops any interaction with the media server. - Calling this API starts the SDK exit logic. The engine thread is held until the exit logic completes, so do not call this API directly from a callback thread, or it will cause a deadlock. This function also takes a long time to execute, so calling it on the main thread is not recommended, as the main thread may be blocked. - You can enable ARC for Objective-C to automatically trigger the destruction logic when the dealloc method is called.
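The engine lifecycle from the two static methods above can be sketched as follows; EngineDelegateImpl is a hypothetical ByteRTCEngineDelegate implementation supplied by your app.

```dart
// Create the engine, use it, then destroy it. Repeated createRTCEngine
// calls return the same instance; destroyRTCEngine must not run on a
// callback thread (deadlock risk) and is best kept off the main thread.
Future<void> runEngine(ByteRTCEngineConfig config) async {
  final engine =
      await ByteRTCEngine.createRTCEngine(config, EngineDelegateImpl());
  // ... capture, join rooms, publish ...
  await ByteRTCEngine.destroyRTCEngine();
}
```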
getSDKVersion() FutureOr<NSString>
@detail api @author wangzhanqiang @brief Gets the current version number of the SDK. @return The current SDK version number.
setLogConfig(ByteRTCLogConfig logConfig) FutureOr<int>
@detail api @author caofanglu @brief Configures the local log parameters of the RTC SDK, including the logging level, directory, the limit for total log file size, and the prefix of the log file name. @param logConfig Local log parameters. See ByteRTCLogConfig{@link #ByteRTCLogConfig}. @return - 0: Success. - -1: Failure. This API must be called before creating the engine. - -2: Failure. Invalid parameters. @note This API must be called before createRTCEngine:delegate:{@link #ByteRTCEngine#createRTCEngine:delegate}.
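Because setLogConfig must precede engine creation, initialization ordering matters. A minimal sketch (how ByteRTCLogConfig is constructed and populated is an assumption):

```dart
// Configure local logging first, then create the engine. A -1 return
// means an engine already exists and the log config was rejected.
Future<ByteRTCEngine> initSdk(
    ByteRTCLogConfig logConfig,
    ByteRTCEngineConfig config,
    ByteRTCEngineDelegate delegate) async {
  final ret = await ByteRTCEngine.setLogConfig(logConfig);
  assert(ret == 0, 'setLogConfig failed: $ret');
  return ByteRTCEngine.createRTCEngine(config, delegate);
}
```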