saveTrimmedAudio method

Future<void> saveTrimmedAudio({
  required double startValue,
  required double endValue,
  required Function(String? outputPath) onSave,
  bool applyAudioEncoding = false,
  FileFormat? outputFormat,
  String? ffmpegCommand,
  String? customAudioFormat,
  int? fpsGIF,
  int? scaleGIF,
  String? audioFolderName,
  String? audioFileName,
  StorageDir storageDir = StorageDir.temporaryDirectory,
})

Saves the trimmed audio to the file system.

The required parameters are startValue, endValue & onSave.

The optional parameters are audioFolderName, audioFileName, outputFormat, fpsGIF, scaleGIF, applyAudioEncoding, storageDir, ffmpegCommand & customAudioFormat.

The required parameter startValue sets the starting point of the trimmed audio. To be specified in milliseconds.

The required parameter endValue sets the ending point of the trimmed audio. To be specified in milliseconds.

The required parameter onSave is a callback function invoked once the FFmpeg processing is complete. It receives the output path as a String?, or null if saving failed.
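As a minimal usage sketch (assuming a trimmer instance named `_trimmer` with an audio file already loaded):

```dart
// Hypothetical usage; `_trimmer` is assumed to be an instance of the
// trimmer class with an audio file already loaded.
await _trimmer.saveTrimmedAudio(
  startValue: 2000.0, // start at 2 s, in milliseconds
  endValue: 8000.0,   // end at 8 s, in milliseconds
  onSave: (String? outputPath) {
    if (outputPath != null) {
      debugPrint('Trimmed audio saved at: $outputPath');
    } else {
      debugPrint('Trimming failed.');
    }
  },
);
```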

The parameter audioFolderName is used to pass a folder name which will be used for creating a new folder in the selected directory. The default value is trimmer.

The parameter audioFileName is used for giving a new name to the trimmed audio file. By default the trimmed audio is named <original_file_name>_trimmed_<timestamp>, with the file extension taken from the output format.

The parameter outputFormat is used for providing a file format to the trimmed audio. This only accepts values of the FileFormat type. By default it is set to FileFormat.mp3, which is for MP3 files.

The parameter storageDir can be used for providing a storage location option. It accepts only StorageDir values. By default it is set to temporaryDirectory. Some of the storage types are:

  • temporaryDirectory (Only accessible from inside the app, can be cleared at any time)

  • applicationDocumentsDirectory (Only accessible from inside the app)

  • externalStorageDirectory (Android only, accessible externally)

The parameters fpsGIF & scaleGIF are used only if the selected output format is FileFormat.gif.

  • fpsGIF for providing an FPS value (by default it is set to 10)

  • scaleGIF for providing a width to the output GIF; the height is computed automatically to maintain the aspect ratio (by default it is set to 480)

The parameter applyAudioEncoding specifies whether to re-encode the audio instead of stream-copying it (by default it is set to false).
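For example, a GIF output could be requested like this (hypothetical values; `_trimmer` is an assumed instance of the trimmer class):

```dart
// Sketch of GIF output with the optional GIF parameters set explicitly.
await _trimmer.saveTrimmedAudio(
  startValue: 0.0,
  endValue: 5000.0,
  outputFormat: FileFormat.gif,
  fpsGIF: 15,    // frames per second of the generated GIF
  scaleGIF: 360, // output width; height keeps the aspect ratio
  onSave: (String? outputPath) => debugPrint('GIF saved at: $outputPath'),
);
```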

ADVANCED OPTION:

If you want to use a custom FFmpeg command, define the ffmpegCommand & customAudioFormat strings. The input path, output path, and the start and end positions are already defined.

NOTE: The advanced option does not perform any safety checks, so if a wrong audio format is passed in customAudioFormat, the app may crash.
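As a sketch of the advanced option, the snippet below re-encodes the trimmed clip to 128 kbps AAC (hypothetical values; the codec and bitrate flags are standard FFmpeg options, and `_trimmer` is an assumed instance). Note that the implementation appends customAudioFormat directly to the output file name, so it should include the leading dot:

```dart
// The input path, trim positions, and output path are injected by the
// method itself; ffmpegCommand supplies only the middle of the command.
await _trimmer.saveTrimmedAudio(
  startValue: 1000.0,
  endValue: 6000.0,
  ffmpegCommand: '-c:a aac -b:a 128k',
  customAudioFormat: '.m4a', // must match the codec chosen above
  onSave: (String? outputPath) => debugPrint('Saved at: $outputPath'),
);
```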

Implementation

Future<void> saveTrimmedAudio({
  required double startValue,
  required double endValue,
  required Function(String? outputPath) onSave,
  bool applyAudioEncoding = false,
  FileFormat? outputFormat,
  String? ffmpegCommand,
  String? customAudioFormat,
  int? fpsGIF,
  int? scaleGIF,
  String? audioFolderName,
  String? audioFileName,
  StorageDir storageDir = StorageDir.temporaryDirectory,
}) async {
  final String audioPath = currentAudioFile!.path;
  final String audioName = basename(audioPath).split('.')[0];

  String command;

  // Formatting Date and Time
  // Timestamp used to make the output file name unique. A milliseconds-
  // since-epoch string contains no spaces, so no further cleanup is needed.
  final String formattedDateTime =
      DateTime.now().millisecondsSinceEpoch.toString();

  String outputPath;
  String? outputFormatString;

  debugPrint('Timestamp: $formattedDateTime');

  audioFolderName ??= 'trimmer';

  audioFileName ??= '${audioName}_trimmed_$formattedDateTime';

  audioFileName = audioFileName.replaceAll(' ', '_');

  final String path = await _createFolderInAppDocDir(
    audioFolderName,
    storageDir,
  ).whenComplete(
    () => debugPrint('Retrieved Trimmer folder'),
  );

  final Duration startPoint = Duration(milliseconds: startValue.toInt());
  final Duration endPoint = Duration(milliseconds: endValue.toInt());

  // Checking the start and end point strings
  debugPrint('Start: $startPoint & End: $endPoint');

  debugPrint(path);

  // Default to MP3 output when no format is specified.
  outputFormat ??= FileFormat.mp3;
  outputFormatString = outputFormat.toString();
  debugPrint('OUTPUT: $outputFormatString');

  final String trimLengthCommand =
      ' -ss $startPoint -i "$audioPath" -t ${endPoint - startPoint}';

  if (ffmpegCommand == null) {
    command = '$trimLengthCommand ';

    if (!applyAudioEncoding) {
      // Stream-copy the audio track instead of re-encoding it.
      command += '-c:a copy ';
    }
  } else {
    command = '$trimLengthCommand $ffmpegCommand ';
    outputFormatString = customAudioFormat;
  }

  outputPath = '$path$audioFileName$outputFormatString';

  debugPrint('Output path: $outputPath');

  command += '"$outputPath"';

  FFmpegKit.executeAsync(command, (FFmpegSession session) async {
    final String state =
        FFmpegKitConfig.sessionStateToString(await session.getState());
    final ReturnCode? returnCode = await session.getReturnCode();

    debugPrint('FFmpeg process exited with state $state and rc $returnCode');

    if (ReturnCode.isSuccess(returnCode)) {
      debugPrint('FFmpeg processing completed successfully.');
      debugPrint('Audio successfully saved');
      onSave(outputPath);
    } else {
      debugPrint('FFmpeg processing failed.');
      debugPrint("Couldn't save the audio");
      onSave(null);
    }
  });

}