# Whisper Flutter (Android Only)
A Flutter plugin that brings OpenAI's Whisper ASR (Automatic Speech Recognition) capabilities natively into your Android app. It supports offline speech-to-text transcription using Whisper models directly on the device.
## Features
- Real-time Microphone Input and Transcription: Capture audio directly from the microphone and get instant transcriptions.
- Whisper Models: Leverages the power of Whisper models (Tiny model included by default for a lightweight experience).
- Android Focused: Thoroughly tested and currently supported on Android devices only.
- Offline Functionality: No need for external APIs or cloud services – all processing happens directly on the device.
- Native Integration: Efficiently integrates the native `whisper.cpp` library for optimal performance.
## Installation
- **Add the dependency:**
  Open your `pubspec.yaml` file and add the following line under `dependencies`:

  ```yaml
  dependencies:
    whisper_flutter: ^latest_version # Replace with the latest version (currently 0.1.0)
  ```
- **Get the packages:**
  Run the following command in your terminal within your Flutter project directory:

  ```shell
  flutter pub get
  ```
- **Ensure CMake and NDK support (Android):**
  Make sure your Android project is configured to use CMake and the Android NDK. Flutter projects typically include this setup by default. If you encounter build issues related to native code, refer to the Flutter documentation on adding native code to your project.
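If the native build fails because no NDK is found, pinning an NDK version in your app's Gradle config often resolves it. The sketch below is illustrative only — the `compileSdk` and `ndkVersion` values are assumptions, not requirements of this plugin; use the versions installed on your machine:

```groovy
// android/app/build.gradle — example values, adjust to your installed SDK/NDK.
android {
    compileSdk 34

    // Pinning an NDK version makes the whisper.cpp native build reproducible
    // across machines. The exact version string here is only an example.
    ndkVersion "25.1.8937393"
}
```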
## Platform Support
| Platform | Status |
|---|---|
| Android | Working |
| iOS | Planned |
| Web | Not yet supported |
## Getting Started
- **Import the package:**
  In your Dart code, import the `whisper_flutter` library:

  ```dart
  import 'package:whisper_flutter/whisper_flutter.dart';
  ```
- **Use the plugin for transcription:**
  Here's a basic example of how to transcribe an audio file. The plugin expects WAV files in the format Whisper requires (16 kHz, mono, 16-bit PCM):

  ```dart
  import 'package:flutter/widgets.dart';
  import 'package:whisper_flutter/whisper_flutter.dart';

  void main() async {
    WidgetsFlutterBinding.ensureInitialized();

    final String audioPath = '/path/to/your/audio.wav';
    try {
      final TranscriptionResult? result =
          await WhisperFlutter.transcribe(audioPath: audioPath);
      if (result != null && result.text.isNotEmpty) {
        print('Transcription: ${result.text}');
      } else {
        print('Transcription failed or returned an empty result.');
      }
    } catch (e) {
      print('Error during transcription: $e');
    }
  }
  ```
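Since transcription silently fails on audio in the wrong format, it can help to sanity-check a file before passing it to the plugin. The helper below is a standalone sketch, not part of the `whisper_flutter` API; it assumes the canonical 44-byte WAV header layout (a single `fmt ` chunk at offset 12), which is true for most encoder output but not guaranteed by the WAV spec:

```dart
import 'dart:io';
import 'dart:typed_data';

/// Returns true if [path] looks like a 16 kHz, mono, 16-bit PCM WAV file —
/// the format the plugin expects. Illustrative helper; assumes the canonical
/// 44-byte header layout.
bool isWhisperReadyWav(String path) {
  final bytes = File(path).readAsBytesSync();
  if (bytes.length < 44) return false; // too short for a standard WAV header

  // 'RIFF' ... 'WAVE' magic markers.
  if (String.fromCharCodes(bytes.sublist(0, 4)) != 'RIFF') return false;
  if (String.fromCharCodes(bytes.sublist(8, 12)) != 'WAVE') return false;

  final data = ByteData.sublistView(bytes);
  final audioFormat = data.getUint16(20, Endian.little); // 1 = uncompressed PCM
  final numChannels = data.getUint16(22, Endian.little); // 1 = mono
  final sampleRate = data.getUint32(24, Endian.little);
  final bitsPerSample = data.getUint16(34, Endian.little);

  return audioFormat == 1 &&
      numChannels == 1 &&
      sampleRate == 16000 &&
      bitsPerSample == 16;
}
```

Files that fail this check can be converted ahead of time (for example with `ffmpeg`) before handing them to the plugin.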
## Screenshots

### Recording Interface
| Recording Screen | Configuration Options | Model Download Progress |
|---|---|---|
| ![]() | ![]() | ![]() |
### Transcription Results
| Result Display | Model Download |
|---|---|
| ![]() | ![]() |
### Additional Features
| Audio Management | Status Indicators |
|---|---|
| ![]() | ![]() |
| Progress Widgets | Processing Display |
|---|---|
| ![]() | ![]() |
| Main Interface |
|---|
| ![]() |
## Project Structure

```text
├── lib/
│   └── whisper_flutter.dart                    # Dart wrapper and plugin interface
├── src/
│   └── main.cpp                                # Native C++ bindings
├── android/
│   ├── build.gradle
│   ├── src/main/
│   │   ├── java/com/example/whisper_flutter/   # Android JNI bridge
│   │   └── cpp/                                # Compiled native library output
│   └── CMakeLists.txt
├── example/
│   └── lib/main.dart                           # Sample Flutter usage project
├── .gitignore
└── README.md
```