CARP Media Sampling Package


This library contains a media (audio, video, image, noise) sampling package that works with the carp_mobile_sensing package. This package supports sampling of the following Measure types:

  • dk.cachet.carp.audio
  • dk.cachet.carp.video
  • dk.cachet.carp.image
  • dk.cachet.carp.noise
The Flutter package is named "audio" for historical reasons; however, it is now a "media" package, and the CAMS package name is MediaSamplingPackage.

See the wiki for further documentation, particularly on available measure types. See the CARP Mobile Sensing App for an example of how to build a mobile sensing app in Flutter.

For Flutter plugins for other CARP products, see CARP Mobile Sensing in Flutter.

If you're interested in writing your own sampling packages for CARP, see the description of how to extend CARP on the wiki.


To use this package, add the following to your pubspec.yaml file. Note that this package only works together with carp_mobile_sensing.

dependencies:
  flutter:
    sdk: flutter
  carp_core: ^latest
  carp_mobile_sensing: ^latest
  carp_audio_package: ^latest

Android Integration

Add the following to your app's AndroidManifest.xml file located in android/app/src/main:

<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />

iOS Integration

Add these permissions in the Info.plist file located in ios/Runner:

<key>NSMicrophoneUsageDescription</key>
<string>Uses the microphone to record ambient noise in the phone's environment.</string>
<key>NSCameraUsageDescription</key>
<string>Uses the camera to ....</string>

Using it

To use this package, import it into your app together with the carp_mobile_sensing package:

import 'package:carp_core/carp_core.dart';
import 'package:carp_mobile_sensing/carp_mobile_sensing.dart';
import 'package:carp_audio_package/media.dart';

Before creating a study and running it, register this package in the SamplingPackageRegistry.
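Registration is a single call made before the protocol is configured. The snippet below sketches this using the SamplingPackageRegistry from carp_mobile_sensing:

```dart
// Register the media sampling package in the sampling package
// registry before configuring and deploying a study.
SamplingPackageRegistry().register(MediaSamplingPackage());
```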


The noise measure is event-based, whereas the audio, video, and image measures are one-time measures. Using the measures from this package in a study protocol would look something like the following examples.

// Create a study protocol
StudyProtocol protocol = StudyProtocol(
  ownerId: '',
  name: 'Audio Sensing Example',
);

// Define which devices are used for data collection
// In this case, it's only this smartphone
Smartphone phone = Smartphone();
protocol.addPrimaryDevice(phone);

// Add a task that immediately starts collecting noise.
protocol.addTaskControl(
    ImmediateTrigger(),
    BackgroundTask(measures: [
      Measure(type: MediaSamplingPackage.NOISE),
    ]),
    phone);

The default sampling configuration for noise is to sample every 5 minutes for 10 seconds. This configuration can, however, be overridden like this:

// Collect noise, but change the default sampling configuration
protocol.addTaskControl(
    ImmediateTrigger(),
    BackgroundTask(measures: [
      Measure(type: MediaSamplingPackage.NOISE)
        ..overrideSamplingConfiguration = PeriodicSamplingConfiguration(
          interval: const Duration(seconds: 30),
          duration: const Duration(seconds: 5),
        ),
    ]),
    phone);

An audio measure is a one-time measure and must be started and stopped explicitly. The following example shows how this can be done:

// Sample an audio recording
var audioTask = BackgroundTask(measures: [
  Measure(type: MediaSamplingPackage.AUDIO),
]);

// Start the audio task after 20 secs and stop it after 40 secs
protocol.addTaskControl(
    DelayedTrigger(delay: const Duration(seconds: 20)),
    audioTask, phone, Control.Start);
protocol.addTaskControl(
    DelayedTrigger(delay: const Duration(seconds: 40)),
    audioTask, phone, Control.Stop);

Note that the image and video measures are not used in background sensing and hence do not have a probe associated with them. These measures are only used in an AppTask, i.e., a task done by the user.
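As a rough sketch, an image measure could be embedded in a user task like this. Note that the task type and title below are illustrative assumptions, not values from this README, and the snippet assumes the protocol and phone objects defined in the examples above:

```dart
// Ask the user to take a photo via an AppTask (user task).
protocol.addTaskControl(
    ImmediateTrigger(),
    AppTask(
      type: 'one_time_sensing', // hypothetical user-task type
      title: 'Take a photo', // illustrative title shown to the user
      measures: [Measure(type: MediaSamplingPackage.IMAGE)],
    ),
    phone);
```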


A library for collecting an audio recording from the phone's microphone. It supports the following measures: