inkpal_bridge 1.4.5

Let AI agents inspect, debug, and control your running Flutter app — read widgets, tap, screenshot, capture errors. Zero deps. Debug-only.

inkpal_bridge #

Let your AI inspect, debug, and control your running Flutter app.

Works with Claude Code, Cursor, Windsurf, Codex CLI, and Copilot in VS Code.


What it looks like in your editor #

You:    "There's a layout bug on the settings screen. Find it and fix it."

Agent:  → navigates to /settings
        → reads the widget tree
        → captures the runtime error
        → screenshots the broken state
        → applies the fix in lib/screens/settings.dart
        → re-runs the screen
        → screenshots the fixed state

        "Fixed: Row was missing Expanded around the long Text. Diff committed."

That's the loop. Real Flutter apps. Real fixes. Visible proof. The bridge is what lets the agent do any of that on a real running app.

Free forever — register once #

inkpal_bridge is debug-only and free for personal and commercial use. Sign up at inkpal.ai — takes 30 seconds, no card — and you get an API key that works forever with this package.

The optional 24-hour trial of the AI agent layer (the part that turns your editor's AI into a Flutter-aware automation agent) is a separate upgrade — see inkpal.ai. The bridge keeps working with your free key whether you continue with the AI layer or not.

60-second setup #

# pubspec.yaml
dependencies:
  inkpal_bridge: ^1.4.5

// lib/main.dart
import 'package:inkpal_bridge/inkpal_bridge.dart';

void main() {
  InkPalBridge.init(
    serverUrl: 'ws://localhost:8765',
    appRunner: () => runApp(const MyApp()),
    licenseKey: const String.fromEnvironment('INKPAL_LICENSE_KEY'),
  );
}

# run with your free key
flutter run --dart-define=INKPAL_LICENSE_KEY=your_free_key

That's the install.

What you get #

The bridge gives the AI agent three capabilities that change what it can do for you:

Read your app #

The agent can see what's actually on screen — every widget, the active route, form values, error context, log history. No more "tell me what you see in the simulator." It just looks.

Drive your app #

Tap, scroll, type, navigate, capture a screenshot, replay a flow. The agent can run end-to-end paths through your real app the same way a human QA would, and verify each step worked.

Catch what went wrong #

When something throws, the bridge captures the error with the widget tree, navigation stack, and recent logs already attached. The agent gets enough context to find the cause on the first attempt instead of the fifth.
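In debug builds, capture like this typically hangs off Flutter's global error handlers. A minimal sketch of that mechanism, assuming a hypothetical `report` callback (this is not the package's actual internals):

```dart
import 'dart:ui';
import 'package:flutter/foundation.dart';

void installErrorCapture(void Function(Map<String, Object?>) report) {
  // Framework errors (build/layout/paint exceptions).
  final previous = FlutterError.onError;
  FlutterError.onError = (FlutterErrorDetails details) {
    report({
      'exception': details.exceptionAsString(),
      'stack': details.stack?.toString(),
      // The bridge additionally attaches the widget tree, navigation
      // stack, and recent logs at this point (per the description above).
    });
    previous?.call(details); // keep the default console output
  };

  // Uncaught errors from outside the framework (async callbacks, zones).
  PlatformDispatcher.instance.onError = (error, stack) {
    report({'exception': '$error', 'stack': '$stack'});
    return true; // mark as handled
  };
}
```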

(Optional) Wire navigator + screenshot anchor #

For named-route navigation and faster screenshots, add two optional hooks to your MaterialApp:

MaterialApp(
  navigatorObservers: [
    if (InkPalBridge.instance != null)
      InkPalBridge.instance!.navigatorObserver,
  ],
  home: RepaintBoundary(
    key: InkPalBridge.instance?.repaintBoundaryKey,
    child: const HomeScreen(),
  ),
)

Custom widgets (walkerHooks) #

Have proprietary design-system widgets (BrandButton, GlassCard) without standard Material semantics? Teach the bridge to recognise them:

InkPalBridge.init(
  serverUrl: 'ws://localhost:8765',
  appRunner: () => runApp(const MyApp()),
  walkerHooks: InkPalWalkerHooks(
    isInteractiveWidget: (w) => w is BrandButton,
    extractTextFrom: (w) => w is BrandButton ? w.label : null,
  ),
);

The agent can then say "tap the Save button" and the bridge resolves it correctly — no need to wrap every call site in Semantics(label:).
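The same two hooks scale to several custom types. A hedged sketch, where `BrandButton.label` and an assumed `GlassCard.title` stand in for your own widgets' text accessors:

```dart
walkerHooks: InkPalWalkerHooks(
  // Anything tappable in your design system counts as interactive.
  isInteractiveWidget: (w) => w is BrandButton || w is GlassCard,
  // Surface a human-readable label the agent can match against.
  extractTextFrom: (w) {
    if (w is BrandButton) return w.label;
    if (w is GlassCard) return w.title; // hypothetical accessor
    return null;
  },
),
```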

A working demo lives in example/lib/main.dart.

Router support #

Works with every major Flutter router. Pass onNavigateToRoute so the bridge can drive named navigation:

// go_router
onNavigateToRoute: (route) async => router.go(route),

// GetX
onNavigateToRoute: (route) async => Get.toNamed(route),

// Beamer
onNavigateToRoute: (route) async => beamerDelegate.beamToNamed(route),

Standard Navigator works without any callback.

App-state context #

Expose your app's runtime state so the agent has more to reason about:

InkPalBridge.init(
  serverUrl: 'ws://localhost:8765',
  appRunner: () => runApp(const MyApp()),
  globalStateProvider: () async => {
    'user': {'plan': currentUser.plan},
    'cart': {'items': cart.length, 'total': cart.total},
  },
);

Architecture #

Your AI assistant   ⇄   inkpal_bridge   ⇄   your running Flutter app

The bridge runs entirely in debug mode. In release builds, init() collapses to a direct appRunner() call — zero overhead, zero memory allocation, no socket, no extension registration. Your release builds ship as if the bridge wasn't there.
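A gate like this is commonly built on a compile-time constant, which lets the tree shaker drop the bridge code from release binaries entirely. A sketch under that assumption (not the package's actual source):

```dart
import 'package:flutter/foundation.dart' show kReleaseMode;

class InkPalBridge {
  static void init({required void Function() appRunner /* , ... */}) {
    if (kReleaseMode) {
      // Release: run the app directly. Because kReleaseMode is a
      // compile-time constant, the branch below is eliminated.
      appRunner();
      return;
    }
    // Debug/profile: start the bridge, then the app.
    // _startBridge(); // hypothetical
    appRunner();
  }
}
```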

Privacy + security #

  • Debug-only. Release builds bypass everything.
  • Local-first. Communication binds to localhost. Nothing leaves your machine without explicit configuration.
  • Sensitive headers redacted. Authorization, Cookie, X-Api-Key, and similar patterns are stripped before any HTTP request is surfaced to the agent.
  • Offline-tolerant. Cached license grants survive network outages.
  • Open-source. MIT-licensed.
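Redaction of the kind described above can be sketched as a case-insensitive name filter over the header map (a hypothetical illustration, not the package's code):

```dart
// Header names treated as sensitive (assumed list for illustration).
const _sensitive = {'authorization', 'cookie', 'set-cookie', 'x-api-key'};

Map<String, String> redactHeaders(Map<String, String> headers) => {
      for (final e in headers.entries)
        e.key: _sensitive.contains(e.key.toLowerCase())
            ? '<redacted>'
            : e.value,
    };
```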


Requirements #

  • Flutter ≥ 3.10
  • Dart ≥ 3.0
  • Debug mode (zero overhead in release)

Supported platforms: Android · iOS · macOS · Linux · Windows.

License #

MIT — see LICENSE.
