
Introduction

What Is the React Native SDK?

The Orga React Native SDK (@orga‑ai/react‑native) brings Orga’s multimodal AI runtime to mobile.
It provides the same abstraction model as the React SDK—hooks, context, and components—but with native WebRTC audio/video handling.

Use it to build cross‑platform apps that see, hear, and speak in real time, without implementing signaling or media pipelines yourself.

  • Built on top of @orga‑ai/core, a framework‑agnostic TypeScript foundation that manages configuration and shared state.
  • Uses react‑native‑webrtc for low‑latency media streaming and react‑native‑incall‑manager for audio routing and call management.
  • Fully compatible with the Orga Server SDKs (Node.js, Python) for minting ephemeral tokens and orchestrating back‑end sessions.
  • Designed primarily for Expo projects using custom development builds (works on bare React Native as well).

Why Use It?

  • Expo‑first simplicity: Install, wrap your app in the provider, and start a real‑time AI session in minutes.
  • Parity with Web SDK: Same hooks and API signatures as @orga‑ai/react for easy code sharing.
  • Reliable native media: Hardware‑level camera/mic access through react‑native‑webrtc and incall‑manager.
  • Composable UX: Use Orga‑provided primitives (<OrgaAICameraView>, <OrgaAIControls>) or bind the streams to your own custom components.

How It Works

At runtime, the React Native SDK wraps the Orga core client inside a provider and synchronizes the session with your component tree (see the sketch after this list):

  1. Initialization: Call OrgaAI.init() once with a fetch function or endpoint that returns tokens and ICE servers.
  2. Session bootstrap: <OrgaAIProvider> authenticates and sets up the WebRTC transports.
  3. Interaction: Hooks such as useOrgaAI() expose session state, media streams, and controls.
  4. Teardown: Connections and media resources are released automatically when components unmount.
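
A minimal sketch tying these four steps together. The init option name (tokenEndpoint) and the exact fields returned by useOrgaAI() are assumptions for illustration; check the API Reference for the real signatures.

import React from "react";
import { Button, Text, View } from "react-native";
import {
  OrgaAI,
  OrgaAIProvider,
  OrgaAICameraView,
  OrgaAIControls,
  useOrgaAI,
} from "@orga-ai/react-native";

// 1. Initialization: point the SDK at your token-minting proxy (hypothetical option name).
OrgaAI.init({ tokenEndpoint: "https://your-app.example/api/token" });

function SessionScreen() {
  // 3. Interaction: the hook exposes session state, streams, and controls (field names assumed).
  const { status, startSession } = useOrgaAI();
  return (
    <View>
      <Text>Session: {status}</Text>
      <Button title="Start" onPress={() => startSession()} />
      <OrgaAICameraView />
      <OrgaAIControls />
    </View>
  );
}

export default function App() {
  // 2. Session bootstrap: the provider authenticates and sets up transports.
  // 4. Teardown: resources are released automatically when the provider unmounts.
  return (
    <OrgaAIProvider>
      <SessionScreen />
    </OrgaAIProvider>
  );
}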

Installation

npm install @orga-ai/react-native react-native-webrtc react-native-incall-manager

Expo Go is not supported. You must create a development build (expo prebuild + eas build) and run on a physical device.

Minimum Requirements

Requirement             Version / Notes
React Native            v0.72 or newer
Expo CLI / SDK          SDK 50 or newer (for API Routes support)
iOS / Android targets   iOS 13+, Android 8+
Node                    v18+ (for your backend proxy)

Configure Permissions (Expo)

Add the required Android permissions and iOS Info.plist keys to your app.json:

{ "expo": { "ios": { "infoPlist": { "NSCameraUsageDescription": "Allow $(PRODUCT_NAME) to access your camera", "NSMicrophoneUsageDescription": "Allow $(PRODUCT_NAME) to access your microphone" } }, "android": { "permissions": [ "android.permission.CAMERA", "android.permission.RECORD_AUDIO" ] } } }

Development Build Required

The SDK relies on native modules and cannot run inside Expo Go.
Build a development client with:

npx expo prebuild
eas build --platform all --profile development

Backend Integration

Just like the web SDK, the mobile SDK never handles your API key directly.
You must provide a proxy endpoint that mints an ephemeral token and returns ICE servers.

You can:

  1. Use Expo API Routes (SDK 50+) — ideal for managed workflows (see the sketch below).
  2. Or host a custom backend (Node/Express, Fastify, etc.) using one of our Server SDKs.
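
As a rough sketch, an Expo API Route (SDK 50+) that fulfills this contract might look like the following. The Orga REST URL and the { token, iceServers } response fields are assumptions; the Quick Start shows the exact calls for our Server SDKs.

// app/api/token+api.ts — Expo API Route (runs on the server, never in the app)
export async function GET(_request: Request): Promise<Response> {
  // Hypothetical Orga endpoint; the real API key stays server-side.
  const res = await fetch("https://api.orga.example/v1/ephemeral-tokens", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.ORGA_API_KEY}` },
  });
  const { token, iceServers } = await res.json();
  // The mobile SDK consumes this shape: a short-lived token plus ICE servers.
  return Response.json({ token, iceServers });
}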

Learn more in the Quick Start guide.


Platform‑Specific Notes

  • 🎥 Camera handling: Built on react-native-webrtc; verify permissions before calling startSession() (one approach is sketched below).
  • 🔊 Audio routing: Managed by react-native-incall-manager (handles speaker, receiver, and Bluetooth).
  • 🔒 Auth model: Uses the same ephemeral token flow as the web SDKs.
  • 🧠 Parity with Core: Shares configuration schema, error classes, and types with @orga-ai/core.
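
A minimal permission check, assuming an Expo project with expo-camera and expo-av installed (the SDK may also surface its own helper):

import { Camera } from "expo-camera";
import { Audio } from "expo-av";

// Resolves true only when both camera and microphone access are granted.
export async function ensureMediaPermissions(): Promise<boolean> {
  const camera = await Camera.requestCameraPermissionsAsync();
  const microphone = await Audio.requestPermissionsAsync();
  return camera.status === "granted" && microphone.status === "granted";
}

// Usage: gate the session start on the result.
// if (await ensureMediaPermissions()) { startSession(); }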

Next Steps

  • Architecture → Understand how each layer (UI → SDK → core → backend) works together.
  • Quick Start → Step‑by‑step setup and backend integration.
  • API Reference → Reference for the OrgaAI class.