For Developers & Builders
The Technical Deep Dive
Edge AI hardware, computer vision frameworks, vision-sensor fusion architecture, and the trade-offs that matter. Everything from the research distilled into actionable engineering decisions.
Edge Hardware
Choosing Your Processing Board
Four realistic options for real-time pose estimation at the edge. Prototype budget: $130-480 depending on path.
Google Coral Dev Board
$99-129
Best for: Consumer yoga coach, single camera, lowest BOM
Ecosystem: TF-Lite + Edge TPU, smaller community
Best for shipping a consumer product fast and cheap
NVIDIA Jetson Orin Nano
$249
Best for: Max developer velocity, broad model support
Ecosystem: CUDA, TensorRT, DeepStream, PyTorch — massive community
Best for prototyping and developer velocity
TI TDA4VM
$249
Best for: Industrial, multi-camera sensor fusion, long-term availability
Ecosystem: TI Edge AI Studio, smaller community, niche tooling
Best for productized embedded systems at scale
Luxonis OAK-D Lite
$149-170
Best for: Plug-and-play AI camera with built-in stereo depth
Ecosystem: DepthAI APIs, Python-first, USB interface
Best middle ground — camera + compute in one device
CV Frameworks
Pose Estimation Options
- MediaPipe: Open-Source, 33 landmarks, >30 FPS
- OpenPose: Research-Grade, 20/20 mapping, GPU-dependent
- YOLO v5/v7/X: Open-Source, 17 keypoints, real-time
- Kemtai SDK: Proprietary, 44 motion points, clinical-grade
- asensei SDK: Proprietary, clinical-grade, real-time
Architecture Decisions
The Trade-Offs That Matter
Green AI vs. Red AI
The computational complexity vs. accuracy trade-off that defines your hardware choice and user experience.
Green AI (lightweight models):
- TwinEDA and MediaPipe achieve near-identical accuracy at under 50% of the compute
- Prevents device overheating and battery drain on consumer hardware
- Under 100 Giga-FLOPs — runs on a Coral, a phone, or a Raspberry Pi
Red AI (compute-heavy models):
- BabyPoseNet and clinical models require over 100 Giga-FLOPs
- Necessary for FDA Class II MSK therapy (Kemtai, Exer AI)
- 44-point tracking with 2 cm joint-deviation accuracy
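The compute budget above can be turned into a back-of-the-envelope feasibility test. The function name, the 0.5 utilization discount, and the device numbers below are illustrative assumptions, not measured figures:

```python
def fits_budget(model_gflops_per_frame: float,
                device_gflops_per_s: float,
                target_fps: float,
                utilization: float = 0.5) -> bool:
    """Rough check: can this device sustain this model at the target FPS?

    utilization discounts the chip's peak throughput, since real
    workloads rarely reach the headline number.
    """
    required = model_gflops_per_frame * target_fps
    available = device_gflops_per_s * utilization
    return required <= available

# A Green AI pose model (~5 GFLOPs/frame) at 30 FPS on a hypothetical
# 4 TOPS-class edge accelerator (~4000 GFLOPs/s peak):
print(fits_budget(5.0, 4000.0, 30.0))    # True: 150 GFLOPs/s needed vs 2000 usable
# A Red AI clinical model (~120 GFLOPs/frame) on the same device:
print(fits_budget(120.0, 4000.0, 30.0))  # False: 3600 needed vs 2000 usable
```

The same arithmetic, run in reverse, tells you the minimum accelerator class a given model and frame rate demand.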
Camera-Only vs. Vision-Sensor Fusion
Whether to rely purely on cameras or combine them with textile-integrated sensors.
Camera-only:
- Most accessible — any smartphone becomes a coaching device
- Engineering Brain pragmatism: physical sensors add mechanical complexity
- Early hardware-heavy tracking attempts all failed commercially
Vision-sensor fusion:
- Textile IMUs (Nadi X, SeamFit) provide ground-truth 3D joint angles
- Solves occlusion permanently — data flows regardless of camera angle
- Apparel becomes a "ground-truth suit" for training vision models
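One minimal way to sketch the fusion idea is a confidence-weighted blend: trust the camera when its landmarks are visible, and fall back to the IMU as they become occluded. The function, the weighting scheme, and the numbers below are hypothetical, not any shipping product's algorithm:

```python
def fuse_joint_angle(vision_angle, vision_confidence, imu_angle,
                     base_weight=0.7):
    """Blend a camera-derived joint angle with a textile-IMU reading.

    When the pose estimator is confident, weight its angle heavily;
    as landmark confidence drops toward zero (occlusion), lean on the
    IMU, which keeps reporting regardless of camera angle.
    """
    if vision_angle is None or vision_confidence <= 0.0:
        return imu_angle  # full occlusion: IMU only
    w = base_weight * min(max(vision_confidence, 0.0), 1.0)
    return w * vision_angle + (1.0 - w) * imu_angle

print(fuse_joint_angle(90.0, 1.0, 100.0))  # confident camera: ~93, mostly vision
print(fuse_joint_angle(90.0, 0.0, 100.0))  # occluded: 100.0, IMU only
```

This is the simplest possible fusion; a production system would more likely run a Kalman filter per joint, but the failure-mode behavior — vision degrades gracefully into IMU ground truth — is the same.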
Specialized vs. Mainstream Hardware
The developer velocity vs. production scalability debate.
Mainstream (NVIDIA Jetson):
- Massive community, tutorials, Stack Overflow answers
- Easier to hire engineers and ship MVPs quickly
- Comprehensive software stacks (CUDA, TensorRT, TF-Lite)
Specialized (TI TDA4VM):
- Low-power, highly stable, designed for embedded production
- Multi-camera sensor fusion at scale
- TI's long-term availability commitments de-risk the hardware supply chain
Clinical Pathway
From Consumer App to Medical Device
The engineering requirements change dramatically when targeting FDA Class II registration. Here's what the research surfaced.
Algorithmic Bias
Train on diverse datasets — body types, skin tones, lighting conditions. A model that works for one demographic is clinically useless.
False Positive Risk
Telling a patient their form is perfect when it isn't can cause severe injury. Implement human-in-the-loop review for high-risk assessments.
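A hedged sketch of what human-in-the-loop gating might look like in practice. The `Assessment` fields, the confidence floor, and the routing labels are invented for illustration — the point is that a high-risk movement is never auto-approved on low model confidence:

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    exercise: str
    form_score: float   # 0.0-1.0, the model's form-quality score
    confidence: float   # 0.0-1.0, the model's confidence in that score
    high_risk: bool     # e.g. a post-surgical MSK rehab movement

def route(a: Assessment, conf_floor: float = 0.85) -> str:
    """Decide whether to auto-deliver feedback or escalate to a clinician.

    A false "your form is perfect" is the dangerous failure mode, so a
    low-confidence read on a high-risk movement always goes to review.
    """
    if a.high_risk and a.confidence < conf_floor:
        return "human_review"
    if a.confidence < conf_floor:
        return "withhold_feedback"  # ask the user to adjust camera, retry
    return "auto_feedback"

print(route(Assessment("squat", 0.9, 0.95, high_risk=False)))   # auto_feedback
print(route(Assessment("knee_ext", 0.9, 0.60, high_risk=True))) # human_review
```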
FDA Class II Requirements
Exer AI achieved it. Kemtai is on the path. Requires absolute safety and precision — 44 motion points, 2cm deviation margin.
Proprietary vs. Open-Source
Proprietary SDKs (asensei, Kemtai) deliver clinical-grade accuracy out of the box but lock you in. Open-source stacks (YOLOv5, MediaPipe) offer freedom but demand substantial engineering labor for the scoring logic.
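To make "scoring logic" concrete, here is a toy example of its smallest building block: compute a joint angle from three pose-estimator keypoints and compare it to a target within a tolerance. The names and thresholds are illustrative, not from any SDK:

```python
import math

def joint_angle(a, b, c):
    """Angle at b (degrees) formed by points a-b-c — e.g. hip-knee-ankle
    keypoints (x, y in image coordinates) from a pose estimator."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def score_rep(measured_deg, target_deg, tolerance_deg=10.0):
    """Toy scoring rule: within tolerance counts as good form."""
    return abs(measured_deg - target_deg) <= tolerance_deg

# Hip directly above the knee, ankle out to the side: a 90-degree knee bend.
hip, knee, ankle = (0.5, 0.2), (0.5, 0.5), (0.8, 0.5)
angle = joint_angle(hip, knee, ankle)
print(round(angle, 1), score_rep(angle, 90.0))  # 90.0 True
```

A real exercise library multiplies this by dozens of joints, per-phase target ranges, temporal smoothing, and per-exercise rep detection — which is exactly the engineering labor the proprietary SDKs have already paid for.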
Build With Us
Whether you're choosing your hardware stack, evaluating CV frameworks, or planning a clinical pathway — we're assembling the team.
Join the Build