



Spaced repetition language learning platform running on web and iOS from a single codebase. End-to-end development covering a custom SRS engine, AI-powered lesson and practice generation, multilingual microservices for translation and speech, an internal image and audio database, and a unified subscription model split across Stripe and Apple In-App Purchases.
A language learning platform built around Spaced Repetition System (SRS) methodology — designed to surface vocabulary and lessons at the optimal moment for long-term retention. Fully developed end-to-end, from the SRS algorithm and microservices architecture to the cross-platform front end and native iOS experience. One of the core architectural decisions was building a shared Next.js codebase with a custom i18n routing approach that serves both the web app and the iOS app, avoiding duplication across platforms while keeping the user experience native to each.
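The routing idea can be sketched as a locale resolver for path-based i18n (e.g. `/es/lessons` vs `/lessons`). This is a minimal illustration, not the actual implementation: the supported locale list, the fallback to `Accept-Language`, and the function names are all assumptions.

```typescript
// Illustrative sketch of path-based locale resolution for a shared
// Next.js codebase. Locale list and fallback logic are assumptions.

const SUPPORTED_LOCALES = ["en", "es", "de", "fr"] as const;
type Locale = (typeof SUPPORTED_LOCALES)[number];

function resolveLocale(
  pathname: string,
  acceptLanguage?: string
): { locale: Locale; rest: string } {
  const [, first, ...segments] = pathname.split("/");

  // If the path already carries a supported locale prefix, use it.
  if ((SUPPORTED_LOCALES as readonly string[]).includes(first)) {
    return { locale: first as Locale, rest: "/" + segments.join("/") };
  }

  // Otherwise fall back to the first Accept-Language entry we support.
  const preferred = (acceptLanguage ?? "")
    .split(",")
    .map((s) => s.split(";")[0].trim().slice(0, 2))
    .find((l) => (SUPPORTED_LOCALES as readonly string[]).includes(l));

  return { locale: (preferred as Locale) ?? "en", rest: pathname };
}
```

A resolver like this lets one set of routes serve both the web app and the iOS WebView while each user still lands on localized paths.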
The learning engine is built around a custom SRS implementation that tracks each item per user — vocabulary, phrases, and lesson content — assigning levels, calculating optimal review intervals, and triggering reminders at the right time. The algorithm accounts for performance history, difficulty weighting, and streak data to progressively build a personalized learning path. Pro users can also apply the SRS method to their own custom content, extending the system beyond language learning to any subject.
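The core scheduling loop can be sketched as a level-based interval calculator. The step sizes, the two-level demotion on a mistake, and the difficulty multiplier below are illustrative assumptions, not the platform's actual parameters.

```typescript
// Minimal sketch of a level-based SRS scheduler. Interval table,
// promotion/demotion rules, and difficulty weighting are assumptions.

interface ReviewItem {
  level: number;      // 0 = new, higher = better retained
  difficulty: number; // 1.0 = average; >1 = harder (shorter gaps)
}

// Exponentially growing base intervals per level, in hours.
const BASE_INTERVALS_H = [4, 8, 24, 72, 168, 336, 720, 2880];

function nextReview(
  item: ReviewItem,
  answeredCorrectly: boolean,
  now: Date
): { item: ReviewItem; dueAt: Date } {
  // Correct answers promote the item one level; mistakes demote it
  // two levels, so recently forgotten items resurface quickly.
  const level = answeredCorrectly
    ? Math.min(item.level + 1, BASE_INTERVALS_H.length - 1)
    : Math.max(item.level - 2, 0);

  // Harder items get proportionally shorter review intervals.
  const hours = BASE_INTERVALS_H[level] / item.difficulty;
  const dueAt = new Date(now.getTime() + hours * 3600 * 1000);
  return { item: { ...item, level }, dueAt };
}
```

Performance history and streak data would feed into `difficulty` over time, which is what makes the path personalized rather than a fixed curve.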
The backend runs as a set of independent Node.js microservices, each responsible for a specific domain: lesson creation with automated multilingual translation, speech audio generation, AI API connections for content generation and conversational practice, and SRS calculation and scheduling. This separation allows each service to be scaled, updated, or replaced independently without affecting the rest of the system.
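One way to picture the domain separation is as a typed message contract, where each domain resolves to its own independently deployable service. The service names, URLs, and payload shapes below are purely illustrative assumptions.

```typescript
// Sketch of domain boundaries expressed as typed inter-service
// messages. All names, URLs, and payloads are assumptions.

type ServiceMessage =
  | { service: "translation"; payload: { lessonId: string; targetLangs: string[] } }
  | { service: "speech"; payload: { term: string; language: string } }
  | { service: "srs"; payload: { userId: string; itemId: string; correct: boolean } };

// Each domain maps to its own base URL, so a service can be scaled
// or swapped by changing a single entry, without touching callers.
const SERVICE_URLS: Record<ServiceMessage["service"], string> = {
  translation: "http://translation.internal:4001",
  speech: "http://speech.internal:4002",
  srs: "http://srs.internal:4003",
};

function routeMessage(msg: ServiceMessage): string {
  return `${SERVICE_URLS[msg.service]}/v1/${msg.service}`;
}
```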
AI APIs are integrated at two levels: content generation — producing lesson material, translations, and vocabulary sets — and interactive practice, where users engage in conversation-based exercises powered by LLM APIs. The speech service generates natural HD audio per language and term, stored and served from an internal audio database built and populated incrementally over time.
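A conversational practice turn might be assembled along these lines, using the common chat-completion message format. The system prompt wording and function shape are assumptions for illustration; the actual prompts and provider are not specified in this description.

```typescript
// Sketch of building one conversational-practice turn for an LLM API.
// Message shape follows the common chat-completion convention;
// prompt wording is an assumption.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildPracticeMessages(
  targetLanguage: string,
  learnerLevel: string,
  history: ChatMessage[],
  userUtterance: string
): ChatMessage[] {
  const system: ChatMessage = {
    role: "system",
    content:
      `You are a ${targetLanguage} conversation partner for a ${learnerLevel} learner. ` +
      `Reply only in ${targetLanguage}, keep sentences short, and gently correct mistakes.`,
  };
  return [system, ...history, { role: "user", content: userUtterance }];
}
```

Keeping prompt assembly in a pure function like this makes the exercise logic testable independently of whichever LLM provider sits behind it.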
An internal image and audio database acts as the media backbone of the platform. Rather than relying on external services at runtime, assets are processed, stored, and indexed internally — images associated with vocabulary items, audio with terms and lessons. The database grows as new content is added, building a reusable, queryable asset layer.
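The incremental, reuse-first behavior described above can be sketched as a deterministic asset key plus a get-or-generate lookup. The key scheme, normalization, and store interface are assumptions for illustration.

```typescript
// Sketch of the reuse-or-generate pattern for an internal audio
// database: assets are keyed deterministically so repeated terms map
// to one stored file. Key scheme and store shape are assumptions.
import { createHash } from "crypto";

interface AssetStore {
  get(key: string): string | undefined; // returns a stored asset URL
  put(key: string, url: string): void;
}

function audioKey(language: string, term: string): string {
  // Normalize so "Hallo " and "hallo" resolve to the same asset.
  const normalized = `${language}:${term.trim().toLowerCase()}`;
  return createHash("sha256").update(normalized).digest("hex");
}

async function getOrGenerateAudio(
  store: AssetStore,
  language: string,
  term: string,
  generate: (language: string, term: string) => Promise<string>
): Promise<string> {
  const key = audioKey(language, term);
  const cached = store.get(key);
  if (cached) return cached; // reuse the existing asset

  const url = await generate(language, term); // e.g. call the speech service
  store.put(key, url);
  return url;
}
```

Under this pattern the expensive speech-generation call happens at most once per normalized term, which is what lets the asset layer grow incrementally.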
Subscriptions are split by platform at the payment level — Stripe on web and Apple In-App Purchases on iOS — but unified at the subscription level: a single active plan is recognized and consumable across both platforms regardless of where it was purchased. This required custom logic to reconcile subscription state from two separate payment sources into a single entitlement model.
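The reconciliation logic can be sketched as a fold over records from both payment sources into one entitlement. Field names, statuses, and the "latest expiry wins" tiebreak are assumptions, not the actual entitlement model.

```typescript
// Sketch of reconciling Stripe and Apple subscription records into
// a single entitlement. Field names and statuses are assumptions.

type Source = "stripe" | "apple";

interface SubscriptionRecord {
  source: Source;
  status: "active" | "canceled" | "expired";
  expiresAt: Date;
}

interface Entitlement {
  isPro: boolean;
  source?: Source;
  expiresAt?: Date;
}

// A user is entitled if ANY source reports an unexpired active
// subscription; if both are active, the later-expiring one wins.
function resolveEntitlement(records: SubscriptionRecord[], now: Date): Entitlement {
  const active = records
    .filter((r) => r.status === "active" && r.expiresAt > now)
    .sort((a, b) => b.expiresAt.getTime() - a.expiresAt.getTime());

  if (active.length === 0) return { isPro: false };
  return { isPro: true, source: active[0].source, expiresAt: active[0].expiresAt };
}
```

Because the function only reads normalized records, each payment source (Stripe webhooks, Apple server notifications) can sync independently while entitlement checks stay platform-agnostic.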