gpt-4o-mini Realtime
The gpt-4o-mini-realtime-preview model is a compact, lower-cost realtime variant of GPT-4o designed to power low-latency speech and text interactions. It supports both text and audio inputs and outputs, enabling “speech in, speech out” conversational experiences over a persistent WebSocket or WebRTC connection. Unlike the larger GPT-4o models, it currently does not support image inputs or structured outputs, focusing strictly on real-time voice and text use cases. Developers open a real-time session via the /realtime/sessions endpoint to obtain an ephemeral key, then stream user audio (or text) and receive responses in real time over the same connection. The model is part of the early preview family (version 2024-12-17) and is intended primarily for testing and feedback rather than full production loads; usage is subject to rate limits and may evolve during the preview period. Its audio-and-text focus makes it a natural fit for use cases such as conversational voice agents.
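The sketch below illustrates that session flow in Python under a few assumptions: the endpoint path, event names, and the `voice` setting follow the Realtime API preview documentation for the 2024-12-17 snapshot and may change during the preview, and the `websocket-client` library is just one way to hold the connection open. For server-side connections, a standard API key can also be used in place of the ephemeral one.

```python
import json
import os

import requests
import websocket  # pip install websocket-client

API_KEY = os.environ["OPENAI_API_KEY"]
MODEL = "gpt-4o-mini-realtime-preview-2024-12-17"

# 1) Mint an ephemeral key for the session (normally handed to a client app).
session = requests.post(
    "https://api.openai.com/v1/realtime/sessions",
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json={"model": MODEL, "voice": "verse"},  # voice choice is illustrative
).json()
ephemeral_key = session["client_secret"]["value"]

# 2) Open the realtime WebSocket and send one text turn; audio would instead be
#    streamed as base64 chunks via input_audio_buffer.append events.
ws = websocket.create_connection(
    f"wss://api.openai.com/v1/realtime?model={MODEL}",
    header=[f"Authorization: Bearer {ephemeral_key}", "OpenAI-Beta: realtime=v1"],
)
ws.send(json.dumps({
    "type": "conversation.item.create",
    "item": {"type": "message", "role": "user",
             "content": [{"type": "input_text", "text": "Say hello in one sentence."}]},
}))
ws.send(json.dumps({"type": "response.create"}))

# Read server events until the model finishes the response.
while True:
    event = json.loads(ws.recv())
    if event["type"] == "response.done":
        break
ws.close()
```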
Learn more
AudioLM
AudioLM is a pure audio language model that generates high‑fidelity, long‑term coherent speech and piano music by learning from raw audio alone, without requiring text transcripts or symbolic representations. It represents audio hierarchically with two types of discrete tokens: semantic tokens, extracted from a self‑supervised model, capture phonetic or melodic structure and global context, while acoustic tokens from a neural codec preserve speaker characteristics and fine waveform detail. Three chained Transformer stages then predict semantic tokens for high‑level structure first, followed by coarse and finally fine acoustic tokens for detailed synthesis. This pipeline lets AudioLM condition on a few seconds of input audio and produce seamless continuations that retain voice identity, prosody, and recording conditions in speech, or melody, harmony, and rhythm in music. Human evaluations show that the synthetic continuations are nearly indistinguishable from real recordings.
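AudioLM has no public API, so the snippet below is only a shape-level sketch of the three-stage token pipeline described above: the `dummy_transformer` stand-ins, vocabulary sizes, and coarse/fine quantizer split are invented for illustration, and a real system would decode the fine acoustic tokens back into a waveform with the neural codec.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative vocabulary sizes and quantizer split (not the paper's exact values).
N_SEMANTIC, N_ACOUSTIC = 1024, 1024
COARSE_Q, FINE_Q = 4, 8

def dummy_transformer(conditioning, length, vocab):
    """Stand-in for an autoregressive Transformer stage.

    Here it ignores its conditioning and samples random tokens; it exists only
    to show how the three stages are chained.
    """
    return rng.integers(0, vocab, size=length)

def audiolm_continuation(prompt_semantic, prompt_coarse, n_steps=50):
    # Stage 1: extend the semantic tokens (long-term phonetic/melodic structure).
    semantic = np.concatenate(
        [prompt_semantic, dummy_transformer(prompt_semantic, n_steps, N_SEMANTIC)])
    # Stage 2: coarse acoustic tokens conditioned on the semantic tokens
    # (speaker identity, recording conditions).
    coarse = np.concatenate(
        [prompt_coarse, dummy_transformer(semantic, n_steps * COARSE_Q, N_ACOUSTIC)])
    # Stage 3: fine acoustic tokens conditioned on the coarse ones (waveform detail);
    # a neural codec decoder would then reconstruct audio from them.
    fine = dummy_transformer(coarse, n_steps * FINE_Q, N_ACOUSTIC)
    return semantic, coarse, fine
```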
Learn more
Amazon Nova 2 Omni
Nova 2 Omni is a fully unified multimodal reasoning and generation model capable of understanding and producing content across text, images, video, and speech. It can take in extremely large inputs, ranging from hundreds of thousands of words to hours of audio and lengthy videos, while maintaining coherent analysis across formats. This lets it digest full product catalogs, long-form documents, customer testimonials, and complete video libraries at once, giving teams a single system in place of multiple specialized models. With its ability to handle mixed media in one workflow, Nova 2 Omni opens new possibilities for creative and operational automation: a marketing team, for example, can feed in product specs, brand guidelines, reference images, and video content and generate an entire campaign, including messaging, social content, and visuals, in one pass.
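Assuming Nova 2 Omni is served through Amazon Bedrock's Converse API like earlier Nova models, a mixed-media request might look roughly like the sketch below; the model ID, file names, and prompt are placeholders, and the actual identifier and supported content blocks should be confirmed in the Bedrock documentation.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder model ID: look up the real Nova 2 Omni identifier in the Bedrock console.
MODEL_ID = "amazon.nova-2-omni-v1:0"

def read_bytes(path):
    with open(path, "rb") as f:
        return f.read()

# One request mixing text, an image, a video, and a document.
response = bedrock.converse(
    modelId=MODEL_ID,
    messages=[{
        "role": "user",
        "content": [
            {"text": "Draft campaign messaging and three social posts grounded in "
                     "the attached product shot, testimonial video, and spec sheet."},
            {"image": {"format": "png", "source": {"bytes": read_bytes("product.png")}}},
            {"video": {"format": "mp4", "source": {"bytes": read_bytes("testimonial.mp4")}}},
            {"document": {"format": "pdf", "name": "spec-sheet",
                          "source": {"bytes": read_bytes("specs.pdf")}}},
        ],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```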
Learn more
Qwen3-VL
Qwen3-VL is the newest vision-language model in Alibaba Cloud's Qwen family, designed to fuse strong text understanding and generation with advanced image and video comprehension in one unified multimodal model. It accepts mixed-modality inputs (text, images, and video) and natively handles long, interleaved contexts of up to 256K tokens, with extensibility beyond that. Qwen3-VL delivers major advances in spatial reasoning, visual perception, and multimodal reasoning; its architecture incorporates several innovations, including Interleaved-MRoPE for robust spatio-temporal positional encoding, DeepStack for leveraging multi-level features from the Vision Transformer backbone to refine image-text alignment, and text–timestamp alignment for precise reasoning over video content and temporal events. These upgrades enable Qwen3-VL to interpret complex scenes, follow dynamic video sequences, and read and reason about visual layouts.
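A minimal sketch of running Qwen3-VL locally, assuming its Hugging Face transformers integration follows the same chat-template pattern as earlier Qwen-VL releases; the checkpoint name and image URL are placeholders, and a recent transformers version with Qwen3-VL support is required.

```python
import torch
from transformers import AutoModelForImageTextToText, AutoProcessor

# Checkpoint name is illustrative; substitute any released Qwen3-VL instruct variant.
MODEL_ID = "Qwen/Qwen3-VL-8B-Instruct"

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForImageTextToText.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# Interleaved image + text prompt; the chat template handles the formatting.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/chart.png"},
        {"type": "text", "text": "Describe this chart's layout and read out the axis labels."},
    ],
}]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
# Strip the prompt tokens before decoding the generated answer.
print(processor.batch_decode(output_ids[:, inputs["input_ids"].shape[1]:],
                             skip_special_tokens=True)[0])
```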
Learn more