OpenAI is reportedly preparing to roll out new audio-focused GPT models as part of its broader push toward a first-generation hardware device that relies primarily on voice-based interaction. According to a report by 9to5Mac, citing The Information, OpenAI’s first AI device is expected to be largely audio-driven, with no display and a strong emphasis on conversational AI.
OpenAI’s audio-first device: What to expect
Speaking previously at Emerson Collective’s Demo Day, designer Jony Ive said the device could arrive in “less than” two years, while OpenAI CEO Sam Altman described the latest prototype as finally feeling “simple and beautiful,” after earlier versions had failed to feel intuitive or approachable. Both have suggested that the core design direction is now locked in.
Earlier reports from the Financial Times and Bloomberg indicated that the device is likely to be compact, screen-free, and designed to pick up audio — and possibly visual — cues from its surroundings. One report suggested OpenAI could rely on a small projector to display information on nearby surfaces instead of including a built-in display.
The latest reporting adds weight to the idea that audio will be the primary interface. According to The Information, OpenAI has discussed form factors such as smart glasses and a speaker-style device without a display. These ideas point toward a product designed to sit alongside users throughout the day, rather than replacing phones or laptops outright.