Adobe has expanded its Firefly generative AI platform to mobile, launching a new app for both iOS and Android devices.
The Firefly app allows users to create images and videos from text prompts, apply AI editing features such as Photoshop's Generative Fill and Generative Expand, and sync projects to Creative Cloud for continued editing across devices. As first reported by The Verge, the app brings Adobe's desktop AI tools into a portable, user-friendly interface, catering to creatives who need to generate assets on the move.

Beyond Adobe’s own models, the app integrates with leading third-party AI tools, including Google’s Imagen 3 and 4, OpenAI’s image generator, and Google’s Veo 2 and 3 for video creation. This model-agnostic approach enables users to choose the AI tools best suited for their creative needs, directly from their smartphones. However, access to certain features still requires Firefly credits, which are bundled with Creative Cloud plans or available via a standalone subscription.
In parallel, Adobe has introduced new features to Firefly Boards, a collaborative whiteboard-style platform that launched in beta in April. Users can now remix video content and generate new footage using either Adobe's models or third-party tools like Google's Veo 3. Adobe is also rolling out additional partnerships, with upcoming support for Luma AI's Ray 2, Ideogram 3.0, Runway's Gen-4, and Pika's text-to-video generator, all of which will join the broader Firefly ecosystem.
By bringing Firefly to mobile and expanding its collaborative video and whiteboard capabilities, Adobe is reinforcing its commitment to making generative AI tools more accessible, flexible, and integrated across platforms. This latest rollout strengthens Firefly’s position as a central hub for next-generation creativity, powered by both in-house innovations and an expanding network of AI partners.
