India’s GenAI Moment in Media & Entertainment is Here — Powered by #VisualDub NeuralGarage! 🚀🚀

We’re incredibly proud to announce a global first: our proprietary #VisualDub technology has been used to visually transform and lip-sync an entire film from one language to another — not as a dubbed version, but as an original cinematic experience in multiple languages. Visual dubbing is now a category in production and post-production, and #VisualDub will create it at scale globally. 🔥

🎬 War 2, releasing tomorrow, 14th August, marks this revolution. Originally shot in Hindi, it has received a straight-film certificate and is being distributed as a native Telugu-language film, not just a dubbed version — a first of its kind, not only in India but globally.

This isn’t just innovation. It’s a fundamental shift in how content will be produced, distributed, and monetised going forward.

Why This Matters:

1) #Game-Changing Distribution Models: Producers can now sell multi-language versions as original films and monetise pre-release distribution rights at 2-3X.

2) #Enhanced Audience Experience: Regional viewers see a film that looks as if it were natively filmed in their language, driving higher engagement and box-office potential. Exhibitors can command premium pricing for such immersive experiences.

3) #Global Original IP Creation: Content owners can sell globally in multiple “original” languages, opening up unprecedented monetisation routes.

4) #OTT Premium Offerings: Platforms can bundle #VisualDub-enabled films as premium tiers, elevating user experience and ARPU.

5) #Talent Democratisation: Actors will transcend language barriers. Hrithik Roshan can now be a Telugu, Tamil, or even a Spanish actor. Tom Cruise in Bhojpuri? Now possible.

6) #No More Bad Dubbing: Viewers get seamless experiences in the language of their choice. Language is no longer a barrier — it’s a gateway.

#VisualDub isn’t just a tool. It’s the beginning of a reimagined content ecosystem, where tech meets storytelling to redefine how we produce, consume, and globalise content across all pipes.

The #future of cinema is multilingual, global, and visually immersive — and it’s already here. 🔥🔥

#VisualDub #GenAI #MediaTech #NeuralGarage #War2 #AIinEntertainment #FutureOfCinema #visualdubbing #lipsync #lipsynchronisation #lipsynchronization

Shailesh Ghorpade Sailesh Ramakrishnan Shivam Patel Benaifer Malandkar Sameer Nair Vivek Krishnani Akshaye Rathi Ashutosh Srivastava Kevin Vaz Subhabrata Debnath (Subho) Anjan Banerjee Subhashish Saha Deepak Pathak Aamir Bashir Ashish Kotekar Chaitanya Chinchlikar Vanita Kohli-Khandekar Shobu Yarlagadda Atul Phadnis Vitrina A.I. HUNGAMA Naman Ramachandran The Hollywood Reporter Anupama Chopra Ranveer Allahbadia Viraj Sheth Raj Shamani Tanmay Bhat Rajan Anandan Shailendra J Singh Mohit Bhatnagar Lightspeed India Peak XV Partners Accel Blume Ventures Sajith Pai Sanjay Nath Arpit Agarwal Sampath P
So glad !!!!! 🧿
Even though I’m not directly linked to the product/company, I have been following it for quite some time now, and being an AI Evangelist & Product Manager myself, I am really feeling proud! It just reminded me of one pitch where I suggested this (VisualDub) to an OTT for trailers (as a PoC), and they kind of laughed it off: “these are sales pitches, talk something realistic” 😅
What an achievement! Congratulations Mandar & Team!
Wow! This is an amazing achievement Mandar Natekar, Subhabrata Debnath(Subho)! Congrats to team NeuralGarage 🤩👏👊
Woww... Were the trailers also dubbed with VisualDub? Mandar Natekar
Love this, Mandar
Love this, Mandar. Amazing achievement
Congratulations team!! This is amazing
This is why we started up. #VisualDub enables actors to speak to new markets:
— without re-shoots,
— while preserving the original performance,
— keeping brand/face continuity across regions, with no “voice–face” disconnect,
— unlocking roles beyond an actor’s native language.

#Adoption
For video-based generative AI models to gain adoption in real-world post-production, one needs to account for quality and workflow challenges:

𝐅𝐫𝐚𝐦𝐞 𝐪𝐮𝐚𝐥𝐢𝐭𝐲: sub-pixel facial-muscle alignment, frame-accurate retiming, multi-frame optical-flow constraints for temporal consistency, grain synthesis that survives DI, lens-distortion-aware warps, and strict QC for cadence, motion blur, and artifact-free close-ups.

𝐂𝐨𝐥𝐨𝐫 & 𝐝𝐚𝐭𝐚 𝐢𝐧𝐭𝐞𝐠𝐫𝐢𝐭𝐲: up to 32-bit EXR compatibility and tone-safe warps.

𝐄𝐝𝐢𝐭𝐨𝐫𝐢𝐚𝐥 𝐢𝐧𝐭𝐞𝐫𝐨𝐩𝐞𝐫𝐚𝐛𝐢𝐥𝐢𝐭𝐲: handle-aware shots, versioned IO, and deterministic renders for repeatable turnovers.

Done right, visual dubbing doesn’t just translate dialogue — it extends an actor’s market and deepens connection with local audiences at theatrical quality.
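For readers curious what an optical-flow-based temporal-consistency check of the kind mentioned under frame quality can look like, here is a minimal, illustrative sketch in Python (OpenCV + NumPy). This is not NeuralGarage’s actual pipeline; the warping-error metric, the Farneback flow settings, and the threshold are assumptions for illustration. The idea: compute dense flow on the untouched source plate, warp the previous output frame with it, and flag frames where the prediction drifts from the current output frame, a common sign of temporal flicker around a re-synthesised mouth region.

import cv2
import numpy as np

def backward_warp(prev_frame, flow):
    # Sample prev_frame at (x + u, y + v) for every pixel of the current frame,
    # i.e. a backward warp using flow computed from current -> previous.
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)

def warp_error(prev_src, cur_src, prev_out, cur_out):
    # Dense flow on the (stable) source plate, from the current frame back to
    # the previous one, so the previous *output* frame can predict the current.
    flow = cv2.calcOpticalFlowFarneback(
        cv2.cvtColor(cur_src, cv2.COLOR_BGR2GRAY),
        cv2.cvtColor(prev_src, cv2.COLOR_BGR2GRAY),
        None, 0.5, 3, 25, 3, 7, 1.5, 0)
    predicted = backward_warp(prev_out, flow)
    diff = np.abs(predicted.astype(np.float32) - cur_out.astype(np.float32))
    return float(diff.mean())

# Hypothetical QC loop: flag frame pairs whose warp error spikes above a
# per-show threshold; spikes usually indicate temporal flicker in the dubbed region.
THRESHOLD = 6.0  # illustrative value, tuned per delivery spec

def qc_report(src_frames, out_frames):
    flags = []
    for i in range(1, len(src_frames)):
        e = warp_error(src_frames[i - 1], src_frames[i],
                       out_frames[i - 1], out_frames[i])
        if e > THRESHOLD:
            flags.append((i, e))
    return flags

A production QC pass would restrict the metric to the mouth region, use a higher-quality flow estimator, and work on the graded EXR frames rather than 8-bit video, but the warping-error idea is the same.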