Text2Mesh
Text2Mesh stylizes a 3D mesh by predicting color and local geometric details that conform to a target text prompt, and it works across a wide variety of source meshes. Our stylization results coherently blend unique and ostensibly unrelated combinations of text, capturing both global semantics and part-aware attributes. We consider a disentangled representation of a 3D object: a fixed mesh input (the content) coupled with a learned neural network, which we term the neural style field network. To modify style, we obtain a similarity score between a text prompt (describing the style) and the stylized mesh by harnessing the representational power of CLIP. Text2Mesh requires neither a pre-trained generative model nor a specialized 3D mesh dataset; it can handle low-quality meshes (non-manifold, with boundaries, etc.) of arbitrary genus, and it does not require UV parameterization.
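The optimization behind this approach can be sketched in PyTorch: a small MLP (the neural style field) maps each vertex position to an RGB color and a displacement along the vertex normal, and its weights are trained so that CLIP embeddings of rendered views of the stylized mesh align with the embedding of the text prompt. This is a minimal illustration under stated assumptions, not the project's actual code; verts, normals, faces, and render_views are placeholders standing in for the input mesh and a differentiable renderer.

    import torch
    import torch.nn as nn
    import clip  # OpenAI's CLIP package

    # Minimal neural style field: the real Text2Mesh network uses positional
    # encoding and separate color/displacement branches, omitted here.
    class NeuralStyleField(nn.Module):
        def __init__(self, hidden=256):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(3, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 4),  # 3 color channels + 1 displacement scalar
            )

        def forward(self, verts):                    # verts: (V, 3)
            out = self.mlp(verts)
            color = torch.sigmoid(out[:, :3])        # per-vertex RGB in [0, 1]
            disp = 0.1 * torch.tanh(out[:, 3:])      # small offset along the normal
            return color, disp

    model, _ = clip.load("ViT-B/32")                 # CLIP stays frozen; only the field trains
    with torch.no_grad():
        text_emb = model.encode_text(clip.tokenize(["a candle made of colorful crochet"]))

    field = NeuralStyleField()
    opt = torch.optim.Adam(field.parameters(), lr=5e-4)

    for step in range(1000):
        color, disp = field(verts)                   # verts/normals/faces: the fixed input mesh
        styled = verts + disp * normals              # geometric detail along vertex normals
        images = render_views(styled, faces, color)  # placeholder differentiable renderer;
                                                     # must return CLIP-preprocessed view batches
        img_emb = model.encode_image(images)
        # Maximize cosine similarity between the rendered views and the prompt.
        loss = -torch.cosine_similarity(img_emb, text_emb, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()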
Seed3D
Seed3D 1.0 is a foundation-model pipeline that takes a single input image and generates a simulation-ready 3D asset, including closed manifold geometry, UV-mapped textures, and physically-based rendering (PBR) material maps, designed for immediate integration into physics engines and embodied-AI simulators. It uses a hybrid architecture that combines a 3D variational autoencoder for latent geometry encoding with a diffusion-transformer stack that generates detailed 3D shapes, followed by multi-view texture synthesis, PBR material estimation, and UV texture completion. The geometry branch produces watertight meshes with fine structural details (e.g., thin protrusions, holes, and text), while the texture/material branch yields multi-view-consistent albedo, metallic, and roughness maps at high resolution, enabling realistic appearance under varied lighting. Assets generated by Seed3D 1.0 require minimal cleanup or manual tuning.
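Seed3D's interfaces are not public, so the staging described above can only be illustrated abstractly. The Python sketch below shows how such a pipeline might be composed; every name (geometry_vae, shape_dit, texturer, material_net) is a placeholder, not part of Seed3D itself.

    from dataclasses import dataclass

    @dataclass
    class Asset:
        mesh: object        # watertight, UV-mapped geometry
        albedo: object      # multi-view-consistent base color map
        metallic: object    # PBR metallic map
        roughness: object   # PBR roughness map

    def generate_asset(image, geometry_vae, shape_dit, texturer, material_net):
        # Hypothetical staging mirroring the description above; not Seed3D's API.
        latent = geometry_vae.encode(image)                  # image -> latent geometry code
        mesh = shape_dit.sample(latent)                      # diffusion transformer -> shape
        views = texturer.synthesize(mesh, image)             # multi-view texture synthesis
        albedo, metal, rough = material_net.estimate(views)  # PBR material estimation
        albedo = texturer.complete_uv(mesh, albedo)          # fill occluded UV regions
        return Asset(mesh, albedo, metal, rough)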
Poly
Poly is an AI-enabled texture creation tool that lets you quickly generate customized, 8K, seamlessly tileable textures with up to 32-bit physically-based rendering (PBR) maps from a simple prompt (text and/or image) in seconds. It's perfect for 3D applications such as 3D modeling, character design, architectural visualization, game development, AR/VR world-building, and much more. We're thrilled to share the result of our team's research with the community and hope you will find it useful and fun. Type in a prompt, select a texture material type, and watch as Poly creates a fully formed 32-bit EXR texture for you. You can use this to play around with Poly's AI, see what it is capable of, and experiment with prompting strategies. The dock at the bottom of the screen lets you switch views: review your past prompts, view a model in 3D, or inspect any of the six available PBR maps.
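Because Poly exports 32-bit EXR maps, the generated textures can be consumed as float arrays in any EXR-aware pipeline. The sketch below uses Python's imageio, assuming an EXR-capable backend (such as the freeimage or OpenEXR plugin) is installed; the file names and the particular set of map names are assumptions, since Poly's actual export naming may differ.

    import imageio.v3 as iio
    import numpy as np

    # Hypothetical export names for the six PBR maps; Poly's naming may differ.
    maps = {}
    for name in ("albedo", "normal", "height", "roughness", "metallic", "ao"):
        # A 32-bit EXR loads as a float array, preserving full dynamic range.
        maps[name] = iio.imread(f"brick_wall_{name}.exr")

    # A seamlessly tileable texture repeats without visible seams:
    tiled = np.tile(maps["albedo"], (2, 2, 1))  # 2x2 repetition of the base color map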
Next3D.tech
Next3D.tech is an AI-powered platform that generates production-ready 3D models from text descriptions or 2D images in under 30 seconds. It eliminates the need for complex 3D modeling skills or software: users simply describe their vision or upload an image. The platform supports export to all major 3D file formats, including GLB, FBX, OBJ, and STL, for seamless integration with engines such as Unity and Unreal. Next3D offers high-fidelity textures and realistic materials generated automatically by AI, suitable for games, e-commerce, AR/VR, and architectural visualization. It drastically reduces the time and cost of 3D asset creation, claiming savings of up to 90% over traditional methods. Trusted by hundreds of creators worldwide, it's currently available in a free beta with unlimited 3D model generation.
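Once a model is downloaded, converting between the supported formats is straightforward with off-the-shelf tools. Here is a minimal sketch using the trimesh Python library, where "model.glb" is a placeholder for whatever file the platform exports:

    import trimesh

    loaded = trimesh.load("model.glb")  # placeholder file name
    # GLB files often load as a Scene; merge into one mesh for simple exports.
    if isinstance(loaded, trimesh.Scene):
        loaded = loaded.dump(concatenate=True)

    loaded.export("model.obj")  # e.g., for DCC tools
    loaded.export("model.stl")  # e.g., for 3D printing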