ConsistentID: Portrait Generation with Multimodal Fine-Grained Identity Preserving

1. Shenzhen Campus of Sun Yat-sen University, 2. Zhuhai Campus of Sun Yat-sen University, 3. Lenovo Research, 4. Inception Institute of Artificial Intelligence

Given an input ID, our ConsistentID can generate diverse personalized portraits from text prompts using only a single reference image.


Diffusion-based technologies have made significant strides, particularly in personalized and customized facial generation. However, existing methods face challenges in achieving high-fidelity and detailed identity (ID) consistency, primarily due to insufficient fine-grained control over facial areas and the lack of a comprehensive ID-preservation strategy that fully considers both intricate facial details and the face as a whole. To address these limitations, we introduce ConsistentID, an innovative method for diverse identity-preserving portrait generation under fine-grained multimodal facial prompts, utilizing only a single reference image. ConsistentID comprises two key components: a multimodal facial prompt generator that combines facial features, corresponding facial descriptions, and the overall facial context to enhance precision in facial details, and an ID-preservation network optimized through a facial attention localization strategy, aimed at preserving ID consistency in facial regions. Together, these components significantly enhance the accuracy of ID preservation by introducing fine-grained multimodal ID information from facial regions. To facilitate training of ConsistentID, we present a fine-grained portrait dataset, FGID, with over 500,000 facial images, offering greater diversity and comprehensiveness than existing public facial datasets. Experimental results substantiate that ConsistentID achieves exceptional precision and diversity in personalized facial generation, surpassing existing methods on the MyStyle dataset. Furthermore, while ConsistentID introduces more multimodal ID information, it maintains a fast inference speed during generation.



Comparison of facial feature details between our method and existing approaches.



The overall framework of our proposed ConsistentID.

The framework comprises two key modules: a multimodal facial prompt generator and a purposefully crafted ID-preservation network. The multimodal facial prompt generator consists of two essential components: a fine-grained multimodal feature extractor, which captures detailed facial information, and a facial ID feature extractor dedicated to learning facial ID features. The ID-preservation network utilizes both facial textual and visual prompts, and prevents the blending of ID information from different facial regions through the facial attention localization strategy, thereby ensuring the preservation of ID consistency in the facial regions.
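The core idea of facial attention localization, restricting each facial ID token so it only attends within its own facial region, can be illustrated with a minimal sketch. The function name, tensor shapes, and masking scheme below are our own illustration under assumed conventions, not the paper's actual implementation:

```python
import numpy as np

def region_masked_attention(q, k, v, region_masks, token_regions):
    """Illustrative sketch of facial attention localization.

    q: (P, d) queries, one per image pixel/patch
    k, v: (T, d) keys/values, one per facial ID token
    region_masks: (R, P) boolean pixel membership per facial region
    token_regions: (T,) region index assigned to each ID token

    Each pixel may only attend to ID tokens whose facial region
    contains it, so ID features of different regions do not blend.
    """
    scores = q @ k.T / np.sqrt(q.shape[1])       # (P, T) attention logits
    allowed = region_masks[token_regions].T      # (P, T) region gate
    scores = np.where(allowed, scores, -1e9)     # block out-of-region tokens
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v                           # (P, d) localized features
```

With two pixels belonging to two disjoint regions (e.g. eyes and mouth), each pixel's output reduces to the value vector of its own region's token; in a real diffusion U-Net this gating would be applied inside the cross-attention layers using segmentation masks of the facial parts.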




Visualization in re-contextualization settings. These examples demonstrate the high-identity fidelity and text editing capability of ConsistentID.

Ablation experiments