
    Idea23D: Collaborative LMM Agents Enable 3D Model Generation from Interleaved Multimodal Inputs

    2024.11: Idea-2-3D has been accepted by COLING 2025! See you in Abu Dhabi, UAE, from January 19 to 24, 2025!

    2025.01: A Gradio demo is available at https://3389f4ca9cd69aae21.gradio.live


    Junhao Chen*, Xiang Li*, Xiaojun Ye, Chao Li, Zhaoxin Fan†, Hao Zhao†


    Introduction

    Based on LMMs, we developed Idea23D, a multimodal iterative self-refinement system that enhances any T2I model for automatic 3D model design and generation, enabling new creation capabilities together with better visual quality while understanding high-level, interleaved multimodal inputs.

    Compatibility

    Run

    A Gradio demo is available (see the news above), and you can also clone this repo to your local machine and run pipeline.py. The main dependencies are Python 3.10, torch==2.2.2+cu118, torchvision==0.17.2+cu118, transformers==4.47.0, tokenizers==0.21.0, numpy==1.26.4, diffusers==0.31.0, rembg==2.0.60, and openai==0.28.0. These are compatible with GPT-4o, InstantMesh, Hunyuan3D, SDXL, InternVL2.5-78B, and llava-CoT-11B.

    pip install -r requirements-local.txt
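
    Before running the notebook or pipeline, you can optionally confirm that your environment roughly matches the pinned versions above. A minimal sanity check (adjust to whichever backends you plan to enable):

    # Optional: print the installed versions of the pinned dependencies listed above
    import torch, torchvision, transformers, diffusers, numpy

    print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
    print("torchvision:", torchvision.__version__)
    print("transformers:", transformers.__version__)
    print("diffusers:", diffusers.__version__)
    print("numpy:", numpy.__version__)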
    

    You can add support for new LMM, T2I, and I23D components by modifying the code under tool/api (a rough sketch of what a new T2I wrapper might look like follows the snippet below). An example of generating a watermelon fish is provided in idea23d_pipeline.ipynb: open Idea23D/idea23d_pipeline.ipynb and explore freely in the notebook ~

    from tool.api.I23Dapi import *
    from tool.api.LMMapi import *
    from tool.api.T2Iapi import *
    
    
    # Initialize LMM, T2I, I23D
    lmm = lmm_gpt4o(api_key = 'sk-xxx your openai api key')
    # lmm = lmm_InternVL2_5_78B(model_path='OpenGVLab/InternVL2_5-78B', gpuid=[0,1,2,3], load_in_8bit=True)
    # lmm = lmm_InternVL2_5_78B(model_path='OpenGVLab/InternVL2_5-78B', gpuid=[0,1,2,3], load_in_8bit=False)
    # lmm = lmm_InternVL2_8B(model_path = 'OpenGVLab/InternVL2-8B', gpuid=0)
    # lmm = lmm_llava_CoT_11B(model_path='Xkev/Llama-3.2V-11B-cot',gpuid=1)
    # lmm = lmm_qwen2vl_7b(model_path='Qwen/Qwen2-VL-7B-Instruct', gpuid=1)
    
    
    
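    # Choose a text-to-image (T2I) backend (the commented lines are alternatives)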
    # t2i = text2img_sdxl_replicate(replicate_key='your api key')
    # t2i = t2i_sdxl(sdxl_base_path='stabilityai/stable-diffusion-xl-base-1.0', sdxl_refiner_path='stabilityai/stable-diffusion-xl-refiner-1.0', gpuid=6)
    t2i = t2i_flux(model_path='black-forest-labs/FLUX.1-dev', gpuid=2)
    
    
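    # Choose an image-to-3D (I23D) backend (the commented lines are alternatives)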
    # i23d = i23d_TripoSR(model_path = 'stabilityai/TripoSR' ,gpuid=7)
    i23d = i23d_InstantMesh(gpuid=3)
    # i23d = i23d_Hunyuan3D(mv23d_cfg_path="Hunyuan3D-1/svrm/configs/svrm.yaml",
    #         mv23d_ckt_path="weights/svrm/svrm.safetensors",
    #         text2image_path="weights/hunyuanDiT")
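
    As noted above, new backends plug in under tool/api. The snippet below is only a rough, hypothetical sketch of what an SD3.5 wrapper (from the ToDo list) might look like using diffusers; the class name t2i_sd35, the txt2img method, and the constructor arguments are assumptions, so match them to the interface actually used by the existing wrappers in tool/api before wiring it into the pipeline.

    # Hypothetical sketch only: the class/method names below are assumptions, not the repo's actual API.
    import torch
    from diffusers import StableDiffusion3Pipeline

    class t2i_sd35:
        def __init__(self, model_path='stabilityai/stable-diffusion-3.5-large', gpuid=0):
            # Load the SD3.5 pipeline onto the requested GPU
            self.pipe = StableDiffusion3Pipeline.from_pretrained(
                model_path, torch_dtype=torch.bfloat16
            ).to(f'cuda:{gpuid}')

        def txt2img(self, prompt):
            # Return a single PIL image for the prompt, mirroring what the other T2I wrappers produce
            return self.pipe(prompt, num_inference_steps=28, guidance_scale=4.5).images[0]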
    

    If you want to test on the evaluation dataset (see below), simply run the pipeline.py script, for example:

    python pipeline.py --lmm gpt4o --t2i flux --i23d instantmesh
    

    Evaluation dataset

    1. Download the required dataset from Hugging Face.
    2. Place the downloaded dataset folder in the path Idea23D/dataset.
    cd Idea23D
    wget "https://huggingface.co/yisuanwang/Idea23D/resolve/main/dataset.zip?download=true" -O dataset.zip
    unzip dataset.zip
    rm dataset.zip
    

    Ensure the directory structure matches the path settings in the code for smooth execution.
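
    As a quick optional check that the dataset landed where the code expects it (this only verifies that the top-level Idea23D/dataset folder exists; the per-sample layout comes from the archive above):

    # Optional: confirm the dataset was unpacked to Idea23D/dataset before running pipeline.py
    from pathlib import Path

    dataset_dir = Path('dataset')  # run this from the Idea23D repo root
    assert dataset_dir.is_dir(), 'dataset/ not found; re-run the download steps above'
    print(sum(1 for _ in dataset_dir.iterdir()), 'entries found under', dataset_dir.resolve())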

    ToDo List

    1. Release code

    2. Support for more models, such as SD3.5 and CraftsMan3D.

    Citations

    @article{chen2024idea23d,
      title={Idea-2-3D: Collaborative LMM Agents Enable 3D Model Generation from Interleaved Multimodal Inputs}, 
      author={Junhao Chen and Xiang Li and Xiaojun Ye and Chao Li and Zhaoxin Fan and Hao Zhao},
      year={2024},
      eprint={2404.04363},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
    }
    

    Acknowledgement

    We have borrowed code extensively from the following repositories. Many thanks to the authors for sharing their code.

    llava-v1.6-34b, llava-v1.6-mistral-7b, llava-CoT-11B, InternVL2.5-78B, Qwen-VL2-8B, llama-3.2V-11B, intern-VL2-8B, SD-XL 1.0 base+refiner, DALL·E, Deepfloyd IF, FLUX.1.dev, TripoSR, Zero123, Wonder3D, InstantMesh, LGM, Hunyuan3D, stable-fast-3d.

    Star History

    Star History Chart
