
    DreamPose

    Official implementation of "DreamPose: Fashion Image-to-Video Synthesis via Stable Diffusion" by Johanna Karras, Aleksander Holynski, Ting-Chun Wang, and Ira Kemelmacher-Shlizerman.

    Teaser Image

    Demo

    You can generate a video with DreamPose using our pretrained models.

    1. Download demo/custom-chkpts.zip and unzip the pretrained models into demo/custom-chkpts
    2. Download demo/sample/poses.zip and unzip the input poses into demo/sample/poses
    3. Run test.py using the command below:
      python test.py --epoch 499 --folder demo/custom-chkpts --pose_folder demo/sample/poses \
        --key_frame_path demo/sample/key_frame.png --s1 8 --s2 3 --n_steps 100 \
        --output_dir demo/sample/results --custom_vae demo/custom-chkpts/vae_1499.pth
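    If you prefer to script the setup, the two unzip steps above can be sketched in Python. The archive paths below assume the zips were downloaded to the locations named in the steps; adjust them if yours differ.

```python
import zipfile
from pathlib import Path

def extract_next_to(archive: str) -> Path:
    """Unzip an archive into its parent directory and return that directory."""
    dest = Path(archive).parent
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
    return dest

# Hypothetical usage, assuming the zips sit where steps 1-2 place them:
for archive in ("demo/custom-chkpts.zip", "demo/sample/poses.zip"):
    if Path(archive).exists():
        extract_next_to(archive)
```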
      

    Data Preparation

    To prepare a sample for finetuning, create a directory with train and test subdirectories, holding the train frames (desired subject) and test frames (desired pose sequence), respectively. Note that the test frames need not show the same subject. See demo/sample for an example.
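    The expected layout can be sanity-checked with a short script. check_sample is a hypothetical helper, not part of the repo:

```python
from pathlib import Path

def check_sample(sample_dir: str) -> dict:
    """Hypothetical helper (not part of the repo): verify that a sample
    directory has train/ and test/ subdirectories with at least one frame
    each, and return the frame count per split."""
    counts = {}
    for split in ("train", "test"):
        frames = [p for p in Path(sample_dir, split).glob("*") if p.is_file()]
        if not frames:
            raise FileNotFoundError(f"no frames in {sample_dir}/{split}")
        counts[split] = len(frames)
    return counts
```

    Run inside the repo, check_sample("demo/sample") should return a nonzero count for both splits.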

    Then, run DensePose using the "densepose_rcnn_R_50_FPN_s1x" checkpoint on all images in the sample directory. Finally, reformat the pickled DensePose output using utils/densepose.py. You need to change the "outpath" filepath to point to the pickled DensePose output.
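    The reformatting step amounts to loading the pickle and writing one pose array per frame. The sketch below illustrates that shape only; the record keys it reads ("file_name", "iuv") are assumptions about the DensePose dump format, and utils/densepose.py in the repo is the authoritative implementation.

```python
import pickle
from pathlib import Path

import numpy as np

def reformat_densepose(outpath: str, save_dir: str) -> list:
    """Sketch of the reformatting step. The record keys used here
    ("file_name", "iuv") are assumptions about the pickled DensePose dump;
    adjust them to match your DensePose version."""
    with open(outpath, "rb") as f:
        results = pickle.load(f)
    out = Path(save_dir)
    out.mkdir(parents=True, exist_ok=True)
    saved = []
    for record in results:
        name = Path(record["file_name"]).stem              # e.g. "frame_0001"
        iuv = np.asarray(record["iuv"], dtype=np.float32)  # assumed key
        target = out / f"{name}.npy"
        np.save(target, iuv)
        saved.append(target)
    return saved
```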

    Download or Finetune Base Model

    DreamPose is finetuned on the UBC Fashion Dataset from a pretrained Stable Diffusion checkpoint. You can download our pretrained base model from Google Drive, or finetune pretrained Stable Diffusion on your own image dataset. We train on 2 NVIDIA A100 GPUs.

    accelerate launch --num_processes=4 train.py \
      --pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4" \
      --instance_data_dir=../path/to/dataset \
      --output_dir=checkpoints \
      --resolution=512 \
      --train_batch_size=2 \
      --gradient_accumulation_steps=4 \
      --learning_rate=5e-6 \
      --lr_scheduler="constant" \
      --lr_warmup_steps=0 \
      --num_train_epochs=300 \
      --run_name dreampose \
      --dropout_rate=0.15 \
      --revision "ebb811dd71cdc38a204ecbdd6ac5d580f529fd8c"
    

    Finetune on Sample

    In this next step, we finetune DreamPose on one or more input frames to create a subject-specific model.

    1. Finetune the UNet

      accelerate launch finetune-unet.py \
        --pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4" \
        --instance_data_dir=demo/sample/train \
        --output_dir=demo/custom-chkpts \
        --resolution=512 \
        --train_batch_size=1 \
        --gradient_accumulation_steps=1 \
        --learning_rate=1e-5 \
        --num_train_epochs=500 \
        --dropout_rate=0.0 \
        --custom_chkpt=checkpoints/unet_epoch_20.pth \
        --revision "ebb811dd71cdc38a204ecbdd6ac5d580f529fd8c"
      
    2. Finetune the VAE decoder

      accelerate launch --num_processes=1 finetune-vae.py \
        --pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4" \
        --instance_data_dir=demo/sample/train \
        --output_dir=demo/custom-chkpts \
        --resolution=512 \
        --train_batch_size=4 \
        --gradient_accumulation_steps=4 \
        --learning_rate=5e-5 \
        --num_train_epochs=1500 \
        --run_name finetuning/ubc-vae \
        --revision "ebb811dd71cdc38a204ecbdd6ac5d580f529fd8c"
      

    Testing

    Once you have finetuned your subject-specific DreamPose model, you can generate frames using the following command:

    python test.py --epoch 499 --folder demo/custom-chkpts --pose_folder demo/sample/poses \
      --key_frame_path demo/sample/key_frame.png --s1 8 --s2 3 --n_steps 100 \
      --output_dir results --custom_vae demo/custom-chkpts/vae_1499.pth
    

    Acknowledgment

    This code is largely adapted from the Hugging Face diffusers repo.
