# Improved Hairstyle Transfer: Latent Code Optimization for Vivid Hair Representation and Sketch Hair Editing
## Abstract

Recent advances in deep generative models have enabled realistic hairstyle editing. However, hair editing remains a challenging problem because it requires a convenient and intuitive interface that accurately reflects the user's preference, and the capability to precisely reconstruct the complex features of hair. Hair transfer, applying a hairstyle from a reference image to a source image, is widely used for its simplicity. Nevertheless, semantic misalignment and spatial feature discrepancies between the reference and source images mean that the detailed features of the reference hairstyle, such as hair color and strand texture, are often not accurately reflected in the source image. Free from this issue, sketch tools allow users to intuitively depict the desired hairstyles on specific areas of the source image, but they impose a significant design burden on users and present a technical challenge in generating natural-looking hair that seamlessly incorporates the sketch details into the source image. In this paper, we present an improved hair transfer system that utilizes latent space optimizations with masked perceptual and style losses. Our system effectively captures detailed hair features, including vibrant hair colors and strand textures, resulting in more realistic and visually compelling hair transfers. Additionally, we introduce user-controllable components used in our hair transfer process, empowering users to refine the desired hairstyle. Our sketch interfaces can efficiently manipulate these components, providing enhanced editing effects through our improved hair transfer capabilities. Quantitative and qualitative evaluations, including user preference studies, demonstrate that our hairstyle editing system outperforms current state-of-the-art techniques in both hairstyle generation quality and usability.
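The masked style loss mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the repository's actual implementation: it assumes a Gram-matrix style loss over encoder feature maps (e.g. from a fixed VGG) and a binary hair mask that zeroes out non-hair features; the function names are hypothetical.

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Gram matrix of a (C, H, W) feature map, flattened over space."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def masked_style_loss(feat_src: np.ndarray,
                      feat_ref: np.ndarray,
                      mask: np.ndarray) -> float:
    """Style (Gram) loss restricted to the hair region given by `mask`.

    feat_src, feat_ref: (C, H, W) feature maps from a fixed encoder.
    mask: (H, W) binary hair mask; features outside the mask are zeroed,
    so only the hair region contributes to the Gram statistics.
    """
    g_src = gram_matrix(feat_src * mask[None])
    g_ref = gram_matrix(feat_ref * mask[None])
    return float(np.mean((g_src - g_ref) ** 2))
```

In the actual system this loss would be minimized over the StyleGAN latent code so that the generated hair matches the reference style only inside the hair mask, leaving the rest of the source image unconstrained.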
Official implementation of "FS Code Style Transfer". This repository is actively updated; please `git pull` to get the latest version.
2024/06/07: All source code has been uploaded.
- System requirements: Ubuntu 22.04 / Windows 11, CUDA 12.3
- Tested GPUs: RTX 4090
- Dependencies: We recommend running this repository using Anaconda. All dependencies for defining the environment are provided in `environment.yaml`.
- Create the conda environment:

```shell
conda create -n HairTrans python=3.10
conda activate HairTrans
```
- Clone the repository:

```shell
git clone https://github.com/korfriend/VividHairStyler.git
cd VividHairStyler
```
- Install packages with `pip`:

```shell
pip install -r requirements.txt
```
Please download the FFHQ dataset and put it in the `/${PROJECT_ROOT}/database/ffhq` directory.
```shell
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117
pip install --upgrade diffusers[torch]
```
Download the pretrained models into the `/${PROJECT_ROOT}/pretrained_models` directory:
| Model | Description |
|---|---|
| FFHQ StyleGAN | StyleGAN model pretrained on FFHQ with 1024x1024 output resolution. This includes `ffhq_PCA.npz` and `ffhq.pt`, which are downloaded automatically. |
| Face Parser Model (BiSeNet) | Pretrained face parsing model taken from Barbershop. This model file is `seg.pth`, which is downloaded automatically. |
| Face Landmark Model | Used to align unprocessed images. |
| FFHQ Inversion Model | Pretrained image embedding model taken from encoder4editing. This model file is `e4e_ffhq_encode.pt`, which is downloaded automatically. |
| Sketch2Image Model | Pretrained sketch hair model taken from SketchHairSalon. This includes `400_net_D.pth` and `400_net_G.pth` for `S2I_braid`, `200_net_D.pth` and `200_net_G.pth` for `S2I_unbraid`, and `200_net_D.pth` and `200_net_G.pth` for `S2M`, which must be downloaded manually and placed in `/${PROJECT_ROOT}/pretrained_models`. |
| HairMapper | Pretrained hair removal model taken from HairMapper (available by filling out their Google form for pretrained-model access). This model file is `best_model.pt`, located in the `final` folder, which must be downloaded manually and placed in `/${PROJECT_ROOT}/pretrained_models`. |
The pretrained models should be organized as follows:

```
./pretrained_models/
├── e4e_ffhq_encode.pt (automatic download)
├── ffhq_PCA.npz (automatic download)
├── ffhq.pt (automatic download)
├── final
│   └── best_model.pt
├── S2I_braid
│   ├── 400_net_D.pth
│   └── 400_net_G.pth
├── S2I_unbraid
│   ├── 200_net_D.pth
│   └── 200_net_G.pth
├── S2M
│   ├── 200_net_D.pth
│   └── 200_net_G.pth
└── seg.pth (automatic download)
```
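Since several of these files must be placed manually, it can help to verify the layout before launching the app. The following is a small hypothetical helper (not part of the repository) that reports which expected model files are missing under a given root directory:

```python
from pathlib import Path

# Expected files relative to the pretrained_models directory (see tree above).
EXPECTED = [
    "e4e_ffhq_encode.pt",
    "ffhq_PCA.npz",
    "ffhq.pt",
    "final/best_model.pt",
    "S2I_braid/400_net_D.pth",
    "S2I_braid/400_net_G.pth",
    "S2I_unbraid/200_net_D.pth",
    "S2I_unbraid/200_net_G.pth",
    "S2M/200_net_D.pth",
    "S2M/200_net_G.pth",
    "seg.pth",
]

def missing_models(root: str) -> list[str]:
    """Return the expected model files that are absent under `root`."""
    base = Path(root)
    return [rel for rel in EXPECTED if not (base / rel).exists()]
```

For example, `missing_models("./pretrained_models")` should return an empty list once everything above is in place.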
You can use the web UI by running the following command in the `/VividHairStyler` directory:

```shell
streamlit run VividHairStyler.py
```
This code borrows heavily from Barbershop.