Commit e899354 (initial commit, 0 parents) by PeizhuoLi, committed May 4, 2021. Showing 63 changed files with 289,773 additions and 0 deletions.

@@ -0,0 +1,96 @@
# Learning Skeletal Articulations with Neural Blend Shapes

This repository provides an end-to-end library for automatic character rigging and blend shapes generation. It is based on our work [Learning Skeletal Articulations with Neural Blend Shapes](https://peizhuoli.github.io/neural-blend-shapes/index.html), published at SIGGRAPH 2021.

<img src="https://peizhuoli.github.io/neural-blend-shapes/images/video_teaser.gif" align="center">

## Prerequisites

- Linux
- Python 3
- PyTorch
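
A minimal environment can then be set up, for example (versions are not pinned by this README, so treat this as a sketch):

~~~bash
pip install torch
~~~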

## Quick Start

We provide a pretrained model dedicated to biped characters. Download and extract the pretrained model from [Google Drive](https://drive.google.com/file/d/1S_JQY2N4qx1V6micWiIiNkHercs557rG/view?usp=sharing) or [Baidu Disk](https://pan.baidu.com/s/1y8iBqf1QfxcPWO0AWd2aVw) (9ras) and put the `pre_trained` folder under the project directory. Then run

~~~bash
python demo.py --pose_file=./eval_constant/sequences/greeting.npy --obj_path=./eval_constant/meshes/maynard.obj
~~~

The greeting animation shown above will be saved in `demo/obj` as obj files. In addition, the generated skeleton will be saved as `demo/skeleton.bvh` and the skinning weight matrix as `demo/weight.npy`.

If you are also interested in the results produced by the traditional linear blend skinning (LBS) technique with our rig, specify `--envelope_only=1` to evaluate our model with the envelope branch only.
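
For example, reusing the demo inputs from above with only the extra flag added:

~~~bash
python demo.py --pose_file=./eval_constant/sequences/greeting.npy --obj_path=./eval_constant/meshes/maynard.obj --envelope_only=1
~~~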

We also provide several other meshes and animation sequences; feel free to try their combinations!

### Test on Customized Meshes

You may also run our model on your own meshes. Please make sure your mesh is triangulated and has a consistent upright and front-facing orientation. Most importantly, our model requires the input meshes to be spatially aligned, so please also specify `--normalize=1`. Alternatively, you can scale and translate your mesh to align with the provided `eval_constant/meshes/smpl_std.obj` and specify `--normalize=0`.
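
For example (the mesh path below is a placeholder for your own file):

~~~bash
# ./your_character.obj is a placeholder; point it at your own triangulated mesh
python demo.py --pose_file=./eval_constant/sequences/greeting.npy --obj_path=./your_character.obj --normalize=1
~~~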

### Evaluation

To reproduce the quantitative results with the pretrained model, download the test dataset from [Google Drive](https://drive.google.com/file/d/1RwdnnFYT30L8CkUb1E36uQwLNZd1EmvP/view?usp=sharing) or [Baidu Disk](https://pan.baidu.com/s/1c5QCQE3RXzqZo6PeYjhtqQ) (8b0f) and put the two extracted folders under `./dataset`. Then run

~~~bash
python evaluation.py
~~~
## Blender Visualization | ||
|
||
We provide a simple wrapper of blender's python API (>=2.80) for rendering 3D mesh animations and visualize skinning weight. The following code has been tested on Ubuntu 18.04 and macOS Big Sur. | ||
|
||
Note that due to the limitation of Blender, you cannot run Eevee render engine with a headless machine. | ||
|
||
To pass parameters to python script in blender, please do following: | ||
|
||
~~~bash | ||
blender [blend file path(optional)] -P [python script path] [-b] -- --arg1 [ARG1] --arg2 [ARG2] | ||
~~~ | ||
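
On the script side, Blender leaves everything after `--` untouched, so a script can slice it out of `sys.argv` before parsing. A minimal sketch of this standard pattern (the argument names are illustrative, matching the placeholders above, not our wrapper's actual flags):

~~~python
import sys
import argparse

# Blender keeps its own options before '--'; hand argparse only what follows it.
argv = sys.argv[sys.argv.index('--') + 1:] if '--' in sys.argv else []

parser = argparse.ArgumentParser()
parser.add_argument('--arg1')  # illustrative names only
parser.add_argument('--arg2')
args = parser.parse_args(argv)
~~~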

### Animation

We provide a simple light and camera setup in `eval_constant/simple_scene.blend`. You may need to adjust it before use. To render the obj files generated above, run

~~~bash
cd blender_script
blender ../eval_constant/simple_scene.blend -P render_mesh.py -b
~~~

The rendered per-frame images will be saved in `demo/images` and the composited video will be saved as `demo/video.mov`. We use `ffmpeg` to convert the images into a video, so please make sure it is installed.
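
If you want to rebuild the video from the frames yourself, a command along these lines works; the frame-name pattern, frame rate, and pixel format here are assumptions rather than the script's exact settings:

~~~bash
# %04d.png is an assumed frame-name pattern; adjust it to the actual filenames
ffmpeg -framerate 30 -i demo/images/%04d.png -pix_fmt yuv420p demo/video.mov
~~~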

### Skinning Weight

Visualizing the skinning weight is a good sanity check of whether the model works as expected. We provide a script that uses Blender's built-in ShaderNodeVertexColor to visualize the skinning weight. Simply run

~~~bash
cd blender_script
blender -P vertex_color.py
~~~

You will see something similar to this if the model works as expected:

<img src="https://peizhuoli.github.io/neural-blend-shapes/images/skinning_vis.png" align="center" width="30%">

Meanwhile, you can import the generated skeleton (in `demo/skeleton.bvh`) into Blender. For skeleton rendering, please refer to [deep-motion-editing](https://github.com/DeepMotionEditing/deep-motion-editing).

## Acknowledgement

The code in `meshcnn` is adapted from [MeshCNN](https://github.com/ranahanocka/MeshCNN) by [@ranahanocka](https://github.com/ranahanocka/).

The code in `models/skeleton.py` is adapted from [deep-motion-editing](https://github.com/DeepMotionEditing/deep-motion-editing) by [@kfiraberman](https://github.com/kfiraberman), [@PeizhuoLi](https://github.com/PeizhuoLi) and [@HalfSummer11](https://github.com/HalfSummer11).

The code in `dataset/smpl_layer` is adapted from [smpl_pytorch](https://github.com/gulvarol/smplpytorch) by [@gulvarol](https://github.com/gulvarol).

Part of the test models are taken from [SMPL](https://smpl.is.tue.mpg.de/en), [MultiGarmentNetwork](https://github.com/bharat-b7/MultiGarmentNetwork) and [Adobe Mixamo](https://www.mixamo.com).

This repository is still under construction. We are planning to release the code and dataset for training soon.
@@ -0,0 +1,100 @@
from models.networks import MeshReprConv, MLP, MLPSkeleton
from architecture.blend_shapes import BlendShapesModel
from os.path import join as pjoin


def create_envelope_model(device, args, topo_loader, parents=None, is_train=True):
    base = args.base
    layers = args.num_layers
    bone_num = 24

    channel_list = [base]
    for i in range(layers - 1):
        channel_list.append(channel_list[-1] * 2)
    geo_list = [3] + channel_list  # This is for vertex position

    gen_list = geo_list[::-1]
    # Expand the generator channels per joint (the same expansion applies to
    # both the plain-MLP and the skeleton-aware generator).
    gen_list = [c * bone_num for c in gen_list]

    channel_list = [args.att_base]
    for i in range(layers - 2):
        channel_list.append(channel_list[-1] * 2)
    att_list = [3] + channel_list + [bone_num]

    save_path = args.save_path

    geometry_branch = MeshReprConv(device, is_train=is_train, save_path=pjoin(save_path, 'geo/'),
                                   channels=geo_list,
                                   topo_loader=topo_loader, requires_recorder=True, is_cont=args.cont,
                                   save_freq=args.save_freq)

    att_branch = MeshReprConv(device, is_train=is_train, save_path=pjoin(save_path, 'att/'),
                              channels=att_list,
                              topo_loader=topo_loader, last_activate=False, requires_recorder=False,
                              pool_ratio=args.pool_ratio, pool_method=args.pool_method,
                              is_cont=args.cont, save_freq=args.att_save_freq)

    if not args.skeleton_aware:
        gen_branch = MLP(layers=gen_list,
                         save_path=pjoin(save_path, 'gen/'),
                         is_train=is_train,
                         device=device).to(device)
    else:
        gen_branch = MLPSkeleton(layers=gen_list, parents=parents,
                                 save_path=pjoin(save_path, 'gen/'),
                                 is_train=is_train, save_freq=args.save_freq,
                                 device=device).to(device)

    return geometry_branch, att_branch, gen_branch
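
# Usage sketch (the field names on 'args' are exactly the ones
# create_envelope_model() reads; the values are illustrative assumptions, and
# 'topo_loader' / 'parents' must come from the rest of the codebase):
#
#   from argparse import Namespace
#   import torch
#
#   args = Namespace(base=64, num_layers=3, att_base=16, save_path='./results',
#                    skeleton_aware=True, cont=False, save_freq=500,
#                    att_save_freq=500, pool_ratio=0.5, pool_method='mean')
#   geo, att, gen = create_envelope_model(torch.device('cpu'), args,
#                                         topo_loader, parents=parents)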


def create_residual_model(device, args, topo_loader, is_train=True, parents=None, requires_att=True):
    base = args.base
    layers = args.num_layers
    bone_num = 24

    channel_list = [base]
    for i in range(layers - 1):
        channel_list.append(channel_list[-1] * 2)
    geo_list = [3] + channel_list  # This is for vertex position

    gen_list = geo_list[::-1]
    gen_list = gen_list[:2] + [args.basis_per_bone * 3]
    gen_list[0] += bone_num

    channel_list = [args.att_base]
    for i in range(layers - 2):
        channel_list.append(channel_list[-1] * 2)
    att_list = [3] + channel_list + [bone_num]

    save_path = args.save_path

    geometry_branch = MeshReprConv(device, is_train=is_train, save_path=pjoin(save_path, 'geo2/'),
                                   channels=geo_list,
                                   topo_loader=topo_loader, requires_recorder=True, is_cont=args.cont,
                                   save_freq=args.save_freq)

    if requires_att:
        att_branch = MeshReprConv(device, is_train=False, save_path=pjoin(args.att_load_path, 'att/'),
                                  channels=att_list,
                                  topo_loader=topo_loader, last_activate=False, requires_recorder=False,
                                  pool_ratio=args.pool_ratio, pool_method=args.pool_method,
                                  is_cont=args.cont, save_freq=args.att_save_freq)
    else:
        att_branch = None

    gen_branch = MeshReprConv(device, is_train=is_train, save_path=pjoin(save_path, 'dec/'),
                              channels=gen_list,
                              topo_loader=topo_loader, last_activate=False, requires_recorder=False,
                              is_cont=args.cont, last_init_div=args.offset_init_div)

    coff_branch = BlendShapesModel(1, bone_num - 1, args.basis_per_bone, parent=parents, basis_as_model=False,
                                   save_freq=args.save_freq, save_path=pjoin(save_path, 'coff/'),
                                   device=device).to(device)

    return geometry_branch, att_branch, gen_branch, coff_branch
@@ -0,0 +1,148 @@
import torch
import torch.nn as nn
from models.transforms import aa2quat, aa2mat
from models.networks import MLP
from os.path import join as pjoin
import os


class BlendShapesModel(nn.Module):
    def __init__(self, n_vert, n_joint, basis_per_joint,
                 weight=None, parent=None, basis_as_model=True, save_freq=500, save_path=None, device=None,
                 threshold=0.05):
        super(BlendShapesModel, self).__init__()
        self.epoch_count = 0

        self.n_vert = n_vert
        self.n_joint = n_joint
        self.basis_per_joint = basis_per_joint
        self.parent = parent
        self.save_path = save_path
        self.save_freq = save_freq
        self.device = device
        self.threshold = threshold

        if save_path is not None:
            os.makedirs(pjoin(save_path, 'model'), exist_ok=True)
            os.makedirs(pjoin(save_path, 'optimizer'), exist_ok=True)

        basis = torch.randn((6890, basis_per_joint, 3)) / 10000
        if basis_as_model:
            self.basis = nn.Parameter(basis)
        else:
            self.basis = basis

        coff_list = [9, 18, 32, basis_per_joint]
        self.coff_branch = nn.ModuleList()
        for i in range(n_joint):
            coff_branch = MLP(coff_list)
            self.coff_branch.append(coff_branch)

        if weight is not None:
            mask = torch.empty((n_vert, n_joint), dtype=torch.bool)
            for i in range(n_joint):
                p = parent[i + 1]
                x = i + 1
                threshold = self.threshold if i not in [19, 20] else 0.02
                mask[:, i] = (weight[:, x] > threshold) + (weight[:, p] > threshold)
            mask = mask.float()
            self.register_buffer('mask', mask)  # shape = (n_vert, n_bone)

    def set_mask(self, weight):
        self.n_vert = weight.shape[0]
        mask = torch.empty((weight.shape[0], weight.shape[1] - 1), dtype=torch.bool, device=weight.device)
        for i in range(weight.shape[1] - 1):
            p = self.parent[i + 1]
            x = i + 1
            # Larger control field for the wrist joints (joints 19 and 20)
            threshold = self.threshold if i not in [19, 20] else 0.02
            # A joint should affect the vertices associated with itself and its parent joint
            mask[:, i] = (weight[:, x] > threshold) + (weight[:, p] > threshold)
        mask = mask.float()
        self.mask = mask

    def set_optimizer(self, lr=1e-3, optimizer=torch.optim.Adam):
        params = self.parameters()
        self.optimizer = optimizer(params, lr=lr)

    def get_coff(self, pose):
        """
        @return: (batch_size, n_joint, basis_per_joint)
        """
        batch_size = pose.shape[0]
        device = pose.device
        if len(pose.shape) == 2:
            pose_repr = aa2mat(pose.reshape(pose.shape[0], -1, 3))
        elif len(pose.shape) == 4:
            pose_repr = pose.reshape(batch_size, -1, 3, 3)
        else:
            raise Exception('Wrong input format')
        pose_repr = pose_repr[:, 1:]  # Drop the root joint
        pose_repr = pose_repr.reshape(-1, 9)
        identical = torch.eye(3, device=device).reshape(-1)  # Flattened identity matrix

        pose_repr = pose_repr - identical

        pose_repr = pose_repr.reshape(pose.shape[0], self.n_joint, -1)
        coff = []
        for i in range(pose_repr.shape[1]):
            coff.append(self.coff_branch[i](pose_repr[:, i]).unsqueeze(1))
        coff = torch.cat(coff, dim=1)

        return coff

    def forward(self, pose, basis=None, mem_eff=True, requires_per_joint_off=False):
        """
        Get per-vertex displacement
        @param mem_eff: Use a for loop to increase memory efficiency
        """
        coff = self.get_coff(pose)  # (batch_size, n_bone, n_basis)
        mask_full = self.mask.reshape(self.n_vert, self.n_joint, 1, 1)
        if basis is None:
            basis = self.basis
        basis = basis.reshape(self.n_vert, 1, self.basis_per_joint, 3)
        basis_full = basis * mask_full  # (n_vert, n_bone, n_basis, 3)
        basis_full = basis_full.reshape(1, self.n_vert, -1, 3)
        coff = coff.reshape(coff.shape[0], 1, -1, 1)
        if requires_per_joint_off:
            per_joint_off = coff * basis_full
            per_joint_off = (per_joint_off * per_joint_off).sum(dim=-1).mean(dim=1)
            per_joint_off = per_joint_off.reshape(per_joint_off.shape[0], -1, self.basis_per_joint)
            per_joint_off = per_joint_off.mean(dim=-1)
            per_joint_off = torch.cat((torch.zeros_like(per_joint_off[:, :1]), per_joint_off), dim=1)
            self.per_joint_off = per_joint_off
        if mem_eff:
            res = []
            for i in range(coff.shape[0]):
                res.append((coff[[i]] * basis_full).sum(dim=-2))
            res = torch.cat(res, dim=0)
        else:
            res = (coff * basis_full).sum(dim=-2)
        return res

    def epoch(self):
        self.epoch_count += 1

    def save_model(self, epoch=None):
        if epoch is None:
            epoch = self.epoch_count

        if epoch % self.save_freq == 0:
            torch.save(self.state_dict(), pjoin(self.save_path, 'model/%05d.pt' % epoch))
            torch.save(self.optimizer.state_dict(), pjoin(self.save_path, 'optimizer/%05d.pt' % epoch))

        torch.save(self.state_dict(), pjoin(self.save_path, 'model/latest.pt'))
        torch.save(self.optimizer.state_dict(), pjoin(self.save_path, 'optimizer/latest.pt'))

    def load_model(self, epoch=None):
        if epoch is None:
            epoch = self.epoch_count

        if isinstance(epoch, str):
            state_dict = torch.load(epoch, map_location=self.device)
            self.load_state_dict(state_dict)
        else:
            filename = ('%05d.pt' % epoch) if epoch != -1 else 'latest.pt'
            state_dict = torch.load(pjoin(self.save_path, f'model/{filename}'), map_location=self.device)
            self.load_state_dict(state_dict, strict=False)
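
# Usage sketch (illustrative assumptions: SMPL's 6890 vertices and 24 joints,
# hence 23 posed joints after dropping the root; 'parents' is the SMPL
# kinematic tree, and basis_per_joint=9 is an arbitrary choice):
#
#   model = BlendShapesModel(6890, 23, 9, parent=parents, device='cpu')
#   model.set_mask(weight)        # weight: (6890, 24) skinning weight matrix
#   pose = torch.zeros(1, 72)     # axis-angle pose, (batch, 24 * 3)
#   disp = model(pose)            # per-vertex displacement, (batch, 6890, 3)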