This could be a very efficient alternative to training regional moth models, or models focused on subsets of insect taxa.
My rough understanding is that you can train a LoRA adapter for a subset of classes and then attach it to the base/global model at inference time, rather than retraining or fine-tuning the base/global model for each region. Each LoRA adapter can be under 10 MB, which means the global model can stay in memory on our shared inference servers and only the appropriate adapter needs to be swapped in based on the incoming request (rough sketches below). That could also solve some of the scaling issues Fieldguide.ai had with so many models running at once.
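A minimal sketch of the training side, loosely following the PEFT image-classification guide linked below. The checkpoint name `ourorg/global-moth-classifier`, the adapter path, and the rank/target-module choices are placeholders for illustration, not settled decisions for this project:

```python
from transformers import AutoModelForImageClassification
from peft import LoraConfig, get_peft_model

# Start from the existing base/global classifier (placeholder checkpoint name).
base = AutoModelForImageClassification.from_pretrained("ourorg/global-moth-classifier")

config = LoraConfig(
    r=16,                               # low-rank update; keeps the adapter small
    lora_alpha=16,
    target_modules=["query", "value"],  # attention projections in the ViT blocks
    lora_dropout=0.1,
    bias="none",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()      # typically well under 1% of the base weights

# ...fine-tune on the regional subset with the usual Trainer loop, then save
# only the adapter weights (a few MB), not the full backbone:
model.save_pretrained("adapters/quebec")
```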
https://huggingface.co/docs/peft/main/en/task_guides/image_classification_lora
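And a sketch of the serving side, assuming the adapters above have been saved to disk: one global backbone stays resident in memory, and the small regional adapters are attached once and switched per request. The adapter paths and region names are hypothetical:

```python
import torch
from transformers import AutoModelForImageClassification
from peft import PeftModel

# Loaded once per inference server process (placeholder checkpoint name).
base = AutoModelForImageClassification.from_pretrained("ourorg/global-moth-classifier")

# Attach one adapter, then register others alongside it; the base weights
# are never retrained or reloaded.
model = PeftModel.from_pretrained(base, "adapters/quebec", adapter_name="quebec")
model.load_adapter("adapters/panama", adapter_name="panama")
model.eval()

@torch.no_grad()
def classify(pixel_values, region: str):
    """Activate the adapter matching the request's region, then run inference."""
    model.set_adapter(region)
    return model(pixel_values=pixel_values).logits
```

Switching adapters this way only changes which low-rank weights are active in the forward pass, so it should be much cheaper than loading a separate full model per region.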