
Perf: use F.linear for MLP #4513

Status: Open. caic99 wants to merge 2 commits into base devel.
Conversation

caic99 (Member) commented Dec 26, 2024

This change brings a <1% speedup.

Summary by CodeRabbit

  • Refactor
    • Simplified linear transformation implementation in the neural network layer
    • Improved code readability and efficiency in matrix operations

coderabbitai bot (Contributor) commented Dec 26, 2024

📝 Walkthrough

The changes refactor the `MLPLayer` class in `deepmd/pt/model/network/mlp.py`. The primary modification replaces manual matrix multiplication and addition with PyTorch's `F.linear` function, which integrates bias handling and streamlines the method's implementation. The code keeps its original functionality while becoming more concise and leaning on built-in PyTorch functionality.
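For illustration, here is a minimal, self-contained sketch of the substitution. The shapes are made up for the example; the `(num_in, num_out)` weight layout is inferred from the `self.matrix.t()` call quoted in the review comments below:

```python
import torch
import torch.nn.functional as F

num_in, num_out = 4, 8
xx = torch.randn(2, num_in)
matrix = torch.randn(num_in, num_out)  # weight stored as (num_in, num_out)
bias = torch.randn(num_out)

# Before: manual matrix multiplication plus a separate bias add.
yy_manual = torch.matmul(xx, matrix) + bias

# After: F.linear fuses the matmul and the bias add into one call.
# F.linear expects a (num_out, num_in) weight, hence the transpose.
yy_fused = F.linear(xx, matrix.t(), bias)

assert torch.allclose(yy_manual, yy_fused, atol=1e-5)
```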

Changes

| File | Change Summary |
| --- | --- |
| `deepmd/pt/model/network/mlp.py` | Added `import torch.nn.functional as F`; replaced manual matrix multiplication with `F.linear()`; simplified bias handling; adjusted conditional logic for identity and concatenation transformations |

Sequence Diagram

```mermaid
sequenceDiagram
    participant MLPLayer
    participant F.linear
    participant Activation

    MLPLayer->>F.linear: Perform linear transformation
    F.linear-->>MLPLayer: Transformed tensor
    MLPLayer->>Activation: Apply activation function
    Activation-->>MLPLayer: Activated output

    alt Identity Transformation
        MLPLayer->>MLPLayer: Add input tensor
    end

    alt Concatenation
        MLPLayer->>MLPLayer: Concatenate tensors
    end
```

The sequence diagram illustrates the simplified flow of the `MLPLayer` forward method, showing how `F.linear()` replaces manual matrix multiplication, followed by activation and optional identity or concatenation transformations.
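To make that flow concrete, below is a hedged reconstruction of the simplified forward pass. The attribute names (`matrix`, `bias`, `idt`, `activate`) follow the review comments further down, but the class scaffolding and the `resnet` flag are assumptions for illustration, not a verbatim copy of `mlp.py`:

```python
import torch
import torch.nn.functional as F

class MLPLayerSketch(torch.nn.Module):
    """Hedged reconstruction of the refactored forward pass; not the
    actual deepmd MLPLayer, whose full logic may differ."""

    def __init__(self, num_in: int, num_out: int, use_idt: bool = True):
        super().__init__()
        # Weight stored as (num_in, num_out), matching the transpose below.
        self.matrix = torch.nn.Parameter(torch.randn(num_in, num_out))
        self.bias = torch.nn.Parameter(torch.zeros(num_out))
        self.idt = torch.nn.Parameter(torch.full((num_out,), 0.1)) if use_idt else None
        self.activate = torch.tanh
        self.resnet = num_out in (num_in, 2 * num_in)

    def forward(self, xx: torch.Tensor) -> torch.Tensor:
        # Fused matmul + bias add, then activation; clone guards against
        # later in-place modification of the activation output.
        yy = self.activate(F.linear(xx, self.matrix.t(), self.bias)).clone()
        if self.idt is not None:
            yy = yy * self.idt  # optional gating by the learned idt
        if self.resnet:
            if xx.shape[-1] == yy.shape[-1]:
                yy = yy + xx  # identity (skip) transformation
            else:
                yy = yy + torch.cat([xx, xx], dim=-1)  # concatenation variant
        return yy

layer = MLPLayerSketch(8, 8)
print(layer(torch.randn(2, 8)).shape)  # torch.Size([2, 8])
```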


📜 Recent review details

Configuration used: CodeRabbit UI · Review profile: CHILL · Plan: Pro

📥 Commits: reviewing files that changed from the base of the PR and between e0e9df9 and b39cdf4.

📒 Files selected for processing (1):
  • deepmd/pt/model/network/mlp.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1):
  • deepmd/pt/model/network/mlp.py


coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
deepmd/pt/utils/utils.py (1)

48-48: One-line activation application
Directly returning `self.activation(x)` simplifies applying the activation. Consider adding docstrings or code references to remind contributors of the available activation keys in `get_activation_fn`.

📜 Review details

Configuration used: CodeRabbit UI · Review profile: CHILL · Plan: Pro

📥 Commits: reviewing files that changed from the base of the PR and between ff9b75e and e0e9df9.

📒 Files selected for processing (2):
  • deepmd/pt/model/network/mlp.py (2 hunks)
  • deepmd/pt/utils/utils.py (1 hunks)
🔇 Additional comments (5)
deepmd/pt/model/network/mlp.py (3)

11-11: Use of torch.nn.functional for linear transformations
Importing `torch.nn.functional` consolidates linear operations and other functional calls (e.g., non-linear activations) in one style. This aligns well with the change to `F.linear`, improving code clarity and performance.


206-207: Efficient linear transformation & activation chaining
Replacing manual matrix multiplication with `F.linear(xx, self.matrix.t(), self.bias)` is more concise and may marginally improve performance. Cloning the result after applying the activation is a good safeguard against in-place modifications.


209-210: Configurable skip connection with idt
Multiplying by `self.idt` here offers a flexible skip or gating mechanism. However, if `self.idt` is large or uninitialized, it could introduce training instability. Consider adding constraints or an initialization strategy to keep it within a valid scale.
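As one illustration of the suggested safeguard, the gate could be initialized near a small constant so training starts at a stable scale. The specific values below are assumptions for illustration, not the project's actual initialization scheme:

```python
import torch

num_out = 128
idt = torch.nn.Parameter(torch.empty(num_out))
# Start the gate near 0.1 with tiny variance so the gated branch
# neither vanishes nor explodes at the beginning of training.
torch.nn.init.normal_(idt, mean=0.1, std=0.001)
```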

deepmd/pt/utils/utils.py (2)

24-25: Dynamic activation assignment
The constructor's assignment to `self.activation` via `get_activation_fn(activation)` centralizes the activation-function logic, reducing conditional branches in the forward pass. This design is both clean and extensible.


26-43: Centralized activation function selection
Using a standardized method for retrieving activation functions streamlines updates and new additions. The `RuntimeError` for unsupported activations provides clear feedback to developers.
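A minimal sketch of the centralized pattern these two comments describe. The key set, error message, and class layout here are assumptions; the actual code in `deepmd/pt/utils/utils.py` may differ:

```python
import torch

def get_activation_fn(activation: str):
    """Map an activation key to its callable; raise on unknown keys."""
    activations = {
        "relu": torch.relu,
        "tanh": torch.tanh,
        "gelu": torch.nn.functional.gelu,
    }
    try:
        return activations[activation.lower()]
    except KeyError:
        raise RuntimeError(f"activation function {activation} not supported")

class ActivationFn(torch.nn.Module):
    def __init__(self, activation: str):
        super().__init__()
        # Resolve the callable once in the constructor, not per forward call.
        self.activation = get_activation_fn(activation)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # One-line application, as noted in the nitpick above.
        return self.activation(x)

print(ActivationFn("relu")(torch.tensor([-1.0, 2.0])))  # tensor([0., 2.])
```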

wanghan-iapcm (Collaborator) left a comment

Why introduce this change?
Do you observe an explicit improvement in the efficiency of the linear transformations?

caic99 (Member, Author) commented Dec 27, 2024

> Why introduce this change?

I found this possibility while investigating matmuls that were not using Tensor Cores. This op fuses the matmul and the bias add when a bias is present.

> Do you observe an explicit improvement in the efficiency of the linear transformations?

No, since currently no models use the MLP layer with bias.
However, with bias=True and large input/output dimensions, the improvement is considerable. I set bias=True with g1_dim=3840 (10x larger, for demonstration), and the end-to-end speedup from this modification is ~1.5x.
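A hedged sketch of how such a comparison could be timed. The dimensions follow the comment above, but the benchmark harness itself is an assumption, not the author's actual script:

```python
import time
import torch
import torch.nn.functional as F

dim = 3840  # g1_dim scaled 10x, as in the comment above
device = "cuda" if torch.cuda.is_available() else "cpu"
xx = torch.randn(1024, dim, device=device)
weight = torch.randn(dim, dim, device=device)  # stored (num_in, num_out)
bias = torch.randn(dim, device=device)

def manual():
    return torch.matmul(xx, weight) + bias  # separate matmul and add kernels

def fused():
    return F.linear(xx, weight.t(), bias)   # single fused addmm kernel

for fn in (manual, fused):
    for _ in range(10):  # warm-up
        fn()
    if device == "cuda":
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(100):
        fn()
    if device == "cuda":
        torch.cuda.synchronize()
    print(f"{fn.__name__}: {time.perf_counter() - t0:.4f} s")
```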

Maybe we could just keep this change for future use?

njzjz (Member) commented Dec 28, 2024

I remember that TorchScript can do this optimization automatically.
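For reference, a minimal sketch of the TorchScript path being discussed; the toy module here is illustrative and unrelated to the project's actual JIT switch in env.py:

```python
import torch

class TinyMLP(torch.nn.Module):
    def __init__(self, num_in: int, num_out: int):
        super().__init__()
        self.linear = torch.nn.Linear(num_in, num_out)

    def forward(self, xx: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.linear(xx))

# Scripting compiles the module; the JIT fuser can then apply
# optimizations such as kernel fusion automatically.
scripted = torch.jit.script(TinyMLP(16, 32))
print(scripted(torch.randn(4, 16)).shape)  # torch.Size([4, 32])
```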

caic99 (Member, Author) commented Dec 30, 2024

> I remember that TorchScript can do this optimization automatically.

Yes, but we are not using TorchScript in the training loop for now. I'll test TorchScript later.

caic99 (Member, Author) commented Dec 30, 2024

> I'll test TorchScript later.

@njzjz Setting JIT=True in env.py results in `torch.jit.Error: Unable to infer type of dictionary: Cannot infer concrete type of torch.nn.Module` for DPA-2 models. I'm not sure whether it is a bug.

codecov bot commented Jan 8, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 84.55%. Comparing base (3cecca4) to head (b39cdf4).
Report is 12 commits behind head on devel.

Additional details and impacted files
```diff
@@            Coverage Diff             @@
##            devel    #4513      +/-   ##
==========================================
- Coverage   84.59%   84.55%   -0.04%
==========================================
  Files         675      677       +2
  Lines       63575    63903     +328
  Branches     3486     3486
==========================================
+ Hits        53779    54035     +256
- Misses       8671     8742      +71
- Partials     1125     1126       +1
```


caic99 requested a review from wanghan-iapcm on January 8, 2025.
Labels: none
Projects: none
Participants: 3