Update qml.math.norm for dispatching autograd; stop rounding when postselecting #4766

Merged: 24 commits merged into master from qml_math_tweaks on Nov 29, 2023

Conversation

mudit2812 (Contributor) commented Oct 31, 2023

Context:
Differentiation of qml.math.norm does not work for the L2 norm with autograd, which was causing incorrect gradients. Moreover, the rounding applied when renormalizing the state after postselection makes the gradients of postselected QNodes incorrect, so the rounding needs to be removed.
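
For context, a minimal reproduction in the spirit of the linked bug report (the circuit and values here are hypothetical, not taken from the issue):

```python
# Differentiating a postselected QNode; before this PR, the rounding in the
# renormalization step made this gradient incorrect with autograd.
import pennylane as qml
from pennylane import numpy as pnp

dev = qml.device("default.qubit")

@qml.qnode(dev)
def circuit(x):
    qml.RX(x, wires=0)
    qml.CNOT(wires=[0, 1])
    qml.measure(1, postselect=1)  # state is renormalized after postselection
    return qml.expval(qml.PauliZ(0))

x = pnp.array(0.4, requires_grad=True)
print(qml.grad(circuit)(x))
```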

Description of the Change:

  • Added a private function to compute the norm for the autograd interface when qml.math.norm is called with ord=None and axis=None; all other cases still dispatch to scipy.linalg.norm as before (see the sketch after this list).
  • Stopped rounding the norm of the state when renormalizing the state vector after postselection. Instead, we check whether the norm is close to 0 and set it to exactly 0 if it is. This check is only performed if the state vector is not abstract.
  • Added a warning to the qml.measure docs about jitting with postselection on zero-probability states.
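
A minimal sketch of the first two bullets, assuming helpers shaped roughly like these (the names are illustrative, not the actual PennyLane internals):

```python
import autograd.numpy as anp
import numpy as np

def _flat_autograd_norm(tensor):
    """L2 norm of the flattened tensor for ord=None, axis=None, built from
    autograd-differentiable primitives (scipy.linalg.norm is not traceable)."""
    # write |x|^2 as conj(x) * x so the gradient is defined for complex input
    return anp.sqrt(anp.sum(anp.real(anp.conj(tensor) * tensor)))

def _clamp_norm(norm, atol=1e-10):
    """Replace the old rounding: snap the norm to exactly 0 when it is
    numerically indistinguishable from 0 (concrete, non-abstract arrays only)."""
    return 0.0 if np.isclose(float(norm), 0.0, atol=atol) else norm
```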

Benefits:

  • qml.math.norm is differentiable for all interfaces when ord=None and axis=None (a quick check follows this list).
  • Postselection no longer leads to incorrect gradients.
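
A quick differentiability check with the autograd interface (a sketch; values chosen so the expected gradient is easy to verify):

```python
import pennylane as qml
from pennylane import numpy as pnp

x = pnp.array([3.0, 4.0], requires_grad=True)
grad = qml.grad(qml.math.norm)(x)
print(grad)  # expected x / ||x|| = [0.6, 0.8]
```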

Possible Drawbacks:
Postselection with jitting can lead to incorrect results and errors when postselecting on a state with zero probability. However, this is an edge case that rarely arises in practice.

Related GitHub Issues:
#4867

Contributor commented:

Hello. You may have forgotten to update the changelog!
Please edit doc/releases/changelog-dev.md with:

  • A one-to-two sentence description of the change. You may include a small working example for new features.
  • A link back to this PR.
  • Your name (or GitHub username) in the contributors section.

@mudit2812 changed the title from "Update qml.math.norm and qml.math.ndim" to "Update qml.math.norm for dispatching autograd" on Oct 31, 2023
mudit2812 (Contributor, Author) commented Oct 31, 2023

[sc-50617]

Two review threads on pennylane/math/multi_dispatch.py were marked outdated and resolved.
@mudit2812 linked an issue on Nov 21, 2023 that may be closed by this pull request.
codecov bot commented Nov 21, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparing base (c7cda37, 99.65%) to head (20a982a, 99.65%).

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #4766      +/-   ##
==========================================
- Coverage   99.65%   99.65%   -0.01%     
==========================================
  Files         387      387              
  Lines       34967    34711     -256     
==========================================
- Hits        34847    34590     -257     
- Misses        120      121       +1     


@mudit2812 changed the title from "Update qml.math.norm for dispatching autograd" to "Update qml.math.norm for dispatching autograd; stop rounding when postselecting" on Nov 24, 2023
@mudit2812 mudit2812 marked this pull request as ready for review November 24, 2023 22:38
@mudit2812 mudit2812 requested a review from a team November 24, 2023 22:39
@mudit2812 mudit2812 requested review from a team and removed request for a team November 27, 2023 17:37
timmysilv (Contributor) left a comment:

looks good! just one concern about docs.

also curious, but this might be a more general discussion - is qml.math.norm a standard thing to use? I figured we'd generally try to match the numpy API, in which case we should use qml.math.linalg.norm.

Also, if that's the only problem, I wonder if we're better off fixing autograd norms in the pennylane/numpy module instead of the math module. do you have thoughts on any of those things?

A review thread on pennylane/measurements/mid_measure.py was marked outdated and resolved.
mudit2812 (Contributor, Author) replied:

> also curious, but this might be a more general discussion - is qml.math.norm a standard thing to use? I figured we'd generally try to match the numpy API, in which case we should use qml.math.linalg.norm.

There aren't any functions in qml.math that use submodules to mirror the numpy API, at least none implemented in multi_dispatch.py. Nothing is really stopping us from using qml.math.linalg.norm; it works, but only with autograd and numpy, because we haven't overridden it, so it just dispatches automatically to <interface>.linalg.norm.

> Also, if that's the only problem, I wonder if we're better off fixing autograd norms in the pennylane/numpy module instead of the math module. do you have thoughts on any of those things?

That's a good point, we could do that. @trbromley what are your thoughts about this? We could fix pennylane.numpy.linalg.norm and then dispatch to pennylane.numpy in qml.math instead of dispatching directly to autograd.numpy. The only issue I can think of is the L-inf norm, which is used in BlockEncode and is not differentiable with autograd. We could also override that 🤔
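
If we did override it, a hypothetical autograd-friendly L-infinity norm could be built from primitives that already have gradients (a sketch, not an agreed-upon design):

```python
import autograd.numpy as anp
from autograd import grad

def _linf_norm(tensor):
    """max_i |x_i|; anp.max and anp.abs both have autograd gradients,
    unlike anp.linalg.norm with ord=inf."""
    return anp.max(anp.abs(tensor))

print(grad(_linf_norm)(anp.array([1.0, -3.0, 2.0])))  # [ 0. -1.  0.]
```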

timmysilv (Contributor) left a comment:


looks great! we can ignore my norm rant for now. the function already exists in multi_dispatch, and we might as well lean on that. but fwiw, we do use qml.math.linalg.matrix_power in a few places 😁

@mudit2812 mudit2812 enabled auto-merge (squash) November 29, 2023 20:16
@mudit2812 mudit2812 merged commit 669c86c into master Nov 29, 2023
34 checks passed
@mudit2812 mudit2812 deleted the qml_math_tweaks branch November 29, 2023 20:29
Development

Successfully merging this pull request may close these issues.

[BUG] Can't differentiate qnodes with postselection