Lower _conj_copy operation. #8686

Open — wants to merge 4 commits into base: master
Conversation

ysiraichi (Collaborator)
Fix: #3070

This PR adds a lowering for _conj_copy. This operation is called by torch.conj and was previously executed through the fallback path. With this PR, torch.conj and the functions that decompose into it no longer fall back.
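As a rough illustration of the semantics being lowered (a sketch in plain Python, not the torch_xla API): _conj_copy materializes the element-wise complex conjugate into a new tensor, whereas torch.conj itself only sets a lazy conjugate bit on a view.

```python
# Hypothetical sketch of the values _conj_copy produces, using plain
# Python complex numbers. On an XLA tensor, the same result would now
# come from the lowered HLO instead of the CPU fallback.
def conj_copy(values):
    """Return a new list holding the complex conjugate of each element."""
    return [v.conjugate() for v in values]

print(conj_copy([1 + 2j, 3 - 4j]))  # [(1-2j), (3+4j)]
```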

@ysiraichi ysiraichi marked this pull request as ready for review February 6, 2025 16:38
ysiraichi (Collaborator, Author)

Update: I'm currently investigating an odd CI failure that occurs when functionalization is disabled.

  • It looks like I'm unable to get the XLATensorImpl instance of the input when cloning (inside ConjugateFallback.cpp)
    • i.e. there is a tensor whose device is XLA but that does not hold an XLATensorImpl instance
  • Not sure why...
 Traceback (most recent call last):
  File "/__w/xla/xla/pytorch/xla/test/test_operations.py", line 2397, in test_conj_no_fallback
    self.assertEqual(actual, expected.cpu())
RuntimeError: torch_xla/csrc/aten_xla_bridge.cpp:110 : Check failed: xtensor 
*** Begin stack trace ***
	tsl::CurrentStackTrace()
	torch_xla::bridge::GetXlaTensor(at::Tensor const&)
	torch_xla::XLANativeFunctions::clone(at::Tensor const&, std::optional<c10::MemoryFormat>)
	
	at::_ops::clone::call(at::Tensor const&, std::optional<c10::MemoryFormat>)
	
	
	at::_ops::_to_copy::redispatch(c10::DispatchKeySet, at::Tensor const&, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, bool, std::optional<c10::MemoryFormat>)
	
	
	at::_ops::_to_copy::call(at::Tensor const&, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, bool, std::optional<c10::MemoryFormat>)
	at::native::to(at::Tensor const&, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, bool, bool, std::optional<c10::MemoryFormat>)
	
	at::_ops::to_dtype_layout::call(at::Tensor const&, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, bool, bool, std::optional<c10::MemoryFormat>)
	at::Tensor::to(c10::TensorOptions, bool, bool, std::optional<c10::MemoryFormat>) const
	...
*** End stack trace ***
Input tensor is not an XLA tensor: XLAComplexFloatType
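The failure above can be summarized with a toy Python model (all names here are hypothetical stand-ins, not the real torch_xla types): the failing check requires that a tensor tagged with the XLA device actually wraps an XLATensorImpl, and the disabled-functionalization path apparently hands clone() a tensor that violates that invariant.

```python
# Toy model (hypothetical names) of the invariant behind
# "Check failed: xtensor" in aten_xla_bridge.cpp.
class TensorImpl:
    pass

class XLATensorImpl(TensorImpl):
    pass

class Tensor:
    def __init__(self, device, impl):
        self.device = device
        self.impl = impl

def get_xla_tensor(t):
    # Succeeds only when an XLA-device tensor wraps an XLATensorImpl;
    # otherwise it fails the same way the traceback does.
    if t.device == "xla" and isinstance(t.impl, XLATensorImpl):
        return t.impl
    raise RuntimeError(
        f"Input tensor is not an XLA tensor: {type(t.impl).__name__}")

get_xla_tensor(Tensor("xla", XLATensorImpl()))  # fine
try:
    # The reported bug: device says XLA, but the impl is something else.
    get_xla_tensor(Tensor("xla", TensorImpl()))
except RuntimeError as e:
    print(e)
```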

Successfully merging this pull request may close these issues:

lower complex number operations (view_as_real, view_as_complex, conj, abs)