Forward XLATensorImpl::is_contiguous_custom to TensorImpl #8032 (+89 −5)
This PR fixes #7998. Instead of always returning true, we forward this call to the base class TensorImpl::is_contiguous_custom(). The reason is that after pytorch/pytorch#135498 is merged, XLA tensors' metadata might stop reflecting the actual XLA storage, which means that the tensors' strides might not always be contiguous. Whenever that happens, the tensor.is_contiguous() call should be consistent with the tensor's strides.

cc @miladm @JackCaoG @alanwaketan