Qualcomm AI Engine Direct - Adding QNN backend support for `randn` core ATen op #19377
qti-horodnic wants to merge 1 commit into
Conversation
@pytorchbot label "release notes: qualcomm"
So, before this PR, if a model contained `torch.randn` or `torch.randn_like`, the QNN partitioner had no builder for it, so the op was not delegated to HTP and instead ran on CPU, causing potential performance degradation.
Yes, correct. Note that we are working on supporting most of the ops in the ATen core opset, so expect several more such PRs over the coming weeks.
Summary
Added support for the `randn` core ATen op using the existing QNN backend implementation of `RandomNormalLike`. Note that only `INT8` outputs are supported.

Also fixed a minor bug in the `rand` op's implementation and test, and removed the `FP` test, as it doesn't serve a purpose since `rand` doesn't support `FP` outputs either.

Test plan
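Since the backend only supports `INT8` outputs for this op, the behavior can be sketched as drawing standard-normal samples and quantizing them to the INT8 range. This is a hedged, illustrative sketch in plain Python, not the QNN/HTP implementation; the `scale` and `zero_point` parameters are assumptions chosen for illustration.

```python
import random

def randn_int8(n, scale=0.05, zero_point=0):
    """Illustrative sketch only (not the QNN backend code): draw n samples
    from a standard normal distribution and quantize them to INT8,
    mirroring the INT8-only output constraint mentioned in the summary."""
    samples = [random.gauss(0.0, 1.0) for _ in range(n)]
    # Affine quantization: q = round(x / scale) + zero_point, clamped to INT8.
    return [max(-128, min(127, round(s / scale) + zero_point)) for s in samples]

if __name__ == "__main__":
    out = randn_int8(16)
    print(out)
```

Every returned value lands in the signed 8-bit range `[-128, 127]`, which is what an `INT8`-only `randn` delegate would produce after quantization.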