@Gasoonjia Gasoonjia commented Jan 13, 2026

This diff makes the CUDA backend actually use SlimTensor.
It:
- updates cuda_backend to create a SlimTensor from the given ETensor
- removes the duplicate ETensor-driven shim layers under cuda_backend
- updates the CMake logic in both the CUDA backend and the AOTI backend

Perf is unchanged; benchmarks show the same numbers as before.


Worth noting that we are still keeping two sets of common shims: one is ETensor-based and used by the Metal backend; the other is SlimTensor-based and used by the CUDA backend. This avoids impacting Metal backend work.
When the Metal backend finishes its migration, we should delete the duplicate common shims and keep only the SlimTensor-based ones.

Stack from ghstack (oldest at bottom):

Differential Revision: D90606409


pytorch-bot bot commented Jan 13, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16565

Note: Links to docs will display an error until the docs builds have been completed.

❌ 4 New Failures

As of commit ff05337 with merge base 15ad846:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Jan 13, 2026
This was referenced Jan 13, 2026
Gasoonjia added a commit that referenced this pull request Jan 13, 2026
Differential Revision: [D90606409](https://our.internmc.facebook.com/intern/diff/D90606409/)

ghstack-source-id: 333239044
Pull Request resolved: #16565
Gasoonjia added a commit that referenced this pull request Jan 22, 2026
Pull Request resolved: #16565


ghstack-source-id: 335005909
@exported-using-ghexport

Differential Revision: [D90606409](https://our.internmc.facebook.com/intern/diff/D90606409/)
Gasoonjia added a commit that referenced this pull request Jan 23, 2026
Pull Request resolved: #16565


ghstack-source-id: 335280573
@exported-using-ghexport

Differential Revision: [D90606409](https://our.internmc.facebook.com/intern/diff/D90606409/)
Gasoonjia added a commit that referenced this pull request Jan 23, 2026
Pull Request resolved: #16565


ghstack-source-id: 335418194
@exported-using-ghexport

Differential Revision: [D90606409](https://our.internmc.facebook.com/intern/diff/D90606409/)
@Gasoonjia Gasoonjia temporarily deployed to upload-benchmark-results January 27, 2026 08:01 — with GitHub Actions Inactive
@Gasoonjia Gasoonjia temporarily deployed to upload-benchmark-results January 27, 2026 17:18 — with GitHub Actions Inactive
Gasoonjia added a commit that referenced this pull request Jan 27, 2026
Pull Request resolved: #16565



Perf is unchanged from before.
{F1984962152}

ghstack-source-id: 336200461
@exported-using-ghexport

Differential Revision: [D90606409](https://our.internmc.facebook.com/intern/diff/D90606409/)
Gasoonjia added a commit that referenced this pull request Jan 27, 2026
Pull Request resolved: #16565



Perf is unchanged from before.
{F1984962152}

ghstack-source-id: 336233120
@exported-using-ghexport

Differential Revision: [D90606409](https://our.internmc.facebook.com/intern/diff/D90606409/)
Gasoonjia added a commit that referenced this pull request Jan 27, 2026
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at
bottom):
* #16565
* #16551
* #16469
* #16457
* #16455
* #16454
* #16453
* #16452
* #16451
* #16450
* #16449
* #16448
* #16447
* #16446
* __->__ #16724

Copy CUDAGuard and CUDAStreamGuard from cuda/runtime/ to aoti/slim/cuda/
to satisfy SlimTensor's requirements while getting rid of a potential
circular dependency:
- cuda_backend/main_functionalities -> aoti/slimtensor ->
cuda_backend/cuda_guard

This change:
- copies guard.h, guard.cpp, and the test files from backend/cuda_backend to
backend/aoti/slim/cuda/

Differential Revision:
[D91056808](https://our.internmc.facebook.com/intern/diff/D91056808/)
Gasoonjia added a commit that referenced this pull request Jan 27, 2026
…v2 (#16446)

Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at
bottom):
* #16565
* #16551
* #16469
* #16457
* #16455
* #16454
* #16453
* #16452
* #16451
* #16450
* #16449
* #16448
* #16447
* __->__ #16446
* #16724

Add SlimTensor-based implementations of AOTI shim functions for tensor
creation:

1. `aoti_torch_create_tensor_from_blob_v2()` - Creates a non-owning
SlimTensor that wraps existing memory using the `from_blob()` factory

Both functions support CPU and CUDA devices and handle all 7 SlimTensor
dtypes.

Also add `memory_slim.h` and `memory_slim.cpp` with SlimTensor-based
shim implementations, so the new API can be developed without impacting the
current pipeline. memory_slim.{h/cpp} will replace the current memory.{h/cpp}
once everything is in place.

Differential Revision:
[D90126247](https://our.internmc.facebook.com/intern/diff/D90126247/)
Gasoonjia added a commit that referenced this pull request Jan 27, 2026
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at
bottom):
* #16565
* #16551
* #16469
* #16457
* #16455
* #16454
* #16453
* #16452
* #16451
* #16450
* #16449
* #16448
* __->__ #16447
* #16446
* #16724

Add SlimTensor-based implementations of AOTI shim functions for tensor
creation:

`aoti_torch_create_tensor_from_blob_v2()` - Creates a non-owning
SlimTensor that wraps existing memory using the `from_blob()` factory

Both functions support CPU and CUDA devices and handle all 7 SlimTensor
dtypes.

Changes:
- Add `memory_slim.h` and `memory_slim.cpp` with SlimTensor-based shim
implementations
- Add `runtime_shims_slim` library target to TARGETS with
`CUDA_AVAILABLE=1` preprocessor flag
- Add `cuda_shim_slim_cpp_unittest()` function for SlimTensor test
targets

Differential Revision:
[D90126244](https://our.internmc.facebook.com/intern/diff/D90126244/)
@larryliu0820 left a comment


Review automatically exported from Phabricator review in Meta.


Labels

CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. fb-exported meta-exported

3 participants