Conversation

Gasoonjia (Contributor) commented Jan 12, 2026

Stack from ghstack (oldest at bottom):

Add a `from_etensor()` factory function that creates a SlimTensor from an ExecuTorch portable tensor (ETensor), copying its data to a target device.

Key features:

  • Handles int32_t to int64_t conversion for sizes/strides (ETensor uses int32_t, SlimTensor uses int64_t)
  • Supports CPU and CUDA target devices via storage()->copy_()
  • Preserves tensor strides (non-contiguous layouts)
  • Provides both reference and pointer overloads

Differential Revision: D90539554


pytorch-bot bot commented Jan 12, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16551

Note: Links to docs will display an error until the docs builds have been completed.

⏳ 42 Pending, 2 Unrelated Failures

As of commit 173572d with merge base 99348ed:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

BROKEN TRUNK - The following job failed but was present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Jan 12, 2026
Gasoonjia added a commit that referenced this pull request Jan 12, 2026
github-actions bot commented

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Gasoonjia added a commit that referenced this pull request Jan 27, 2026
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at
bottom):
* #16565
* #16551
* #16469
* #16457
* #16455
* #16454
* #16453
* #16452
* #16451
* #16450
* #16449
* #16448
* #16447
* #16446
* __->__ #16724

Copy CUDAGuard and CUDAStreamGuard from cuda/runtime/ to aoti/slim/cuda/
to satisfy SlimTensor's requirements while avoiding a potential circular
dependency:
- cuda_backend/main_functionalities -> aoti/slimtensor ->
cuda_backend/cuda_guard

This change:
- copies guard.h, guard.cpp, and the test files from backend/cuda_backend
to backend/aoti/slim/cuda/

Differential Revision:
[D91056808](https://our.internmc.facebook.com/intern/diff/D91056808/)
Gasoonjia added a commit that referenced this pull request Jan 27, 2026
…v2 (#16446)

Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at
bottom):
* #16565
* #16551
* #16469
* #16457
* #16455
* #16454
* #16453
* #16452
* #16451
* #16450
* #16449
* #16448
* #16447
* __->__ #16446
* #16724

Add SlimTensor-based implementations of AOTI shim functions for tensor
creation:

1. `aoti_torch_create_tensor_from_blob_v2()` - Creates a non-owning
SlimTensor that wraps existing memory using the `from_blob()` factory

The new shim functions support CPU and CUDA devices and handle all 7
SlimTensor dtypes.

Also add `memory_slim.h` and `memory_slim.cpp` with SlimTensor-based
shim implementations, so the new API can be developed without impacting
the current pipeline. memory_slim.{h/cpp} will replace the current
memory.{h/cpp} once everything is set up.

Differential Revision:
[D90126247](https://our.internmc.facebook.com/intern/diff/D90126247/)
Gasoonjia added a commit that referenced this pull request Jan 27, 2026
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at
bottom):
* #16565
* #16551
* #16469
* #16457
* #16455
* #16454
* #16453
* #16452
* #16451
* #16450
* #16449
* #16448
* __->__ #16447
* #16446
* #16724

Add SlimTensor-based implementations of AOTI shim functions for tensor
creation:

`aoti_torch_create_tensor_from_blob_v2()` - Creates a non-owning
SlimTensor that wraps existing memory using the `from_blob()` factory

The new shim functions support CPU and CUDA devices and handle all 7
SlimTensor dtypes.

Changes:
- Add `memory_slim.h` and `memory_slim.cpp` with SlimTensor-based shim
implementations
- Add a `runtime_shims_slim` library target to TARGETS with the
`CUDA_AVAILABLE=1` preprocessor flag
- Add a `cuda_shim_slim_cpp_unittest()` function for SlimTensor test
targets

Differential Revision:
[D90126244](https://our.internmc.facebook.com/intern/diff/D90126244/)
Gasoonjia added a commit that referenced this pull request Jan 28, 2026
Labels

CLA Signed · fb-exported · meta-exported

3 participants