GitHub / pytorch/pytorch issues and pull requests
Labelled with: module: pt2-dispatcher
#140764 - [Feature request] Enabling padding-free training with FlexAttention
Issue - State: closed - Opened by iiLaurens about 1 year ago - 2 comments
Labels: triaged, oncall: pt2, module: pt2-dispatcher, module: flex attention
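
Many of the entries under this label exercise FlexAttention's `score_mod`/`mask_mod` callbacks under `torch.compile`. For orientation, a minimal sketch of that API (PyTorch >= 2.5); the shapes and the toy bias below are illustrative assumptions, not taken from any of these issues:

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

def rel_bias(score, b, h, q_idx, kv_idx):
    # score_mod hook: adjust each attention score; here a toy relative bias.
    return score + 0.01 * (q_idx - kv_idx)

# (batch, heads, seq_len, head_dim) -- illustrative sizes.
q, k, v = (torch.randn(1, 8, 128, 64, device="cuda") for _ in range(3))

# The compiled path is where most of the reports below originate.
out = torch.compile(flex_attention)(q, k, v, score_mod=rel_bias)
```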
#140760 - Footgun: tracer.root.register_module( in HOPs
Issue - State: closed - Opened by zou3519 about 1 year ago - 1 comment
Labels: triaged, oncall: pt2, module: higher order operators, module: pt2-dispatcher
#140363 - FlexAttention gives me an INTERNAL_ASSERT_FAILED during mask_mod
Issue - State: closed - Opened by moinnadeem about 1 year ago - 2 comments
Labels: needs reproduction, triaged, bug, oncall: pt2, module: higher order operators, module: pt2-dispatcher, module: flex attention, module: sdpa
#139998 - DISABLED test_aot_export_with_torch_cond (__main__.TestAOTExport)
Issue - State: open - Opened by pytorch-bot[bot] about 1 year ago - 10 comments
Labels: triaged, module: flaky-tests, skipped, oncall: pt2, module: higher order operators, module: pt2-dispatcher
#139836 - LoweringException: AssertionError: convert FlexibleLayout to FixedLayout first when using score_mod
Issue - State: closed - Opened by cat-state about 1 year ago - 4 comments
Labels: triaged, oncall: pt2, module: inductor, module: higher order operators, module: pt2-dispatcher, module: flex attention
#139754 - [ROCm] [Flex attention] Memory access fault on nested_tensor UT
Issue - State: closed - Opened by jataylo about 1 year ago - 2 comments
Labels: module: rocm, triaged, oncall: pt2, module: higher order operators, module: pt2-dispatcher, module: flex attention
#139544 - [flex attention][torch.compile] LoweringException: TypeError: cannot determine truth value of Relational
Issue - State: closed - Opened by JeffSHF about 1 year ago - 2 comments
Labels: triaged, oncall: pt2, module: dynamic shapes, module: higher order operators, module: pt2-dispatcher, module: flex attention
#138493 - Flex attention underperforms SDPA (cuDNN), constructing T5 attention bias via embedding weights
Issue - State: closed - Opened by Birch-san about 1 year ago - 4 comments
Labels: triaged, oncall: pt2, module: higher order operators, module: pt2-dispatcher, module: flex attention
#138422 - vmap with torch.autograd.grad does not work on output of compiled function
Issue - State: open - Opened by ValerianRey about 1 year ago - 3 comments
Labels: triaged, module: vmap, oncall: pt2, module: functorch, module: pt2-dispatcher
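
#138422 is about composing functional transforms with compiled code. A hedged sketch of the two nesting orders (the function body is illustrative):

```python
import torch
from torch.func import grad, vmap

def f(x):
    # Scalar-valued function so grad() applies directly.
    return (x ** 2).sum()

xs = torch.randn(4, 3)

# Supported order: compile the already-transformed function.
per_sample_grads = torch.compile(vmap(grad(f)))(xs)

# #138422 concerns the reverse nesting -- differentiating the output of an
# already-compiled function under vmap -- which it reports as failing.
```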
#138150 - torch.cond should support omitting arguments to pass in when it is empty
Issue - State: closed - Opened by ezyang about 1 year ago - 3 comments
Labels: good first issue, triaged, oncall: pt2, module: higher order operators, module: pt2-dispatcher
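
#138150 asks for `torch.cond`'s operands argument to become optional when the branches need no inputs. The current calling convention, for reference:

```python
import torch

def true_fn(x):
    return x.sin()

def false_fn(x):
    return x.cos()

x = torch.randn(4)
# Today the operands tuple must always be supplied; the request is to allow
# omitting it when the branch functions close over everything they need.
y = torch.cond(x.sum() > 0, true_fn, false_fn, (x,))
```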
#137801 - FlexAttention: create_block_mask() passes out-of-range indices to mask_mod
Issue - State: closed - Opened by jbschlosser about 1 year ago - 3 comments
Labels: oncall: pt2, module: higher order operators, module: pt2-dispatcher, module: flex attention
#137779 - Flex attention with mask depending on queries and keys lengths (or how to implement `causal_lower_right` masking)
Issue - State: closed - Opened by janchorowski about 1 year ago - 4 comments
Labels: triaged, oncall: pt2, module: pt2-dispatcher, module: flex attention
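
#137779 asks how to express a mask that depends on both the query and key lengths, such as `causal_lower_right`. One plausible sketch; the offset closure is an assumed reading of the intended semantics, not code from the thread:

```python
from torch.nn.attention.flex_attention import create_block_mask

Q_LEN, KV_LEN = 128, 256
offset = KV_LEN - Q_LEN  # align the causal diagonal to the bottom-right

def causal_lower_right(b, h, q_idx, kv_idx):
    return q_idx + offset >= kv_idx

# B=None/H=None broadcasts the mask over batch and heads.
block_mask = create_block_mask(causal_lower_right, B=None, H=None,
                               Q_LEN=Q_LEN, KV_LEN=KV_LEN, device="cuda")
```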
#137639 - HOP input mutation analysis is not comprehensive
Pull Request - State: closed - Opened by zou3519 about 1 year ago - 6 comments
Labels: triaged, oncall: pt2, module: higher order operators, module: pt2-dispatcher
#137536 - Actually no-op custom ops when torch::deploy is on
Issue - State: closed - Opened by zou3519 about 1 year ago - 2 comments
Labels: high priority, triaged, module: custom-operators, module: deploy, oncall: pt2, module: pt2-dispatcher
#137481 - [FlexAttention] Using FlexAttention with DDP complains about a "higher order optimizer"
Issue - State: closed - Opened by moinnadeem about 1 year ago - 4 comments
Labels: oncall: distributed, triaged, oncall: transformer/mha, oncall: pt2, module: higher order operators, module: pt2-dispatcher, module: flex attention
#137057 - Inductor max used memory performs worse with as_strided vs split_with_sizes + subscripts
Issue - State: closed - Opened by laithsakka about 1 year ago - 10 comments
Labels: triaged, oncall: pt2, module: inductor, module: pt2-dispatcher, module: reinplacing, vllm-compile
#136989 - Flex attention reports key error, but works with an additional print message
Issue - State: closed - Opened by why-in-Shanghaitech about 1 year ago - 6 comments
Labels: triaged, oncall: pt2, module: higher order operators, module: pt2-dispatcher, module: flex attention
#136914 - FlexAttention errors with dynamic shapes when closing over a symbolic shape
Issue - State: closed - Opened by Chillee about 1 year ago - 2 comments
Labels: triaged, oncall: pt2, module: dynamic shapes, module: higher order operators, module: pt2-dispatcher, module: flex attention
#136852 - AOTAutograd has_same_metadata call in collect_metadata_analysis.py is quadratic
Issue - State: closed - Opened by ezyang about 1 year ago - 1 comment
Labels: triaged, oncall: pt2, module: aotdispatch, module: startup-tracing-compile, module: pt2-dispatcher
#136784 - compilation of rrelu_with_noise with bfloat16 input does not capture noise mutation
Issue - State: closed - Opened by IvanKobzarev about 1 year ago - 9 comments
Labels: triaged, bug, oncall: pt2, module: pt2-dispatcher
#136525 - Can't run Flex-Attention on CPU - NoValidChoicesError during autotuneSelectAlgorithm
Issue - State: closed - Opened by cathalobrien about 1 year ago - 4 comments
Labels: enhancement, oncall: pt2, module: higher order operators, oncall: cpu inductor, module: pt2-dispatcher, module: flex attention
#136427 - [Flex attention] RuntimeError with vmap when using torch.compile in create_mask
Issue - State: closed - Opened by kebijuelun about 1 year ago - 2 comments
Labels: triaged, module: vmap, oncall: pt2, module: functorch, module: higher order operators, module: pt2-dispatcher, module: flex attention
#136375 - Reshaping Fake meta tensor results in real meta tensor
Issue - State: closed - Opened by tugsbayasgalan about 1 year ago
Labels: oncall: pt2, module: pt2-dispatcher
#136306 - [Flex attention] Error in create_block_mask with _compile=True on Torch 2.6
Issue - State: closed - Opened by kebijuelun about 1 year ago - 8 comments
Labels: triaged, oncall: pt2, module: higher order operators, module: pt2-dispatcher, module: flex attention
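
#136306 (and #135028 below) involve building the BlockMask itself. A sketch of that call; note `_compile` is a private flag whose behavior the issue reports changing on torch 2.6:

```python
from torch.nn.attention.flex_attention import create_block_mask

def causal(b, h, q_idx, kv_idx):
    return q_idx >= kv_idx

# Compiling mask construction avoids materializing the full
# (Q_LEN, KV_LEN) boolean mask for long sequences.
block_mask = create_block_mask(causal, B=1, H=1, Q_LEN=8192, KV_LEN=8192,
                               device="cuda", _compile=True)
```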
#136261 - Flex Attention Extremely Slow
Issue - State: closed - Opened by why-in-Shanghaitech about 1 year ago - 2 comments
Labels: oncall: pt2, module: higher order operators, module: pt2-dispatcher, module: flex attention
#136232 - NaN in Flex Attention backward if BlockMask is larger than first run seq_len when return_lse=True and torch compiled
Issue - State: closed - Opened by cora-codes about 1 year ago - 2 comments
Labels: high priority, triaged, oncall: pt2, module: higher order operators, module: pt2-dispatcher, module: flex attention
#136196 - Compiling flex attention with batch dependent block mask and dynamic shapes
Issue - State: closed - Opened by SamGalanakis about 1 year ago - 9 comments
Labels: triaged, oncall: pt2, module: inductor, module: higher order operators, module: pt2-dispatcher, module: flex attention
#136109 - `torch._custom_ops.custom_ops` could cause a crash in pytorch
Issue - State: closed - Opened by Justobe about 1 year ago - 2 comments
Labels: module: crash, triaged, module: custom-operators, oncall: pt2, module: pt2-dispatcher
#136078 - ValueError: Pointer argument (at 3) cannot be accessed from Triton
Issue - State: closed - Opened by foreverpiano about 1 year ago - 1 comment
Labels: triaged, oncall: pt2, module: higher order operators, module: pt2-dispatcher, module: flex attention
#136064 - [AOT Autograd][functionalization] mutations on input view of same dtype fail functionalization checks
Issue - State: closed - Opened by davidberard98 about 1 year ago - 2 comments
Labels: triaged, module: functionalization, oncall: pt2, module: aotdispatch, module: pt2-dispatcher
#135723 - [FlexAttention] Compiled `flex_attention` crashes when training with mixed precision on RTX 3090
Issue - State: closed - Opened by davidbuterez about 1 year ago - 1 comment
Labels: triaged, oncall: pt2, module: higher order operators, module: pt2-dispatcher, module: flex attention
#135664 - `randint(max)` causes a graph break, but not `rand().mul(max).floor().to(torch.long)` (on CPU)
Issue - State: closed - Opened by vmoens about 1 year ago - 7 comments
Labels: triaged, oncall: pt2, module: fakeTensor, module: dynamic shapes, module: dynamo, module: pt2-dispatcher, dynamo-triage-jan2025
#135441 - Inconsistent Tensor._version Behaviour with torch.compile()
Issue - State: closed - Opened by SamPruden over 1 year ago - 10 comments
Labels: module: autograd, triaged, module: functionalization, oncall: pt2, module: aotdispatch, module: pt2-dispatcher
#135206 - Flexattention: compilation fails when using block mask
Issue - State: closed - Opened by dabeschte over 1 year ago - 2 comments
Labels: triaged, oncall: transformer/mha, oncall: pt2, module: multi-headed-attention, module: higher order operators, module: pt2-dispatcher, module: flex attention
#135161 - Significant Accuracy Difference between Compiled and Eager Flex Attention
Issue - State: closed - Opened by cora-codes over 1 year ago - 22 comments
Labels: high priority, module: numerical-stability, triaged, oncall: pt2, module: higher order operators, module: pt2-dispatcher, module: flex attention
#135099 - "TypeError: unhashable type: non-nested SymInt" with `torch.compile`
Issue - State: closed - Opened by SnzFor16Min over 1 year ago - 6 comments
Labels: triaged, oncall: pt2, module: dynamic shapes, module: aotdispatch, module: pt2-dispatcher
#135083 - torch.rrelu does not compile in inductor mode
Issue - State: closed - Opened by intellinjun over 1 year ago - 5 comments
Labels: triaged, module: functionalization, oncall: pt2, module: aotdispatch, module: pt2-dispatcher
#135057 - compiled autograd doesn't work with torch.library.custom_op
Issue - State: closed - Opened by zou3519 over 1 year ago - 1 comment
Labels: module: custom-operators, oncall: pt2, module: dynamo, module: compiled autograd, module: pt2-dispatcher
#135028 - [FlexAttention] create_block_mask with AssertionError: increase TRITON_MAX_BLOCK['X'] to 4096
Issue - State: closed - Opened by shuuul over 1 year ago - 6 comments
Labels: triaged, oncall: pt2, module: inductor, module: higher order operators, module: pt2-dispatcher, module: flex attention
#134888 - "RuntimeError: view size is not compatible with input tensor's size and stride" Error when using Flex Attention
Issue - State: closed - Opened by cora-codes over 1 year ago - 9 comments
Labels: triaged, oncall: pt2, module: higher order operators, module: pt2-dispatcher, module: flex attention
#134852 - Flexattention: CUDA error: an illegal memory access was encountered
Issue - State: closed - Opened by foreverpiano over 1 year ago - 1 comment
Labels: triaged, oncall: pt2, module: higher order operators, module: pt2-dispatcher, module: flex attention
#134756 - Flexattention: creating mask[1,1,96000,96000] causes OOM error
Issue - State: closed - Opened by foreverpiano over 1 year ago - 7 comments
Labels: triaged, oncall: pt2, module: higher order operators, module: pt2-dispatcher, module: flex attention
#134739 - Compile fails on Flex attention + FSDP
Issue - State: open - Opened by platers over 1 year ago - 5 comments
Labels: oncall: distributed, triaged, module: fsdp, oncall: pt2, module: higher order operators, module: pt2-dispatcher, module: flex attention
#134644 - Deduce Tangents Stride For Channels Last Tensor
Issue - State: closed - Opened by alpha0422 over 1 year ago - 4 comments
Labels: triaged, oncall: pt2, module: aotdispatch, module: inductor, module: pt2-dispatcher
#134385 - FlopCounterMode doesn't support HOP
Issue - State: open - Opened by yanboliang over 1 year ago - 6 comments
Labels: triaged, oncall: pt2, module: higher order operators, module: pt2-dispatcher, module: flop counter
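
#134385: FlopCounterMode tallies ordinary ATen ops but has no rule for higher-order operators such as `flex_attention`. Its standard usage, for reference:

```python
import torch
from torch.utils.flop_counter import FlopCounterMode

linear = torch.nn.Linear(64, 64)
x = torch.randn(8, 64)

with FlopCounterMode(display=False) as counter:
    linear(x)

# Ordinary ops are counted; HOP-based ops are the gap #134385 describes.
print(counter.get_total_flops())
```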
#134330 - Partitioner seems unable to recompute full+permute
Pull Request - State: closed - Opened by zou3519 over 1 year ago - 4 comments
Labels: triaged, oncall: pt2, module: aotdispatch, module: pt2-dispatcher
#134278 - Improve custom ops aliasing error message
Issue - State: closed - Opened by zou3519 over 1 year ago
Labels: triaged, module: custom-operators, oncall: pt2, module: pt2-dispatcher
#133858 - Dynamo treats dataclasses as UserDefinedVariable, prevents proxying into graph
Pull Request - State: closed - Opened by bdhirsh over 1 year ago - 2 comments
Labels: triaged, module: __torch_dispatch__, tensor subclass, oncall: pt2, module: aotdispatch, module: dynamo, module: pt2-dispatcher, dynamo-tensor-subclasses
#133592 - [torchbind x compile] Can't register a torchbind operator that mutates a tensor
Issue - State: open - Opened by zou3519 over 1 year ago
Labels: module: torchbind, module: pt2-dispatcher, vllm-compile
#133571 - Errors with torch.compile after upgrading to 2.4.0
Issue - State: closed - Opened by GLivshits over 1 year ago - 25 comments
Labels: high priority, needs reproduction, triaged, oncall: transformer/mha, oncall: pt2, module: dynamic shapes, module: aotdispatch, module: inductor, module: pt2-dispatcher
#133529 - Dynamo complains about number of arguments of autograd.Function
Issue - State: closed - Opened by vmoens over 1 year ago - 4 comments
Labels: triaged, oncall: pt2, module: dynamo, module: higher order operators, module: pt2-dispatcher, dynamo-autograd-function
#133431 - [flexAttention] compiled flexAttention with sliding window runs in 32bit float precision but crashes on 16bit float & mixed precision
Issue - State: closed - Opened by cathalobrien over 1 year ago - 6 comments
Labels: needs reproduction, module: crash, triaged, oncall: pt2, module: multi-headed-attention, module: higher order operators, module: pt2-dispatcher, module: flex attention
#133318 - Update python custom ops tutorial per user feedback
Issue - State: closed - Opened by zou3519 over 1 year ago
Labels: triaged, topic: docs, oncall: pt2, module: pt2-dispatcher
#132609 - [Dynamo] Illegal getattr invocation requires_grad in strict mode
Issue - State: closed - Opened by xinyu-intel over 1 year ago - 9 comments
Labels: triaged, oncall: pt2, module: dynamo, module: higher order operators, module: pt2-dispatcher, dynamo-autograd-function
#132418 - [export] Remove_effect_tokens_pass does not work well with other HOO
Issue - State: closed - Opened by angelayi over 1 year ago - 5 comments
Labels: triaged, oncall: pt2, module: higher order operators, export-triaged, oncall: export, module: pt2-dispatcher
#132301 - flash attention triton kernel x pt2 silently incorrect
Issue - State: closed - Opened by zou3519 over 1 year ago - 5 comments
Labels: triaged, oncall: pt2, module: inductor, module: pt2-dispatcher, module: user triton
#132200 - torch.ops.fsdp.set_ with torch.compile silently incorrect
Issue - State: closed - Opened by zou3519 over 1 year ago - 3 comments
Labels: high priority, triage review, triaged, module: correctness (silent), module: fsdp, module: functionalization, oncall: pt2, module: pt2-dispatcher
#132197 - torch.ops.fsdp.set_ on input doesn't actually modify the input (under torch.compile)
Issue - State: closed - Opened by zou3519 over 1 year ago - 3 comments
Labels: triaged, module: fsdp, oncall: pt2, module: pt2-dispatcher
#132196 - triton kernels (and maybe custom ops) that mutate multiple inputs silently incorrect with torch.compile
Pull Request - State: closed - Opened by zou3519 over 1 year ago
Labels: high priority, triage review, oncall: pt2, module: inductor, module: pt2-dispatcher, module: reinplacing
#131192 - custom ops don't reinplace when mutated arg is a view of a graph input
Issue - State: closed - Opened by zou3519 over 1 year ago - 10 comments
Labels: triaged, module: custom-operators, oncall: pt2, module: inductor, module: pt2-dispatcher, module: reinplacing, vllm-compile
#130740 - torch.compile slows down paged flash attention
Issue - State: closed - Opened by platers over 1 year ago - 8 comments
Labels: high priority, triaged, module: custom-operators, oncall: pt2, module: inductor, module: pt2-dispatcher, module: reinplacing
#130736 - autograd.Function x torch.compile backward stride access
Issue - State: open - Opened by zou3519 over 1 year ago
Labels: high priority, oncall: pt2, module: dynamo, module: pt2-dispatcher
#130284 - [custom_ops] vmap registration API
Pull Request - State: closed - Opened by zou3519 over 1 year ago
Labels: triaged, module: custom-operators, oncall: pt2, module: pt2-dispatcher
#130104 - `torch.compile` graph breaks on `TorchFunctionMode`
Issue - State: closed - Opened by Chillee over 1 year ago - 4 comments
Labels: triaged, tensor subclass, oncall: pt2, module: dynamo, module: graph breaks, module: pt2-dispatcher
#129963 - autograd.Function x Dynamo tracing incorrectly returns Tensors that don't require grad
Issue - State: closed - Opened by bdhirsh over 1 year ago - 15 comments
Labels: high priority, triaged, tensor subclass, oncall: pt2, module: dynamo, module: pt2-dispatcher
#129617 - [custom_op] torch.library.define should be able to auto-infer schema
Issue - State: closed - Opened by zou3519 over 1 year ago - 1 comment
Labels: triaged, module: custom-operators, oncall: pt2, module: pt2-dispatcher
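
The `module: custom-operators` entries above and below center on the `torch.library.custom_op` API. A minimal registration for reference; the namespace, op name, and body are illustrative:

```python
import torch

@torch.library.custom_op("mylib::scale", mutates_args=())
def scale(x: torch.Tensor, factor: float) -> torch.Tensor:
    # Runs as an opaque callable; torch.compile never traces into it.
    return x * factor

@scale.register_fake
def _(x, factor):
    # FakeTensor rule so the op can be traced and compiled.
    return torch.empty_like(x)

out = scale(torch.randn(4), 2.0)
```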
#129496 - Mark Saved Activations As Donated Buffers to Inductor
Issue - State: closed - Opened by eellison over 1 year ago - 4 comments
Labels: triaged, enhancement, oncall: pt2, module: aotdispatch, module: inductor, module: pt2-dispatcher
#129486 - Compiling tensor subclasses fails when using custom op with effect tokens
Issue - State: closed - Opened by soulitzer over 1 year ago - 5 comments
Labels: triaged, tensor subclass, oncall: pt2, module: aotdispatch, module: pt2-dispatcher
#129395 - [custom_op] Figure out what to do with random custom ops
Issue - State: open - Opened by zou3519 over 1 year ago
Labels: module: custom-operators, module: pt2-dispatcher
#129389 - [custom_op] factory functions don't work
Issue - State: closed - Opened by zou3519 over 1 year ago
Labels: triaged, module: custom-operators, oncall: pt2, module: pt2-dispatcher
#129372 - [custom_op] Add a `mutated_args="unknown"` flag
Issue - State: closed - Opened by zou3519 over 1 year ago
Labels: triaged, module: custom-operators, actionable, oncall: pt2, module: pt2-dispatcher
#129371 - [custom_op] Support default device types
Issue - State: closed - Opened by zou3519 over 1 year ago
Labels: triaged, module: custom-operators, actionable, oncall: pt2, module: pt2-dispatcher
#128961 - `torch.compile` fails with `fullgraph=True` when accessing `getitem` of a `Tensor` subclass
Issue - State: open - Opened by nikitaved over 1 year ago - 4 comments
Labels: triaged, tensor subclass, oncall: pt2, module: dynamo, module: pt2-dispatcher
#128809 - [custom ops] convert string type annotation to real type
Pull Request - State: closed - Opened by yushangdi over 1 year ago - 5 comments
Labels: module: custom-operators, Merged, ciflow/trunk, release notes: composability, merging, module: pt2-dispatcher
#128281 - Fast path detach()/alias() in FakeTensor
Issue - State: closed - Opened by ezyang over 1 year ago - 3 comments
Labels: high priority, triaged, actionable, oncall: pt2, module: fakeTensor, module: dynamic shapes, module: pt2-dispatcher
#128202 - Wrong result for Inplace tensor update on transpose for some devices with torch 2.3.0
Issue - State: closed - Opened by jerrychenhf over 1 year ago - 5 comments
Labels: triaged, oncall: pt2, module: pt2-dispatcher
#128160 - [dynamo] Dynamo traces through __torch_dispatch__ on custom tensor subclasses
Issue - State: open - Opened by williamwen42 over 1 year ago - 3 comments
Labels: triaged, oncall: pt2, module: dynamo, module: pt2-dispatcher, dynamo-tensor-subclasses
#128084 - custom ops with needs_fixed_stride_order doesn't work with auto_functionalized
Issue - State: closed - Opened by zou3519 over 1 year ago - 9 comments
Labels: triaged, module: custom-operators, oncall: pt2, module: inductor, module: pt2-dispatcher, module: reinplacing
#128061 - torch.compile doesn't work well with custom triton kernel from Mamba
Issue - State: closed - Opened by yanboliang over 1 year ago - 2 comments
Labels: triaged, oncall: pt2, module: pt2-dispatcher, module: user triton
#128035 - torch.compile fails to preserve gradients when an input requiring grad is mutated
Issue - State: closed - Opened by jamesjwu over 1 year ago - 1 comment
Labels: triaged, actionable, bug, oncall: pt2, module: pt2-dispatcher
#127821 - Inductor lowering ignores auto_functionalized custom op
Issue - State: closed - Opened by zou3519 over 1 year ago - 8 comments
Labels: high priority, triaged, module: regression, oncall: pt2, module: inductor, module: pt2-dispatcher
#127660 - Inductor generates unnecessary allocation + copy operations for custom ops with mutable inputs
Issue - State: closed - Opened by HanGuo97 over 1 year ago - 23 comments
Labels: triaged, module: functionalization, oncall: pt2, module: inductor, module: pt2-dispatcher, module: reinplacing
#127572 - AOTAutograd: allow input mutations in the bw that occur under no_grad
Issue - State: closed - Opened by bdhirsh over 1 year ago - 1 comment
Labels: triaged, oncall: pt2, module: aotdispatch, module: pt2-dispatcher, internal ramp-up task
#127320 - [While_loop] How to use layer like `torch.nn.BatchNorm2d` with while_loop?
Issue - State: closed - Opened by ManfeiBai over 1 year ago - 8 comments
Labels: triaged, module: xla, oncall: pt2, module: higher order operators, module: pt2-dispatcher
#127174 - dynamo doesn't support `__torch_function__` on non-tensor classes
Issue - State: closed - Opened by vmoens over 1 year ago - 4 comments
Labels: triaged, module: __torch_function__, oncall: pt2, module: dynamo, module: pt2-dispatcher
#126936 - [dynamo, functorch] trace through FunctionalTensor
Pull Request - State: closed - Opened by williamwen42 over 1 year ago - 4 comments
Labels: topic: not user facing, module: functorch, module: dynamo, ciflow/inductor, module: pt2-dispatcher
#126871 - running opcheck leads to `Fail to import hypothesis in common_utils, tests are not derandomized` print
Issue - State: closed - Opened by zou3519 over 1 year ago - 1 comment
Labels: triaged, oncall: pt2, module: opcheck, module: pt2-dispatcher
#126870 - opcheck has dependency on expecttest, which is not a pytorch runtime dependency, leading to "module not found" error message
Issue - State: closed - Opened by zou3519 over 1 year ago - 1 comment
Labels: triaged, oncall: pt2, module: opcheck, module: pt2-dispatcher
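
#126870/#126871 are papercuts in `torch.library.opcheck`, the testing utility for custom ops. Typical usage against a toy op (the op here is illustrative):

```python
import torch

@torch.library.custom_op("mylib::add_one", mutates_args=())
def add_one(x: torch.Tensor) -> torch.Tensor:
    return x + 1

@add_one.register_fake
def _(x):
    return torch.empty_like(x)

# Runs a battery of checks (schema, FakeTensor rule, autograd registration,
# AOTDispatch) against the given sample inputs.
torch.library.opcheck(add_one, (torch.randn(3),))
```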
#126799 - Check for error messages on torch.compile with pybind'ed functions
Issue - State: closed - Opened by zou3519 over 1 year ago
Labels: triaged, enhancement, oncall: pt2, module: pt2-dispatcher
#126683 - Python version agnostic C++ extensions
Issue - State: open - Opened by zou3519 over 1 year ago
Labels: module: cpp-extensions, module: custom-operators, module: pt2-dispatcher
#125989 - torch.compile uses custom triton kernel, reports: RuntimeError: Inference tensors do not track version counter
Issue - State: closed - Opened by arthursunbao over 1 year ago - 6 comments
Labels: triaged, oncall: pt2, module: aotdispatch, module: pt2-dispatcher, module: user triton
#125745 - torch.compile error: Attempting to broadcast a dimension of length 2 at -1
Issue - State: closed - Opened by syheliel over 1 year ago - 5 comments
Labels: triaged, module: complex, oncall: pt2, module: decompositions, module: pt2-dispatcher, internal ramp-up task
#125671 - DISABLED test_view_and_inplace_view (__main__.TestAOTAutograd)
Issue - State: closed - Opened by pytorch-bot[bot] over 1 year ago - 2 comments
Labels: triaged, module: flaky-tests, skipped, oncall: pt2, module: aotdispatch, module: pt2-dispatcher
#125593 - DISABLED test_some_outputs_dont_require_grad_view (__main__.TestAOTAutograd)
Issue - State: closed - Opened by pytorch-bot[bot] over 1 year ago - 3 comments
Labels: triaged, module: flaky-tests, skipped, oncall: pt2, module: pt2-dispatcher
#125402 - DISABLED test_some_output_requires_grad_input_doesnt (__main__.TestAOTAutograd)
Issue - State: closed - Opened by izaitsevfb over 1 year ago - 2 comments
Labels: triaged, skipped, oncall: pt2, module: aotdispatch, module: pt2-dispatcher
#125078 - `torch.compile` fails with `jacfwd` when multiplying/dividing float and tensor
Pull Request - State: closed - Opened by cw-tan over 1 year ago - 9 comments
Labels: high priority, triaged, module: vmap, oncall: pt2, module: functorch, module: dynamo, module: pt2-dispatcher
#125044 - Support auto_functionalized for None returns
Issue - State: closed - Opened by zou3519 over 1 year ago - 1 comment
Labels: triaged, module: custom-operators, actionable, oncall: pt2, module: pt2-dispatcher, internal ramp-up task, module: reinplacing
#124933 - [RFC] Support reinplaceable ops for custom ops in Inductor
Issue - State: closed - Opened by jgong5 over 1 year ago - 14 comments
Labels: triaged, module: custom-operators, oncall: pt2, module: pt2-dispatcher, module: reinplacing
#124878 - Nested wrapper subclasses with torch.compile is broken
Pull Request - State: closed - Opened by bdhirsh over 1 year ago
Labels: triaged, module: __torch_dispatch__, tensor subclass, oncall: pt2, module: pt2-dispatcher
#124731 - torch_dispatch has unfaithful behavior w.r.t. wrapped numbers
Issue - State: closed - Opened by zou3519 over 1 year ago - 4 comments
Labels: high priority, triaged, oncall: pt2, module: pt2-dispatcher