
Commit 8796b3b

Auto merge of #149114 - BoxyUwU:mgca_adt_exprs, r=lcnr
MGCA: Support struct expressions without intermediary anon consts

r? oli-obk

tracking issue: #132980

Fixes #127972
Fixes #137888
Fixes #140275 (due to delaying a bug instead of ICEing in HIR ty lowering)

### High level goal

Under `feature(min_generic_const_args)` this PR adds another kind of const argument: a struct/variant construction const arg kind. We represent the values of the fields as themselves being const arguments, which allows uses of generic parameters subject to the existing restrictions present in `min_generic_const_args`:

```rust
fn foo<const N: Option<u32>>() {}

trait Trait {
    #[type_const]
    const ASSOC: u32;
}

fn bar<T: Trait, const N: u32>() {
    // the initializer of `_0` is `N`, which is a legal const argument,
    // so this is ok.
    foo::<{ Some::<u32> { 0: N } }>();

    // this is allowed as mgca supports uses of assoc consts in the
    // type system, i.e. `<T as Trait>::ASSOC` is a legal const argument
    foo::<{ Some::<u32> { 0: <T as Trait>::ASSOC } }>();

    // this on the other hand is not allowed, as `N + 1` is not a legal
    // const argument
    foo::<{ Some::<u32> { 0: N + 1 } }>();
}
```

This PR does not support uses of const ctors, e.g. `None`, and also does not support tuple constructors, e.g. `Some(N)`. I believe it would not be difficult to add support for these after this PR lands, so I have left them out deliberately.

We currently require that all generic parameters on the type being constructed be explicitly specified. I haven't really looked into why that is, but it doesn't seem desirable to me, as it should be legal to write `Some { ... }` in a const argument inside of a body and have that desugar to `Some::<_> { ... }`. Regardless, this can definitely be a follow-up PR, and I assume it is some underlying consistency with the way that elided args are handled with type paths elsewhere.

This PR's implementation of struct expressions is somewhat incomplete. We don't handle `Foo { ..expr }` at all and aren't handling privacy/stability. The printing of `ConstArgKind::Struct` HIR nodes doesn't really exist either :') I've deliberately kept the implementation somewhat incomplete, as I think a number of these issues are actually quite small and self-contained once this PR lands, and I'm hoping they could be a good set of issues to mentor newer contributors on 🤔 I just wanted the "bare minimum" required to actually demonstrate that the previous changes are "necessary".

### `ValTree` now recurses through `ty::Const`

To actually represent struct/variant construction in `ty::Const` without going through an anon const, we would need to introduce some new `ConstKind` variant, say a hypothetical `ConstKind::ADT(Ty<'tcx>, List<Const<'tcx>>)`. This variant would represent things the same way that `ValTree` does, with the first element representing the `VariantIdx` of the enum (if it's an enum), followed by a list of field values in definition order.

This *could* work, but there are a few reasons why it's suboptimal.

First, it would mean we have a second kind of `Const` that can be normalized. Right now we only have `ConstKind::Unevaluated` that possibly needs normalization; similarly, with `TyKind` we *only* have `TyKind::Alias`. If we introduced `ConstKind::ADT`, it would need to be normalized to a `ConstKind::Value` eventually. This feels to me like it has the potential to cause bugs in the long run where only `ConstKind::Unevaluated` is handled by some code paths.

Secondly, it would make type equality/inference kind of... weird. It's desirable for `Some { 0: ?x } eq Some { 0: 1_u32 }` to result in `?x = 1_u32`. I can't see a way for this to work with the `ConstKind::ADT` design under the current architecture for how we represent types/consts and generally do equality operations. We would need to wholly special-case these two variants in type equality and have a custom recursive walker separate from the existing architecture for doing type equality. It would also be somewhat unique in being a non-rigid `ty::Const` (it can be normalized further later on in type inference) while also having somewhat "structural" equality behaviour.

Lastly, it's worth noting that it's not *actually* `ConstKind::ADT` that we want. It's desirable to extend this setup to also support tuples and arrays, or even references if we wind up supporting those in const generics. Therefore this isn't really `ConstKind::ADT` but a more general `ConstKind::ShallowValue` or something to that effect. It represents at least one "layer" of a type's value :')

Instead of making that implementation choice, we change `ValTree::Branch`:

```rust
enum ValTree<'tcx> {
    Leaf(ScalarInt),
    // Before this PR:
    Branch(Box<[ValTree<'tcx>]>),
    // After this PR:
    Branch(Box<[Const<'tcx>]>),
}
```

The representation of these so-called "shallow values" is now the same as the representation of the *entire* full value. The desired inference/type equality behaviour just falls right out of this, we don't wind up with these shallow values actually being non-rigid, and `ValTree` *already* supports references/tuples/arrays, so we can handle those just fine.
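For illustration, here is a minimal, self-contained sketch of what this change buys, using toy stand-in types rather than the actual rustc definitions: because branch elements are now `Const`s, one layer of a value can be concrete while a field stays symbolic, e.g. the `N` in `Some::<u32> { 0: N }` from the example above.

```rust
// Toy model only; `Const` and `ValTree` here are stand-ins, not rustc's types.
#[derive(Clone, Debug, PartialEq)]
enum Const {
    Param(&'static str), // a still-generic parameter, e.g. `N`
    Value(ValTree),
}

#[derive(Clone, Debug, PartialEq)]
enum ValTree {
    Leaf(u128),
    // Before this PR the elements were `ValTree`s; now they are `Const`s,
    // so a field of an otherwise-concrete value can remain symbolic.
    Branch(Vec<Const>),
}

fn main() {
    // `Some::<u32> { 0: N }`: the variant index is already a value,
    // while field `0` is still the generic parameter `N`.
    let some_n = ValTree::Branch(vec![
        Const::Value(ValTree::Leaf(1)), // VariantIdx of `Some`
        Const::Param("N"),
    ]);
    println!("{some_n:?}");
}
```

Structural unification can then walk `Branch` elements like any other `Const`, which is how `Some { 0: ?x } eq Some { 0: 1_u32 }` infers `?x = 1_u32` without a special-cased walker.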
I think in the future it might be worth considering inlining `ValTree` into `ty::ConstKind`, e.g.:

```rust
enum ConstKind {
    Scalar(Ty<'tcx>, ScalarInt),
    ShallowValue(Ty<'tcx>, List<Const<'tcx>>),
    Unevaluated(UnevaluatedConst<'tcx>),
    ...
}
```

This would imply that the usage of `ValTree`s in patterns would now be using `ty::Const`, but they already kind of are anyway, and I think that's probably okay in the long run. It would also mean that the set of things we *could* represent in const patterns is greater, which may be desirable in the long run for supporting things such as const patterns of const generic parameters. Regardless, this PR doesn't actually inline `ValTree` into `ty::ConstKind`; it only changes `Branch` to recurse through `Const`. This change could be split out of this PR if desired.

I'm not sure if there'll be a perf impact from this change. It's somewhat plausible, as all const pattern values that have nesting will now be interning a lot more `Ty`s. We shall see :>

### Forbidding generic parameters under mgca

Under mgca we now allow all const arguments to resolve paths to generic parameters, and we then *later* validate that the const arg should have been allowed to access generic parameters, if it did wind up resolving to any. This winds up being a lot simpler to implement than trying to make name resolution "keep track" of whether we're inside a non-anon-const const arg and then encountering a `const { ... }` that indicates we should stop allowing resolution to generic parameters. It's also somewhat in line with what we'll need for a `feature(generic_const_args)`, where we'll want to decide whether an anon const should have any generic parameters based on whether any generic parameters were used syntactically. Though that design is entirely hypothetical at this point :)
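An illustrative example of the resolve-then-validate split described above (my own sketch, not a test from this PR): both const arguments below *resolve* to the parameter `M`, but only the second is rejected by the later validation pass.

```rust
#![feature(min_generic_const_args)]

fn foo<const N: usize>() {}

fn bar<const M: usize>() {
    // Resolves to `M` and passes the later validation: a bare parameter
    // is a legal const argument under mgca.
    foo::<M>();

    // Name resolution also lets `M` resolve here, but the validation pass
    // then rejects the argument: `M + 1` is not a legal const argument.
    foo::<{ M + 1 }>(); // ERROR (expected)
}
```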
### Followup Work

- Make HIR ty lowering check whether lowering generic parameters is supported, and if not, lower to an error type/const. This should make the code cleaner, fix some other bugs, and maybe(?) recover perf, since we'll be accessing fewer queries, which I think is part of the perf regression of this PR.
- Make the `ValTree` setup less scuffed. We should find a new name for `ConstKind::Value`, the `Val` part of `ValTree`, and `ty::Value`, as they no longer correspond to a fully normalized structure. It may also be worth looking into inlining `ValTreeKind` into `ConstKind`, or at least into `ty::Value` or some such 🤔
- Support tuple constructors and const constructors, not just struct expressions.
- Reduce code duplication between HIR ty lowering's handling of struct expressions and HIR typeck's handling of struct expressions.
- Try to fix perf (#149114 (comment)). Maybe this will clear up once we clean up `ValTree` a bit and stop doing double interning and whatnot.
2 parents 2ca7bcd + 79fd535 commit 8796b3b
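A note on the mechanical change that runs through the diffs below: chains like `expect_const().to_value().valtree.unwrap_branch()` become `to_branch()`, and `unwrap_leaf()` becomes `to_leaf()`. The accessor definitions themselves aren't shown in this listing; inferred from the call sites, they presumably look roughly like the following (a hypothetical sketch, not the actual rustc items; analogous accessors appear on `ty::Value` and `ty::ValTree` in the diffs).

```rust
impl<'tcx> Const<'tcx> {
    /// Expects `ConstKind::Value` holding a `ValTree::Branch`, whose
    /// elements are now `ty::Const`s rather than bare `ValTree`s.
    pub fn to_branch(self) -> &'tcx [Const<'tcx>] {
        self.to_value().valtree.to_branch()
    }

    /// Expects `ConstKind::Value` holding a `ValTree::Leaf`.
    pub fn to_leaf(self) -> ScalarInt {
        self.to_value().valtree.to_leaf()
    }
}
```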

File tree

80 files changed: +1163, -450 lines


compiler/rustc_ast_lowering/src/index.rs

Lines changed: 7 additions & 0 deletions
```diff
@@ -281,6 +281,13 @@ impl<'a, 'hir> Visitor<'hir> for NodeCollector<'a, 'hir> {
         });
     }
 
+    fn visit_const_arg_expr_field(&mut self, field: &'hir ConstArgExprField<'hir>) {
+        self.insert(field.span, field.hir_id, Node::ConstArgExprField(field));
+        self.with_parent(field.hir_id, |this| {
+            intravisit::walk_const_arg_expr_field(this, field);
+        })
+    }
+
     fn visit_stmt(&mut self, stmt: &'hir Stmt<'hir>) {
         self.insert(stmt.span, stmt.hir_id, Node::Stmt(stmt));
```

compiler/rustc_ast_lowering/src/lib.rs

Lines changed: 41 additions & 0 deletions
```diff
@@ -2410,6 +2410,47 @@ impl<'a, 'hir> LoweringContext<'a, 'hir> {
 
                 ConstArg { hir_id: self.next_id(), kind: hir::ConstArgKind::Path(qpath) }
             }
+            ExprKind::Struct(se) => {
+                let path = self.lower_qpath(
+                    expr.id,
+                    &se.qself,
+                    &se.path,
+                    // FIXME(mgca): we may want this to be `Optional` instead, but
+                    // we would also need to make sure that HIR ty lowering errors
+                    // when these paths wind up in signatures.
+                    ParamMode::Explicit,
+                    AllowReturnTypeNotation::No,
+                    ImplTraitContext::Disallowed(ImplTraitPosition::Path),
+                    None,
+                );
+
+                let fields = self.arena.alloc_from_iter(se.fields.iter().map(|f| {
+                    let hir_id = self.lower_node_id(f.id);
+                    // FIXME(mgca): This might result in lowering attributes that
+                    // then go unused as the `Target::ExprField` is not actually
+                    // corresponding to `Node::ExprField`.
+                    self.lower_attrs(hir_id, &f.attrs, f.span, Target::ExprField);
+
+                    let expr = if let ExprKind::ConstBlock(anon_const) = &f.expr.kind {
+                        let def_id = self.local_def_id(anon_const.id);
+                        let def_kind = self.tcx.def_kind(def_id);
+                        assert_eq!(DefKind::AnonConst, def_kind);
+
+                        self.lower_anon_const_to_const_arg_direct(anon_const)
+                    } else {
+                        self.lower_expr_to_const_arg_direct(&f.expr)
+                    };
+
+                    &*self.arena.alloc(hir::ConstArgExprField {
+                        hir_id,
+                        field: self.lower_ident(f.ident),
+                        expr: self.arena.alloc(expr),
+                        span: self.lower_span(f.span),
+                    })
+                }));
+
+                ConstArg { hir_id: self.next_id(), kind: hir::ConstArgKind::Struct(path, fields) }
+            }
             ExprKind::Underscore => ConstArg {
                 hir_id: self.lower_node_id(expr.id),
                 kind: hir::ConstArgKind::Infer(expr.span, ()),
```

compiler/rustc_codegen_cranelift/src/intrinsics/simd.rs

Lines changed: 8 additions & 11 deletions
```diff
@@ -130,7 +130,7 @@ pub(super) fn codegen_simd_intrinsic_call<'tcx>(
                 return;
             }
 
-            let idx = generic_args[2].expect_const().to_value().valtree.unwrap_branch();
+            let idx = generic_args[2].expect_const().to_branch();
 
             assert_eq!(x.layout(), y.layout());
             let layout = x.layout();
@@ -143,7 +143,7 @@ pub(super) fn codegen_simd_intrinsic_call<'tcx>(
 
             let total_len = lane_count * 2;
 
-            let indexes = idx.iter().map(|idx| idx.unwrap_leaf().to_u32()).collect::<Vec<u32>>();
+            let indexes = idx.iter().map(|idx| idx.to_leaf().to_u32()).collect::<Vec<u32>>();
 
             for &idx in &indexes {
                 assert!(u64::from(idx) < total_len, "idx {} out of range 0..{}", idx, total_len);
@@ -961,9 +961,8 @@ pub(super) fn codegen_simd_intrinsic_call<'tcx>(
             let lane_clif_ty = fx.clif_type(val_lane_ty).unwrap();
             let ptr_val = ptr.load_scalar(fx);
 
-            let alignment = generic_args[3].expect_const().to_value().valtree.unwrap_branch()[0]
-                .unwrap_leaf()
-                .to_simd_alignment();
+            let alignment =
+                generic_args[3].expect_const().to_branch()[0].to_leaf().to_simd_alignment();
 
             let memflags = match alignment {
                 SimdAlign::Unaligned => MemFlags::new().with_notrap(),
@@ -1006,9 +1005,8 @@ pub(super) fn codegen_simd_intrinsic_call<'tcx>(
             let lane_clif_ty = fx.clif_type(val_lane_ty).unwrap();
             let ret_lane_layout = fx.layout_of(ret_lane_ty);
 
-            let alignment = generic_args[3].expect_const().to_value().valtree.unwrap_branch()[0]
-                .unwrap_leaf()
-                .to_simd_alignment();
+            let alignment =
+                generic_args[3].expect_const().to_branch()[0].to_leaf().to_simd_alignment();
 
             let memflags = match alignment {
                 SimdAlign::Unaligned => MemFlags::new().with_notrap(),
@@ -1059,9 +1057,8 @@ pub(super) fn codegen_simd_intrinsic_call<'tcx>(
             let ret_lane_layout = fx.layout_of(ret_lane_ty);
             let ptr_val = ptr.load_scalar(fx);
 
-            let alignment = generic_args[3].expect_const().to_value().valtree.unwrap_branch()[0]
-                .unwrap_leaf()
-                .to_simd_alignment();
+            let alignment =
+                generic_args[3].expect_const().to_branch()[0].to_leaf().to_simd_alignment();
 
             let memflags = match alignment {
                 SimdAlign::Unaligned => MemFlags::new().with_notrap(),
```

compiler/rustc_codegen_llvm/src/intrinsic.rs

Lines changed: 5 additions & 9 deletions
```diff
@@ -345,7 +345,7 @@ impl<'ll, 'tcx> IntrinsicCallBuilderMethods<'tcx> for Builder<'_, 'll, 'tcx> {
                 _ => bug!(),
             };
             let ptr = args[0].immediate();
-            let locality = fn_args.const_at(1).to_value().valtree.unwrap_leaf().to_i32();
+            let locality = fn_args.const_at(1).to_leaf().to_i32();
             self.call_intrinsic(
                 "llvm.prefetch",
                 &[self.val_ty(ptr)],
@@ -1527,7 +1527,7 @@ fn generic_simd_intrinsic<'ll, 'tcx>(
    }
 
    if name == sym::simd_shuffle_const_generic {
-        let idx = fn_args[2].expect_const().to_value().valtree.unwrap_branch();
+        let idx = fn_args[2].expect_const().to_branch();
        let n = idx.len() as u64;
 
        let (out_len, out_ty) = require_simd!(ret_ty, SimdReturn);
@@ -1546,7 +1546,7 @@ fn generic_simd_intrinsic<'ll, 'tcx>(
            .iter()
            .enumerate()
            .map(|(arg_idx, val)| {
-                let idx = val.unwrap_leaf().to_i32();
+                let idx = val.to_leaf().to_i32();
                if idx >= i32::try_from(total_len).unwrap() {
                    bx.sess().dcx().emit_err(InvalidMonomorphization::SimdIndexOutOfBounds {
                        span,
@@ -1958,9 +1958,7 @@ fn generic_simd_intrinsic<'ll, 'tcx>(
        // those lanes whose `mask` bit is enabled.
        // The memory addresses corresponding to the “off” lanes are not accessed.
 
-        let alignment = fn_args[3].expect_const().to_value().valtree.unwrap_branch()[0]
-            .unwrap_leaf()
-            .to_simd_alignment();
+        let alignment = fn_args[3].expect_const().to_branch()[0].to_leaf().to_simd_alignment();
 
        // The element type of the "mask" argument must be a signed integer type of any width
        let mask_ty = in_ty;
@@ -2053,9 +2051,7 @@ fn generic_simd_intrinsic<'ll, 'tcx>(
        // those lanes whose `mask` bit is enabled.
        // The memory addresses corresponding to the “off” lanes are not accessed.
 
-        let alignment = fn_args[3].expect_const().to_value().valtree.unwrap_branch()[0]
-            .unwrap_leaf()
-            .to_simd_alignment();
+        let alignment = fn_args[3].expect_const().to_branch()[0].to_leaf().to_simd_alignment();
 
        // The element type of the "mask" argument must be a signed integer type of any width
        let mask_ty = in_ty;
```

compiler/rustc_codegen_ssa/src/mir/constant.rs

Lines changed: 9 additions & 10 deletions
```diff
@@ -77,22 +77,21 @@ impl<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>> FunctionCx<'a, 'tcx, Bx> {
             .flatten()
             .map(|val| {
                 // A SIMD type has a single field, which is an array.
-                let fields = val.unwrap_branch();
+                let fields = val.to_branch();
                 assert_eq!(fields.len(), 1);
-                let array = fields[0].unwrap_branch();
+                let array = fields[0].to_branch();
                 // Iterate over the array elements to obtain the values in the vector.
                 let values: Vec<_> = array
                     .iter()
                     .map(|field| {
-                        if let Some(prim) = field.try_to_scalar() {
-                            let layout = bx.layout_of(field_ty);
-                            let BackendRepr::Scalar(scalar) = layout.backend_repr else {
-                                bug!("from_const: invalid ByVal layout: {:#?}", layout);
-                            };
-                            bx.scalar_to_backend(prim, scalar, bx.immediate_backend_type(layout))
-                        } else {
+                        let Some(prim) = field.try_to_scalar() else {
                             bug!("field is not a scalar {:?}", field)
-                        }
+                        };
+                        let layout = bx.layout_of(field_ty);
+                        let BackendRepr::Scalar(scalar) = layout.backend_repr else {
+                            bug!("from_const: invalid ByVal layout: {:#?}", layout);
+                        };
+                        bx.scalar_to_backend(prim, scalar, bx.immediate_backend_type(layout))
                     })
                     .collect();
                 bx.const_vector(&values)
```

compiler/rustc_codegen_ssa/src/mir/intrinsic.rs

Lines changed: 1 addition & 1 deletion
```diff
@@ -102,7 +102,7 @@ impl<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>> FunctionCx<'a, 'tcx, Bx> {
         };
 
         let parse_atomic_ordering = |ord: ty::Value<'tcx>| {
-            let discr = ord.valtree.unwrap_branch()[0].unwrap_leaf();
+            let discr = ord.to_branch()[0].to_leaf();
             discr.to_atomic_ordering()
         };
```

compiler/rustc_const_eval/src/const_eval/valtrees.rs

Lines changed: 18 additions & 13 deletions
```diff
@@ -36,13 +36,17 @@ fn branches<'tcx>(
     // For enums, we prepend their variant index before the variant's fields so we can figure out
     // the variant again when just seeing a valtree.
     if let Some(variant) = variant {
-        branches.push(ty::ValTree::from_scalar_int(*ecx.tcx, variant.as_u32().into()));
+        branches.push(ty::Const::new_value(
+            *ecx.tcx,
+            ty::ValTree::from_scalar_int(*ecx.tcx, variant.as_u32().into()),
+            ecx.tcx.types.u32,
+        ));
     }
 
     for i in 0..field_count {
         let field = ecx.project_field(&place, FieldIdx::from_usize(i)).unwrap();
         let valtree = const_to_valtree_inner(ecx, &field, num_nodes)?;
-        branches.push(valtree);
+        branches.push(ty::Const::new_value(*ecx.tcx, valtree, field.layout.ty));
     }
 
     // Have to account for ZSTs here
@@ -65,7 +69,7 @@ fn slice_branches<'tcx>(
     for i in 0..n {
         let place_elem = ecx.project_index(place, i).unwrap();
         let valtree = const_to_valtree_inner(ecx, &place_elem, num_nodes)?;
-        elems.push(valtree);
+        elems.push(ty::Const::new_value(*ecx.tcx, valtree, place_elem.layout.ty));
     }
 
     Ok(ty::ValTree::from_branches(*ecx.tcx, elems))
@@ -200,8 +204,8 @@ fn reconstruct_place_meta<'tcx>(
         &ObligationCause::dummy(),
         |ty| ty,
         || {
-            let branches = last_valtree.unwrap_branch();
-            last_valtree = *branches.last().unwrap();
+            let branches = last_valtree.to_branch();
+            last_valtree = branches.last().unwrap().to_value().valtree;
             debug!(?branches, ?last_valtree);
         },
     );
@@ -212,7 +216,7 @@ fn reconstruct_place_meta<'tcx>(
     };
 
     // Get the number of elements in the unsized field.
-    let num_elems = last_valtree.unwrap_branch().len();
+    let num_elems = last_valtree.to_branch().len();
     MemPlaceMeta::Meta(Scalar::from_target_usize(num_elems as u64, &tcx))
 }
 
@@ -274,7 +278,7 @@ pub fn valtree_to_const_value<'tcx>(
             mir::ConstValue::ZeroSized
         }
         ty::Bool | ty::Int(_) | ty::Uint(_) | ty::Float(_) | ty::Char | ty::RawPtr(_, _) => {
-            mir::ConstValue::Scalar(Scalar::Int(cv.valtree.unwrap_leaf()))
+            mir::ConstValue::Scalar(Scalar::Int(cv.to_leaf()))
        }
        ty::Pat(ty, _) => {
            let cv = ty::Value { valtree: cv.valtree, ty };
@@ -301,12 +305,13 @@ pub fn valtree_to_const_value<'tcx>(
                 || matches!(cv.ty.kind(), ty::Adt(def, _) if def.is_struct()))
             {
                 // A Scalar tuple/struct; we can avoid creating an allocation.
-                let branches = cv.valtree.unwrap_branch();
+                let branches = cv.to_branch();
                 // Find the non-ZST field. (There can be aligned ZST!)
                 for (i, &inner_valtree) in branches.iter().enumerate() {
                     let field = layout.field(&LayoutCx::new(tcx, typing_env), i);
                     if !field.is_zst() {
-                        let cv = ty::Value { valtree: inner_valtree, ty: field.ty };
+                        let cv =
+                            ty::Value { valtree: inner_valtree.to_value().valtree, ty: field.ty };
                         return valtree_to_const_value(tcx, typing_env, cv);
                     }
                 }
@@ -381,7 +386,7 @@ fn valtree_into_mplace<'tcx>(
             // Zero-sized type, nothing to do.
         }
         ty::Bool | ty::Int(_) | ty::Uint(_) | ty::Float(_) | ty::Char | ty::RawPtr(..) => {
-            let scalar_int = valtree.unwrap_leaf();
+            let scalar_int = valtree.to_leaf();
             debug!("writing trivial valtree {:?} to place {:?}", scalar_int, place);
             ecx.write_immediate(Immediate::Scalar(scalar_int.into()), place).unwrap();
         }
@@ -391,13 +396,13 @@ fn valtree_into_mplace<'tcx>(
             ecx.write_immediate(imm, place).unwrap();
         }
         ty::Adt(_, _) | ty::Tuple(_) | ty::Array(_, _) | ty::Str | ty::Slice(_) => {
-            let branches = valtree.unwrap_branch();
+            let branches = valtree.to_branch();
 
             // Need to downcast place for enums
             let (place_adjusted, branches, variant_idx) = match ty.kind() {
                 ty::Adt(def, _) if def.is_enum() => {
                     // First element of valtree corresponds to variant
-                    let scalar_int = branches[0].unwrap_leaf();
+                    let scalar_int = branches[0].to_leaf();
                     let variant_idx = VariantIdx::from_u32(scalar_int.to_u32());
                     let variant = def.variant(variant_idx);
                     debug!(?variant);
@@ -425,7 +430,7 @@ fn valtree_into_mplace<'tcx>(
             };
 
             debug!(?place_inner);
-            valtree_into_mplace(ecx, &place_inner, *inner_valtree);
+            valtree_into_mplace(ecx, &place_inner, inner_valtree.to_value().valtree);
             dump_place(ecx, &place_inner);
         }
 
```

compiler/rustc_const_eval/src/interpret/intrinsics/simd.rs

Lines changed: 4 additions & 8 deletions
```diff
@@ -545,15 +545,15 @@ impl<'tcx, M: Machine<'tcx>> InterpCx<'tcx, M> {
                 let (right, right_len) = self.project_to_simd(&args[1])?;
                 let (dest, dest_len) = self.project_to_simd(&dest)?;
 
-                let index = generic_args[2].expect_const().to_value().valtree.unwrap_branch();
+                let index = generic_args[2].expect_const().to_branch();
                 let index_len = index.len();
 
                 assert_eq!(left_len, right_len);
                 assert_eq!(u64::try_from(index_len).unwrap(), dest_len);
 
                 for i in 0..dest_len {
                     let src_index: u64 =
-                        index[usize::try_from(i).unwrap()].unwrap_leaf().to_u32().into();
+                        index[usize::try_from(i).unwrap()].to_leaf().to_u32().into();
                     let dest = self.project_index(&dest, i)?;
 
                     let val = if src_index < left_len {
@@ -657,9 +657,7 @@ impl<'tcx, M: Machine<'tcx>> InterpCx<'tcx, M> {
                 self.check_simd_ptr_alignment(
                     ptr,
                     dest_layout,
-                    generic_args[3].expect_const().to_value().valtree.unwrap_branch()[0]
-                        .unwrap_leaf()
-                        .to_simd_alignment(),
+                    generic_args[3].expect_const().to_branch()[0].to_leaf().to_simd_alignment(),
                 )?;
 
                 for i in 0..dest_len {
@@ -689,9 +687,7 @@ impl<'tcx, M: Machine<'tcx>> InterpCx<'tcx, M> {
                 self.check_simd_ptr_alignment(
                     ptr,
                     args[2].layout,
-                    generic_args[3].expect_const().to_value().valtree.unwrap_branch()[0]
-                        .unwrap_leaf()
-                        .to_simd_alignment(),
+                    generic_args[3].expect_const().to_branch()[0].to_leaf().to_simd_alignment(),
                 )?;
 
                 for i in 0..vals_len {
```
