Vector Multiply Subtract multiplies corresponding elements in two vectors, subtracts the products from corresponding elements of the destination vector, and places the results in the destination vector.
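For orientation, the following is a hedged sketch of the same per-lane behaviour using the Arm C Language Extensions. It assumes a compiler targeting AArch32 with Advanced SIMD; the `vmlsq_f32` intrinsic from `<arm_neon.h>` computes `acc - (n * m)` lane by lane and is commonly, though not necessarily, assembled to the 128-bit VMLS.F32 encoding described below. The wrapper function name is illustrative only.

```c
#include <arm_neon.h>

/* Hedged sketch: per-lane multiply-subtract, acc[i] - (n[i] * m[i]).
 * Only the vmlsq_f32 intrinsic comes from <arm_neon.h>; the wrapper
 * name is invented for this example. */
static inline float32x4_t multiply_subtract_f32(float32x4_t acc,
                                                float32x4_t n,
                                                float32x4_t m)
{
    return vmlsq_f32(acc, n, m);   /* lane i: acc[i] - (n[i] * m[i]) */
}
```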
Arm recommends that software does not use the VMLS instruction in the Round towards Plus Infinity and Round towards Minus Infinity rounding modes, because the rounding of the product and of the sum can change the result of the instruction in opposite directions, defeating the purpose of these rounding modes.
Depending on settings in the CPACR, NSACR, HCPTR, and FPEXC registers, and the Security state and PE mode in which the instruction is executed, an attempt to execute the instruction might be undefined, or trapped to Hyp mode. For more information see Enabling Advanced SIMD and floating-point support.
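For illustration only, the sketch below shows one common bare-metal enable sequence, assuming execution at PL1 with no NSACR or HCPTR trapping configured; it is not a substitute for the referenced section, and the function name is invented for this example.

```c
#include <stdint.h>

/* Hedged sketch: grant full access to CP10/CP11 in CPACR, then set
 * FPEXC.EN. Register encodings follow the Armv7-A system register
 * descriptions; NSACR/HCPTR configuration and error handling are omitted. */
static inline void enable_advsimd_fp(void)
{
    uint32_t cpacr;
    __asm__ volatile ("mrc p15, 0, %0, c1, c0, 2" : "=r" (cpacr));
    cpacr |= (0xFu << 20);                        /* cp10, cp11: full access */
    __asm__ volatile ("mcr p15, 0, %0, c1, c0, 2" :: "r" (cpacr));
    __asm__ volatile ("isb");
    __asm__ volatile ("vmsr fpexc, %0" :: "r" (1u << 30));  /* FPEXC.EN = 1 */
}
```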
It has encodings from the following instruction sets: A32 (A1 and A2) and T32 (T1 and T2).
A1 (Advanced SIMD):

| 31-24    | 23 | 22 | 21     | 20 | 19-16 | 15-12 | 11-8 | 7 | 6 | 5 | 4 | 3-0 |
|----------|----|----|--------|----|-------|-------|------|---|---|---|---|-----|
| 11110010 | 0  | D  | op = 1 | sz | Vn    | Vd    | 1101 | N | Q | M | 1 | Vm  |
if Q == '1' && (Vd<0> == '1' || Vn<0> == '1' || Vm<0> == '1') then UNDEFINED;
if sz == '1' && !HaveFP16Ext() then UNDEFINED;
advsimd = TRUE;  add = (op == '0');
integer esize;  integer elements;
case sz of
    when '0'  esize = 32;  elements = 2;
    when '1'  esize = 16;  elements = 4;
d = UInt(D:Vd);  n = UInt(N:Vn);  m = UInt(M:Vm);
regs = if Q == '0' then 1 else 2;
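The register-index arithmetic in this decode can be restated in plain C. The sketch below is illustrative only: the struct and function names are invented, and it mirrors the A1 (and identical T1) decode above rather than defining it.

```c
#include <stdbool.h>

/* Hedged restatement of the A1/T1 decode: the Advanced SIMD register index
 * is D:Vd (D is the most significant bit), and Q selects whether one or two
 * 64-bit registers are operated on per operand. */
struct vmls_a1_decode {
    unsigned d, n, m;   /* D register indices */
    unsigned esize;     /* element size in bits */
    unsigned elements;  /* elements per 64-bit register */
    unsigned regs;      /* 64-bit registers per operand */
};

static bool decode_vmls_a1(unsigned D, unsigned Vd, unsigned N, unsigned Vn,
                           unsigned M, unsigned Vm, unsigned Q, unsigned sz,
                           bool have_fp16, struct vmls_a1_decode *out)
{
    if (Q == 1 && (((Vd | Vn | Vm) & 1u) != 0))
        return false;                           /* UNDEFINED */
    if (sz == 1 && !have_fp16)
        return false;                           /* UNDEFINED */
    out->esize    = (sz == 1) ? 16 : 32;
    out->elements = (sz == 1) ? 4 : 2;
    out->d = (D << 4) | Vd;                     /* UInt(D:Vd) */
    out->n = (N << 4) | Vn;                     /* UInt(N:Vn) */
    out->m = (M << 4) | Vm;                     /* UInt(M:Vm) */
    out->regs = (Q == 0) ? 1 : 2;
    return true;
}
```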
A2 (VFP):

| 31-28          | 27-24 | 23 | 22 | 21 | 20 | 19-16 | 15-12 | 11-10 | 9-8  | 7 | 6      | 5 | 4 | 3-0 |
|----------------|-------|----|----|----|----|-------|-------|-------|------|---|--------|---|---|-----|
| cond (!= 1111) | 1110  | 0  | D  | 0  | 0  | Vn    | Vd    | 10    | size | N | op = 1 | M | 0 | Vm  |
if size == '00' || (size == '01' && !HaveFP16Ext()) then UNDEFINED;
if size == '01' && cond != '1110' then UNPREDICTABLE;
if FPSCR.Len != '000' || FPSCR.Stride != '00' then UNDEFINED;
advsimd = FALSE;  add = (op == '0');
integer esize;  integer d;  integer n;  integer m;
case size of
    when '01'  esize = 16;  d = UInt(Vd:D);  n = UInt(Vn:N);  m = UInt(Vm:M);
    when '10'  esize = 32;  d = UInt(Vd:D);  n = UInt(Vn:N);  m = UInt(Vm:M);
    when '11'  esize = 64;  d = UInt(D:Vd);  n = UInt(N:Vn);  m = UInt(M:Vm);
boolean floating_point = boolean UNKNOWN;
integer regs = integer UNKNOWN;
integer elements = integer UNKNOWN;
If size == '01' && cond != '1110', then one of the following behaviors must occur:

- The instruction is UNDEFINED.
- The instruction executes as if it passes the Condition code check.
- The instruction executes as NOP.
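One detail of the A2 (and T2) decode worth calling out is that the register index is formed differently for half-precision and single-precision operands than for double-precision ones. A hedged C restatement, with invented helper names:

```c
/* For 16-bit and 32-bit data the S register index is Vd:D, that is, Vd
 * supplies the upper four bits; for 64-bit data the D register index is D:Vd. */
static inline unsigned vfp_sreg(unsigned Vd, unsigned D) { return (Vd << 1) | D; }
static inline unsigned vfp_dreg(unsigned D, unsigned Vd) { return (D << 4) | Vd; }
```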
T1 (Advanced SIMD), first and second halfword:

| 15-12 | 11-8 | 7 | 6 | 5      | 4  | 3-0 | 15-12 | 11-8 | 7 | 6 | 5 | 4 | 3-0 |
|-------|------|---|---|--------|----|-----|-------|------|---|---|---|---|-----|
| 1110  | 1111 | 0 | D | op = 1 | sz | Vn  | Vd    | 1101 | N | Q | M | 1 | Vm  |
if Q == '1' && (Vd<0> == '1' || Vn<0> == '1' || Vm<0> == '1') then UNDEFINED;
if sz == '1' && !HaveFP16Ext() then UNDEFINED;
if sz == '1' && InITBlock() then UNPREDICTABLE;
advsimd = TRUE;  add = (op == '0');
integer esize;  integer elements;
case sz of
    when '0'  esize = 32;  elements = 2;
    when '1'  esize = 16;  elements = 4;
d = UInt(D:Vd);  n = UInt(N:Vn);  m = UInt(M:Vm);
regs = if Q == '0' then 1 else 2;
If sz == '1' && InITBlock(), then one of the following behaviors must occur:

- The instruction is UNDEFINED.
- The instruction executes as if it passes the Condition code check.
- The instruction executes as NOP.
T2 (VFP), first and second halfword:

| 15-12 | 11-8 | 7 | 6 | 5 | 4 | 3-0 | 15-12 | 11-10 | 9-8  | 7 | 6      | 5 | 4 | 3-0 |
|-------|------|---|---|---|---|-----|-------|-------|------|---|--------|---|---|-----|
| 1110  | 1110 | 0 | D | 0 | 0 | Vn  | Vd    | 10    | size | N | op = 1 | M | 0 | Vm  |
if size == '00' || (size == '01' && !HaveFP16Ext()) then UNDEFINED;
if size == '01' && InITBlock() then UNPREDICTABLE;
if FPSCR.Len != '000' || FPSCR.Stride != '00' then UNDEFINED;
advsimd = FALSE;  add = (op == '0');
integer esize;  integer d;  integer n;  integer m;
case size of
    when '01'  esize = 16;  d = UInt(Vd:D);  n = UInt(Vn:N);  m = UInt(Vm:M);
    when '10'  esize = 32;  d = UInt(Vd:D);  n = UInt(Vn:N);  m = UInt(Vm:M);
    when '11'  esize = 64;  d = UInt(D:Vd);  n = UInt(N:Vn);  m = UInt(M:Vm);
boolean floating_point = boolean UNKNOWN;
integer regs = integer UNKNOWN;
integer elements = integer UNKNOWN;
If size == '01' && InITBlock(), then one of the following behaviors must occur:

- The instruction is UNDEFINED.
- The instruction executes as if it passes the Condition code check.
- The instruction executes as NOP.
<c>
For encoding A1: see Standard assembler syntax fields. This encoding must be unconditional.
For encoding A2, T1 and T2: see Standard assembler syntax fields.

<q>
See Standard assembler syntax fields.

<dt>
Is the data type for the elements of the vectors. For encodings A1 and T1 it is encoded in the "sz" field: F16 when sz == '1', F32 when sz == '0'. For encodings A2 and T2 it is encoded in the "size" field: F16 when size == '01', F32 when size == '10', F64 when size == '11'.
<Qd>
Is the 128-bit name of the SIMD&FP destination register, encoded in the "D:Vd" field as <Qd>*2.

<Qn>
Is the 128-bit name of the first SIMD&FP source register, encoded in the "N:Vn" field as <Qn>*2.

<Qm>
Is the 128-bit name of the second SIMD&FP source register, encoded in the "M:Vm" field as <Qm>*2.

<Dd>
Is the 64-bit name of the SIMD&FP destination register, encoded in the "D:Vd" field.

<Dn>
Is the 64-bit name of the first SIMD&FP source register, encoded in the "N:Vn" field.

<Dm>
Is the 64-bit name of the second SIMD&FP source register, encoded in the "M:Vm" field.

<Sd>
Is the 32-bit name of the SIMD&FP destination register, encoded in the "Vd:D" field.

<Sn>
Is the 32-bit name of the first SIMD&FP source register, encoded in the "Vn:N" field.

<Sm>
Is the 32-bit name of the second SIMD&FP source register, encoded in the "Vm:M" field.
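The naming rules above can be summarised in code. The following hedged sketch prints the destination operand name under each scheme; the function is invented for this example and is not part of any Arm-provided API.

```c
#include <stdio.h>

/* <Sd> uses the Vd:D index, <Dd> uses D:Vd, and <Qd> is the D:Vd index
 * halved, because the encoding stores <Qd>*2. */
static void print_destination(unsigned D, unsigned Vd, unsigned Q, int vfp_single)
{
    unsigned dvd = (D << 4) | Vd;
    if (vfp_single)
        printf("s%u\n", (Vd << 1) | D);   /* <Sd> */
    else if (Q)
        printf("q%u\n", dvd / 2);         /* <Qd> */
    else
        printf("d%u\n", dvd);             /* <Dd> */
}
```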
if ConditionPassed() then
    EncodingSpecificOperations();
    CheckAdvSIMDOrVFPEnabled(TRUE, advsimd);
    if advsimd then    // Advanced SIMD instruction
        for r = 0 to regs-1
            for e = 0 to elements-1
                product = FPMul(Elem[D[n+r],e,esize], Elem[D[m+r],e,esize], StandardFPSCRValue());
                addend = if add then product else FPNeg(product);
                Elem[D[d+r],e,esize] = FPAdd(Elem[D[d+r],e,esize], addend, StandardFPSCRValue());
    else               // VFP instruction
        case esize of
            when 16
                addend16 = (if add then FPMul(S[n]<15:0>, S[m]<15:0>, FPSCR[])
                            else FPNeg(FPMul(S[n]<15:0>, S[m]<15:0>, FPSCR[])));
                S[d] = Zeros(16) : FPAdd(S[d]<15:0>, addend16, FPSCR[]);
            when 32
                addend32 = (if add then FPMul(S[n], S[m], FPSCR[])
                            else FPNeg(FPMul(S[n], S[m], FPSCR[])));
                S[d] = FPAdd(S[d], addend32, FPSCR[]);
            when 64
                addend64 = (if add then FPMul(D[n], D[m], FPSCR[])
                            else FPNeg(FPMul(D[n], D[m], FPSCR[])));
                D[d] = FPAdd(D[d], addend64, FPSCR[]);
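To make the Advanced SIMD path concrete, here is a hedged scalar reference model in C for the single-precision case. It deliberately ignores the effects of StandardFPSCRValue() (flush-to-zero, default NaN, round-to-nearest) and exception reporting, and simply uses the host's float arithmetic, so it illustrates the dataflow rather than bit-exact behaviour.

```c
/* Hedged reference model of the Advanced SIMD path, single precision only.
 * 'elements' is 2 per D register operand or 4 per Q register operand. */
static void vmls_f32_ref(float *d, const float *n, const float *m,
                         unsigned elements)
{
    for (unsigned e = 0; e < elements; e++) {
        float product = n[e] * m[e];   /* FPMul */
        d[e] = d[e] - product;         /* FPAdd of FPNeg(product): VMLS */
    }
}
```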
Copyright © 2010-2023 Arm Limited or its affiliates. All rights reserved. This document is Non-Confidential.