llvm.org GIT mirror llvm / e369611
Implementation of asm-goto support in LLVM

This patch accompanies the RFC posted here:
http://lists.llvm.org/pipermail/llvm-dev/2018-October/127239.html

This patch adds a new CallBr IR instruction to support asm-goto inline
assembly like GCC's, as used by the Linux kernel. This instruction is both
a call instruction and a terminator instruction with multiple successors.
Only inline assembly usage is supported today.

This also adds a new INLINEASM_BR opcode to SelectionDAG and MachineIR to
represent an INLINEASM block that is also considered a terminator
instruction.

There will likely be more bug fixes and optimizations to follow this, but
we felt it had reached a point where we would like to switch to an
incremental development model.

Patch by Craig Topper, Alexander Ivchenko, Mikhail Dvoretckii

Differential Revision: https://reviews.llvm.org/D53765

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@353563 91177308-0d34-0410-b5e6-96231b3b80d8

Craig Topper, 7 months ago
87 changed file(s) with 1814 addition(s) and 221 deletion(s).
65126512 The terminator instructions are: ':ref:`ret <i_ret>`',
65136513 ':ref:`br <i_br>`', ':ref:`switch <i_switch>`',
65146514 ':ref:`indirectbr <i_indirectbr>`', ':ref:`invoke <i_invoke>`',
6515 ':ref:`callbr <i_callbr>`',
65156516 ':ref:`resume <i_resume>`', ':ref:`catchswitch <i_catchswitch>`',
65166517 ':ref:`catchret <i_catchret>`',
65176518 ':ref:`cleanupret <i_cleanupret>`',
68366837 %retval = invoke coldcc i32 %Testfnptr(i32 15) to label %Continue
68376838 unwind label %TestCleanup ; i32:retval set
68386839
6840 .. _i_callbr:
6841
6842 '``callbr``' Instruction
6843 ^^^^^^^^^^^^^^^^^^^^^^^^
6844
6845 Syntax:
6846 """""""
6847
6848 ::
6849
6850       <result> = callbr [cconv] [ret attrs] [addrspace(<num>)] <ty>|<fnty> <fnptrval>(<function args>) [fn attrs]
6851                     [operand bundles] to label <normal label> or jump [other labels]
6852
6853 Overview:
6854 """""""""
6855
6856 The '``callbr``' instruction causes control to transfer to a specified
6857 function, with the possibility of control flow transfer to either the
6858 '``normal``' label or one of the '``other``' labels.
6859
6860 This instruction should only be used to implement the "goto" feature of gcc
6861 style inline assembly. Any other usage is an error in the IR verifier.
6862
6863 Arguments:
6864 """"""""""
6865
6866 This instruction requires several arguments:
6867
6868 #. The optional "cconv" marker indicates which :ref:`calling
6869    convention <callingconv>` the call should use. If none is
6870    specified, the call defaults to using C calling conventions.
6871 #. The optional :ref:`Parameter Attributes <paramattrs>` list for return
6872    values. Only '``zeroext``', '``signext``', and '``inreg``' attributes
6873    are valid here.
6874 #. The optional addrspace attribute can be used to indicate the address space
6875    of the called function. If it is not specified, the program address space
6876    from the :ref:`datalayout string<langref_datalayout>` will be used.
6877 #. '``ty``': the type of the call instruction itself which is also the
6878 type of the return value. Functions that return no value are marked
6879 ``void``.
6880 #. '``fnty``': shall be the signature of the function being called. The
6881 argument types must match the types implied by this signature. This
6882 type can be omitted if the function is not varargs.
6883 #. '``fnptrval``': An LLVM value containing a pointer to a function to
6884 be called. In most cases, this is a direct function call, but
6885 indirect ``callbr``'s are just as possible, calling an arbitrary pointer
6886 to function value.
6887 #. '``function args``': argument list whose types match the function
6888 signature argument types and parameter attributes. All arguments must
6889    be of :ref:`first class <t_firstclass>` type. If the function signature
6890 indicates the function accepts a variable number of arguments, the
6891 extra arguments can be specified.
6892 #. '``normal label``': the label reached when the called function
6893 executes a '``ret``' instruction.
6894 #. '``other labels``': the labels reached when a callee transfers control
6895    to a location other than the '``normal label``'.
6896 #. The optional :ref:`function attributes <fnattrs>` list.
6897 #. The optional :ref:`operand bundles <opbundles>` list.
6898
6899 Semantics:
6900 """"""""""
6901
6902 This instruction is designed to operate as a standard '``call``'
6903 instruction in most regards. The primary difference is that it
6904 establishes an association with additional labels to define where control
6905 flow goes after the call.
6906
6907 The only use of this today is to implement the "goto" feature of gcc inline
6908 assembly where additional labels can be provided as locations for the inline
6909 assembly to jump to.
6910
6911 Example:
6912 """"""""
6913
6914 .. code-block:: llvm
6915
6916 callbr void asm "", "r,x"(i32 %x, i8 *blockaddress(@foo, %fail))
6917 to label %normal or jump [label %fail]
6918
68396919 .. _i_resume:
68406920
68416921 '``resume``' Instruction
328328 return;
329329 }
330330
331 if (TI.isExceptionalTerminator()) {
332 Succs.assign(Succs.size(), true);
333 return;
334 }
335
336       if (isa<IndirectBrInst>(TI)) {
331 if (TI.isExceptionalTerminator() ||
332 TI.isIndirectTerminator()) {
337333 Succs.assign(Succs.size(), true);
338334 return;
339335 }
534534 // 54 is unused.
535535 FUNC_CODE_OPERAND_BUNDLE = 55, // OPERAND_BUNDLE: [tag#, value...]
536536 FUNC_CODE_INST_UNOP = 56, // UNOP: [opcode, ty, opval]
537 FUNC_CODE_INST_CALLBR = 57, // CALLBR: [attr, cc, norm, transfs,
538 // fnty, fnid, args...]
537539 };
538540
539541 enum UseListCodes {
251251
252252 bool translateInvoke(const User &U, MachineIRBuilder &MIRBuilder);
253253
254 bool translateCallBr(const User &U, MachineIRBuilder &MIRBuilder);
255
254256 bool translateLandingPad(const User &U, MachineIRBuilder &MIRBuilder);
255257
256258 /// Translate one of LLVM's cast instructions into MachineInstrs, with the
665665 /// modes as a single "operand", even though they may have multiple
666666 /// SDOperands.
667667 INLINEASM,
668
669 /// INLINEASM_BR - Terminator version of inline asm. Used by asm-goto.
670 INLINEASM_BR,
668671
669672 /// EH_LABEL - Represents a label in mid basic block used to track
670673 /// locations needed for debug and exception handling tables. These nodes
10101010 }
10111011 bool isKill() const { return getOpcode() == TargetOpcode::KILL; }
10121012 bool isImplicitDef() const { return getOpcode()==TargetOpcode::IMPLICIT_DEF; }
1013 bool isInlineAsm() const { return getOpcode() == TargetOpcode::INLINEASM; }
1013 bool isInlineAsm() const {
1014 return getOpcode() == TargetOpcode::INLINEASM ||
1015 getOpcode() == TargetOpcode::INLINEASM_BR;
1016 }
10141017
10151018 bool isMSInlineAsm() const {
10161019 return isInlineAsm() && getInlineAsmDialect() == InlineAsm::AD_Intel;
301301 private:
302302
303303 // Calls to these functions are generated by tblgen.
304 void Select_INLINEASM(SDNode *N);
304 void Select_INLINEASM(SDNode *N, bool Branch);
305305 void Select_READ_REGISTER(SDNode *Op);
306306 void Select_WRITE_REGISTER(SDNode *Op);
307307 void Select_UNDEF(SDNode *N);
66 //===----------------------------------------------------------------------===//
77 //
88 // This file defines the CallSite class, which is a handy wrapper for code that
9 // wants to treat Call and Invoke instructions in a generic way. When in non-
10 // mutation context (e.g. an analysis) ImmutableCallSite should be used.
9 // wants to treat Call, Invoke and CallBr instructions in a generic way. When
10 // in non-mutation context (e.g. an analysis) ImmutableCallSite should be used.
1111 // Finally, when some degree of customization is necessary between these two
1212 // extremes, CallSiteBase<> can be supplied with fine-tuned parameters.
1313 //
1616 // They are efficiently copyable, assignable and constructable, with cost
1717 // equivalent to copying a pointer (notice that they have only a single data
1818 // member). The internal representation carries a flag which indicates which of
19 // the two variants is enclosed. This allows for cheaper checks when various
19 // the three variants is enclosed. This allows for cheaper checks when various
2020 // accessors of CallSite are employed.
2121 //
2222 //===----------------------------------------------------------------------===//
4747 enum ID : unsigned;
4848 }
4949
50 template <typename FunTy = const Function,
51           typename BBTy = const BasicBlock,
52           typename ValTy = const Value,
53           typename UserTy = const User,
54           typename UseTy = const Use,
55           typename InstrTy = const Instruction,
50 template <typename FunTy = const Function, typename BBTy = const BasicBlock,
51           typename ValTy = const Value, typename UserTy = const User,
52           typename UseTy = const Use, typename InstrTy = const Instruction,
5653 typename CallTy = const CallInst,
5754 typename InvokeTy = const InvokeInst,
55 typename CallBrTy = const CallBrInst,
5856 typename IterTy = User::const_op_iterator>
5957 class CallSiteBase {
6058 protected:
61   PointerIntPair<InstrTy *, 1, bool> I;
59   PointerIntPair<InstrTy *, 2, int> I;
6260
6361 CallSiteBase() = default;
64 CallSiteBase(CallTy *CI) : I(CI, true) { assert(CI); }
65 CallSiteBase(InvokeTy *II) : I(II, false) { assert(II); }
62 CallSiteBase(CallTy *CI) : I(CI, 1) { assert(CI); }
63 CallSiteBase(InvokeTy *II) : I(II, 0) { assert(II); }
64 CallSiteBase(CallBrTy *CBI) : I(CBI, 2) { assert(CBI); }
6665 explicit CallSiteBase(ValTy *II) { *this = get(II); }
6766
6867 private:
6968 /// This static method is like a constructor. It will create an appropriate
70 /// call site for a Call or Invoke instruction, but it can also create a null
71 /// initialized CallSiteBase object for something which is NOT a call site.
69 /// call site for a Call, Invoke or CallBr instruction, but it can also create
70 /// a null initialized CallSiteBase object for something which is NOT a call
71 /// site.
7272 static CallSiteBase get(ValTy *V) {
7373     if (InstrTy *II = dyn_cast<InstrTy>(V)) {
7474       if (II->getOpcode() == Instruction::Call)
7575         return CallSiteBase(static_cast<CallTy *>(II));
76         else if (II->getOpcode() == Instruction::Invoke)
76       if (II->getOpcode() == Instruction::Invoke)
7777         return CallSiteBase(static_cast<InvokeTy *>(II));
78       if (II->getOpcode() == Instruction::CallBr)
79         return CallSiteBase(static_cast<CallBrTy *>(II));
7880 }
7981 return CallSiteBase();
8082 }
8183
8284 public:
83 /// Return true if a CallInst is enclosed. Note that !isCall() does not mean
84 /// an InvokeInst is enclosed. It may also signify a NULL instruction pointer.
85 bool isCall() const { return I.getInt(); }
86
87 /// Return true if a InvokeInst is enclosed.
88 bool isInvoke() const { return getInstruction() && !I.getInt(); }
85 /// Return true if a CallInst is enclosed.
86 bool isCall() const { return I.getInt() == 1; }
87
88 /// Return true if a InvokeInst is enclosed. !I.getInt() may also signify a
89 /// NULL instruction pointer, so check that.
90 bool isInvoke() const { return getInstruction() && I.getInt() == 0; }
91
92 /// Return true if a CallBrInst is enclosed.
93 bool isCallBr() const { return I.getInt() == 2; }
8994
9095 InstrTy *getInstruction() const { return I.getPointer(); }
9196 InstrTy *operator->() const { return I.getPointer(); }
96101
97102 /// Return the pointer to function that is being called.
98103 ValTy *getCalledValue() const {
99 assert(getInstruction() && "Not a call or invoke instruction!");
104 assert(getInstruction() && "Not a call, invoke or callbr instruction!");
100105 return *getCallee();
101106 }
102107
113118 return false;
114119     if (isa<Function>(V) || isa<Constant>(V))
115120       return false;
116        if (const CallInst *CI = dyn_cast<CallInst>(getInstruction())) {
117          if (CI->isInlineAsm())
121        if (const CallBase *CB = dyn_cast<CallBase>(getInstruction()))
122          if (CB->isInlineAsm())
118123       return false;
119        }
120124 return true;
121125 }
122126
123127 /// Set the callee to the specified value. Unlike the function of the same
124128 /// name on CallBase, does not modify the type!
125129 void setCalledFunction(Value *V) {
126 assert(getInstruction() && "Not a call or invoke instruction!");
130 assert(getInstruction() && "Not a call, callbr, or invoke instruction!");
127131     assert(cast<PointerType>(V->getType())->getElementType() ==
128132                cast<CallBase>(getInstruction())->getFunctionType() &&
129133            "New callee type does not match FunctionType on call");
191195 }
192196
193197 void setArgument(unsigned ArgNo, Value* newVal) {
194 assert(getInstruction() && "Not a call or invoke instruction!");
198 assert(getInstruction() && "Not a call, invoke or callbr instruction!");
195199 assert(arg_begin() + ArgNo < arg_end() && "Argument # out of range!");
196200 getInstruction()->setOperand(ArgNo, newVal);
197201 }
205209 /// Given a use for an argument, get the argument number that corresponds to
206210 /// it.
207211 unsigned getArgumentNo(const Use *U) const {
208 assert(getInstruction() && "Not a call or invoke instruction!");
212 assert(getInstruction() && "Not a call, invoke or callbr instruction!");
209213 assert(isArgOperand(U) && "Argument # out of range!");
210214 return U - arg_begin();
211215 }
229233 /// Given a use for a data operand, get the data operand number that
230234 /// corresponds to it.
231235 unsigned getDataOperandNo(const Use *U) const {
232 assert(getInstruction() && "Not a call or invoke instruction!");
236 assert(getInstruction() && "Not a call, invoke or callbr instruction!");
233237 assert(isDataOperand(U) && "Data operand # out of range!");
234238 return U - data_operands_begin();
235239 }
239243 using data_operand_iterator = IterTy;
240244
241245 /// data_operands_begin/data_operands_end - Return iterators iterating over
242 /// the call / invoke argument list and bundle operands. For invokes, this is
243 /// the set of instruction operands except the invoke target and the two
244 /// successor blocks; and for calls this is the set of instruction operands
245 /// except the call target.
246 /// the call / invoke / callbr argument list and bundle operands. For invokes,
247 /// this is the set of instruction operands except the invoke target and the
248 /// two successor blocks; for calls this is the set of instruction operands
249 /// except the call target; for callbrs the number of labels to skip must be
250 /// determined first.
246251
247252 IterTy data_operands_begin() const {
248253 assert(getInstruction() && "Not a call or invoke instruction!");
279284     return isCall() && cast<CallInst>(getInstruction())->isTailCall();
280285 }
281286
282 #define CALLSITE_DELEGATE_GETTER(METHOD) \
283   InstrTy *II = getInstruction();        \
284   return isCall()                        \
285     ? cast<CallInst>(II)->METHOD         \
286     : cast<InvokeInst>(II)->METHOD
287
288 #define CALLSITE_DELEGATE_SETTER(METHOD) \
289   InstrTy *II = getInstruction();        \
290   if (isCall())                          \
291     cast<CallInst>(II)->METHOD;          \
292   else                                   \
287 #define CALLSITE_DELEGATE_GETTER(METHOD)                                       \
288   InstrTy *II = getInstruction();                                              \
289   return isCall() ? cast<CallInst>(II)->METHOD                                 \
290                   : isCallBr() ? cast<CallBrInst>(II)->METHOD                  \
291                                : cast<InvokeInst>(II)->METHOD
292
293 #define CALLSITE_DELEGATE_SETTER(METHOD)                                       \
294   InstrTy *II = getInstruction();                                              \
295   if (isCall())                                                                \
296     cast<CallInst>(II)->METHOD;                                                \
297   else if (isCallBr())                                                         \
298     cast<CallBrInst>(II)->METHOD;                                              \
299   else                                                                         \
293300     cast<InvokeInst>(II)->METHOD
294301
295302 unsigned getNumArgOperands() const {
305312 }
306313
307314 bool isInlineAsm() const {
308     if (isCall())
309       return cast<CallInst>(getInstruction())->isInlineAsm();
310     return false;
315     return cast<CallBase>(getInstruction())->isInlineAsm();
311316 }
312317
313318 /// Get the calling convention of the call.
391396 /// Return true if the data operand at index \p i directly or indirectly has
392397 /// the attribute \p A.
393398 ///
394 /// Normal call or invoke arguments have per operand attributes, as specified
395 /// in the attribute set attached to this instruction, while operand bundle
396 /// operands may have some attributes implied by the type of its containing
397 /// operand bundle.
399 /// Normal call, invoke or callbr arguments have per operand attributes, as
400 /// specified in the attribute set attached to this instruction, while operand
401 /// bundle operands may have some attributes implied by the type of its
402 /// containing operand bundle.
398403 bool dataOperandHasImpliedAttr(unsigned i, Attribute::AttrKind Kind) const {
399404 CALLSITE_DELEGATE_GETTER(dataOperandHasImpliedAttr(i, Kind));
400405 }
660665
661666 class CallSite : public CallSiteBase<Function, BasicBlock, Value, User, Use,
662667 Instruction, CallInst, InvokeInst,
663 User::op_iterator> {
668 CallBrInst, User::op_iterator> {
664669 public:
665670 CallSite() = default;
666671 CallSite(CallSiteBase B) : CallSiteBase(B) {}
667672 CallSite(CallInst *CI) : CallSiteBase(CI) {}
668673 CallSite(InvokeInst *II) : CallSiteBase(II) {}
674 CallSite(CallBrInst *CBI) : CallSiteBase(CBI) {}
669675 explicit CallSite(Instruction *II) : CallSiteBase(II) {}
670676 explicit CallSite(Value *V) : CallSiteBase(V) {}
671677
887893 ImmutableCallSite() = default;
888894 ImmutableCallSite(const CallInst *CI) : CallSiteBase(CI) {}
889895 ImmutableCallSite(const InvokeInst *II) : CallSiteBase(II) {}
896 ImmutableCallSite(const CallBrInst *CBI) : CallSiteBase(CBI) {}
890897 explicit ImmutableCallSite(const Instruction *II) : CallSiteBase(II) {}
891898 explicit ImmutableCallSite(const Value *V) : CallSiteBase(V) {}
892899 ImmutableCallSite(CallSite CS) : CallSiteBase(CS.getInstruction()) {}
942942 Callee, NormalDest, UnwindDest, Args, Name);
943943 }
944944
945   /// \brief Create a callbr instruction.
946   CallBrInst *CreateCallBr(FunctionType *Ty, Value *Callee,
947                            BasicBlock *DefaultDest,
948                            ArrayRef<BasicBlock *> IndirectDests,
949                            ArrayRef<Value *> Args = None,
950                            const Twine &Name = "") {
951     return Insert(CallBrInst::Create(Ty, Callee, DefaultDest, IndirectDests,
952                                      Args), Name);
953   }
954   CallBrInst *CreateCallBr(FunctionType *Ty, Value *Callee,
955                            BasicBlock *DefaultDest,
956                            ArrayRef<BasicBlock *> IndirectDests,
957                            ArrayRef<Value *> Args,
958                            ArrayRef<OperandBundleDef> OpBundles,
959                            const Twine &Name = "") {
960     return Insert(
961         CallBrInst::Create(Ty, Callee, DefaultDest, IndirectDests, Args,
962                            OpBundles), Name);
963   }
964
965   CallBrInst *CreateCallBr(FunctionCallee Callee, BasicBlock *DefaultDest,
966                            ArrayRef<BasicBlock *> IndirectDests,
967                            ArrayRef<Value *> Args = None,
968                            const Twine &Name = "") {
969     return CreateCallBr(Callee.getFunctionType(), Callee.getCallee(),
970                         DefaultDest, IndirectDests, Args, Name);
971   }
972   CallBrInst *CreateCallBr(FunctionCallee Callee, BasicBlock *DefaultDest,
973                            ArrayRef<BasicBlock *> IndirectDests,
974                            ArrayRef<Value *> Args,
975                            ArrayRef<OperandBundleDef> OpBundles,
976                            const Twine &Name = "") {
977     return CreateCallBr(Callee.getFunctionType(), Callee.getCallee(),
978                         DefaultDest, IndirectDests, Args, OpBundles, Name);
979   }
980
945981 ResumeInst *CreateResume(Value *Exn) {
946982 return Insert(ResumeInst::Create(Exn));
947983 }
216216 RetTy visitVACopyInst(VACopyInst &I) { DELEGATE(IntrinsicInst); }
217217 RetTy visitIntrinsicInst(IntrinsicInst &I) { DELEGATE(CallInst); }
218218
219 // Call and Invoke are slightly different as they delegate first through
220 // a generic CallSite visitor.
219 // Call, Invoke and CallBr are slightly different as they delegate first
220 // through a generic CallSite visitor.
221221   RetTy visitCallInst(CallInst &I) {
222222     return static_cast<SubClass *>(this)->visitCallSite(&I);
223223   }
224224   RetTy visitInvokeInst(InvokeInst &I) {
225225     return static_cast<SubClass *>(this)->visitCallSite(&I);
226      }
227      RetTy visitCallBrInst(CallBrInst &I) {
228        return static_cast<SubClass *>(this)->visitCallSite(&I);
226229   }
227230
228231 // While terminators don't have a distinct type modeling them, we support
269272 // The next level delegation for `CallBase` is slightly more complex in order
270273 // to support visiting cases where the call is also a terminator.
271274 RetTy visitCallBase(CallBase &I) {
272     if (isa<InvokeInst>(I))
275     if (isa<InvokeInst>(I) || isa<CallBrInst>(I))
273276       return static_cast<SubClass *>(this)->visitTerminator(I);
274277
275278 DELEGATE(Instruction);
276279 }
277280
278 // Provide a legacy visitor for a 'callsite' that visits both calls and
279 // invokes.
281   // Provide a legacy visitor for a 'callsite' that visits calls, invokes,
282   // and callbrs.
280283 //
281284 // Prefer overriding the type system based `CallBase` instead.
282285 RetTy visitCallSite(CallSite CS) {
10321032 return 0;
10331033 case Instruction::Invoke:
10341034 return 2;
1035 case Instruction::CallBr:
1036 return getNumSubclassExtraOperandsDynamic();
10351037 }
10361038 llvm_unreachable("Invalid opcode!");
10371039 }
10381040
1041 /// Get the number of extra operands for instructions that don't have a fixed
1042 /// number of extra operands.
1043 unsigned getNumSubclassExtraOperandsDynamic() const;
1044
10391045 public:
10401046 using Instruction::getContext;
10411047
10421048 static bool classof(const Instruction *I) {
10431049 return I->getOpcode() == Instruction::Call ||
1044 I->getOpcode() == Instruction::Invoke;
1050 I->getOpcode() == Instruction::Invoke ||
1051 I->getOpcode() == Instruction::CallBr;
10451052 }
10461053 static bool classof(const Value *V) {
10471054     return isa<Instruction>(V) && classof(cast<Instruction>(V));
133133 HANDLE_TERM_INST ( 8, CleanupRet , CleanupReturnInst)
134134 HANDLE_TERM_INST ( 9, CatchRet , CatchReturnInst)
135135 HANDLE_TERM_INST (10, CatchSwitch , CatchSwitchInst)
136 LAST_TERM_INST (10)
136 HANDLE_TERM_INST (11, CallBr , CallBrInst) // A call-site terminator
137 LAST_TERM_INST (11)
137138
138139 // Standard unary operators...
139 FIRST_UNARY_INST(11)
140 HANDLE_UNARY_INST(11, FNeg , UnaryOperator)
141 LAST_UNARY_INST(11)
140 FIRST_UNARY_INST(12)
141 HANDLE_UNARY_INST(12, FNeg , UnaryOperator)
142 LAST_UNARY_INST(12)
142143
143144 // Standard binary operators...
144 FIRST_BINARY_INST(12)
145 HANDLE_BINARY_INST(12, Add , BinaryOperator)
146 HANDLE_BINARY_INST(13, FAdd , BinaryOperator)
147 HANDLE_BINARY_INST(14, Sub , BinaryOperator)
148 HANDLE_BINARY_INST(15, FSub , BinaryOperator)
149 HANDLE_BINARY_INST(16, Mul , BinaryOperator)
150 HANDLE_BINARY_INST(17, FMul , BinaryOperator)
151 HANDLE_BINARY_INST(18, UDiv , BinaryOperator)
152 HANDLE_BINARY_INST(19, SDiv , BinaryOperator)
153 HANDLE_BINARY_INST(20, FDiv , BinaryOperator)
154 HANDLE_BINARY_INST(21, URem , BinaryOperator)
155 HANDLE_BINARY_INST(22, SRem , BinaryOperator)
156 HANDLE_BINARY_INST(23, FRem , BinaryOperator)
145 FIRST_BINARY_INST(13)
146 HANDLE_BINARY_INST(13, Add , BinaryOperator)
147 HANDLE_BINARY_INST(14, FAdd , BinaryOperator)
148 HANDLE_BINARY_INST(15, Sub , BinaryOperator)
149 HANDLE_BINARY_INST(16, FSub , BinaryOperator)
150 HANDLE_BINARY_INST(17, Mul , BinaryOperator)
151 HANDLE_BINARY_INST(18, FMul , BinaryOperator)
152 HANDLE_BINARY_INST(19, UDiv , BinaryOperator)
153 HANDLE_BINARY_INST(20, SDiv , BinaryOperator)
154 HANDLE_BINARY_INST(21, FDiv , BinaryOperator)
155 HANDLE_BINARY_INST(22, URem , BinaryOperator)
156 HANDLE_BINARY_INST(23, SRem , BinaryOperator)
157 HANDLE_BINARY_INST(24, FRem , BinaryOperator)
157158
158159 // Logical operators (integer operands)
159 HANDLE_BINARY_INST(24, Shl , BinaryOperator) // Shift left (logical)
160 HANDLE_BINARY_INST(25, LShr , BinaryOperator) // Shift right (logical)
161 HANDLE_BINARY_INST(26, AShr , BinaryOperator) // Shift right (arithmetic)
162 HANDLE_BINARY_INST(27, And , BinaryOperator)
163 HANDLE_BINARY_INST(28, Or , BinaryOperator)
164 HANDLE_BINARY_INST(29, Xor , BinaryOperator)
165 LAST_BINARY_INST(29)
160 HANDLE_BINARY_INST(25, Shl , BinaryOperator) // Shift left (logical)
161 HANDLE_BINARY_INST(26, LShr , BinaryOperator) // Shift right (logical)
162 HANDLE_BINARY_INST(27, AShr , BinaryOperator) // Shift right (arithmetic)
163 HANDLE_BINARY_INST(28, And , BinaryOperator)
164 HANDLE_BINARY_INST(29, Or , BinaryOperator)
165 HANDLE_BINARY_INST(30, Xor , BinaryOperator)
166 LAST_BINARY_INST(30)
166167
167168 // Memory operators...
168 FIRST_MEMORY_INST(30)
169 HANDLE_MEMORY_INST(30, Alloca, AllocaInst) // Stack management
170 HANDLE_MEMORY_INST(31, Load , LoadInst ) // Memory manipulation instrs
171 HANDLE_MEMORY_INST(32, Store , StoreInst )
172 HANDLE_MEMORY_INST(33, GetElementPtr, GetElementPtrInst)
173 HANDLE_MEMORY_INST(34, Fence , FenceInst )
174 HANDLE_MEMORY_INST(35, AtomicCmpXchg , AtomicCmpXchgInst )
175 HANDLE_MEMORY_INST(36, AtomicRMW , AtomicRMWInst )
176 LAST_MEMORY_INST(36)
169 FIRST_MEMORY_INST(31)
170 HANDLE_MEMORY_INST(31, Alloca, AllocaInst) // Stack management
171 HANDLE_MEMORY_INST(32, Load , LoadInst ) // Memory manipulation instrs
172 HANDLE_MEMORY_INST(33, Store , StoreInst )
173 HANDLE_MEMORY_INST(34, GetElementPtr, GetElementPtrInst)
174 HANDLE_MEMORY_INST(35, Fence , FenceInst )
175 HANDLE_MEMORY_INST(36, AtomicCmpXchg , AtomicCmpXchgInst )
176 HANDLE_MEMORY_INST(37, AtomicRMW , AtomicRMWInst )
177 LAST_MEMORY_INST(37)
177178
178179 // Cast operators ...
179180 // NOTE: The order matters here because CastInst::isEliminableCastPair
180181 // NOTE: (see Instructions.cpp) encodes a table based on this ordering.
181 FIRST_CAST_INST(37)
182 HANDLE_CAST_INST(37, Trunc , TruncInst ) // Truncate integers
183 HANDLE_CAST_INST(38, ZExt , ZExtInst ) // Zero extend integers
184 HANDLE_CAST_INST(39, SExt , SExtInst ) // Sign extend integers
185 HANDLE_CAST_INST(40, FPToUI , FPToUIInst ) // floating point -> UInt
186 HANDLE_CAST_INST(41, FPToSI , FPToSIInst ) // floating point -> SInt
187 HANDLE_CAST_INST(42, UIToFP , UIToFPInst ) // UInt -> floating point
188 HANDLE_CAST_INST(43, SIToFP , SIToFPInst ) // SInt -> floating point
189 HANDLE_CAST_INST(44, FPTrunc , FPTruncInst ) // Truncate floating point
190 HANDLE_CAST_INST(45, FPExt , FPExtInst ) // Extend floating point
191 HANDLE_CAST_INST(46, PtrToInt, PtrToIntInst) // Pointer -> Integer
192 HANDLE_CAST_INST(47, IntToPtr, IntToPtrInst) // Integer -> Pointer
193 HANDLE_CAST_INST(48, BitCast , BitCastInst ) // Type cast
194 HANDLE_CAST_INST(49, AddrSpaceCast, AddrSpaceCastInst) // addrspace cast
195 LAST_CAST_INST(49)
196
197 FIRST_FUNCLETPAD_INST(50)
198 HANDLE_FUNCLETPAD_INST(50, CleanupPad, CleanupPadInst)
199 HANDLE_FUNCLETPAD_INST(51, CatchPad , CatchPadInst)
200 LAST_FUNCLETPAD_INST(51)
182 FIRST_CAST_INST(38)
183 HANDLE_CAST_INST(38, Trunc , TruncInst ) // Truncate integers
184 HANDLE_CAST_INST(39, ZExt , ZExtInst ) // Zero extend integers
185 HANDLE_CAST_INST(40, SExt , SExtInst ) // Sign extend integers
186 HANDLE_CAST_INST(41, FPToUI , FPToUIInst ) // floating point -> UInt
187 HANDLE_CAST_INST(42, FPToSI , FPToSIInst ) // floating point -> SInt
188 HANDLE_CAST_INST(43, UIToFP , UIToFPInst ) // UInt -> floating point
189 HANDLE_CAST_INST(44, SIToFP , SIToFPInst ) // SInt -> floating point
190 HANDLE_CAST_INST(45, FPTrunc , FPTruncInst ) // Truncate floating point
191 HANDLE_CAST_INST(46, FPExt , FPExtInst ) // Extend floating point
192 HANDLE_CAST_INST(47, PtrToInt, PtrToIntInst) // Pointer -> Integer
193 HANDLE_CAST_INST(48, IntToPtr, IntToPtrInst) // Integer -> Pointer
194 HANDLE_CAST_INST(49, BitCast , BitCastInst ) // Type cast
195 HANDLE_CAST_INST(50, AddrSpaceCast, AddrSpaceCastInst) // addrspace cast
196 LAST_CAST_INST(50)
197
198 FIRST_FUNCLETPAD_INST(51)
199 HANDLE_FUNCLETPAD_INST(51, CleanupPad, CleanupPadInst)
200 HANDLE_FUNCLETPAD_INST(52, CatchPad , CatchPadInst)
201 LAST_FUNCLETPAD_INST(52)
201202
202203 // Other operators...
203 FIRST_OTHER_INST(52)
204 HANDLE_OTHER_INST(52, ICmp , ICmpInst ) // Integer comparison instruction
205 HANDLE_OTHER_INST(53, FCmp , FCmpInst ) // Floating point comparison instr.
206 HANDLE_OTHER_INST(54, PHI , PHINode ) // PHI node instruction
207 HANDLE_OTHER_INST(55, Call , CallInst ) // Call a function
208 HANDLE_OTHER_INST(56, Select , SelectInst ) // select instruction
209 HANDLE_USER_INST (57, UserOp1, Instruction) // May be used internally in a pass
210 HANDLE_USER_INST (58, UserOp2, Instruction) // Internal to passes only
211 HANDLE_OTHER_INST(59, VAArg , VAArgInst ) // vaarg instruction
212 HANDLE_OTHER_INST(60, ExtractElement, ExtractElementInst)// extract from vector
213 HANDLE_OTHER_INST(61, InsertElement, InsertElementInst) // insert into vector
214 HANDLE_OTHER_INST(62, ShuffleVector, ShuffleVectorInst) // shuffle two vectors.
215 HANDLE_OTHER_INST(63, ExtractValue, ExtractValueInst)// extract from aggregate
216 HANDLE_OTHER_INST(64, InsertValue, InsertValueInst) // insert into aggregate
217 HANDLE_OTHER_INST(65, LandingPad, LandingPadInst) // Landing pad instruction.
218 LAST_OTHER_INST(65)
204 FIRST_OTHER_INST(53)
205 HANDLE_OTHER_INST(53, ICmp , ICmpInst ) // Integer comparison instruction
206 HANDLE_OTHER_INST(54, FCmp , FCmpInst ) // Floating point comparison instr.
207 HANDLE_OTHER_INST(55, PHI , PHINode ) // PHI node instruction
208 HANDLE_OTHER_INST(56, Call , CallInst ) // Call a function
209 HANDLE_OTHER_INST(57, Select , SelectInst ) // select instruction
210 HANDLE_USER_INST (58, UserOp1, Instruction) // May be used internally in a pass
211 HANDLE_USER_INST (59, UserOp2, Instruction) // Internal to passes only
212 HANDLE_OTHER_INST(60, VAArg , VAArgInst ) // vaarg instruction
213 HANDLE_OTHER_INST(61, ExtractElement, ExtractElementInst)// extract from vector
214 HANDLE_OTHER_INST(62, InsertElement, InsertElementInst) // insert into vector
215 HANDLE_OTHER_INST(63, ShuffleVector, ShuffleVectorInst) // shuffle two vectors.
216 HANDLE_OTHER_INST(64, ExtractValue, ExtractValueInst)// extract from aggregate
217 HANDLE_OTHER_INST(65, InsertValue, InsertValueInst) // insert into aggregate
218 HANDLE_OTHER_INST(66, LandingPad, LandingPadInst) // Landing pad instruction.
219 LAST_OTHER_INST(66)
219220
220221 #undef FIRST_TERM_INST
221222 #undef HANDLE_TERM_INST
134134 bool isExceptionalTerminator() const {
135135 return isExceptionalTerminator(getOpcode());
136136 }
137 bool isIndirectTerminator() const {
138 return isIndirectTerminator(getOpcode());
139 }
137140
138141 static const char* getOpcodeName(unsigned OpCode);
139142
195198 case Instruction::CleanupRet:
196199 case Instruction::Invoke:
197200 case Instruction::Resume:
201 return true;
202 default:
203 return false;
204 }
205 }
206
207 /// Returns true if the OpCode is a terminator with indirect targets.
208 static inline bool isIndirectTerminator(unsigned OpCode) {
209 switch (OpCode) {
210 case Instruction::IndirectBr:
211 case Instruction::CallBr:
198212 return true;
199213 default:
200214 return false;
38863886 }
38873887
38883888 //===----------------------------------------------------------------------===//
3889 // CallBrInst Class
3890 //===----------------------------------------------------------------------===//
3891
3892 /// CallBr instruction, tracking function calls that may not return control but
3893 /// instead transfer it to a third location. The SubclassData field is used to
3894 /// hold the calling convention of the call.
3895 ///
3896 class CallBrInst : public CallBase {
3897
3898 unsigned NumIndirectDests;
3899
3900 CallBrInst(const CallBrInst &BI);
3901
3902 /// Construct a CallBrInst given a range of arguments.
3903 ///
3904 /// Construct a CallBrInst from a range of arguments
3905 inline CallBrInst(FunctionType *Ty, Value *Func, BasicBlock *DefaultDest,
3906                     ArrayRef<BasicBlock *> IndirectDests,
3907                     ArrayRef<Value *> Args,
3908                     ArrayRef<OperandBundleDef> Bundles, int NumOperands,
3909 const Twine &NameStr, Instruction *InsertBefore);
3910
3911 inline CallBrInst(FunctionType *Ty, Value *Func, BasicBlock *DefaultDest,
3912                     ArrayRef<BasicBlock *> IndirectDests,
3913                     ArrayRef<Value *> Args,
3914                     ArrayRef<OperandBundleDef> Bundles, int NumOperands,
3915 const Twine &NameStr, BasicBlock *InsertAtEnd);
3916
3917 void init(FunctionType *FTy, Value *Func, BasicBlock *DefaultDest,
3918             ArrayRef<BasicBlock *> IndirectDests, ArrayRef<Value *> Args,
3919             ArrayRef<OperandBundleDef> Bundles, const Twine &NameStr);
3920
3921 /// Compute the number of operands to allocate.
3922 static int ComputeNumOperands(int NumArgs, int NumIndirectDests,
3923 int NumBundleInputs = 0) {
3924 // We need one operand for the called function, plus our extra operands and
3925 // the input operand counts provided.
3926 return 2 + NumIndirectDests + NumArgs + NumBundleInputs;
3927 }
3928
3929 protected:
3930 // Note: Instruction needs to be a friend here to call cloneImpl.
3931 friend class Instruction;
3932
3933 CallBrInst *cloneImpl() const;
3934
3935 public:
3936 static CallBrInst *Create(FunctionType *Ty, Value *Func,
3937 BasicBlock *DefaultDest,
3938                             ArrayRef<BasicBlock *> IndirectDests,
3939                             ArrayRef<Value *> Args, const Twine &NameStr,
3940 Instruction *InsertBefore = nullptr) {
3941 int NumOperands = ComputeNumOperands(Args.size(), IndirectDests.size());
3942 return new (NumOperands)
3943 CallBrInst(Ty, Func, DefaultDest, IndirectDests, Args, None,
3944 NumOperands, NameStr, InsertBefore);
3945 }
3946
3947 static CallBrInst *Create(FunctionType *Ty, Value *Func,
3948 BasicBlock *DefaultDest,
3949 ArrayRef<BasicBlock *> IndirectDests,
3950 ArrayRef<Value *> Args,
3951 ArrayRef<OperandBundleDef> Bundles = None,
3952 const Twine &NameStr = "",
3953 Instruction *InsertBefore = nullptr) {
3954 int NumOperands = ComputeNumOperands(Args.size(), IndirectDests.size(),
3955 CountBundleInputs(Bundles));
3956 unsigned DescriptorBytes = Bundles.size() * sizeof(BundleOpInfo);
3957
3958 return new (NumOperands, DescriptorBytes)
3959 CallBrInst(Ty, Func, DefaultDest, IndirectDests, Args, Bundles,
3960 NumOperands, NameStr, InsertBefore);
3961 }
3962
3963 static CallBrInst *Create(FunctionType *Ty, Value *Func,
3964 BasicBlock *DefaultDest,
3965 ArrayRef<BasicBlock *> IndirectDests,
3966 ArrayRef<Value *> Args, const Twine &NameStr,
3967 BasicBlock *InsertAtEnd) {
3968 int NumOperands = ComputeNumOperands(Args.size(), IndirectDests.size());
3969 return new (NumOperands)
3970 CallBrInst(Ty, Func, DefaultDest, IndirectDests, Args, None,
3971 NumOperands, NameStr, InsertAtEnd);
3972 }
3973
3974 static CallBrInst *Create(FunctionType *Ty, Value *Func,
3975 BasicBlock *DefaultDest,
3976 ArrayRef<BasicBlock *> IndirectDests,
3977 ArrayRef<Value *> Args,
3978 ArrayRef<OperandBundleDef> Bundles,
3979 const Twine &NameStr, BasicBlock *InsertAtEnd) {
3980 int NumOperands = ComputeNumOperands(Args.size(), IndirectDests.size(),
3981 CountBundleInputs(Bundles));
3982 unsigned DescriptorBytes = Bundles.size() * sizeof(BundleOpInfo);
3983
3984 return new (NumOperands, DescriptorBytes)
3985 CallBrInst(Ty, Func, DefaultDest, IndirectDests, Args, Bundles,
3986 NumOperands, NameStr, InsertAtEnd);
3987 }
3988
3989 static CallBrInst *Create(FunctionCallee Func, BasicBlock *DefaultDest,
3990 ArrayRef<BasicBlock *> IndirectDests,
3991 ArrayRef<Value *> Args, const Twine &NameStr,
3992 Instruction *InsertBefore = nullptr) {
3993 return Create(Func.getFunctionType(), Func.getCallee(), DefaultDest,
3994 IndirectDests, Args, NameStr, InsertBefore);
3995 }
3996
3997 static CallBrInst *Create(FunctionCallee Func, BasicBlock *DefaultDest,
3998 ArrayRef<BasicBlock *> IndirectDests,
3999 ArrayRef<Value *> Args,
4000 ArrayRef<OperandBundleDef> Bundles = None,
4001 const Twine &NameStr = "",
4002 Instruction *InsertBefore = nullptr) {
4003 return Create(Func.getFunctionType(), Func.getCallee(), DefaultDest,
4004 IndirectDests, Args, Bundles, NameStr, InsertBefore);
4005 }
4006
4007 static CallBrInst *Create(FunctionCallee Func, BasicBlock *DefaultDest,
4008 ArrayRef<BasicBlock *> IndirectDests,
4009 ArrayRef<Value *> Args, const Twine &NameStr,
4010 BasicBlock *InsertAtEnd) {
4011 return Create(Func.getFunctionType(), Func.getCallee(), DefaultDest,
4012 IndirectDests, Args, NameStr, InsertAtEnd);
4013 }
4014
4015 static CallBrInst *Create(FunctionCallee Func,
4016 BasicBlock *DefaultDest,
4017 ArrayRef<BasicBlock *> IndirectDests,
4018 ArrayRef<Value *> Args,
4019 ArrayRef<OperandBundleDef> Bundles,
4020 const Twine &NameStr, BasicBlock *InsertAtEnd) {
4021 return Create(Func.getFunctionType(), Func.getCallee(), DefaultDest,
4022 IndirectDests, Args, Bundles, NameStr, InsertAtEnd);
4023 }
4024
4025 /// Create a clone of \p CBI with a different set of operand bundles and
4026 /// insert it before \p InsertPt.
4027 ///
4028 /// The returned callbr instruction is identical to \p CBI in every way
4029 /// except that the operand bundles for the new instruction are set to the
4030 /// operand bundles in \p Bundles.
4031 static CallBrInst *Create(CallBrInst *CBI,
4032 ArrayRef<OperandBundleDef> Bundles,
4033 Instruction *InsertPt = nullptr);
4034
4035 /// Return the number of callbr indirect dest labels.
4036 ///
4037 unsigned getNumIndirectDests() const { return NumIndirectDests; }
4038
4039 /// getIndirectDestLabel - Return the i-th indirect dest label.
4040 ///
4041 Value *getIndirectDestLabel(unsigned i) const {
4042 assert(i < getNumIndirectDests() && "Out of bounds!");
4043 return getOperand(i + getNumArgOperands() + getNumTotalBundleOperands() +
4044 1);
4045 }
4046
4047 Value *getIndirectDestLabelUse(unsigned i) const {
4048 assert(i < getNumIndirectDests() && "Out of bounds!");
4049 return getOperandUse(i + getNumArgOperands() + getNumTotalBundleOperands() +
4050 1);
4051 }
4052
4053 // Return the destination basic blocks...
4054 BasicBlock *getDefaultDest() const {
4055 return cast<BasicBlock>(*(&Op<-1>() - getNumIndirectDests() - 1));
4056 }
4057 BasicBlock *getIndirectDest(unsigned i) const {
4058 return cast<BasicBlock>(*(&Op<-1>() - getNumIndirectDests() + i));
4059 }
4060 SmallVector<BasicBlock *, 16> getIndirectDests() const {
4061 SmallVector<BasicBlock *, 16> IndirectDests;
4062 for (unsigned i = 0, e = getNumIndirectDests(); i < e; ++i)
4063 IndirectDests.push_back(getIndirectDest(i));
4064 return IndirectDests;
4065 }
4066 void setDefaultDest(BasicBlock *B) {
4067 *(&Op<-1>() - getNumIndirectDests() - 1) = reinterpret_cast<Value *>(B);
4068 }
4069 void setIndirectDest(unsigned i, BasicBlock *B) {
4070 *(&Op<-1>() - getNumIndirectDests() + i) = reinterpret_cast<Value *>(B);
4071 }
4072
4073 BasicBlock *getSuccessor(unsigned i) const {
4074 assert(i < getNumSuccessors() &&
4075 "Successor # out of range for callbr!");
4076 return i == 0 ? getDefaultDest() : getIndirectDest(i - 1);
4077 }
4078
4079 void setSuccessor(unsigned idx, BasicBlock *NewSucc) {
4080 assert(idx < getNumIndirectDests() + 1 &&
4081 "Successor # out of range for callbr!");
4082 *(&Op<-1>() - getNumIndirectDests() - 1 + idx) =
4083 reinterpret_cast<Value *>(NewSucc);
4084 }
4085
4086 unsigned getNumSuccessors() const { return getNumIndirectDests() + 1; }
4087
4088 // Methods for support type inquiry through isa, cast, and dyn_cast:
4089 static bool classof(const Instruction *I) {
4090 return (I->getOpcode() == Instruction::CallBr);
4091 }
4092 static bool classof(const Value *V) {
4093 return isa<Instruction>(V) && classof(cast<Instruction>(V));
4094 }
4095
4096 private:
4097
4098 // Shadow Instruction::setInstructionSubclassData with a private forwarding
4099 // method so that subclasses cannot accidentally use it.
4100 void setInstructionSubclassData(unsigned short D) {
4101 Instruction::setInstructionSubclassData(D);
4102 }
4103 };
4104
4105 CallBrInst::CallBrInst(FunctionType *Ty, Value *Func, BasicBlock *DefaultDest,
4106 ArrayRef<BasicBlock *> IndirectDests,
4107 ArrayRef<Value *> Args,
4108 ArrayRef<OperandBundleDef> Bundles, int NumOperands,
4109 const Twine &NameStr, Instruction *InsertBefore)
4110 : CallBase(Ty->getReturnType(), Instruction::CallBr,
4111 OperandTraits<CallBrInst>::op_end(this) - NumOperands, NumOperands,
4112 InsertBefore) {
4113 init(Ty, Func, DefaultDest, IndirectDests, Args, Bundles, NameStr);
4114 }
4115
4116 CallBrInst::CallBrInst(FunctionType *Ty, Value *Func, BasicBlock *DefaultDest,
4117 ArrayRef<BasicBlock *> IndirectDests,
4118 ArrayRef<Value *> Args,
4119 ArrayRef<OperandBundleDef> Bundles, int NumOperands,
4120 const Twine &NameStr, BasicBlock *InsertAtEnd)
4121 : CallBase(
4122 cast<FunctionType>(
4123 cast<PointerType>(Func->getType())->getElementType())
4124 ->getReturnType(),
4125 Instruction::CallBr,
4126 OperandTraits<CallBrInst>::op_end(this) - NumOperands, NumOperands,
4127 InsertAtEnd) {
4128 init(Ty, Func, DefaultDest, IndirectDests, Args, Bundles, NameStr);
4129 }
4130
4131 //===----------------------------------------------------------------------===//
38894132 // ResumeInst Class
38904133 //===----------------------------------------------------------------------===//
38914134
2727 ///
2828 HANDLE_TARGET_OPCODE(PHI)
2929 HANDLE_TARGET_OPCODE(INLINEASM)
30 HANDLE_TARGET_OPCODE(INLINEASM_BR)
3031 HANDLE_TARGET_OPCODE(CFI_INSTRUCTION)
3132 HANDLE_TARGET_OPCODE(EH_LABEL)
3233 HANDLE_TARGET_OPCODE(GC_LABEL)
932932 let AsmString = "";
933933 let hasSideEffects = 0; // Note side effect is encoded in an operand.
934934 }
935 def INLINEASM_BR : StandardPseudoInstruction {
936 let OutOperandList = (outs);
937 let InOperandList = (ins variable_ops);
938 let AsmString = "";
939 let hasSideEffects = 0; // Note side effect is encoded in an operand.
940 let isTerminator = 1;
941 let isBranch = 1;
942 let isIndirectBranch = 1;
943 }
935944 def CFI_INSTRUCTION : StandardPseudoInstruction {
936945 let OutOperandList = (outs);
937946 let InOperandList = (ins i32imm:$id);
6464 LLVMInvoke = 5,
6565 /* removed 6 due to API changes */
6666 LLVMUnreachable = 7,
67 LLVMCallBr = 67,
6768
6869 /* Standard Unary Operators */
6970 LLVMFNeg = 66,
39213921 case Instruction::VAArg:
39223922 case Instruction::Alloca:
39233923 case Instruction::Invoke:
3924 case Instruction::CallBr:
39243925 case Instruction::PHI:
39253926 case Instruction::Store:
39263927 case Instruction::Ret:
857857 INSTKEYWORD(invoke, Invoke);
858858 INSTKEYWORD(resume, Resume);
859859 INSTKEYWORD(unreachable, Unreachable);
860 INSTKEYWORD(callbr, CallBr);
860861
861862 INSTKEYWORD(alloca, Alloca);
862863 INSTKEYWORD(load, Load);
162162 AS = AS.addAttributes(Context, AttributeList::FunctionIndex,
163163 AttributeSet::get(Context, FnAttrs));
164164 II->setAttributes(AS);
165 } else if (CallBrInst *CBI = dyn_cast<CallBrInst>(V)) {
166 AttributeList AS = CBI->getAttributes();
167 AttrBuilder FnAttrs(AS.getFnAttributes());
168 AS = AS.removeAttributes(Context, AttributeList::FunctionIndex);
169 FnAttrs.merge(B);
170 AS = AS.addAttributes(Context, AttributeList::FunctionIndex,
171 AttributeSet::get(Context, FnAttrs));
172 CBI->setAttributes(AS);
165173 } else if (auto *GV = dyn_cast<GlobalVariable>(V)) {
166174 AttrBuilder Attrs(GV->getAttributes());
167175 Attrs.merge(B);
55655573 case lltok::kw_catchswitch: return ParseCatchSwitch(Inst, PFS);
55665574 case lltok::kw_catchpad: return ParseCatchPad(Inst, PFS);
55675575 case lltok::kw_cleanuppad: return ParseCleanupPad(Inst, PFS);
5576 case lltok::kw_callbr: return ParseCallBr(Inst, PFS);
55685577 // Unary Operators.
55695578 case lltok::kw_fneg: {
55705579 FastMathFlags FMF = EatFastMathFlagsIfPresent();
61836192 return false;
61846193 }
61856194
6195 /// ParseCallBr
6196 /// ::= 'callbr' OptionalCallingConv OptionalAttrs Type Value ParamList
6197 /// OptionalAttrs OptionalOperandBundles 'to' TypeAndValue
6198 /// '[' LabelList ']'
6199 bool LLParser::ParseCallBr(Instruction *&Inst, PerFunctionState &PFS) {
6200 LocTy CallLoc = Lex.getLoc();
6201 AttrBuilder RetAttrs, FnAttrs;
6202 std::vector<unsigned> FwdRefAttrGrps;
6203 LocTy NoBuiltinLoc;
6204 unsigned CC;
6205 Type *RetType = nullptr;
6206 LocTy RetTypeLoc;
6207 ValID CalleeID;
6208 SmallVector<ParamInfo, 16> ArgList;
6209 SmallVector<OperandBundleDef, 2> BundleList;
6210
6211 BasicBlock *DefaultDest;
6212 if (ParseOptionalCallingConv(CC) || ParseOptionalReturnAttrs(RetAttrs) ||
6213 ParseType(RetType, RetTypeLoc, true /*void allowed*/) ||
6214 ParseValID(CalleeID) || ParseParameterList(ArgList, PFS) ||
6215 ParseFnAttributeValuePairs(FnAttrs, FwdRefAttrGrps, false,
6216 NoBuiltinLoc) ||
6217 ParseOptionalOperandBundles(BundleList, PFS) ||
6218 ParseToken(lltok::kw_to, "expected 'to' in callbr") ||
6219 ParseTypeAndBasicBlock(DefaultDest, PFS) ||
6220 ParseToken(lltok::lsquare, "expected '[' in callbr"))
6221 return true;
6222
6223 // Parse the destination list.
6224 SmallVector<BasicBlock *, 16> IndirectDests;
6225
6226 if (Lex.getKind() != lltok::rsquare) {
6227 BasicBlock *DestBB;
6228 if (ParseTypeAndBasicBlock(DestBB, PFS))
6229 return true;
6230 IndirectDests.push_back(DestBB);
6231
6232 while (EatIfPresent(lltok::comma)) {
6233 if (ParseTypeAndBasicBlock(DestBB, PFS))
6234 return true;
6235 IndirectDests.push_back(DestBB);
6236 }
6237 }
6238
6239 if (ParseToken(lltok::rsquare, "expected ']' at end of block list"))
6240 return true;
6241
6242 // If RetType is a non-function pointer type, then this is the short syntax
6243 // for the call, which means that RetType is just the return type. Infer the
6244 // rest of the function argument types from the arguments that are present.
6245 FunctionType *Ty = dyn_cast<FunctionType>(RetType);
6246 if (!Ty) {
6247 // Pull out the types of all of the arguments...
6248 std::vector<Type *> ParamTypes;
6249 for (unsigned i = 0, e = ArgList.size(); i != e; ++i)
6250 ParamTypes.push_back(ArgList[i].V->getType());
6251
6252 if (!FunctionType::isValidReturnType(RetType))
6253 return Error(RetTypeLoc, "Invalid result type for LLVM function");
6254
6255 Ty = FunctionType::get(RetType, ParamTypes, false);
6256 }
6257
6258 CalleeID.FTy = Ty;
6259
6260 // Look up the callee.
6261 Value *Callee;
6262 if (ConvertValIDToValue(PointerType::getUnqual(Ty), CalleeID, Callee, &PFS,
6263 /*IsCall=*/true))
6264 return true;
6265
6266 if (isa<InlineAsm>(Callee) && !Ty->getReturnType()->isVoidTy())
6267 return Error(RetTypeLoc, "asm-goto outputs not supported");
6268
6269 // Set up the Attribute for the function.
6270 SmallVector<Value *, 8> Args;
6271 SmallVector<AttributeSet, 8> ArgAttrs;
6272
6273 // Loop through FunctionType's arguments and ensure they are specified
6274 // correctly. Also, gather any parameter attributes.
6275 FunctionType::param_iterator I = Ty->param_begin();
6276 FunctionType::param_iterator E = Ty->param_end();
6277 for (unsigned i = 0, e = ArgList.size(); i != e; ++i) {
6278 Type *ExpectedTy = nullptr;
6279 if (I != E) {
6280 ExpectedTy = *I++;
6281 } else if (!Ty->isVarArg()) {
6282 return Error(ArgList[i].Loc, "too many arguments specified");
6283 }
6284
6285 if (ExpectedTy && ExpectedTy != ArgList[i].V->getType())
6286 return Error(ArgList[i].Loc, "argument is not of expected type '" +
6287 getTypeString(ExpectedTy) + "'");
6288 Args.push_back(ArgList[i].V);
6289 ArgAttrs.push_back(ArgList[i].Attrs);
6290 }
6291
6292 if (I != E)
6293 return Error(CallLoc, "not enough parameters specified for call");
6294
6295 if (FnAttrs.hasAlignmentAttr())
6296 return Error(CallLoc, "callbr instructions may not have an alignment");
6297
6298 // Finish off the Attributes and check them.
6299 AttributeList PAL =
6300 AttributeList::get(Context, AttributeSet::get(Context, FnAttrs),
6301 AttributeSet::get(Context, RetAttrs), ArgAttrs);
6302
6303 CallBrInst *CBI =
6304 CallBrInst::Create(Ty, Callee, DefaultDest, IndirectDests, Args,
6305 BundleList);
6306 CBI->setCallingConv(CC);
6307 CBI->setAttributes(PAL);
6308 ForwardRefAttrGroups[CBI] = FwdRefAttrGrps;
6309 Inst = CBI;
6310 return false;
6311 }
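For reference, a minimal asm-goto `callbr` in the textual form this parser accepts, modeled on the commit's test style (the function, block, and asm-string contents here are illustrative, not taken from the patch's own tests):

```llvm
define void @f(i32 %x) {
entry:
  ; Inline asm that may fall through to %normal or jump to %error; the
  ; blockaddress argument passes the indirect label into the asm body.
  callbr void asm sideeffect "testl $0, $0; jne ${1:l}", "r,X"(i32 %x, i8* blockaddress(@f, %error))
          to label %normal [label %error]

normal:
  ret void

error:
  ret void
}
```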
6312
61866313 //===----------------------------------------------------------------------===//
61876314 // Binary Operators.
61886315 //===----------------------------------------------------------------------===//
569569 bool ParseCatchSwitch(Instruction *&Inst, PerFunctionState &PFS);
570570 bool ParseCatchPad(Instruction *&Inst, PerFunctionState &PFS);
571571 bool ParseCleanupPad(Instruction *&Inst, PerFunctionState &PFS);
572 bool ParseCallBr(Instruction *&Inst, PerFunctionState &PFS);
572573
573574 bool ParseUnaryOp(Instruction *&Inst, PerFunctionState &PFS, unsigned Opc,
574575 unsigned OperandType);
326326 kw_catchret,
327327 kw_catchpad,
328328 kw_cleanuppad,
329 kw_callbr,
329330
330331 kw_alloca,
331332 kw_load,
42304230 InstructionList.push_back(I);
42314231 break;
42324232 }
4233 case bitc::FUNC_CODE_INST_CALLBR: {
4234 // CALLBR: [attr, cc, norm, transfs, fty, fnid, args]
4235 unsigned OpNum = 0;
4236 AttributeList PAL = getAttributes(Record[OpNum++]);
4237 unsigned CCInfo = Record[OpNum++];
4238
4239 BasicBlock *DefaultDest = getBasicBlock(Record[OpNum++]);
4240 unsigned NumIndirectDests = Record[OpNum++];
4241 SmallVector<BasicBlock *, 16> IndirectDests;
4242 for (unsigned i = 0, e = NumIndirectDests; i != e; ++i)
4243 IndirectDests.push_back(getBasicBlock(Record[OpNum++]));
4244
4245 FunctionType *FTy = nullptr;
4246 if (CCInfo >> bitc::CALL_EXPLICIT_TYPE & 1 &&
4247 !(FTy = dyn_cast<FunctionType>(getTypeByID(Record[OpNum++]))))
4248 return error("Explicit call type is not a function type");
4249
4250 Value *Callee;
4251 if (getValueTypePair(Record, OpNum, NextValueNo, Callee))
4252 return error("Invalid record");
4253
4254 PointerType *OpTy = dyn_cast<PointerType>(Callee->getType());
4255 if (!OpTy)
4256 return error("Callee is not a pointer type");
4257 if (!FTy) {
4258 FTy = dyn_cast<FunctionType>(OpTy->getElementType());
4259 if (!FTy)
4260 return error("Callee is not of pointer to function type");
4261 } else if (OpTy->getElementType() != FTy)
4262 return error("Explicit call type does not match pointee type of "
4263 "callee operand");
4264 if (Record.size() < FTy->getNumParams() + OpNum)
4265 return error("Insufficient operands to call");
4266
4267 SmallVector<Value *, 16> Args;
4268 // Read the fixed params.
4269 for (unsigned i = 0, e = FTy->getNumParams(); i != e; ++i, ++OpNum) {
4270 if (FTy->getParamType(i)->isLabelTy())
4271 Args.push_back(getBasicBlock(Record[OpNum]));
4272 else
4273 Args.push_back(getValue(Record, OpNum, NextValueNo,
4274 FTy->getParamType(i)));
4275 if (!Args.back())
4276 return error("Invalid record");
4277 }
4278
4279 // Read type/value pairs for varargs params.
4280 if (!FTy->isVarArg()) {
4281 if (OpNum != Record.size())
4282 return error("Invalid record");
4283 } else {
4284 while (OpNum != Record.size()) {
4285 Value *Op;
4286 if (getValueTypePair(Record, OpNum, NextValueNo, Op))
4287 return error("Invalid record");
4288 Args.push_back(Op);
4289 }
4290 }
4291
4292 I = CallBrInst::Create(FTy, Callee, DefaultDest, IndirectDests, Args,
4293 OperandBundles);
4294 OperandBundles.clear();
4295 InstructionList.push_back(I);
4296 cast<CallBrInst>(I)->setCallingConv(
4297 static_cast<CallingConv::ID>((0x7ff & CCInfo) >> bitc::CALL_CCONV));
4298 cast<CallBrInst>(I)->setAttributes(PAL);
4299 break;
4300 }
42334301 case bitc::FUNC_CODE_INST_UNREACHABLE: // UNREACHABLE
42344302 I = new UnreachableInst(Context);
42354303 InstructionList.push_back(I);
27762776 Vals.push_back(VE.getValueID(CatchSwitch.getUnwindDest()));
27772777 break;
27782778 }
2779 case Instruction::CallBr: {
2780 const CallBrInst *CBI = cast<CallBrInst>(&I);
2781 const Value *Callee = CBI->getCalledValue();
2782 FunctionType *FTy = CBI->getFunctionType();
2783
2784 if (CBI->hasOperandBundles())
2785 writeOperandBundles(CBI, InstID);
2786
2787 Code = bitc::FUNC_CODE_INST_CALLBR;
2788
2789 Vals.push_back(VE.getAttributeListID(CBI->getAttributes()));
2790
2791 Vals.push_back(CBI->getCallingConv() << bitc::CALL_CCONV |
2792 1 << bitc::CALL_EXPLICIT_TYPE);
2793
2794 Vals.push_back(VE.getValueID(CBI->getDefaultDest()));
2795 Vals.push_back(CBI->getNumIndirectDests());
2796 for (unsigned i = 0, e = CBI->getNumIndirectDests(); i != e; ++i)
2797 Vals.push_back(VE.getValueID(CBI->getIndirectDest(i)));
2798
2799 Vals.push_back(VE.getTypeID(FTy));
2800 pushValueAndType(Callee, InstID, Vals);
2801
2802 // Emit value #'s for the fixed parameters.
2803 for (unsigned i = 0, e = FTy->getNumParams(); i != e; ++i)
2804 pushValue(I.getOperand(i), InstID, Vals); // fixed param.
2805
2806 // Emit type/value pairs for varargs params.
2807 if (FTy->isVarArg()) {
2808 for (unsigned i = FTy->getNumParams(), e = CBI->getNumArgOperands();
2809 i != e; ++i)
2810 pushValueAndType(I.getOperand(i), InstID, Vals); // vararg
2811 }
2812 break;
2813 }
27792814 case Instruction::Unreachable:
27802815 Code = bitc::FUNC_CODE_INST_UNREACHABLE;
27812816 AbbrevToUse = FUNCTION_INST_UNREACHABLE_ABBREV;
413413 EnumerateMetadata(&F, MD->getMetadata());
414414 }
415415 EnumerateType(I.getType());
416 if (const CallInst *CI = dyn_cast<CallInst>(&I))
417 EnumerateAttributes(CI->getAttributes());
418 else if (const InvokeInst *II = dyn_cast<InvokeInst>(&I))
419 EnumerateAttributes(II->getAttributes());
416 if (const auto *Call = dyn_cast<CallBase>(&I))
417 EnumerateAttributes(Call->getAttributes());
420418
421419 // Enumerate metadata attached with this instruction.
422420 MDs.clear();
10661066 OutStreamer->EmitLabel(MI.getOperand(0).getMCSymbol());
10671067 break;
10681068 case TargetOpcode::INLINEASM:
1069 case TargetOpcode::INLINEASM_BR:
10691070 EmitInlineAsm(&MI);
10701071 break;
10711072 case TargetOpcode::DBG_VALUE:
432432 ++OpNo; // Skip over the ID number.
433433
434434 if (Modifier[0] == 'l') { // Labels are target independent.
435 // FIXME: What if the operand isn't an MBB, report error?
436 const MCSymbol *Sym = MI->getOperand(OpNo).getMBB()->getSymbol();
437 Sym->print(OS, AP->MAI);
435 if (MI->getOperand(OpNo).isBlockAddress()) {
436 const BlockAddress *BA = MI->getOperand(OpNo).getBlockAddress();
437 MCSymbol *Sym = AP->GetBlockAddressSymbol(BA);
438 Sym->print(OS, AP->MAI);
439 } else if (MI->getOperand(OpNo).isMBB()) {
440 const MCSymbol *Sym = MI->getOperand(OpNo).getMBB()->getSymbol();
441 Sym->print(OS, AP->MAI);
442 } else {
443 Error = true;
444 }
438445 } else {
439446 if (InlineAsm::isMemKind(OpFlags)) {
440447 Error = AP->PrintAsmMemoryOperand(MI, OpNo, InlineAsmVariant,
654654 BB->getSinglePredecessor()->getSingleSuccessor()))
655655 return false;
656656
657 // Skip merging if the block's successor is also a successor to any callbr
658 // that leads to this block.
659 // FIXME: Is this really needed? Is this a correctness issue?
660 for (pred_iterator PI = pred_begin(BB), E = pred_end(BB); PI != E; ++PI) {
661 if (auto *CBI = dyn_cast<CallBrInst>((*PI)->getTerminator()))
662 for (unsigned i = 0, e = CBI->getNumSuccessors(); i != e; ++i)
663 if (DestBB == CBI->getSuccessor(i))
664 return false;
665 }
666
657667 // Try to skip merging if the unique predecessor of BB is terminated by a
658668 // switch or indirect branch instruction, and BB is used as an incoming block
659669 // of PHIs in DestBB. In such case, merging BB and DestBB would cause ISel to
12581258 return true;
12591259 }
12601260
1261 bool IRTranslator::translateCallBr(const User &U,
1262 MachineIRBuilder &MIRBuilder) {
1263 // FIXME: Implement this.
1264 return false;
1265 }
1266
12611267 bool IRTranslator::translateLandingPad(const User &U,
12621268 MachineIRBuilder &MIRBuilder) {
12631269 const LandingPadInst &LP = cast<LandingPadInst>(U);
147147 ConstantInt *BBIndexC = ConstantInt::get(ITy, BBIndex);
148148
149149 // Now rewrite the blockaddress to an integer constant based on the index.
150 // FIXME: We could potentially preserve the uses as arguments to inline asm.
151 // This would allow some uses such as diagnostic information in crashes to
152 // have higher quality even when this transform is enabled, but would break
153 // users that round-trip blockaddresses through inline assembly and then
154 // back into an indirectbr.
150 // FIXME: This part doesn't properly recognize other uses of blockaddress
151 // expressions, for instance, where they are used to pass labels to
152 // asm-goto. This part of the pass needs a rework.
155153 BA->replaceAllUsesWith(ConstantExpr::getIntToPtr(BBIndexC, BA->getType()));
156154 }
157155
10471047 break;
10481048 }
10491049
1050 case ISD::INLINEASM: {
1050 case ISD::INLINEASM:
1051 case ISD::INLINEASM_BR: {
10511052 unsigned NumOps = Node->getNumOperands();
10521053 if (Node->getOperand(NumOps-1).getValueType() == MVT::Glue)
10531054 --NumOps; // Ignore the glue operand.
10541055
10551056 // Create the inline asm machine instruction.
1056 MachineInstrBuilder MIB = BuildMI(*MF, Node->getDebugLoc(),
1057 TII->get(TargetOpcode::INLINEASM));
1057 unsigned TgtOpc = Node->getOpcode() == ISD::INLINEASM_BR
1058 ? TargetOpcode::INLINEASM_BR
1059 : TargetOpcode::INLINEASM;
1060 MachineInstrBuilder MIB =
1061 BuildMI(*MF, Node->getDebugLoc(), TII->get(TgtOpc));
10581062
10591063 // Add the asm string as an external symbol operand.
10601064 SDValue AsmStrV = Node->getOperand(InlineAsm::Op_AsmString);
8383 case ISD::CopyFromReg: NumberDeps++; break;
8484 case ISD::CopyToReg: break;
8585 case ISD::INLINEASM: break;
86 case ISD::INLINEASM_BR: break;
8687 }
8788 if (!ScegN->isMachineOpcode())
8889 continue;
119120 case ISD::CopyFromReg: break;
120121 case ISD::CopyToReg: NumberDeps++; break;
121122 case ISD::INLINEASM: break;
123 case ISD::INLINEASM_BR: break;
122124 }
123125 if (!ScegN->isMachineOpcode())
124126 continue;
444446 break;
445447
446448 case ISD::INLINEASM:
449 case ISD::INLINEASM_BR:
447450 ResCount += PriorityThree;
448451 break;
449452 }
546549 NodeNumDefs++;
547550 break;
548551 case ISD::INLINEASM:
552 case ISD::INLINEASM_BR:
549553 NodeNumDefs++;
550554 break;
551555 }
478478 }
479479
480480 for (SDNode *Node = SU->getNode(); Node; Node = Node->getGluedNode()) {
481 if (Node->getOpcode() == ISD::INLINEASM) {
481 if (Node->getOpcode() == ISD::INLINEASM ||
482 Node->getOpcode() == ISD::INLINEASM_BR) {
482483 // Inline asm can clobber physical defs.
483484 unsigned NumOps = Node->getNumOperands();
484485 if (Node->getOperand(NumOps-1).getValueType() == MVT::Glue)
707707 // removed.
708708 return;
709709 case ISD::INLINEASM:
710 case ISD::INLINEASM_BR:
710711 // For inline asm, clear the pipeline state.
711712 HazardRec->Reset();
712713 return;
13461347 }
13471348
13481349 for (SDNode *Node = SU->getNode(); Node; Node = Node->getGluedNode()) {
1349 if (Node->getOpcode() == ISD::INLINEASM) {
1350 if (Node->getOpcode() == ISD::INLINEASM ||
1351 Node->getOpcode() == ISD::INLINEASM_BR) {
13501352 // Inline asm can clobber physical defs.
13511353 unsigned NumOps = Node->getNumOperands();
13521354 if (Node->getOperand(NumOps-1).getValueType() == MVT::Glue)
25472547 InvokeMBB->normalizeSuccProbs();
25482548
25492549 // Drop into normal successor.
2550 DAG.setRoot(DAG.getNode(ISD::BR, getCurSDLoc(), MVT::Other, getControlRoot(),
2551 DAG.getBasicBlock(Return)));
2552 }
2553
2554 void SelectionDAGBuilder::visitCallBr(const CallBrInst &I) {
2555 MachineBasicBlock *CallBrMBB = FuncInfo.MBB;
2556
2557 // Deopt bundles are lowered in LowerCallSiteWithDeoptBundle, and we don't
2558 // have to do anything here to lower funclet bundles.
2559 assert(!I.hasOperandBundlesOtherThan(
2560 {LLVMContext::OB_deopt, LLVMContext::OB_funclet}) &&
2561 "Cannot lower callbrs with arbitrary operand bundles yet!");
2562
2563 assert(isa<InlineAsm>(I.getCalledValue()) &&
2564 "Only know how to handle inlineasm callbr");
2565 visitInlineAsm(&I);
2566
2567 // Retrieve successors.
2568 MachineBasicBlock *Return = FuncInfo.MBBMap[I.getDefaultDest()];
2569
2570 // Update successor info.
2571 addSuccessorWithProb(CallBrMBB, Return);
2572 for (unsigned i = 0, e = I.getNumIndirectDests(); i < e; ++i) {
2573 MachineBasicBlock *Target = FuncInfo.MBBMap[I.getIndirectDest(i)];
2574 addSuccessorWithProb(CallBrMBB, Target);
2575 }
2576 CallBrMBB->normalizeSuccProbs();
2577
2578 // Drop into default successor.
25502579 DAG.setRoot(DAG.getNode(ISD::BR, getCurSDLoc(),
25512580 MVT::Other, getControlRoot(),
25522581 DAG.getBasicBlock(Return)));
75837612
75847613 // Process the call argument. BasicBlocks are labels, currently appearing
75857614 // only in asm's.
7586 if (const BasicBlock *BB = dyn_cast<BasicBlock>(OpInfo.CallOperandVal)) {
7615 const Instruction *I = CS.getInstruction();
7616 if (isa<CallBrInst>(I) &&
7617 (ArgNo - 1) >= (cast<CallBrInst>(I)->getNumArgOperands() -
7618 cast<CallBrInst>(I)->getNumIndirectDests())) {
7619 const auto *BA = cast<BlockAddress>(OpInfo.CallOperandVal);
7620 EVT VT = TLI.getValueType(DAG.getDataLayout(), BA->getType(), true);
7621 OpInfo.CallOperand = DAG.getTargetBlockAddress(BA, VT);
7622 } else if (const auto *BB = dyn_cast<BasicBlock>(OpInfo.CallOperandVal)) {
75877623 OpInfo.CallOperand = DAG.getBasicBlock(FuncInfo.MBBMap[BB]);
75887624 } else {
75897625 OpInfo.CallOperand = getValue(OpInfo.CallOperandVal);
78827918 AsmNodeOperands[InlineAsm::Op_InputChain] = Chain;
78837919 if (Flag.getNode()) AsmNodeOperands.push_back(Flag);
78847920
7885 Chain = DAG.getNode(ISD::INLINEASM, getCurSDLoc(),
7921 unsigned ISDOpc = isa<CallBrInst>(CS.getInstruction()) ? ISD::INLINEASM_BR : ISD::INLINEASM;
7922 Chain = DAG.getNode(ISDOpc, getCurSDLoc(),
78867923 DAG.getVTList(MVT::Other, MVT::Glue), AsmNodeOperands);
78877924 Flag = Chain.getValue(1);
78887925
4545 class BasicBlock;
4646 class BranchInst;
4747 class CallInst;
48 class CallBrInst;
4849 class CatchPadInst;
4950 class CatchReturnInst;
5051 class CatchSwitchInst;
850851 private:
851852 // These all get lowered before this pass.
852853 void visitInvoke(const InvokeInst &I);
854 void visitCallBr(const CallBrInst &I);
853855 void visitResume(const ResumeInst &I);
854856
855857 void visitUnary(const User &I, unsigned Opcode);
171171 case ISD::UNDEF: return "undef";
172172 case ISD::MERGE_VALUES: return "merge_values";
173173 case ISD::INLINEASM: return "inlineasm";
174 case ISD::INLINEASM_BR: return "inlineasm_br";
174175 case ISD::EH_LABEL: return "eh_label";
175176 case ISD::HANDLENODE: return "handlenode";
176177
24402440 return !findNonImmUse(Root, N.getNode(), U, IgnoreChains);
24412441 }
24422442
2443 void SelectionDAGISel::Select_INLINEASM(SDNode *N) {
2443 void SelectionDAGISel::Select_INLINEASM(SDNode *N, bool Branch) {
24442444 SDLoc DL(N);
24452445
24462446 std::vector<SDValue> Ops(N->op_begin(), N->op_end());
24472447 SelectInlineAsmMemoryOperands(Ops, DL);
24482448
24492449 const EVT VTs[] = {MVT::Other, MVT::Glue};
2450 SDValue New = CurDAG->getNode(ISD::INLINEASM, DL, VTs, Ops);
2450 SDValue New = CurDAG->getNode(Branch ? ISD::INLINEASM_BR : ISD::INLINEASM, DL, VTs, Ops);
24512451 New->setNodeId(-1);
24522452 ReplaceUses(N, New.getNode());
24532453 CurDAG->RemoveDeadNode(N);
29972997 CurDAG->RemoveDeadNode(NodeToMatch);
29982998 return;
29992999 case ISD::INLINEASM:
3000 Select_INLINEASM(NodeToMatch);
3000 case ISD::INLINEASM_BR:
3001 Select_INLINEASM(NodeToMatch,
3002 NodeToMatch->getOpcode() == ISD::INLINEASM_BR);
30013003 return;
30023004 case ISD::READ_REGISTER:
30033005 Select_READ_REGISTER(NodeToMatch);
32883288 switch (ConstraintLetter) {
32893289 default: break;
32903290 case 'X': // Allows any operand; labels (basic block) use this.
3291 if (Op.getOpcode() == ISD::BasicBlock) {
3291 if (Op.getOpcode() == ISD::BasicBlock ||
3292 Op.getOpcode() == ISD::TargetBlockAddress) {
32923293 Ops.push_back(Op);
32933294 return;
32943295 }
37753776 return;
37763777 }
37773778
3779 if (Op.getNode() && Op.getOpcode() == ISD::TargetBlockAddress)
3780 return;
3781
37783782 // Otherwise, try to resolve it to something we know about by looking at
37793783 // the actual operand type.
37803784 if (const char *Repl = LowerXConstraint(OpInfo.ConstraintVT)) {
14541454 case Switch: return 0;
14551455 case IndirectBr: return 0;
14561456 case Invoke: return 0;
1457 case CallBr: return 0;
14571458 case Resume: return 0;
14581459 case Unreachable: return 0;
14591460 case CleanupRet: return 0;
38353835 writeOperand(II->getNormalDest(), true);
38363836 Out << " unwind ";
38373837 writeOperand(II->getUnwindDest(), true);
3838 } else if (const CallBrInst *CBI = dyn_cast<CallBrInst>(&I)) {
3839 Operand = CBI->getCalledValue();
3840 FunctionType *FTy = CBI->getFunctionType();
3841 Type *RetTy = FTy->getReturnType();
3842 const AttributeList &PAL = CBI->getAttributes();
3843
3844 // Print the calling convention being used.
3845 if (CBI->getCallingConv() != CallingConv::C) {
3846 Out << " ";
3847 PrintCallingConv(CBI->getCallingConv(), Out);
3848 }
3849
3850 if (PAL.hasAttributes(AttributeList::ReturnIndex))
3851 Out << ' ' << PAL.getAsString(AttributeList::ReturnIndex);
3852
3853 // If possible, print out the short form of the callbr instruction. We can
3854 // only do this if the first argument is a pointer to a nonvararg function,
3855 // and if the return type is not a pointer to a function.
3856 //
3857 Out << ' ';
3858 TypePrinter.print(FTy->isVarArg() ? FTy : RetTy, Out);
3859 Out << ' ';
3860 writeOperand(Operand, false);
3861 Out << '(';
3862 for (unsigned op = 0, Eop = CBI->getNumArgOperands(); op < Eop; ++op) {
3863 if (op)
3864 Out << ", ";
3865 writeParamOperand(CBI->getArgOperand(op), PAL.getParamAttributes(op));
3866 }
3867
3868 Out << ')';
3869 if (PAL.hasAttributes(AttributeList::FunctionIndex))
3870 Out << " #" << Machine.getAttributeGroupSlot(PAL.getFnAttributes());
3871
3872 writeOperandBundles(CBI);
3873
3874 Out << "\n to ";
3875 writeOperand(CBI->getDefaultDest(), true);
3876 Out << " [";
3877 for (unsigned i = 0, e = CBI->getNumIndirectDests(); i != e; ++i) {
3878 if (i != 0)
3879 Out << ", ";
3880 writeOperand(CBI->getIndirectDest(i), true);
3881 }
3882 Out << ']';
38383883 } else if (const AllocaInst *AI = dyn_cast<AllocaInst>(&I)) {
38393884 Out << ' ';
38403885 if (AI->isUsedWithInAlloca())
300300 case CatchRet: return "catchret";
301301 case CatchPad: return "catchpad";
302302 case CatchSwitch: return "catchswitch";
303 case CallBr: return "callbr";
303304
304305 // Standard unary operators...
305306 case FNeg: return "fneg";
404405 return CI->getCallingConv() == cast<CallInst>(I2)->getCallingConv() &&
405406 CI->getAttributes() == cast<CallInst>(I2)->getAttributes() &&
406407 CI->hasIdenticalOperandBundleSchema(*cast<CallInst>(I2));
408 if (const CallBrInst *CBI = dyn_cast<CallBrInst>(I1))
409 return CBI->getCallingConv() == cast<CallBrInst>(I2)->getCallingConv() &&
410 CBI->getAttributes() == cast<CallBrInst>(I2)->getAttributes() &&
411 CBI->hasIdenticalOperandBundleSchema(*cast<CallBrInst>(I2));
407412 if (const InsertValueInst *IVI = dyn_cast<InsertValueInst>(I1))
408413 return IVI->getIndices() == cast<InsertValueInst>(I2)->getIndices();
409414 if (const ExtractValueInst *EVI = dyn_cast<ExtractValueInst>(I1))
515520 return true;
516521 case Instruction::Call:
517522 case Instruction::Invoke:
523 case Instruction::CallBr:
518524 return !cast<CallBase>(this)->doesNotAccessMemory();
519525 case Instruction::Store:
520526 return !cast<StoreInst>(this)->isUnordered();
534540 return true;
535541 case Instruction::Call:
536542 case Instruction::Invoke:
543 case Instruction::CallBr:
537544 return !cast<CallBase>(this)->onlyReadsMemory();
538545 case Instruction::Load:
539546 return !cast<LoadInst>(this)->isUnordered();
771778 }
772779
773780 void Instruction::setProfWeight(uint64_t W) {
774 assert((isa<CallInst>(this) || isa<InvokeInst>(this)) &&
775 "Can only set weights for call and invoke instrucitons");
781 assert(isa<CallBase>(this) &&
782 "Can only set weights for call like instructions");
776783 SmallVector<uint32_t, 1> Weights;
777784 Weights.push_back(W);
778785 MDBuilder MDB(getContext());
255255
256256 Function *CallBase::getCaller() { return getParent()->getParent(); }
257257
258 unsigned CallBase::getNumSubclassExtraOperandsDynamic() const {
259 assert(getOpcode() == Instruction::CallBr && "Unexpected opcode!");
260 return cast<CallBrInst>(this)->getNumIndirectDests() + 1;
261 }
262
258263 bool CallBase::isIndirectCall() const {
259264 const Value *V = getCalledValue();
260265 if (isa<Function>(V) || isa<Constant>(V))
723728
724729 LandingPadInst *InvokeInst::getLandingPadInst() const {
725730 return cast<LandingPadInst>(getUnwindDest()->getFirstNonPHI());
731 }
732
733 //===----------------------------------------------------------------------===//
734 // CallBrInst Implementation
735 //===----------------------------------------------------------------------===//
736
737 void CallBrInst::init(FunctionType *FTy, Value *Fn, BasicBlock *Fallthrough,
738 ArrayRef<BasicBlock *> IndirectDests,
739 ArrayRef<Value *> Args,
740 ArrayRef<OperandBundleDef> Bundles,
741 const Twine &NameStr) {
742 this->FTy = FTy;
743
744 assert((int)getNumOperands() ==
745 ComputeNumOperands(Args.size(), IndirectDests.size(),
746 CountBundleInputs(Bundles)) &&
747 "NumOperands not set up?");
748 NumIndirectDests = IndirectDests.size();
749 setDefaultDest(Fallthrough);
750 for (unsigned i = 0; i != NumIndirectDests; ++i)
751 setIndirectDest(i, IndirectDests[i]);
752 setCalledOperand(Fn);
753
754 #ifndef NDEBUG
755 assert(((Args.size() == FTy->getNumParams()) ||
756 (FTy->isVarArg() && Args.size() > FTy->getNumParams())) &&
757 "Calling a function with bad signature");
758
759 for (unsigned i = 0, e = Args.size(); i != e; i++)
760 assert((i >= FTy->getNumParams() ||
761 FTy->getParamType(i) == Args[i]->getType()) &&
762 "Calling a function with a bad signature!");
763 #endif
764
765 std::copy(Args.begin(), Args.end(), op_begin());
766
767 auto It = populateBundleOperandInfos(Bundles, Args.size());
768 (void)It;
769 assert(It + 2 + IndirectDests.size() == op_end() && "Should add up!");
770
771 setName(NameStr);
772 }
773
774 CallBrInst::CallBrInst(const CallBrInst &CBI)
775 : CallBase(CBI.Attrs, CBI.FTy, CBI.getType(), Instruction::CallBr,
776 OperandTraits<CallBrInst>::op_end(this) - CBI.getNumOperands(),
777 CBI.getNumOperands()) {
778 setCallingConv(CBI.getCallingConv());
779 std::copy(CBI.op_begin(), CBI.op_end(), op_begin());
780 std::copy(CBI.bundle_op_info_begin(), CBI.bundle_op_info_end(),
781 bundle_op_info_begin());
782 SubclassOptionalData = CBI.SubclassOptionalData;
783 NumIndirectDests = CBI.NumIndirectDests;
784 }
785
786 CallBrInst *CallBrInst::Create(CallBrInst *CBI, ArrayRef<OperandBundleDef> OpB,
787 Instruction *InsertPt) {
788 std::vector<Value *> Args(CBI->arg_begin(), CBI->arg_end());
789
790 auto *NewCBI = CallBrInst::Create(CBI->getFunctionType(),
791 CBI->getCalledValue(),
792 CBI->getDefaultDest(),
793 CBI->getIndirectDests(),
794 Args, OpB, CBI->getName(), InsertPt);
795 NewCBI->setCallingConv(CBI->getCallingConv());
796 NewCBI->SubclassOptionalData = CBI->SubclassOptionalData;
797 NewCBI->setAttributes(CBI->getAttributes());
798 NewCBI->setDebugLoc(CBI->getDebugLoc());
799 NewCBI->NumIndirectDests = CBI->NumIndirectDests;
800 return NewCBI;
726801 }
727802
728803 //===----------------------------------------------------------------------===//
39954070 return new(getNumOperands()) InvokeInst(*this);
39964071 }
39974072
4073 CallBrInst *CallBrInst::cloneImpl() const {
4074 if (hasOperandBundles()) {
4075 unsigned DescriptorBytes = getNumOperandBundles() * sizeof(BundleOpInfo);
4076 return new (getNumOperands(), DescriptorBytes) CallBrInst(*this);
4077 }
4078 return new (getNumOperands()) CallBrInst(*this);
4079 }
4080
39984081 ResumeInst *ResumeInst::cloneImpl() const { return new (1) ResumeInst(*this); }
39994082
40004083 CleanupReturnInst *CleanupReturnInst::cloneImpl() const {
5656 // FIXME: Why isn't this in the subclass gunk??
5757 // Note, we cannot call isa<CallInst> before the CallInst has been
5858 // constructed.
59 if (SubclassID == Instruction::Call || SubclassID == Instruction::Invoke)
59 if (SubclassID == Instruction::Call || SubclassID == Instruction::Invoke ||
60 SubclassID == Instruction::CallBr)
6061 assert((VTy->isFirstClassType() || VTy->isVoidTy() || VTy->isStructTy()) &&
6162 "invalid CallInst type!");
6263 else if (SubclassID != BasicBlockVal &&
465465 void visitReturnInst(ReturnInst &RI);
466466 void visitSwitchInst(SwitchInst &SI);
467467 void visitIndirectBrInst(IndirectBrInst &BI);
468 void visitCallBrInst(CallBrInst &CBI);
468469 void visitSelectInst(SelectInst &SI);
469470 void visitUserOp1(Instruction &I);
470471 void visitUserOp2(Instruction &I) { visitUserOp1(I); }
24472448 "Indirectbr destinations must all have pointer type!", &BI);
24482449
24492450 visitTerminator(BI);
2451 }
2452
2453 void Verifier::visitCallBrInst(CallBrInst &CBI) {
2454 Assert(CBI.isInlineAsm(), "Callbr is currently only used for asm-goto!",
2455 &CBI);
2456 Assert(CBI.getType()->isVoidTy(), "Callbr return value is not supported!",
2457 &CBI);
2458 for (unsigned i = 0, e = CBI.getNumSuccessors(); i != e; ++i)
2459 Assert(CBI.getSuccessor(i)->getType()->isLabelTy(),
2460 "Callbr successors must all have pointer type!", &CBI);
2461 for (unsigned i = 0, e = CBI.getNumOperands(); i != e; ++i) {
2462 Assert(i >= CBI.getNumArgOperands() || !isa<BasicBlock>(CBI.getOperand(i)),
2463 "Using an unescaped label as a callbr argument!", &CBI);
2464 if (isa<BasicBlock>(CBI.getOperand(i)))
2465 for (unsigned j = i + 1; j != e; ++j)
2466 Assert(CBI.getOperand(i) != CBI.getOperand(j),
2467 "Duplicate callbr destination!", &CBI);
2468 }
2469
2470 visitTerminator(CBI);
24502471 }
24512472
24522473 void Verifier::visitSelectInst(SelectInst &SI) {
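As a reference point for the checks in `Verifier::visitCallBrInst` above, here is a minimal hand-written IR sketch that they accept (the function and label names are invented for illustration): void result, inline-asm callee, and distinct, non-argument destination labels.

```llvm
define void @f(i32 %x) {
entry:
  ; void return, inline-asm callee, unique destinations
  callbr void asm sideeffect "", "r,X"(i32 %x, i8* blockaddress(@f, %indirect))
      to label %fallthrough [label %indirect]

fallthrough:                          ; the "to label" (normal) successor
  ret void

indirect:                             ; the bracketed asm-goto target
  ret void
}
```

Listing `%indirect` twice in the bracket list, or returning a non-void value, would trip the "Duplicate callbr destination!" and "Callbr return value is not supported!" assertions respectively.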
589589 case ISD::FDIV:
590590 case ISD::FREM:
591591 case ISD::INLINEASM:
592 case ISD::INLINEASM_BR:
592593 case AMDGPUISD::INTERP_P1:
593594 case AMDGPUISD::INTERP_P2:
594595 case AMDGPUISD::DIV_SCALE:
96969696 do {
96979697 // Follow the chain until we find an INLINEASM node.
96989698 N = N->getOperand(0).getNode();
9699 if (N->getOpcode() == ISD::INLINEASM)
9699 if (N->getOpcode() == ISD::INLINEASM ||
9700 N->getOpcode() == ISD::INLINEASM_BR)
97009701 return true;
97019702 } while (N->getOpcode() == ISD::CopyFromReg);
97029703 return false;
53125312 return 0;
53135313 case TargetOpcode::BUNDLE:
53145314 return getInstBundleSize(MI);
5315 case TargetOpcode::INLINEASM: {
5315 case TargetOpcode::INLINEASM:
5316 case TargetOpcode::INLINEASM_BR: {
53165317 const MachineFunction *MF = MI.getParent()->getParent();
53175318 const char *AsmStr = MI.getOperand(0).getSymbolName();
53185319 return getInlineAsmLength(AsmStr, *MF->getTarget().getMCAsmInfo());
26142614 return;
26152615 break;
26162616 case ISD::INLINEASM:
2617 case ISD::INLINEASM_BR:
26172618 if (tryInlineAsm(N))
26182619 return;
26192620 break;
43184319 if (!Changed)
43194320 return false;
43204321
4321 SDValue New = CurDAG->getNode(ISD::INLINEASM, SDLoc(N),
4322 SDValue New = CurDAG->getNode(N->getOpcode(), SDLoc(N),
43224323 CurDAG->getVTList(MVT::Other, MVT::Glue), AsmNodeOperands);
43234324 New->setNodeId(-1);
43244325 ReplaceNode(N, New.getNode());
486486 case TargetOpcode::KILL:
487487 case TargetOpcode::DBG_VALUE:
488488 return 0;
489 case TargetOpcode::INLINEASM: {
489 case TargetOpcode::INLINEASM:
490 case TargetOpcode::INLINEASM_BR: {
490491 const MachineFunction &MF = *MI.getParent()->getParent();
491492 const AVRTargetMachine &TM = static_cast<const AVRTargetMachine &>(MF.getTarget());
492493 const AVRSubtarget &STI = MF.getSubtarget<AVRSubtarget>();
577577 const HexagonRegisterInfo &HRI = *Subtarget.getRegisterInfo();
578578 unsigned LR = HRI.getRARegister();
579579
580 if (Op.getOpcode() != ISD::INLINEASM || HMFI.hasClobberLR())
580 if ((Op.getOpcode() != ISD::INLINEASM &&
581 Op.getOpcode() != ISD::INLINEASM_BR) || HMFI.hasClobberLR())
581582 return Op;
582583
583584 unsigned NumOps = Op.getNumOperands();
12901291 setOperationAction(ISD::BUILD_PAIR, MVT::i64, Expand);
12911292 setOperationAction(ISD::SIGN_EXTEND_INREG, MVT::i1, Expand);
12921293 setOperationAction(ISD::INLINEASM, MVT::Other, Custom);
1294 setOperationAction(ISD::INLINEASM_BR, MVT::Other, Custom);
12931295 setOperationAction(ISD::PREFETCH, MVT::Other, Custom);
12941296 setOperationAction(ISD::READCYCLECOUNTER, MVT::i64, Custom);
12951297 setOperationAction(ISD::INTRINSIC_VOID, MVT::Other, Custom);
27392741 unsigned Opc = Op.getOpcode();
27402742
27412743 // Handle INLINEASM first.
2742 if (Opc == ISD::INLINEASM)
2744 if (Opc == ISD::INLINEASM || Opc == ISD::INLINEASM_BR)
27432745 return LowerINLINEASM(Op, DAG);
27442746
27452747 if (isHvxOperation(Op)) {
111111 case TargetOpcode::IMPLICIT_DEF:
112112 case TargetOpcode::COPY:
113113 case TargetOpcode::INLINEASM:
114 case TargetOpcode::INLINEASM_BR:
114115 break;
115116 }
116117
166167 case TargetOpcode::EH_LABEL:
167168 case TargetOpcode::COPY:
168169 case TargetOpcode::INLINEASM:
170 case TargetOpcode::INLINEASM_BR:
169171 break;
170172 }
171173 Packet.push_back(SU);
306306 case TargetOpcode::KILL:
307307 case TargetOpcode::DBG_VALUE:
308308 return 0;
309 case TargetOpcode::INLINEASM: {
309 case TargetOpcode::INLINEASM:
310 case TargetOpcode::INLINEASM_BR: {
310311 const MachineFunction *MF = MI.getParent()->getParent();
311312 const TargetInstrInfo &TII = *MF->getSubtarget().getInstrInfo();
312313 return TII.getInlineAsmLength(MI.getOperand(0).getSymbolName(),
576576 switch (MI.getOpcode()) {
577577 default:
578578 return MI.getDesc().getSize();
579 case TargetOpcode::INLINEASM: { // Inline Asm: Variable size.
579 case TargetOpcode::INLINEASM:
580 case TargetOpcode::INLINEASM_BR: { // Inline Asm: Variable size.
580581 const MachineFunction *MF = MI.getParent()->getParent();
581582 const char *AsmStr = MI.getOperand(0).getSymbolName();
582583 return getInlineAsmLength(AsmStr, *MF->getTarget().getMCAsmInfo());
999999
10001000 if (noImmForm)
10011001 OperandBase = 1;
1002 else if (OpC != TargetOpcode::INLINEASM) {
1002 else if (OpC != TargetOpcode::INLINEASM &&
1003 OpC != TargetOpcode::INLINEASM_BR) {
10031004 assert(ImmToIdxMap.count(OpC) &&
10041005 "No indexed form of load or store available!");
10051006 unsigned NewOpcode = ImmToIdxMap.find(OpC)->second;
438438 case RISCV::PseudoCALL:
439439 case RISCV::PseudoTAIL:
440440 return 8;
441 case TargetOpcode::INLINEASM: {
441 case TargetOpcode::INLINEASM:
442 case TargetOpcode::INLINEASM_BR: {
442443 const MachineFunction &MF = *MI.getParent()->getParent();
443444 const auto &TM = static_cast<const RISCVTargetMachine &>(MF.getTarget());
444445 return getInlineAsmLength(MI.getOperand(0).getSymbolName(),
311311
312312 SelectInlineAsmMemoryOperands(AsmNodeOperands, SDLoc(N));
313313
314 SDValue New = CurDAG->getNode(ISD::INLINEASM, SDLoc(N),
314 SDValue New = CurDAG->getNode(N->getOpcode(), SDLoc(N),
315315 CurDAG->getVTList(MVT::Other, MVT::Glue), AsmNodeOperands);
316316 New->setNodeId(-1);
317317 ReplaceNode(N, New.getNode());
327327
328328 switch (N->getOpcode()) {
329329 default: break;
330 case ISD::INLINEASM: {
330 case ISD::INLINEASM:
331 case ISD::INLINEASM_BR: {
331332 if (tryInlineAsm(N))
332333 return;
333334 break;
252252 printSymbolOperand(P, MO, O);
253253 break;
254254 }
255 case MachineOperand::MO_BlockAddress: {
256 MCSymbol *Sym = P.GetBlockAddressSymbol(MO.getBlockAddress());
257 Sym->print(O, P.MAI);
258 break;
259 }
255260 }
256261 }
257262
14751475 break;
14761476 }
14771477
1478 case TargetOpcode::INLINEASM: {
1478 case TargetOpcode::INLINEASM:
1479 case TargetOpcode::INLINEASM_BR: {
14791480 // The inline asm MachineInstr currently only *uses* FP registers for the
14801481 // 'f' constraint. These should be turned into the current ST(x) register
14811482 // in the machine instr.
55 //
66 //===----------------------------------------------------------------------===//
77 //
8 // This file implements the visitCall and visitInvoke functions.
8 // This file implements the visitCall, visitInvoke, and visitCallBr functions.
99 //
1010 //===----------------------------------------------------------------------===//
1111
18331833 IntrinsicInst *II = dyn_cast<IntrinsicInst>(&CI);
18341834 if (!II) return visitCallBase(CI);
18351835
1836 // Intrinsics cannot occur in an invoke, so handle them here instead of in
1837 // visitCallBase.
1836 // Intrinsics cannot occur in an invoke or a callbr, so handle them here
1837 // instead of in visitCallBase.
18381838 if (auto *MI = dyn_cast<AnyMemIntrinsic>(II)) {
18391839 bool Changed = false;
18401840
40164016 return visitCallBase(II);
40174017 }
40184018
4019 // CallBrInst simplification
4020 Instruction *InstCombiner::visitCallBrInst(CallBrInst &CBI) {
4021 return visitCallBase(CBI);
4022 }
4023
40194024 /// If this cast does not affect the value passed through the varargs area, we
40204025 /// can eliminate the use of the cast.
40214026 static bool isSafeToEliminateVarargsCast(const CallBase &Call,
41444149 return nullptr;
41454150 }
41464151
4147 /// Improvements for call and invoke instructions.
4152 /// Improvements for call, callbr and invoke instructions.
41484153 Instruction *InstCombiner::visitCallBase(CallBase &Call) {
41494154 if (isAllocLikeFn(&Call, &TLI))
41504155 return visitAllocSite(Call);
41774182 }
41784183
41794184 // If the callee is a pointer to a function, attempt to move any casts to the
4180 // arguments of the call/invoke.
4185 // arguments of the call/callbr/invoke.
41814186 Value *Callee = Call.getCalledValue();
41824187 if (!isa(Callee) && transformConstExprCastCall(Call))
41834188 return nullptr;
42104215 if (isa<CallInst>(OldCall))
42114216 return eraseInstFromFunction(*OldCall);
42124217
4213 // We cannot remove an invoke, because it would change the CFG, just
4214 // change the callee to a null pointer.
4215 cast<InvokeInst>(OldCall)->setCalledFunction(
4218 // We cannot remove an invoke or a callbr, because it would change the
4219 // CFG, just change the callee to a null pointer.
4220 cast<CallBase>(OldCall)->setCalledFunction(
42164221 CalleeF->getFunctionType(),
42174222 Constant::getNullValue(CalleeF->getType()));
42184223 return nullptr;
42274232 if (!Call.getType()->isVoidTy())
42284233 replaceInstUsesWith(Call, UndefValue::get(Call.getType()));
42294234
4230 if (isa<InvokeInst>(Call)) {
4231 // Can't remove an invoke because we cannot change the CFG.
4235 if (Call.isTerminator()) {
4236 // Can't remove an invoke or callbr because we cannot change the CFG.
42324237 return nullptr;
42334238 }
42344239
42814286 }
42824287
42834288 /// If the callee is a constexpr cast of a function, attempt to move the cast to
4284 /// the arguments of the call/invoke.
4289 /// the arguments of the call/callbr/invoke.
42854290 bool InstCombiner::transformConstExprCastCall(CallBase &Call) {
42864291 auto *Callee = dyn_cast<Function>(Call.getCalledValue()->stripPointerCasts());
42874292 if (!Callee)
43324337 return false; // Attribute not compatible with transformed value.
43334338 }
43344339
4335 // If the callbase is an invoke instruction, and the return value is used by
4336 // a PHI node in a successor, we cannot change the return type of the call
4337 // because there is no place to put the cast instruction (without breaking
4338 // the critical edge). Bail out in this case.
4339 if (!Caller->use_empty())
4340 // If the callbase is an invoke/callbr instruction, and the return value is
4341 // used by a PHI node in a successor, we cannot change the return type of
4342 // the call because there is no place to put the cast instruction (without
4343 // breaking the critical edge). Bail out in this case.
4344 if (!Caller->use_empty()) {
43404345 if (InvokeInst *II = dyn_cast<InvokeInst>(Caller))
43414346 for (User *U : II->users())
43424347 if (PHINode *PN = dyn_cast<PHINode>(U))
43434348 if (PN->getParent() == II->getNormalDest() ||
43444349 PN->getParent() == II->getUnwindDest())
43454350 return false;
4351 // FIXME: Be conservative for callbr to avoid a quadratic search.
4352 if (CallBrInst *CBI = dyn_cast<CallBrInst>(Caller))
4353 return false;
4354 }
43464355 }
43474356
43484357 unsigned NumActualArgs = Call.arg_size();
44964505 if (InvokeInst *II = dyn_cast<InvokeInst>(Caller)) {
44974506 NewCall = Builder.CreateInvoke(Callee, II->getNormalDest(),
44984507 II->getUnwindDest(), Args, OpBundles);
4508 } else if (CallBrInst *CBI = dyn_cast<CallBrInst>(Caller)) {
4509 NewCall = Builder.CreateCallBr(Callee, CBI->getDefaultDest(),
4510 CBI->getIndirectDests(), Args, OpBundles);
44994511 } else {
45004512 NewCall = Builder.CreateCall(Callee, Args, OpBundles);
45014513 cast<CallInst>(NewCall)->setTailCallKind(
45194531 NV = NC = CastInst::CreateBitOrPointerCast(NC, OldRetTy);
45204532 NC->setDebugLoc(Caller->getDebugLoc());
45214533
4522 // If this is an invoke instruction, we should insert it after the first
4523 // non-phi, instruction in the normal successor block.
4534 // If this is an invoke/callbr instruction, we should insert it after the
4535 // first non-phi instruction in the normal successor block.
45244536 if (InvokeInst *II = dyn_cast<InvokeInst>(Caller)) {
45254537 BasicBlock::iterator I = II->getNormalDest()->getFirstInsertionPt();
4538 InsertNewInstBefore(NC, *I);
4539 } else if (CallBrInst *CBI = dyn_cast<CallBrInst>(Caller)) {
4540 BasicBlock::iterator I = CBI->getDefaultDest()->getFirstInsertionPt();
45264541 InsertNewInstBefore(NC, *I);
45274542 } else {
45284543 // Otherwise, it's a call, just insert cast right after the call.
46724687 NewArgs, OpBundles);
46734688 cast<InvokeInst>(NewCaller)->setCallingConv(II->getCallingConv());
46744689 cast<InvokeInst>(NewCaller)->setAttributes(NewPAL);
4690 } else if (CallBrInst *CBI = dyn_cast<CallBrInst>(&Call)) {
4691 NewCaller =
4692 CallBrInst::Create(NewFTy, NewCallee, CBI->getDefaultDest(),
4693 CBI->getIndirectDests(), NewArgs, OpBundles);
4694 cast<CallBrInst>(NewCaller)->setCallingConv(CBI->getCallingConv());
4695 cast<CallBrInst>(NewCaller)->setAttributes(NewPAL);
46754696 } else {
46764697 NewCaller = CallInst::Create(NewFTy, NewCallee, NewArgs, OpBundles);
46774698 cast<CallInst>(NewCaller)->setTailCallKind(
391391 Instruction *visitSelectInst(SelectInst &SI);
392392 Instruction *visitCallInst(CallInst &CI);
393393 Instruction *visitInvokeInst(InvokeInst &II);
394 Instruction *visitCallBrInst(CallBrInst &CBI);
394395
395396 Instruction *SliceUpIllegalIntegerPHI(PHINode &PN);
396397 Instruction *visitPHINode(PHINode &PN);
920920
921921 // If the InVal is an invoke at the end of the pred block, then we can't
922922 // insert a computation after it without breaking the edge.
923 if (InvokeInst *II = dyn_cast<InvokeInst>(InVal))
924 if (II->getParent() == NonConstBB)
923 if (isa<CallBase>(InVal))
924 if (cast<CallBase>(InVal)->getParent() == NonConstBB)
925925 return nullptr;
926926
927927 // If the incoming non-constant value is in I's block, we will remove one
11301130 return false;
11311131 }
11321132
1133 // FIXME: Can we support the fallthrough edge?
1134 if (isa<CallBrInst>(Pred->getTerminator())) {
1135 LLVM_DEBUG(
1136 dbgs() << "COULD NOT PRE LOAD BECAUSE OF CALLBR CRITICAL EDGE '"
1137 << Pred->getName() << "': " << *LI << '\n');
1138 return false;
1139 }
1140
11331141 if (LoadBB->isEHPad()) {
11341142 LLVM_DEBUG(
11351143 dbgs() << "COULD NOT PRE LOAD BECAUSE OF AN EH PAD CRITICAL EDGE '"
21662174 return false;
21672175
21682176 // We don't currently value number ANY inline asm calls.
2169 if (CallInst *CallI = dyn_cast<CallInst>(CurInst))
2170 if (CallI->isInlineAsm())
2177 if (auto *CallB = dyn_cast<CallBase>(CurInst))
2178 if (CallB->isInlineAsm())
21712179 return false;
21722180
21732181 uint32_t ValNo = VN.lookup(CurInst);
22482256
22492257 // Don't do PRE across indirect branch.
22502258 if (isa<IndirectBrInst>(PREPred->getTerminator()))
2259 return false;
2260
2261 // Don't do PRE across callbr.
2262 // FIXME: Can we do this across the fallthrough edge?
2263 if (isa<CallBrInst>(PREPred->getTerminator()))
22512264 return false;
22522265
22532266 // We can't do PRE safely on a critical edge, so instead we schedule
10541054 Condition = IB->getAddress()->stripPointerCasts();
10551055 Preference = WantBlockAddress;
10561056 } else {
1057 return false; // Must be an invoke.
1057 return false; // Must be an invoke or callbr.
10581058 }
10591059
10601060 // Run constant folding to see if we can reduce the condition to a simple
14271427 // Add all the unavailable predecessors to the PredsToSplit list.
14281428 for (BasicBlock *P : predecessors(LoadBB)) {
14291429 // If the predecessor is an indirect goto, we can't split the edge.
1430 if (isa<IndirectBrInst>(P->getTerminator()))
1430 // Same for CallBr.
1431 if (isa<IndirectBrInst>(P->getTerminator()) ||
1432 isa<CallBrInst>(P->getTerminator()))
14311433 return false;
14321434
14331435 if (!AvailablePredSet.count(P))
16401642 ++PredWithKnownDest;
16411643
16421644 // If the predecessor ends with an indirect goto, we can't change its
1643 // destination.
1644 if (isa<IndirectBrInst>(Pred->getTerminator()))
1645 // destination. Same for CallBr.
1646 if (isa<IndirectBrInst>(Pred->getTerminator()) ||
1647 isa<CallBrInst>(Pred->getTerminator()))
16451648 continue;
16461649
16471650 PredToDestList.push_back(std::make_pair(Pred, DestBB));
637637 visitTerminator(II);
638638 }
639639
640 void visitCallBrInst (CallBrInst &CBI) {
641 visitCallSite(&CBI);
642 visitTerminator(CBI);
643 }
644
640645 void visitCallSite (CallSite CS);
641646 void visitResumeInst (ResumeInst &I) { /*returns void*/ }
642647 void visitUnreachableInst(UnreachableInst &I) { /*returns void*/ }
729734
730735 // If we didn't find our destination in the IBR successor list, then we
731736 // have undefined behavior. Its ok to assume no successor is executable.
737 return;
738 }
739
740 // In case of callbr, we pessimistically assume that all successors are
741 // feasible.
742 if (isa<CallBrInst>(&TI)) {
743 Succs.assign(TI.getNumSuccessors(), true);
732744 return;
733745 }
734746
15961608 return true;
15971609 case Instruction::Call:
15981610 case Instruction::Invoke:
1611 case Instruction::CallBr:
15991612 // There are two reasons a call can have an undef result
16001613 // 1. It could be tracked.
16011614 // 2. It could be constant-foldable.
548548 // all BlockAddress uses would need to be updated.
549549 assert(!isa<IndirectBrInst>(Preds[i]->getTerminator()) &&
550550 "Cannot split an edge from an IndirectBrInst");
551 assert(!isa<CallBrInst>(Preds[i]->getTerminator()) &&
552 "Cannot split an edge from a CallBrInst");
551553 Preds[i]->getTerminator()->replaceUsesOfWith(BB, NewBB);
552554 }
553555
142142 // Splitting the critical edge to a pad block is non-trivial. Don't do
143143 // it in this generic function.
144144 if (DestBB->isEHPad()) return nullptr;
145
146 // Don't split the non-fallthrough edge from a callbr.
147 if (isa<CallBrInst>(TI) && SuccNum > 0)
148 return nullptr;
145149
146150 // Create a new basic block, linking it into the CFG.
147151 BasicBlock *NewBB = BasicBlock::Create(TI->getContext(),
15031503 assert(TheCall->getParent() && TheCall->getFunction()
15041504 && "Instruction not in function!");
15051505
1506 // FIXME: we don't inline callbr yet.
1507 if (isa<CallBrInst>(TheCall))
1508 return false;
1509
15061510 // If IFI has any state in it, zap it before we fill it in.
15071511 IFI.reset();
15081512
17281732 Instruction *NewI = nullptr;
17291733 if (isa<CallInst>(I))
17301734 NewI = CallInst::Create(cast<CallInst>(I), OpDefs, I);
1735 else if (isa<CallBrInst>(I))
1736 NewI = CallBrInst::Create(cast<CallBrInst>(I), OpDefs, I);
17311737 else
17321738 NewI = InvokeInst::Create(cast<InvokeInst>(I), OpDefs, I);
17331739
20302036 Instruction *NewInst;
20312037 if (CS.isCall())
20322038 NewInst = CallInst::Create(cast<CallInst>(I), OpBundles, I);
2039 else if (CS.isCallBr())
2040 NewInst = CallBrInst::Create(cast<CallBrInst>(I), OpBundles, I);
20332041 else
20342042 NewInst = InvokeInst::Create(cast<InvokeInst>(I), OpBundles, I);
20352043 NewInst->takeName(I);
995995 }
996996 }
997997
998 // We cannot fold the block if it's a branch to an already present callbr
999 // successor because that creates duplicate successors.
1000 for (auto I = pred_begin(BB), E = pred_end(BB); I != E; ++I) {
1001 if (auto *CBI = dyn_cast<CallBrInst>((*I)->getTerminator())) {
1002 if (Succ == CBI->getDefaultDest())
1003 return false;
1004 for (unsigned i = 0, e = CBI->getNumIndirectDests(); i != e; ++i)
1005 if (Succ == CBI->getIndirectDest(i))
1006 return false;
1007 }
1008 }
1009
9981010 LLVM_DEBUG(dbgs() << "Killing Trivial BB: \n" << *BB);
9991011
10001012 SmallVector<DominatorTree::UpdateType, 32> Updates;
2525 // contains or is entered by an indirectbr instruction, it may not be possible
2626 // to transform the loop and make these guarantees. Client code should check
2727 // that these conditions are true before relying on them.
28 //
29 // Similar complications arise from callbr instructions, particularly in
30 // asm-goto where blockaddress expressions are used.
2831 //
2932 // Note that the simplifycfg pass will clean up blocks which are split out but
3033 // end up being unnecessary, so usage of this pass should not pessimize
122125 PI != PE; ++PI) {
123126 BasicBlock *P = *PI;
124127 if (!L->contains(P)) { // Coming in from outside the loop?
125 // If the loop is branched to from an indirect branch, we won't
128 // If the loop is branched to from an indirect terminator, we won't
126129 // be able to fully transform the loop, because it prohibits
127130 // edge splitting.
128 if (isa<IndirectBrInst>(P->getTerminator())) return nullptr;
131 if (P->getTerminator()->isIndirectTerminator())
132 return nullptr;
129133
130134 // Keep track of it.
131135 OutsideBlocks.push_back(P);
234238 for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i) {
235239 if (PN->getIncomingValue(i) != PN ||
236240 !L->contains(PN->getIncomingBlock(i))) {
237 // We can't split indirectbr edges.
238 if (isa<IndirectBrInst>(PN->getIncomingBlock(i)->getTerminator()))
241 // We can't split indirect control flow edges.
242 if (PN->getIncomingBlock(i)->getTerminator()->isIndirectTerminator())
239243 return nullptr;
240244 OuterLoopPreds.push_back(PN->getIncomingBlock(i));
241245 }
356360 for (pred_iterator I = pred_begin(Header), E = pred_end(Header); I != E; ++I){
357361 BasicBlock *P = *I;
358362
359 // Indirectbr edges cannot be split, so we must fail if we find one.
360 if (isa<IndirectBrInst>(P->getTerminator()))
363 // Indirect edges cannot be split, so we must fail if we find one.
364 if (P->getTerminator()->isIndirectTerminator())
361365 return nullptr;
362366
363367 if (P != Preheader) BackedgeBlocks.push_back(P);
6363 if (L->contains(PredBB)) {
6464 if (isa<IndirectBrInst>(PredBB->getTerminator()))
6565 // We cannot rewrite exiting edges from an indirectbr.
66 return false;
67 if (isa<CallBrInst>(PredBB->getTerminator()))
68 // We cannot rewrite exiting edges from a callbr.
6669 return false;
6770
6871 InLoopPredecessors.push_back(PredBB);
12641264 while (isa<DbgInfoIntrinsic>(I2))
12651265 I2 = &*BB2_Itr++;
12661266 }
1267 // FIXME: Can we define a safety predicate for CallBr?
12671268 if (isa<PHINode>(I1) || !I1->isIdenticalToWhenDefined(I2) ||
1268 (isa<InvokeInst>(I1) && !isSafeToHoistInvoke(BB1, BB2, I1, I2)))
1269 (isa<InvokeInst>(I1) && !isSafeToHoistInvoke(BB1, BB2, I1, I2)) ||
1270 isa<CallBrInst>(I1))
12691271 return false;
12701272
12711273 BasicBlock *BIParent = BI->getParent();
13481350
13491351 HoistTerminator:
13501352 // It may not be possible to hoist an invoke.
1353 // FIXME: Can we define a safety predicate for CallBr?
13511354 if (isa<InvokeInst>(I1) && !isSafeToHoistInvoke(BB1, BB2, I1, I2))
1355 return Changed;
1356
1357 // TODO: callbr hoisting currently disabled pending further study.
1358 if (isa<CallBrInst>(I1))
13521359 return Changed;
13531360
13541361 for (BasicBlock *Succ : successors(BB1)) {
14421449 // Conservatively return false if I is an inline-asm instruction. Sinking
14431450 // and merging inline-asm instructions can potentially create arguments
14441451 // that cannot satisfy the inline-asm constraints.
1445 if (const auto *C = dyn_cast<CallInst>(I))
1452 if (const auto *C = dyn_cast<CallBase>(I))
14461453 if (C->isInlineAsm())
14471454 return false;
14481455
15051512 // We can't create a PHI from this GEP.
15061513 return false;
15071514 // Don't create indirect calls! The called value is the final operand.
1508 if ((isa<CallInst>(I0) || isa<InvokeInst>(I0)) && OI == OE - 1) {
1515 if (isa<CallBase>(I0) && OI == OE - 1) {
15091516 // FIXME: if the call was *already* indirect, we should do this.
15101517 return false;
15111518 }
0 ; RUN: llvm-dis < %s.bc | FileCheck %s
1
2 ; callbr.ll.bc was generated by passing this file to llvm-as.
3
4 define i32 @test_asm_goto(i32 %x){
5 entry:
6 ; CHECK: callbr void asm "", "r,X"(i32 %x, i8* blockaddress(@test_asm_goto, %fail))
7 ; CHECK-NEXT: to label %normal [label %fail]
8 callbr void asm "", "r,X"(i32 %x, i8* blockaddress(@test_asm_goto, %fail)) to label %normal [label %fail]
9 normal:
10 ret i32 1
11 fail:
12 ret i32 0
13 }
Binary diff not shown
0 ; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
1 ; RUN: llc < %s -mtriple=x86_64-unknown-linux-gnu | FileCheck %s
2
3 ; This test asserted in MachineBlockPlacement during asm-goto bring up.
4
5 %struct.wibble = type { %struct.pluto, i32, i8* }
6 %struct.pluto = type { i32, i32, i32 }
7
8 @global = external global [0 x %struct.wibble]
9
10 define i32 @foo(i32 %arg, i32 (i8*)* %arg3) nounwind {
11 ; CHECK-LABEL: foo:
12 ; CHECK: # %bb.0: # %bb
13 ; CHECK-NEXT: pushq %rbp
14 ; CHECK-NEXT: pushq %r15
15 ; CHECK-NEXT: pushq %r14
16 ; CHECK-NEXT: pushq %r13
17 ; CHECK-NEXT: pushq %r12
18 ; CHECK-NEXT: pushq %rbx
19 ; CHECK-NEXT: pushq %rax
20 ; CHECK-NEXT: movabsq $-2305847407260205056, %rbx # imm = 0xDFFFFC0000000000
21 ; CHECK-NEXT: xorl %eax, %eax
22 ; CHECK-NEXT: testb %al, %al
23 ; CHECK-NEXT: jne .LBB0_5
24 ; CHECK-NEXT: # %bb.1: # %bb5
25 ; CHECK-NEXT: movq %rsi, %r14
26 ; CHECK-NEXT: movslq %edi, %rbp
27 ; CHECK-NEXT: leaq (,%rbp,8), %rax
28 ; CHECK-NEXT: leaq global(%rax,%rax,2), %r15
29 ; CHECK-NEXT: leaq global+4(%rax,%rax,2), %r12
30 ; CHECK-NEXT: xorl %r13d, %r13d
31 ; CHECK-NEXT: .p2align 4, 0x90
32 ; CHECK-NEXT: .LBB0_2: # %bb8
33 ; CHECK-NEXT: # =>This Inner Loop Header: Depth=1
34 ; CHECK-NEXT: callq bar
35 ; CHECK-NEXT: movq %rax, %rbx
36 ; CHECK-NEXT: movq %rax, %rdi
37 ; CHECK-NEXT: callq *%r14
38 ; CHECK-NEXT: movq %r15, %rdi
39 ; CHECK-NEXT: callq hoge
40 ; CHECK-NEXT: movq %r12, %rdi
41 ; CHECK-NEXT: callq hoge
42 ; CHECK-NEXT: testb %r13b, %r13b
43 ; CHECK-NEXT: jne .LBB0_2
44 ; CHECK-NEXT: # %bb.3: # %bb15
45 ; CHECK-NEXT: leaq (%rbp,%rbp,2), %rax
46 ; CHECK-NEXT: movq %rbx, global+16(,%rax,8)
47 ; CHECK-NEXT: movabsq $-2305847407260205056, %rbx # imm = 0xDFFFFC0000000000
48 ; CHECK-NEXT: #APP
49 ; CHECK-NEXT: #NO_APP
50 ; CHECK-NEXT: .LBB0_4: # %bb17
51 ; CHECK-NEXT: callq widget
52 ; CHECK-NEXT: .Ltmp0: # Block address taken
53 ; CHECK-NEXT: .LBB0_5: # %bb18
54 ; CHECK-NEXT: movw $0, 14(%rbx)
55 ; CHECK-NEXT: addq $8, %rsp
56 ; CHECK-NEXT: popq %rbx
57 ; CHECK-NEXT: popq %r12
58 ; CHECK-NEXT: popq %r13
59 ; CHECK-NEXT: popq %r14
60 ; CHECK-NEXT: popq %r15
61 ; CHECK-NEXT: popq %rbp
62 ; CHECK-NEXT: retq
63 bb:
64 %tmp = add i64 0, -2305847407260205056
65 %tmp4 = sext i32 %arg to i64
66 br i1 undef, label %bb18, label %bb5
67
68 bb5: ; preds = %bb
69 %tmp6 = getelementptr [0 x %struct.wibble], [0 x %struct.wibble]* @global, i64 0, i64 %tmp4, i32 0, i32 0
70 %tmp7 = getelementptr [0 x %struct.wibble], [0 x %struct.wibble]* @global, i64 0, i64 %tmp4, i32 0, i32 1
71 br label %bb8
72
73 bb8: ; preds = %bb8, %bb5
74 %tmp9 = call i8* @bar(i64 undef)
75 %tmp10 = call i32 %arg3(i8* nonnull %tmp9)
76 %tmp11 = ptrtoint i32* %tmp6 to i64
77 call void @hoge(i64 %tmp11)
78 %tmp12 = ptrtoint i32* %tmp7 to i64
79 %tmp13 = add i64 undef, -2305847407260205056
80 call void @hoge(i64 %tmp12)
81 %tmp14 = icmp eq i32 0, 0
82 br i1 %tmp14, label %bb15, label %bb8
83
84 bb15: ; preds = %bb8
85 %tmp16 = getelementptr [0 x %struct.wibble], [0 x %struct.wibble]* @global, i64 0, i64 %tmp4, i32 2
86 store i8* %tmp9, i8** %tmp16
87 callbr void asm sideeffect "", "X"(i8* blockaddress(@foo, %bb18))
88 to label %bb17 [label %bb18]
89
90 bb17: ; preds = %bb15
91 call void @widget()
92 br label %bb18
93
94 bb18: ; preds = %bb17, %bb15, %bb
95 %tmp19 = add i64 %tmp, 14
96 %tmp20 = inttoptr i64 %tmp19 to i16*
97 store i16 0, i16* %tmp20
98 ret i32 undef
99 }
100
101 declare i8* @bar(i64)
102
103 declare void @widget()
104
105 declare void @hoge(i64)
0 ; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
1 ; RUN: llc < %s -mtriple=x86_64-unknown-linux-gnu | FileCheck %s
2
3 ; This test hung in the BranchFolding pass during asm-goto bring-up.
4
5 @e = global i32 0
6 @j = global i32 0
7
8 define void @n(i32* %o, i32 %p, i32 %u) nounwind {
9 ; CHECK-LABEL: n:
10 ; CHECK: # %bb.0: # %entry
11 ; CHECK-NEXT: pushq %rbp
12 ; CHECK-NEXT: pushq %r15
13 ; CHECK-NEXT: pushq %r14
14 ; CHECK-NEXT: pushq %r13
15 ; CHECK-NEXT: pushq %r12
16 ; CHECK-NEXT: pushq %rbx
17 ; CHECK-NEXT: pushq %rax
18 ; CHECK-NEXT: movl %edx, %ebx
19 ; CHECK-NEXT: movl %esi, %r12d
20 ; CHECK-NEXT: movq %rdi, %r15
21 ; CHECK-NEXT: callq c
22 ; CHECK-NEXT: movl %eax, %r13d
23 ; CHECK-NEXT: movq %r15, %rdi
24 ; CHECK-NEXT: callq l
25 ; CHECK-NEXT: testl %eax, %eax
26 ; CHECK-NEXT: je .LBB0_1
27 ; CHECK-NEXT: .LBB0_10: # %cleanup
28 ; CHECK-NEXT: addq $8, %rsp
29 ; CHECK-NEXT: popq %rbx
30 ; CHECK-NEXT: popq %r12
31 ; CHECK-NEXT: popq %r13
32 ; CHECK-NEXT: popq %r14
33 ; CHECK-NEXT: popq %r15
34 ; CHECK-NEXT: popq %rbp
35 ; CHECK-NEXT: retq
36 ; CHECK-NEXT: .LBB0_1: # %if.end
37 ; CHECK-NEXT: movl %ebx, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
38 ; CHECK-NEXT: cmpl $0, {{.*}}(%rip)
39 ; CHECK-NEXT: # implicit-def: $ebx
40 ; CHECK-NEXT: # implicit-def: $r14d
41 ; CHECK-NEXT: je .LBB0_4
42 ; CHECK-NEXT: # %bb.2: # %if.then4
43 ; CHECK-NEXT: movslq %r12d, %rdi
44 ; CHECK-NEXT: callq m
45 ; CHECK-NEXT: # implicit-def: $ebx
46 ; CHECK-NEXT: # implicit-def: $ebp
47 ; CHECK-NEXT: .LBB0_3: # %r
48 ; CHECK-NEXT: callq c
49 ; CHECK-NEXT: movl %ebp, %r14d
50 ; CHECK-NEXT: .LBB0_4: # %if.end8
51 ; CHECK-NEXT: movl %ebx, %edi
52 ; CHECK-NEXT: callq i
53 ; CHECK-NEXT: movl %eax, %ebp
54 ; CHECK-NEXT: orl %r14d, %ebp
55 ; CHECK-NEXT: testl %r13d, %r13d
56 ; CHECK-NEXT: je .LBB0_6
57 ; CHECK-NEXT: # %bb.5:
58 ; CHECK-NEXT: andl $4, %ebx
59 ; CHECK-NEXT: jmp .LBB0_3
60 ; CHECK-NEXT: .LBB0_6: # %if.end12
61 ; CHECK-NEXT: testl %ebp, %ebp
62 ; CHECK-NEXT: je .LBB0_9
63 ; CHECK-NEXT: # %bb.7: # %if.then14
64 ; CHECK-NEXT: movl {{[-0-9]+}}(%r{{[sb]}}p), %eax # 4-byte Reload
65 ; CHECK-NEXT: #APP
66 ; CHECK-NEXT: #NO_APP
67 ; CHECK-NEXT: jmp .LBB0_10
68 ; CHECK-NEXT: .Ltmp0: # Block address taken
69 ; CHECK-NEXT: .LBB0_8: # %if.then20.critedge
70 ; CHECK-NEXT: movl {{.*}}(%rip), %edi
71 ; CHECK-NEXT: movslq %eax, %rcx
72 ; CHECK-NEXT: movl $1, %esi
73 ; CHECK-NEXT: movq %r15, %rdx
74 ; CHECK-NEXT: addq $8, %rsp
75 ; CHECK-NEXT: popq %rbx
76 ; CHECK-NEXT: popq %r12
77 ; CHECK-NEXT: popq %r13
78 ; CHECK-NEXT: popq %r14
79 ; CHECK-NEXT: popq %r15
80 ; CHECK-NEXT: popq %rbp
81 ; CHECK-NEXT: jmp k # TAILCALL
82 ; CHECK-NEXT: .LBB0_9: # %if.else
83 ; CHECK-NEXT: incq 0
84 ; CHECK-NEXT: jmp .LBB0_10
85 entry:
86 %call = tail call i32 @c()
87 %call1 = tail call i32 @l(i32* %o)
88 %tobool = icmp eq i32 %call1, 0
89 br i1 %tobool, label %if.end, label %cleanup
90
91 if.end: ; preds = %entry
92 %0 = load i32, i32* @e
93 %tobool3 = icmp eq i32 %0, 0
94 br i1 %tobool3, label %if.end8, label %if.then4, !prof !0
95
96 if.then4: ; preds = %if.end
97 %conv5 = sext i32 %p to i64
98 %call6 = tail call i32 @m(i64 %conv5)
99 br label %r
100
101 r: ; preds = %if.end8, %if.then4
102 %flags.0 = phi i32 [ undef, %if.then4 ], [ %and, %if.end8 ]
103 %major.0 = phi i32 [ undef, %if.then4 ], [ %or, %if.end8 ]
104 %call7 = tail call i32 @c()
105 br label %if.end8
106
107 if.end8: ; preds = %r, %if.end
108 %flags.1 = phi i32 [ %flags.0, %r ], [ undef, %if.end ]
109 %major.1 = phi i32 [ %major.0, %r ], [ undef, %if.end ]
110 %call9 = tail call i32 @i(i32 %flags.1)
111 %or = or i32 %call9, %major.1
112 %and = and i32 %flags.1, 4
113 %tobool10 = icmp eq i32 %call, 0
114 br i1 %tobool10, label %if.end12, label %r
115
116 if.end12: ; preds = %if.end8
117 %tobool13 = icmp eq i32 %or, 0
118 br i1 %tobool13, label %if.else, label %if.then14
119
120 if.then14: ; preds = %if.end12
121 callbr void asm sideeffect "", "X,~{dirflag},~{fpsr},~{flags}"(i8* blockaddress(@n, %if.then20.critedge))
122 to label %cleanup [label %if.then20.critedge]
123
124 if.then20.critedge: ; preds = %if.then14
125 %1 = load i32, i32* @j
126 %conv21 = sext i32 %u to i64
127 %call22 = tail call i32 @k(i32 %1, i64 1, i32* %o, i64 %conv21)
128 br label %cleanup
129
130 if.else: ; preds = %if.end12
131 %2 = load i64, i64* null
132 %inc = add i64 %2, 1
133 store i64 %inc, i64* null
134 br label %cleanup
135
136 cleanup: ; preds = %if.else, %if.then20.critedge, %if.then14, %entry
137 ret void
138 }
139
140 declare i32 @c()
141
142 declare i32 @l(i32*)
143
144 declare i32 @m(i64)
145
146 declare i32 @i(i32)
147
148 declare i32 @k(i32, i64, i32*, i64)
149
150 !0 = !{!"branch_weights", i32 2000, i32 1}
0 ; RUN: not llc -mtriple=i686-- < %s 2> %t
1 ; RUN: FileCheck %s < %t
2
3 ; CHECK: Duplicate callbr destination
4
5 ; A test for the asm-goto duplicate-labels limitation.
6
7 define i32 @test(i32 %a) {
8 entry:
9 %0 = add i32 %a, 4
10 callbr void asm "xorl $0, $0; jmp ${1:l}", "r,X,~{dirflag},~{fpsr},~{flags}"(i32 %0, i8* blockaddress(@test, %fail)) to label %fail [label %fail]
11
12 fail:
13 ret i32 1
14 }
0 ; RUN: not llc -mtriple=i686-- < %s 2> %t
1 ; RUN: FileCheck %s < %t
2
3 ; CHECK: Duplicate callbr destination
4
5 ; A test for the asm-goto duplicate-labels limitation.
6
7 define i32 @test(i32 %a) {
8 entry:
9 %0 = add i32 %a, 4
10 callbr void asm "xorl $0, $0; jmp ${1:l}", "r,X,X,~{dirflag},~{fpsr},~{flags}"(i32 %0, i8* blockaddress(@test, %fail), i8* blockaddress(@test, %fail)) to label %normal [label %fail, label %fail]
11
12 normal:
13 ret i32 %0
14
15 fail:
16 ret i32 1
17 }
0 ; RUN: not llc -mtriple=i686-- < %s 2> %t
1 ; RUN: FileCheck %s < %t
2
3 ; CHECK: error: asm-goto outputs not supported
4
5 ; A test that asm-goto outputs are rejected.
6
7 define i32 @test(i32 %a) {
8 entry:
9 %0 = add i32 %a, 4
10 %1 = callbr i32 asm "xorl $1, $1; jmp ${1:l}", "=&r,r,X,~{dirflag},~{fpsr},~{flags}"(i32 %0, i8* blockaddress(@test, %fail)) to label %normal [label %fail]
11
12 normal:
13 ret i32 %1
14
15 fail:
16 ret i32 1
17 }
0 ; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
1 ; RUN: llc < %s -mtriple=i686-- -O3 | FileCheck %s
2
3 ; Tests for using callbr as an asm-goto wrapper
4
5 ; Test 1 - the fallthrough label gets removed, but the fallthrough code, which
6 ; is unreachable because the asm ends on a jmp, is still left in.
7 define i32 @test1(i32 %a) {
8 ; CHECK-LABEL: test1:
9 ; CHECK: # %bb.0: # %entry
10 ; CHECK-NEXT: movl {{[0-9]+}}(%esp), %eax
11 ; CHECK-NEXT: addl $4, %eax
12 ; CHECK-NEXT: #APP
13 ; CHECK-NEXT: xorl %eax, %eax
14 ; CHECK-NEXT: jmp .Ltmp0
15 ; CHECK-NEXT: #NO_APP
16 ; CHECK-NEXT: .LBB0_1: # %normal
17 ; CHECK-NEXT: xorl %eax, %eax
18 ; CHECK-NEXT: retl
19 ; CHECK-NEXT: .Ltmp0: # Block address taken
20 ; CHECK-NEXT: .LBB0_2: # %fail
21 ; CHECK-NEXT: movl $1, %eax
22 ; CHECK-NEXT: retl
23 entry:
24 %0 = add i32 %a, 4
25 callbr void asm "xorl $0, $0; jmp ${1:l}", "r,X,~{dirflag},~{fpsr},~{flags}"(i32 %0, i8* blockaddress(@test1, %fail)) to label %normal [label %fail]
26
27 normal:
28 ret i32 0
29
30 fail:
31 ret i32 1
32 }
33
34 ; Test 2 - callbr terminates an unreachable block; the function gets simplified
35 ; to a trivial zero return.
36 define i32 @test2(i32 %a) {
37 ; CHECK-LABEL: test2:
38 ; CHECK: # %bb.0: # %entry
39 ; CHECK-NEXT: xorl %eax, %eax
40 ; CHECK-NEXT: retl
41 entry:
42 br label %normal
43
44 unreachableasm:
45 %0 = add i32 %a, 4
46 callbr void asm sideeffect "xorl $0, $0; jmp ${1:l}", "r,X,~{dirflag},~{fpsr},~{flags}"(i32 %0, i8* blockaddress(@test2, %fail)) to label %normal [label %fail]
47
48 normal:
49 ret i32 0
50
51 fail:
52 ret i32 1
53 }
54
55
56 ; Test 3 - asm-goto implements a loop. The loop is recognized, but many loop
57 ; transforms bail out because loop canonicalization makes exceptions for callbr.
58 ; The trivial blocks at labels 1 and 3 also don't get simplified because of the callbr.
59 define dso_local i32 @test3(i32 %a) {
60 ; CHECK-LABEL: test3:
61 ; CHECK: # %bb.0: # %entry
62 ; CHECK-NEXT: .Ltmp1: # Block address taken
63 ; CHECK-NEXT: .LBB2_1: # %label01
64 ; CHECK-NEXT: # =>This Loop Header: Depth=1
65 ; CHECK-NEXT: # Child Loop BB2_2 Depth 2
66 ; CHECK-NEXT: # Child Loop BB2_3 Depth 3
67 ; CHECK-NEXT: # Child Loop BB2_4 Depth 4
68 ; CHECK-NEXT: .Ltmp2: # Block address taken
69 ; CHECK-NEXT: .LBB2_2: # %label02
70 ; CHECK-NEXT: # Parent Loop BB2_1 Depth=1
71 ; CHECK-NEXT: # => This Loop Header: Depth=2
72 ; CHECK-NEXT: # Child Loop BB2_3 Depth 3
73 ; CHECK-NEXT: # Child Loop BB2_4 Depth 4
74 ; CHECK-NEXT: addl $4, {{[0-9]+}}(%esp)
75 ; CHECK-NEXT: .Ltmp3: # Block address taken
76 ; CHECK-NEXT: .LBB2_3: # %label03
77 ; CHECK-NEXT: # Parent Loop BB2_1 Depth=1
78 ; CHECK-NEXT: # Parent Loop BB2_2 Depth=2
79 ; CHECK-NEXT: # => This Loop Header: Depth=3
80 ; CHECK-NEXT: # Child Loop BB2_4 Depth 4
81 ; CHECK-NEXT: .p2align 4, 0x90
82 ; CHECK-NEXT: .Ltmp4: # Block address taken
83 ; CHECK-NEXT: .LBB2_4: # %label04
84 ; CHECK-NEXT: # Parent Loop BB2_1 Depth=1
85 ; CHECK-NEXT: # Parent Loop BB2_2 Depth=2
86 ; CHECK-NEXT: # Parent Loop BB2_3 Depth=3
87 ; CHECK-NEXT: # => This Inner Loop Header: Depth=4
88 ; CHECK-NEXT: #APP
89 ; CHECK-NEXT: jmp .Ltmp1
90 ; CHECK-NEXT: jmp .Ltmp2
91 ; CHECK-NEXT: jmp .Ltmp3
92 ; CHECK-NEXT: #NO_APP
93 ; CHECK-NEXT: .LBB2_5: # %normal0
94 ; CHECK-NEXT: # in Loop: Header=BB2_4 Depth=4
95 ; CHECK-NEXT: #APP
96 ; CHECK-NEXT: jmp .Ltmp1
97 ; CHECK-NEXT: jmp .Ltmp2
98 ; CHECK-NEXT: jmp .Ltmp3
99 ; CHECK-NEXT: jmp .Ltmp4
100 ; CHECK-NEXT: #NO_APP
101 ; CHECK-NEXT: .LBB2_6: # %normal1
102 ; CHECK-NEXT: movl {{[0-9]+}}(%esp), %eax
103 ; CHECK-NEXT: retl
104 entry:
105 %a.addr = alloca i32, align 4
106 store i32 %a, i32* %a.addr, align 4
107 br label %label01
108
109 label01: ; preds = %normal0, %label04, %entry
110 br label %label02
111
112 label02: ; preds = %normal0, %label04, %label01
113 %0 = load i32, i32* %a.addr, align 4
114 %add = add nsw i32 %0, 4
115 store i32 %add, i32* %a.addr, align 4
116 br label %label03
117
118 label03: ; preds = %normal0, %label04, %label02
119 br label %label04
120
121 label04: ; preds = %normal0, %label03
122 callbr void asm sideeffect "jmp ${0:l}; jmp ${1:l}; jmp ${2:l}", "X,X,X,~{dirflag},~{fpsr},~{flags}"(i8* blockaddress(@test3, %label01), i8* blockaddress(@test3, %label02), i8* blockaddress(@test3, %label03))
123 to label %normal0 [label %label01, label %label02, label %label03]
124
125 normal0: ; preds = %label04
126 callbr void asm sideeffect "jmp ${0:l}; jmp ${1:l}; jmp ${2:l}; jmp ${3:l}", "X,X,X,X,~{dirflag},~{fpsr},~{flags}"(i8* blockaddress(@test3, %label01), i8* blockaddress(@test3, %label02), i8* blockaddress(@test3, %label03), i8* blockaddress(@test3, %label04))
127 to label %normal1 [label %label01, label %label02, label %label03, label %label04]
128
129 normal1: ; preds = %normal0
130 %1 = load i32, i32* %a.addr, align 4
131 ret i32 %1
132 }
0 ; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
1 ; RUN: opt < %s -gvn -S | FileCheck %s
2
3 ; This test checks that we don't hang trying to split a critical edge in loadpre
4 ; when the control flow uses a callbr instruction.
5
6 %struct.pluto = type <{ i8, i8 }>
7
8 define void @widget(%struct.pluto** %tmp1) {
9 ; CHECK-LABEL: @widget(
10 ; CHECK-NEXT: bb:
11 ; CHECK-NEXT: callbr void asm sideeffect "", "X,X"(i8* blockaddress(@widget, [[BB5:%.*]]), i8* blockaddress(@widget, [[BB8:%.*]]))
12 ; CHECK-NEXT: to label [[BB4:%.*]] [label [[BB5]], label %bb8]
13 ; CHECK: bb4:
14 ; CHECK-NEXT: br label [[BB5]]
15 ; CHECK: bb5:
16 ; CHECK-NEXT: [[TMP6:%.*]] = load %struct.pluto*, %struct.pluto** [[TMP1:%.*]]
17 ; CHECK-NEXT: [[TMP7:%.*]] = getelementptr inbounds [[STRUCT_PLUTO:%.*]], %struct.pluto* [[TMP6]], i64 0, i32 1
18 ; CHECK-NEXT: br label [[BB8]]
19 ; CHECK: bb8:
20 ; CHECK-NEXT: [[TMP9:%.*]] = phi i8* [ [[TMP7]], [[BB5]] ], [ null, [[BB:%.*]] ]
21 ; CHECK-NEXT: [[TMP10:%.*]] = load %struct.pluto*, %struct.pluto** [[TMP1]]
22 ; CHECK-NEXT: [[TMP11:%.*]] = getelementptr inbounds [[STRUCT_PLUTO]], %struct.pluto* [[TMP10]], i64 0, i32 0
23 ; CHECK-NEXT: [[TMP12:%.*]] = load i8, i8* [[TMP11]]
24 ; CHECK-NEXT: tail call void @spam(i8* [[TMP9]], i8 [[TMP12]])
25 ; CHECK-NEXT: ret void
26 ;
27 bb:
28 callbr void asm sideeffect "", "X,X"(i8* blockaddress(@widget, %bb5), i8* blockaddress(@widget, %bb8))
29 to label %bb4 [label %bb5, label %bb8]
30
31 bb4: ; preds = %bb
32 br label %bb5
33
34 bb5: ; preds = %bb4, %bb
35 %tmp6 = load %struct.pluto*, %struct.pluto** %tmp1
36 %tmp7 = getelementptr inbounds %struct.pluto, %struct.pluto* %tmp6, i64 0, i32 1
37 br label %bb8
38
39 bb8: ; preds = %bb5, %bb
40 %tmp9 = phi i8* [ %tmp7, %bb5 ], [ null, %bb ]
41 %tmp10 = load %struct.pluto*, %struct.pluto** %tmp1
42 %tmp11 = getelementptr inbounds %struct.pluto, %struct.pluto* %tmp10, i64 0, i32 0
43 %tmp12 = load i8, i8* %tmp11
44 tail call void @spam(i8* %tmp9, i8 %tmp12)
45 ret void
46 }
47
48 declare void @spam(i8*, i8)
0 ; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
1 ; RUN: opt < %s -gvn -S | FileCheck %s
2
3 ; This test checks that we don't hang trying to split a critical edge in scalar
4 ; PRE when the control flow uses a callbr instruction.
5
6 define void @wombat(i64 %arg, i64* %arg1, i64 %arg2, i32* %arg3) {
7 ; CHECK-LABEL: @wombat(
8 ; CHECK-NEXT: bb:
9 ; CHECK-NEXT: [[TMP5:%.*]] = or i64 [[ARG2:%.*]], [[ARG:%.*]]
10 ; CHECK-NEXT: callbr void asm sideeffect "", "X,X"(i8* blockaddress(@wombat, [[BB7:%.*]]), i8* blockaddress(@wombat, [[BB9:%.*]]))
11 ; CHECK-NEXT: to label [[BB6:%.*]] [label [[BB7]], label %bb9]
12 ; CHECK: bb6:
13 ; CHECK-NEXT: br label [[BB7]]
14 ; CHECK: bb7:
15 ; CHECK-NEXT: [[TMP8:%.*]] = trunc i64 [[TMP5]] to i32
16 ; CHECK-NEXT: tail call void @barney(i32 [[TMP8]])
17 ; CHECK-NEXT: br label [[BB9]]
18 ; CHECK: bb9:
19 ; CHECK-NEXT: [[TMP10:%.*]] = trunc i64 [[TMP5]] to i32
20 ; CHECK-NEXT: store i32 [[TMP10]], i32* [[ARG3:%.*]]
21 ; CHECK-NEXT: ret void
22 ;
23 bb:
24 %tmp5 = or i64 %arg2, %arg
25 callbr void asm sideeffect "", "X,X"(i8* blockaddress(@wombat, %bb7), i8* blockaddress(@wombat, %bb9))
26 to label %bb6 [label %bb7, label %bb9]
27
28 bb6: ; preds = %bb
29 br label %bb7
30
31 bb7: ; preds = %bb6, %bb
32 %tmp8 = trunc i64 %tmp5 to i32
33 tail call void @barney(i32 %tmp8)
34 br label %bb9
35
36 bb9: ; preds = %bb7, %bb
37 %tmp10 = trunc i64 %tmp5 to i32
38 store i32 %tmp10, i32* %arg3
39 ret void
40 }
41
42 declare void @barney(i32)
0 ; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
1 ; RUN: opt < %s -S -jump-threading | FileCheck %s
2
3 ; This test used to cause jump threading to try to split an edge of a callbr.
4
5 @a = global i32 0
6
7 define i32 @c() {
8 ; CHECK-LABEL: @c(
9 ; CHECK-NEXT: entry:
10 ; CHECK-NEXT: [[TMP0:%.*]] = load i32, i32* @a
11 ; CHECK-NEXT: [[TOBOOL:%.*]] = icmp eq i32 [[TMP0]], 0
12 ; CHECK-NEXT: br i1 [[TOBOOL]], label [[IF_ELSE:%.*]], label [[IF_THEN:%.*]]
13 ; CHECK: if.then:
14 ; CHECK-NEXT: [[CALL:%.*]] = call i32 @b()
15 ; CHECK-NEXT: [[PHITMP:%.*]] = icmp ne i32 [[CALL]], 0
16 ; CHECK-NEXT: br i1 [[PHITMP]], label [[IF_THEN2:%.*]], label [[IF_END4:%.*]]
17 ; CHECK: if.else:
18 ; CHECK-NEXT: callbr void asm sideeffect "", "X"(i8* blockaddress(@c, [[IF_THEN2]]))
19 ; CHECK-NEXT: to label [[IF_END_THREAD:%.*]] [label %if.then2]
20 ; CHECK: if.end.thread:
21 ; CHECK-NEXT: br label [[IF_THEN2]]
22 ; CHECK: if.then2:
23 ; CHECK-NEXT: [[CALL3:%.*]] = call i32 @b()
24 ; CHECK-NEXT: br label [[IF_END4]]
25 ; CHECK: if.end4:
26 ; CHECK-NEXT: ret i32 undef
27 ;
28 entry:
29 %0 = load i32, i32* @a
30 %tobool = icmp eq i32 %0, 0
31 br i1 %tobool, label %if.else, label %if.then
32
33 if.then: ; preds = %entry
34 %call = call i32 @b() #2
35 %phitmp = icmp ne i32 %call, 0
36 br label %if.end
37
38 if.else: ; preds = %entry
39 callbr void asm sideeffect "", "X"(i8* blockaddress(@c, %if.end)) #2
40 to label %normal [label %if.end]
41
42 normal: ; preds = %if.else
43 br label %if.end
44
45 if.end: ; preds = %if.else, %normal, %if.then
46 %d.0 = phi i1 [ %phitmp, %if.then ], [ undef, %normal ], [ undef, %if.else ]
47 br i1 %d.0, label %if.then2, label %if.end4
48
49 if.then2: ; preds = %if.end
50 %call3 = call i32 @b()
51 br label %if.end4
52
53 if.end4: ; preds = %if.then2, %if.end
54 ret i32 undef
55 }
56
57 declare i32 @b()
6262 resume { i8*, i32 } zeroinitializer
6363 }
6464
65 define i8 @call_with_same_range() {
66 ; CHECK-LABEL: @call_with_same_range
67 ; CHECK: tail call i8 @call_with_range
68 bitcast i8 0 to i8
69 %out = call i8 @dummy(), !range !0
70 ret i8 %out
71 }
72
7365 define i8 @invoke_with_same_range() personality i8* undef {
7466 ; CHECK-LABEL: @invoke_with_same_range()
7567 ; CHECK: tail call i8 @invoke_with_range()
8375 resume { i8*, i32 } zeroinitializer
8476 }
8577
78 define i8 @call_with_same_range() {
79 ; CHECK-LABEL: @call_with_same_range
80 ; CHECK: tail call i8 @call_with_range
81 bitcast i8 0 to i8
82 %out = call i8 @dummy(), !range !0
83 ret i8 %out
84 }
8685
8786
8887 declare i8 @dummy();
22 ; CHECK-LABEL: @int_ptr_arg_different
33 ; CHECK-NEXT: call void asm
44
5 ; CHECK-LABEL: @int_ptr_null
6 ; CHECK-NEXT: tail call void @float_ptr_null()
7
58 ; CHECK-LABEL: @int_ptr_arg_same
69 ; CHECK-NEXT: %2 = bitcast i32* %0 to float*
710 ; CHECK-NEXT: tail call void @float_ptr_arg_same(float* %2)
8
9 ; CHECK-LABEL: @int_ptr_null
10 ; CHECK-NEXT: tail call void @float_ptr_null()
1111
1212 ; Used to satisfy minimum size limit
1313 declare void @stuff()
290290 STRINGIFY_CODE(FUNC_CODE, INST_LOADATOMIC)
291291 STRINGIFY_CODE(FUNC_CODE, INST_STOREATOMIC)
292292 STRINGIFY_CODE(FUNC_CODE, INST_CMPXCHG)
293 STRINGIFY_CODE(FUNC_CODE, INST_CALLBR)
293294 }
294295 case bitc::VALUE_SYMTAB_BLOCK_ID:
295296 switch (CodeID) {
2222 " The true and false tokens can be used for comparison opcodes, but it's
2323 " much more common for these tokens to be used for boolean constants.
2424 syn keyword llvmStatement add addrspacecast alloca and arcp ashr atomicrmw
25 syn keyword llvmStatement bitcast br catchpad catchswitch catchret call
25 syn keyword llvmStatement bitcast br catchpad catchswitch catchret call callbr
2626 syn keyword llvmStatement cleanuppad cleanupret cmpxchg eq exact extractelement
2727 syn keyword llvmStatement extractvalue fadd fast fcmp fdiv fence fmul fpext
2828 syn keyword llvmStatement fptosi fptoui fptrunc free frem fsub fneg getelementptr