llvm.org GIT mirror llvm / 8f85685
Enhance synchscope representation

OpenCL 2.0 introduces the notion of memory scopes in atomic operations to global and local memory. These scopes restrict how synchronization is achieved, which can result in improved performance.

This change extends the existing notion of synchronization scopes in LLVM to support arbitrary scopes expressed as target-specific strings, in addition to the already defined scopes (single thread, system).

The LLVM IR and MIR syntax for expressing synchronization scopes has changed to use *syncscope("<scope>")*, where <scope> can be "singlethread" (this replaces the *singlethread* keyword), or a target-specific name. As before, if the scope is not specified, it defaults to CrossThread/System scope.

Implementation details:
- Mapping from synchronization scope name/string to synchronization scope ID is stored in LLVM context;
- CrossThread/System and SingleThread scopes are pre-defined to efficiently check for known scopes without comparing strings;
- Synchronization scope names are stored in SYNC_SCOPE_NAMES_BLOCK in the bitcode.

Differential Revision: https://reviews.llvm.org/D21723

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@307722 91177308-0d34-0410-b5e6-96231b3b80d8

Konstantin Zhuravlyov, 3 years ago
69 changed file(s) with 1264 addition(s) and 828 deletion(s).
22082208 same address in this global order. This corresponds to the C++0x/C1x
22092209 ``memory_order_seq_cst`` and Java volatile.
22102210
2211 .. _singlethread:
2212
2213 If an atomic operation is marked ``singlethread``, it only *synchronizes
2214 with* or participates in modification and seq\_cst total orderings with
2215 other operations running in the same thread (for example, in signal
2216 handlers).
2211 .. _syncscope:
2212
2213 If an atomic operation is marked ``syncscope("singlethread")``, it only
2214 *synchronizes with* and only participates in the seq\_cst total orderings of
2215 other operations running in the same thread (for example, in signal handlers).
2216
2217 If an atomic operation is marked ``syncscope("<target-scope>")``, where
2218 ``<target-scope>`` is a target specific synchronization scope, then it is target
2219 dependent if it *synchronizes with* and participates in the seq\_cst total
2220 orderings of other operations.
2221
2222 Otherwise, an atomic operation that is not marked ``syncscope("singlethread")``
2223 or ``syncscope("<target-scope>")`` *synchronizes with* and participates in the
2224 seq\_cst total orderings of other operations that are not marked
2225 ``syncscope("singlethread")`` or ``syncscope("<target-scope>")``.
22172226
22182227 .. _fastmath:
22192228
73797388 ::
73807389
73817390 <result> = load [volatile] <ty>, <ty>* <pointer>[, align <alignment>][, !nontemporal !<index>][, !invariant.load !<index>][, !invariant.group !<index>][, !nonnull !<index>][, !dereferenceable !<deref_bytes_node>][, !dereferenceable_or_null !<deref_bytes_node>][, !align !<align_node>]
7382 <result> = load atomic [volatile] <ty>, <ty>* <pointer> [singlethread] <ordering>, align <alignment> [, !invariant.group !<index>]
7391 <result> = load atomic [volatile] <ty>, <ty>* <pointer> [syncscope("<target-scope>")] <ordering>, align <alignment> [, !invariant.group !<index>]
73837392 !<index> = !{ i32 1 }
73847393 !<deref_bytes_node> = !{i64 <dereferenceable_bytes>}
73857394 !<align_node> = !{ i64 <value_alignment> }
74007409 :ref:`volatile operations <volatile>`.
74017410
74027411 If the ``load`` is marked as ``atomic``, it takes an extra :ref:`ordering
7403 <ordering>` and optional ``singlethread`` argument. The ``release`` and
7404 ``acq_rel`` orderings are not valid on ``load`` instructions. Atomic loads
7405 produce :ref:`defined <memmodel>` results when they may see multiple atomic
7406 stores. The type of the pointee must be an integer, pointer, or floating-point
7407 type whose bit width is a power of two greater than or equal to eight and less
7408 than or equal to a target-specific size limit. ``align`` must be explicitly
7409 specified on atomic loads, and the load has undefined behavior if the alignment
7410 is not set to a value which is at least the size in bytes of the
7412 <ordering>` and optional ``syncscope("<target-scope>")`` argument. The
7413 ``release`` and ``acq_rel`` orderings are not valid on ``load`` instructions.
7414 Atomic loads produce :ref:`defined <memmodel>` results when they may see
7415 multiple atomic stores. The type of the pointee must be an integer, pointer, or
7416 floating-point type whose bit width is a power of two greater than or equal to
7417 eight and less than or equal to a target-specific size limit. ``align`` must be
7418 explicitly specified on atomic loads, and the load has undefined behavior if the
7419 alignment is not set to a value which is at least the size in bytes of the
74117420 pointee. ``!nontemporal`` does not have any defined semantics for atomic loads.
74127421
74137422 The optional constant ``align`` argument specifies the alignment of the
75087517 ::
75097518
75107519 store [volatile] <ty> <value>, <ty>* <pointer>[, align <alignment>][, !nontemporal !<index>][, !invariant.group !<index>] ; yields void
7511 store atomic [volatile] <ty> <value>, <ty>* <pointer> [singlethread] <ordering>, align <alignment> [, !invariant.group !<index>] ; yields void
7520 store atomic [volatile] <ty> <value>, <ty>* <pointer> [syncscope("<target-scope>")] <ordering>, align <alignment> [, !invariant.group !<index>] ; yields void
75127521
75137522 Overview:
75147523 """""""""
75287537 structural type `) can be stored.
75297538
75307539 If the ``store`` is marked as ``atomic``, it takes an extra :ref:`ordering
7531 <ordering>` and optional ``singlethread`` argument. The ``acquire`` and
7532 ``acq_rel`` orderings aren't valid on ``store`` instructions. Atomic loads
7533 produce :ref:`defined <memmodel>` results when they may see multiple atomic
7534 stores. The type of the pointee must be an integer, pointer, or floating-point
7535 type whose bit width is a power of two greater than or equal to eight and less
7536 than or equal to a target-specific size limit. ``align`` must be explicitly
7537 specified on atomic stores, and the store has undefined behavior if the
7538 alignment is not set to a value which is at least the size in bytes of the
7540 <ordering>` and optional ``syncscope("<target-scope>")`` argument. The
7541 ``acquire`` and ``acq_rel`` orderings aren't valid on ``store`` instructions.
7542 Atomic loads produce :ref:`defined <memmodel>` results when they may see
7543 multiple atomic stores. The type of the pointee must be an integer, pointer, or
7544 floating-point type whose bit width is a power of two greater than or equal to
7545 eight and less than or equal to a target-specific size limit. ``align`` must be
7546 explicitly specified on atomic stores, and the store has undefined behavior if
7547 the alignment is not set to a value which is at least the size in bytes of the
75397548 pointee. ``!nontemporal`` does not have any defined semantics for atomic stores.
75407549
75417550 The optional constant ``align`` argument specifies the alignment of the
75967605
75977606 ::
75987607
7599 fence [singlethread] <ordering> ; yields void
7608 fence [syncscope("<target-scope>")] <ordering> ; yields void
76007609
76017610 Overview:
76027611 """""""""
76307639 ``acquire`` and ``release`` semantics specified above, participates in
76317640 the global program order of other ``seq_cst`` operations and/or fences.
76327641
7633 The optional ":ref:`singlethread <singlethread>`" argument specifies
7634 that the fence only synchronizes with other fences in the same thread.
7635 (This is useful for interacting with signal handlers.)
7642 A ``fence`` instruction can also take an optional
7643 ":ref:`syncscope <syncscope>`" argument.
76367644
76377645 Example:
76387646 """"""""
76397647
76407648 .. code-block:: llvm
76417649
7642 fence acquire ; yields void
7643 fence singlethread seq_cst ; yields void
7650 fence acquire ; yields void
7651 fence syncscope("singlethread") seq_cst ; yields void
7652 fence syncscope("agent") seq_cst ; yields void
76447653
76457654 .. _i_cmpxchg:
76467655
76527661
76537662 ::
76547663
7655 cmpxchg [weak] [volatile] <ty>* <pointer>, <ty> <cmp>, <ty> <new> [singlethread] <success ordering> <failure ordering> ; yields { ty, i1 }
7664 cmpxchg [weak] [volatile] <ty>* <pointer>, <ty> <cmp>, <ty> <new> [syncscope("<target-scope>")] <success ordering> <failure ordering> ; yields { ty, i1 }
76567665
76577666 Overview:
76587667 """""""""
76817690 stronger than that on success, and the failure ordering cannot be either
76827691 ``release`` or ``acq_rel``.
76837692
7684 The optional "``singlethread``" argument declares that the ``cmpxchg``
7685 is only atomic with respect to code (usually signal handlers) running in
7686 the same thread as the ``cmpxchg``. Otherwise the cmpxchg is atomic with
7687 respect to all other code in the system.
7693 A ``cmpxchg`` instruction can also take an optional
7694 ":ref:`syncscope <syncscope>`" argument.
76887695
76897696 The pointer passed into cmpxchg must have alignment greater than or
76907697 equal to the size in memory of the operand.
77387745
77397746 ::
77407747
7741 atomicrmw [volatile] <operation> <ty>* <pointer>, <ty> <value> [singlethread] <ordering> ; yields ty
7748 atomicrmw [volatile] <operation> <ty>* <pointer>, <ty> <value> [syncscope("<target-scope>")] <ordering> ; yields ty
77427749
77437750 Overview:
77447751 """""""""
77717778 ``volatile``, then the optimizer is not allowed to modify the number or
77727779 order of execution of this ``atomicrmw`` with other :ref:`volatile
77737780 operations `.
7781
7782 An ``atomicrmw`` instruction can also take an optional
7783 ":ref:`syncscope <syncscope>`" argument.
77747784
77757785 Semantics:
77767786 """"""""""
5858 FULL_LTO_GLOBALVAL_SUMMARY_BLOCK_ID,
5959
6060 SYMTAB_BLOCK_ID,
61
62 SYNC_SCOPE_NAMES_BLOCK_ID,
6163 };
6264
6365 /// Identification block contains a string that describes the producer details,
169171
170172 enum OperandBundleTagCode {
171173 OPERAND_BUNDLE_TAG = 1, // TAG: [strchr x N]
174 };
175
176 enum SyncScopeNameCode {
177 SYNC_SCOPE_NAME = 1,
172178 };
173179
174180 // Value symbol table codes.
403409 ORDERING_SEQCST = 6
404410 };
405411
406 /// Encoded SynchronizationScope values.
407 enum AtomicSynchScopeCodes {
408 SYNCHSCOPE_SINGLETHREAD = 0,
409 SYNCHSCOPE_CROSSTHREAD = 1
410 };
411
412412 /// Markers and flags for call instruction.
413413 enum CallMarkersFlags {
414414 CALL_TAIL = 0,
649649 MachinePointerInfo PtrInfo, MachineMemOperand::Flags f, uint64_t s,
650650 unsigned base_alignment, const AAMDNodes &AAInfo = AAMDNodes(),
651651 const MDNode *Ranges = nullptr,
652 SynchronizationScope SynchScope = CrossThread,
652 SyncScope::ID SSID = SyncScope::System,
653653 AtomicOrdering Ordering = AtomicOrdering::NotAtomic,
654654 AtomicOrdering FailureOrdering = AtomicOrdering::NotAtomic);
655655
123123 private:
124124 /// Atomic information for this memory operation.
125125 struct MachineAtomicInfo {
126 /// Synchronization scope for this memory operation.
127 unsigned SynchScope : 1; // enum SynchronizationScope
126 /// Synchronization scope ID for this memory operation.
127 unsigned SSID : 8; // SyncScope::ID
128128 /// Atomic ordering requirements for this memory operation. For cmpxchg
129129 /// atomic operations, atomic ordering requirements when store occurs.
130130 unsigned Ordering : 4; // enum AtomicOrdering
151151 unsigned base_alignment,
152152 const AAMDNodes &AAInfo = AAMDNodes(),
153153 const MDNode *Ranges = nullptr,
154 SynchronizationScope SynchScope = CrossThread,
154 SyncScope::ID SSID = SyncScope::System,
155155 AtomicOrdering Ordering = AtomicOrdering::NotAtomic,
156156 AtomicOrdering FailureOrdering = AtomicOrdering::NotAtomic);
157157
201201 /// Return the range tag for the memory reference.
202202 const MDNode *getRanges() const { return Ranges; }
203203
204 /// Return the synchronization scope for this memory operation.
205 SynchronizationScope getSynchScope() const {
206 return static_cast<SynchronizationScope>(AtomicInfo.SynchScope);
204 /// Returns the synchronization scope ID for this memory operation.
205 SyncScope::ID getSyncScopeID() const {
206 return static_cast<SyncScope::ID>(AtomicInfo.SSID);
207207 }
208208
209209 /// Return the atomic ordering requirements for this memory operation. For
926926 SDValue Cmp, SDValue Swp, MachinePointerInfo PtrInfo,
927927 unsigned Alignment, AtomicOrdering SuccessOrdering,
928928 AtomicOrdering FailureOrdering,
929 SynchronizationScope SynchScope);
929 SyncScope::ID SSID);
930930 SDValue getAtomicCmpSwap(unsigned Opcode, const SDLoc &dl, EVT MemVT,
931931 SDVTList VTs, SDValue Chain, SDValue Ptr,
932932 SDValue Cmp, SDValue Swp, MachineMemOperand *MMO);
936936 SDValue getAtomic(unsigned Opcode, const SDLoc &dl, EVT MemVT, SDValue Chain,
937937 SDValue Ptr, SDValue Val, const Value *PtrVal,
938938 unsigned Alignment, AtomicOrdering Ordering,
939 SynchronizationScope SynchScope);
939 SyncScope::ID SSID);
940940 SDValue getAtomic(unsigned Opcode, const SDLoc &dl, EVT MemVT, SDValue Chain,
941941 SDValue Ptr, SDValue Val, MachineMemOperand *MMO);
942942
12121212 /// Returns the Ranges that describes the dereference.
12131213 const MDNode *getRanges() const { return MMO->getRanges(); }
12141214
1215 /// Return the synchronization scope for this memory operation.
1216 SynchronizationScope getSynchScope() const { return MMO->getSynchScope(); }
1215 /// Returns the synchronization scope ID for this memory operation.
1216 SyncScope::ID getSyncScopeID() const { return MMO->getSyncScopeID(); }
12171217
12181218 /// Return the atomic ordering requirements for this memory operation. For
12191219 /// cmpxchg atomic operations, return the atomic ordering requirements when
12021202 return SI;
12031203 }
12041204 FenceInst *CreateFence(AtomicOrdering Ordering,
1205 SynchronizationScope SynchScope = CrossThread,
1205 SyncScope::ID SSID = SyncScope::System,
12061206 const Twine &Name = "") {
1207 return Insert(new FenceInst(Context, Ordering, SynchScope), Name);
1207 return Insert(new FenceInst(Context, Ordering, SSID), Name);
12081208 }
12091209 AtomicCmpXchgInst *
12101210 CreateAtomicCmpXchg(Value *Ptr, Value *Cmp, Value *New,
12111211 AtomicOrdering SuccessOrdering,
12121212 AtomicOrdering FailureOrdering,
1213 SynchronizationScope SynchScope = CrossThread) {
1213 SyncScope::ID SSID = SyncScope::System) {
12141214 return Insert(new AtomicCmpXchgInst(Ptr, Cmp, New, SuccessOrdering,
1215 FailureOrdering, SynchScope));
1215 FailureOrdering, SSID));
12161216 }
12171217 AtomicRMWInst *CreateAtomicRMW(AtomicRMWInst::BinOp Op, Value *Ptr, Value *Val,
12181218 AtomicOrdering Ordering,
1219 SynchronizationScope SynchScope = CrossThread) {
1220 return Insert(new AtomicRMWInst(Op, Ptr, Val, Ordering, SynchScope));
1219 SyncScope::ID SSID = SyncScope::System) {
1220 return Insert(new AtomicRMWInst(Op, Ptr, Val, Ordering, SSID));
12211221 }
12221222 Value *CreateGEP(Value *Ptr, ArrayRef<Value *> IdxList,
12231223 const Twine &Name = "") {
5151 class DataLayout;
5252 class LLVMContext;
5353
54 enum SynchronizationScope {
55 SingleThread = 0,
56 CrossThread = 1
57 };
58
5954 //===----------------------------------------------------------------------===//
6055 // AllocaInst Class
6156 //===----------------------------------------------------------------------===//
194189 LoadInst(Value *Ptr, const Twine &NameStr, bool isVolatile,
195190 unsigned Align, BasicBlock *InsertAtEnd);
196191 LoadInst(Value *Ptr, const Twine &NameStr, bool isVolatile, unsigned Align,
197 AtomicOrdering Order, SynchronizationScope SynchScope = CrossThread,
192 AtomicOrdering Order, SyncScope::ID SSID = SyncScope::System,
198193 Instruction *InsertBefore = nullptr)
199194 : LoadInst(cast<PointerType>(Ptr->getType())->getElementType(), Ptr,
200 NameStr, isVolatile, Align, Order, SynchScope, InsertBefore) {}
195 NameStr, isVolatile, Align, Order, SSID, InsertBefore) {}
201196 LoadInst(Type *Ty, Value *Ptr, const Twine &NameStr, bool isVolatile,
202197 unsigned Align, AtomicOrdering Order,
203 SynchronizationScope SynchScope = CrossThread,
198 SyncScope::ID SSID = SyncScope::System,
204199 Instruction *InsertBefore = nullptr);
205200 LoadInst(Value *Ptr, const Twine &NameStr, bool isVolatile,
206 unsigned Align, AtomicOrdering Order,
207 SynchronizationScope SynchScope,
201 unsigned Align, AtomicOrdering Order, SyncScope::ID SSID,
208202 BasicBlock *InsertAtEnd);
209203 LoadInst(Value *Ptr, const char *NameStr, Instruction *InsertBefore);
210204 LoadInst(Value *Ptr, const char *NameStr, BasicBlock *InsertAtEnd);
234228
235229 void setAlignment(unsigned Align);
236230
237 /// Returns the ordering effect of this fence.
231 /// Returns the ordering constraint of this load instruction.
238232 AtomicOrdering getOrdering() const {
239233 return AtomicOrdering((getSubclassDataFromInstruction() >> 7) & 7);
240234 }
241235
242 /// Set the ordering constraint on this load. May not be Release or
243 /// AcquireRelease.
236 /// Sets the ordering constraint of this load instruction. May not be Release
237 /// or AcquireRelease.
244238 void setOrdering(AtomicOrdering Ordering) {
245239 setInstructionSubclassData((getSubclassDataFromInstruction() & ~(7 << 7)) |
246240 ((unsigned)Ordering << 7));
247241 }
248242
249 SynchronizationScope getSynchScope() const {
250 return SynchronizationScope((getSubclassDataFromInstruction() >> 6) & 1);
251 }
252
253 /// Specify whether this load is ordered with respect to all
254 /// concurrently executing threads, or only with respect to signal handlers
255 /// executing in the same thread.
256 void setSynchScope(SynchronizationScope xthread) {
257 setInstructionSubclassData((getSubclassDataFromInstruction() & ~(1 << 6)) |
258 (xthread << 6));
259 }
260
243 /// Returns the synchronization scope ID of this load instruction.
244 SyncScope::ID getSyncScopeID() const {
245 return SSID;
246 }
247
248 /// Sets the synchronization scope ID of this load instruction.
249 void setSyncScopeID(SyncScope::ID SSID) {
250 this->SSID = SSID;
251 }
252
253 /// Sets the ordering constraint and the synchronization scope ID of this load
254 /// instruction.
261255 void setAtomic(AtomicOrdering Ordering,
262 SynchronizationScope SynchScope = CrossThread) {
256 SyncScope::ID SSID = SyncScope::System) {
263257 setOrdering(Ordering);
264 setSynchScope(SynchScope);
258 setSyncScopeID(SSID);
265259 }
266260
267261 bool isSimple() const { return !isAtomic() && !isVolatile(); }
296290 void setInstructionSubclassData(unsigned short D) {
297291 Instruction::setInstructionSubclassData(D);
298292 }
293
294 /// The synchronization scope ID of this load instruction. Not quite enough
295 /// room in SubClassData for everything, so synchronization scope ID gets its
296 /// own field.
297 SyncScope::ID SSID;
299298 };
300299
301300 //===----------------------------------------------------------------------===//
324323 unsigned Align, BasicBlock *InsertAtEnd);
325324 StoreInst(Value *Val, Value *Ptr, bool isVolatile,
326325 unsigned Align, AtomicOrdering Order,
327 SynchronizationScope SynchScope = CrossThread,
326 SyncScope::ID SSID = SyncScope::System,
328327 Instruction *InsertBefore = nullptr);
329328 StoreInst(Value *Val, Value *Ptr, bool isVolatile,
330 unsigned Align, AtomicOrdering Order,
331 SynchronizationScope SynchScope,
329 unsigned Align, AtomicOrdering Order, SyncScope::ID SSID,
332330 BasicBlock *InsertAtEnd);
333331
334332 // allocate space for exactly two operands
355353
356354 void setAlignment(unsigned Align);
357355
358 /// Returns the ordering effect of this store.
356 /// Returns the ordering constraint of this store instruction.
359357 AtomicOrdering getOrdering() const {
360358 return AtomicOrdering((getSubclassDataFromInstruction() >> 7) & 7);
361359 }
362360
363 /// Set the ordering constraint on this store. May not be Acquire or
364 /// AcquireRelease.
361 /// Sets the ordering constraint of this store instruction. May not be
362 /// Acquire or AcquireRelease.
365363 void setOrdering(AtomicOrdering Ordering) {
366364 setInstructionSubclassData((getSubclassDataFromInstruction() & ~(7 << 7)) |
367365 ((unsigned)Ordering << 7));
368366 }
369367
370 SynchronizationScope getSynchScope() const {
371 return SynchronizationScope((getSubclassDataFromInstruction() >> 6) & 1);
372 }
373
374 /// Specify whether this store instruction is ordered with respect to all
375 /// concurrently executing threads, or only with respect to signal handlers
376 /// executing in the same thread.
377 void setSynchScope(SynchronizationScope xthread) {
378 setInstructionSubclassData((getSubclassDataFromInstruction() & ~(1 << 6)) |
379 (xthread << 6));
380 }
381
368 /// Returns the synchronization scope ID of this store instruction.
369 SyncScope::ID getSyncScopeID() const {
370 return SSID;
371 }
372
373 /// Sets the synchronization scope ID of this store instruction.
374 void setSyncScopeID(SyncScope::ID SSID) {
375 this->SSID = SSID;
376 }
377
378 /// Sets the ordering constraint and the synchronization scope ID of this
379 /// store instruction.
382380 void setAtomic(AtomicOrdering Ordering,
383 SynchronizationScope SynchScope = CrossThread) {
381 SyncScope::ID SSID = SyncScope::System) {
384382 setOrdering(Ordering);
385 setSynchScope(SynchScope);
383 setSyncScopeID(SSID);
386384 }
387385
388386 bool isSimple() const { return !isAtomic() && !isVolatile(); }
420418 void setInstructionSubclassData(unsigned short D) {
421419 Instruction::setInstructionSubclassData(D);
422420 }
421
422 /// The synchronization scope ID of this store instruction. Not quite enough
423 /// room in SubClassData for everything, so synchronization scope ID gets its
424 /// own field.
425 SyncScope::ID SSID;
423426 };
424427
425428 template <>
434437
435438 /// An instruction for ordering other memory operations.
436439 class FenceInst : public Instruction {
437 void Init(AtomicOrdering Ordering, SynchronizationScope SynchScope);
440 void Init(AtomicOrdering Ordering, SyncScope::ID SSID);
438441
439442 protected:
440443 // Note: Instruction needs to be a friend here to call cloneImpl.
446449 // Ordering may only be Acquire, Release, AcquireRelease, or
447450 // SequentiallyConsistent.
448451 FenceInst(LLVMContext &C, AtomicOrdering Ordering,
449 SynchronizationScope SynchScope = CrossThread,
452 SyncScope::ID SSID = SyncScope::System,
450453 Instruction *InsertBefore = nullptr);
451 FenceInst(LLVMContext &C, AtomicOrdering Ordering,
452 SynchronizationScope SynchScope,
454 FenceInst(LLVMContext &C, AtomicOrdering Ordering, SyncScope::ID SSID,
453455 BasicBlock *InsertAtEnd);
454456
455457 // allocate space for exactly zero operands
457459 return User::operator new(s, 0);
458460 }
459461
460 /// Returns the ordering effect of this fence.
462 /// Returns the ordering constraint of this fence instruction.
461463 AtomicOrdering getOrdering() const {
462464 return AtomicOrdering(getSubclassDataFromInstruction() >> 1);
463465 }
464466
465 /// Set the ordering constraint on this fence. May only be Acquire, Release,
466 /// AcquireRelease, or SequentiallyConsistent.
467 /// Sets the ordering constraint of this fence instruction. May only be
468 /// Acquire, Release, AcquireRelease, or SequentiallyConsistent.
467469 void setOrdering(AtomicOrdering Ordering) {
468470 setInstructionSubclassData((getSubclassDataFromInstruction() & 1) |
469471 ((unsigned)Ordering << 1));
470472 }
471473
472 SynchronizationScope getSynchScope() const {
473 return SynchronizationScope(getSubclassDataFromInstruction() & 1);
474 }
475
476 /// Specify whether this fence orders other operations with respect to all
477 /// concurrently executing threads, or only with respect to signal handlers
478 /// executing in the same thread.
479 void setSynchScope(SynchronizationScope xthread) {
480 setInstructionSubclassData((getSubclassDataFromInstruction() & ~1) |
481 xthread);
474 /// Returns the synchronization scope ID of this fence instruction.
475 SyncScope::ID getSyncScopeID() const {
476 return SSID;
477 }
478
479 /// Sets the synchronization scope ID of this fence instruction.
480 void setSyncScopeID(SyncScope::ID SSID) {
481 this->SSID = SSID;
482482 }
483483
484484 // Methods for support type inquiry through isa, cast, and dyn_cast:
495495 void setInstructionSubclassData(unsigned short D) {
496496 Instruction::setInstructionSubclassData(D);
497497 }
498
499 /// The synchronization scope ID of this fence instruction. Not quite enough
500 /// room in SubClassData for everything, so synchronization scope ID gets its
501 /// own field.
502 SyncScope::ID SSID;
498503 };
499504
500505 //===----------------------------------------------------------------------===//
508513 class AtomicCmpXchgInst : public Instruction {
509514 void Init(Value *Ptr, Value *Cmp, Value *NewVal,
510515 AtomicOrdering SuccessOrdering, AtomicOrdering FailureOrdering,
511 SynchronizationScope SynchScope);
516 SyncScope::ID SSID);
512517
513518 protected:
514519 // Note: Instruction needs to be a friend here to call cloneImpl.
520525 AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
521526 AtomicOrdering SuccessOrdering,
522527 AtomicOrdering FailureOrdering,
523 SynchronizationScope SynchScope,
524 Instruction *InsertBefore = nullptr);
528 SyncScope::ID SSID, Instruction *InsertBefore = nullptr);
525529 AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
526530 AtomicOrdering SuccessOrdering,
527531 AtomicOrdering FailureOrdering,
528 SynchronizationScope SynchScope,
529 BasicBlock *InsertAtEnd);
532 SyncScope::ID SSID, BasicBlock *InsertAtEnd);
530533
531534 // allocate space for exactly three operands
532535 void *operator new(size_t s) {
560563 /// Transparently provide more efficient getOperand methods.
561564 DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
562565
563 /// Set the ordering constraint on this cmpxchg.
566 /// Returns the success ordering constraint of this cmpxchg instruction.
567 AtomicOrdering getSuccessOrdering() const {
568 return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
569 }
570
571 /// Sets the success ordering constraint of this cmpxchg instruction.
564572 void setSuccessOrdering(AtomicOrdering Ordering) {
565573 assert(Ordering != AtomicOrdering::NotAtomic &&
566574 "CmpXchg instructions can only be atomic.");
568576 ((unsigned)Ordering << 2));
569577 }
570578
579 /// Returns the failure ordering constraint of this cmpxchg instruction.
580 AtomicOrdering getFailureOrdering() const {
581 return AtomicOrdering((getSubclassDataFromInstruction() >> 5) & 7);
582 }
583
584 /// Sets the failure ordering constraint of this cmpxchg instruction.
571585 void setFailureOrdering(AtomicOrdering Ordering) {
572586 assert(Ordering != AtomicOrdering::NotAtomic &&
573587 "CmpXchg instructions can only be atomic.");
575589 ((unsigned)Ordering << 5));
576590 }
577591
578 /// Specify whether this cmpxchg is atomic and orders other operations with
579 /// respect to all concurrently executing threads, or only with respect to
580 /// signal handlers executing in the same thread.
581 void setSynchScope(SynchronizationScope SynchScope) {
582 setInstructionSubclassData((getSubclassDataFromInstruction() & ~2) |
583 (SynchScope << 1));
584 }
585
586 /// Returns the ordering constraint on this cmpxchg.
587 AtomicOrdering getSuccessOrdering() const {
588 return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
589 }
590
591 /// Returns the ordering constraint on this cmpxchg.
592 AtomicOrdering getFailureOrdering() const {
593 return AtomicOrdering((getSubclassDataFromInstruction() >> 5) & 7);
594 }
595
596 /// Returns whether this cmpxchg is atomic between threads or only within a
597 /// single thread.
598 SynchronizationScope getSynchScope() const {
599 return SynchronizationScope((getSubclassDataFromInstruction() & 2) >> 1);
592 /// Returns the synchronization scope ID of this cmpxchg instruction.
593 SyncScope::ID getSyncScopeID() const {
594 return SSID;
595 }
596
597 /// Sets the synchronization scope ID of this cmpxchg instruction.
598 void setSyncScopeID(SyncScope::ID SSID) {
599 this->SSID = SSID;
600600 }
601601
602602 Value *getPointerOperand() { return getOperand(0); }
651651 void setInstructionSubclassData(unsigned short D) {
652652 Instruction::setInstructionSubclassData(D);
653653 }
654
655 /// The synchronization scope ID of this cmpxchg instruction. Not quite
656 /// enough room in SubClassData for everything, so synchronization scope ID
657 /// gets its own field.
658 SyncScope::ID SSID;
654659 };
655660
656661 template <>
710715 };
711716
712717 AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
713 AtomicOrdering Ordering, SynchronizationScope SynchScope,
718 AtomicOrdering Ordering, SyncScope::ID SSID,
714719 Instruction *InsertBefore = nullptr);
715720 AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
716 AtomicOrdering Ordering, SynchronizationScope SynchScope,
721 AtomicOrdering Ordering, SyncScope::ID SSID,
717722 BasicBlock *InsertAtEnd);
718723
719724 // allocate space for exactly two operands
747752 /// Transparently provide more efficient getOperand methods.
748753 DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
749754
750 /// Set the ordering constraint on this RMW.
755 /// Returns the ordering constraint of this rmw instruction.
756 AtomicOrdering getOrdering() const {
757 return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
758 }
759
760 /// Sets the ordering constraint of this rmw instruction.
751761 void setOrdering(AtomicOrdering Ordering) {
752762 assert(Ordering != AtomicOrdering::NotAtomic &&
753763 "atomicrmw instructions can only be atomic.");
755765 ((unsigned)Ordering << 2));
756766 }
757767
758 /// Specify whether this RMW orders other operations with respect to all
759 /// concurrently executing threads, or only with respect to signal handlers
760 /// executing in the same thread.
761 void setSynchScope(SynchronizationScope SynchScope) {
762 setInstructionSubclassData((getSubclassDataFromInstruction() & ~2) |
763 (SynchScope << 1));
764 }
765
766 /// Returns the ordering constraint on this RMW.
767 AtomicOrdering getOrdering() const {
768 return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
769 }
770
771 /// Returns whether this RMW is atomic between threads or only within a
772 /// single thread.
773 SynchronizationScope getSynchScope() const {
774 return SynchronizationScope((getSubclassDataFromInstruction() & 2) >> 1);
768 /// Returns the synchronization scope ID of this rmw instruction.
769 SyncScope::ID getSyncScopeID() const {
770 return SSID;
771 }
772
773 /// Sets the synchronization scope ID of this rmw instruction.
774 void setSyncScopeID(SyncScope::ID SSID) {
775 this->SSID = SSID;
775776 }
776777
777778 Value *getPointerOperand() { return getOperand(0); }
796797
797798 private:
798799 void Init(BinOp Operation, Value *Ptr, Value *Val,
799 AtomicOrdering Ordering, SynchronizationScope SynchScope);
800 AtomicOrdering Ordering, SyncScope::ID SSID);
800801
801802 // Shadow Instruction::setInstructionSubclassData with a private forwarding
802803 // method so that subclasses cannot accidentally use it.
803804 void setInstructionSubclassData(unsigned short D) {
804805 Instruction::setInstructionSubclassData(D);
805806 }
807
808 /// The synchronization scope ID of this rmw instruction. Not quite enough
809 /// room in SubClassData for everything, so synchronization scope ID gets its
810 /// own field.
811 SyncScope::ID SSID;
806812 };
807813
808814 template <>
4040 class Output;
4141
4242 } // end namespace yaml
43
44 namespace SyncScope {
45
46 typedef uint8_t ID;
47
48 /// Known synchronization scope IDs, which always have the same value. All
49 /// synchronization scope IDs that LLVM has special knowledge of are listed
50 /// here. Additionally, this scheme allows LLVM to efficiently check for
51 /// specific synchronization scope ID without comparing strings.
52 enum {
53 /// Synchronized with respect to signal handlers executing in the same thread.
54 SingleThread = 0,
55
56 /// Synchronized with respect to all concurrently executing threads.
57 System = 1
58 };
59
60 } // end namespace SyncScope
4361
4462 /// This is an important class for using LLVM in a threaded context. It
4563 /// (opaquely) owns and manages the core "global" data of LLVM's core
110128 /// tag registered with an LLVMContext has a unique ID.
111129 uint32_t getOperandBundleTagID(StringRef Tag) const;
112130
131 /// getOrInsertSyncScopeID - Maps synchronization scope name to
132 /// synchronization scope ID. Every synchronization scope registered with
133 /// LLVMContext has a unique ID except pre-defined ones.
134 SyncScope::ID getOrInsertSyncScopeID(StringRef SSN);
135
136 /// getSyncScopeNames - Populates client supplied SmallVector with
137 /// synchronization scope names registered with LLVMContext. Synchronization
138 /// scope names are ordered by increasing synchronization scope IDs.
139 void getSyncScopeNames(SmallVectorImpl<StringRef> &SSNs) const;
140
113141 /// Define the GC for a function
114142 void setGC(const Function &Fn, std::string GCName);
115143
541541 KEYWORD(release);
542542 KEYWORD(acq_rel);
543543 KEYWORD(seq_cst);
544 KEYWORD(singlethread);
544 KEYWORD(syncscope);
545545
546546 KEYWORD(nnan);
547547 KEYWORD(ninf);
19181918 }
19191919
19201920 /// ParseScopeAndOrdering
1921 /// if isAtomic: ::= 'singlethread'? AtomicOrdering
1921 /// if isAtomic: ::= SyncScope? AtomicOrdering
19221922 /// else: ::=
19231923 ///
19241924 /// This sets Scope and Ordering to the parsed values.
1925 bool LLParser::ParseScopeAndOrdering(bool isAtomic, SynchronizationScope &Scope,
1925 bool LLParser::ParseScopeAndOrdering(bool isAtomic, SyncScope::ID &SSID,
19261926 AtomicOrdering &Ordering) {
19271927 if (!isAtomic)
19281928 return false;
19291929
1930 Scope = CrossThread;
1931 if (EatIfPresent(lltok::kw_singlethread))
1932 Scope = SingleThread;
1933
1934 return ParseOrdering(Ordering);
1930 return ParseScope(SSID) || ParseOrdering(Ordering);
1931 }
1932
1933 /// ParseScope
1934 /// ::= syncscope("singlethread" | "<target scope>")?
1935 ///
1936 /// This sets synchronization scope ID to the ID of the parsed value.
1937 bool LLParser::ParseScope(SyncScope::ID &SSID) {
1938 SSID = SyncScope::System;
1939 if (EatIfPresent(lltok::kw_syncscope)) {
1940 auto StartParenAt = Lex.getLoc();
1941 if (!EatIfPresent(lltok::lparen))
1942 return Error(StartParenAt, "Expected '(' in syncscope");
1943
1944 std::string SSN;
1945 auto SSNAt = Lex.getLoc();
1946 if (ParseStringConstant(SSN))
1947 return Error(SSNAt, "Expected synchronization scope name");
1948
1949 auto EndParenAt = Lex.getLoc();
1950 if (!EatIfPresent(lltok::rparen))
1951 return Error(EndParenAt, "Expected ')' in syncscope");
1952
1953 SSID = Context.getOrInsertSyncScopeID(SSN);
1954 }
1955
1956 return false;
19351957 }
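The grammar ParseScope accepts (an optional `syncscope("<name>")` prefix, defaulting to the system scope when absent) can be illustrated with a toy string scanner. This is a hypothetical sketch, not the lexer LLParser actually uses; on success `Pos` is advanced past the prefix and `SSN` holds the scope name (empty for the default system scope), while malformed input returns false.

```cpp
#include <cstddef>
#include <string>

// Toy parser for an optional syncscope("<name>") prefix.
// Returns false on malformed input; true otherwise.
bool parseOptionalSyncScope(const std::string &Src, size_t &Pos,
                            std::string &SSN) {
  SSN.clear(); // default: system scope
  const std::string Kw = "syncscope";
  if (Src.compare(Pos, Kw.size(), Kw) != 0)
    return true; // no prefix present, keep the default scope
  size_t P = Pos + Kw.size();
  if (P >= Src.size() || Src[P] != '(')
    return false; // expected '(' in syncscope
  ++P;
  if (P >= Src.size() || Src[P] != '"')
    return false; // expected synchronization scope name
  ++P;
  size_t End = Src.find('"', P);
  if (End == std::string::npos)
    return false; // unterminated scope name
  SSN = Src.substr(P, End - P);
  P = End + 1;
  if (P >= Src.size() || Src[P] != ')')
    return false; // expected ')' in syncscope
  Pos = P + 1;
  return true;
}
```

So `syncscope("agent") seq_cst` yields the scope name `agent`, while plain `seq_cst` yields the default (system) scope without consuming any input.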
19361958
19371959 /// ParseOrdering
60996121 bool AteExtraComma = false;
61006122 bool isAtomic = false;
61016123 AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
6102 SynchronizationScope Scope = CrossThread;
6124 SyncScope::ID SSID = SyncScope::System;
61036125
61046126 if (Lex.getKind() == lltok::kw_atomic) {
61056127 isAtomic = true;
61176139 if (ParseType(Ty) ||
61186140 ParseToken(lltok::comma, "expected comma after load's type") ||
61196141 ParseTypeAndValue(Val, Loc, PFS) ||
6120 ParseScopeAndOrdering(isAtomic, Scope, Ordering) ||
6142 ParseScopeAndOrdering(isAtomic, SSID, Ordering) ||
61216143 ParseOptionalCommaAlign(Alignment, AteExtraComma))
61226144 return true;
61236145
61336155 return Error(ExplicitTypeLoc,
61346156 "explicit pointee type doesn't match operand's pointee type");
61356157
6136 Inst = new LoadInst(Ty, Val, "", isVolatile, Alignment, Ordering, Scope);
6158 Inst = new LoadInst(Ty, Val, "", isVolatile, Alignment, Ordering, SSID);
61376159 return AteExtraComma ? InstExtraComma : InstNormal;
61386160 }
61396161
61486170 bool AteExtraComma = false;
61496171 bool isAtomic = false;
61506172 AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
6151 SynchronizationScope Scope = CrossThread;
6173 SyncScope::ID SSID = SyncScope::System;
61526174
61536175 if (Lex.getKind() == lltok::kw_atomic) {
61546176 isAtomic = true;
61646186 if (ParseTypeAndValue(Val, Loc, PFS) ||
61656187 ParseToken(lltok::comma, "expected ',' after store operand") ||
61666188 ParseTypeAndValue(Ptr, PtrLoc, PFS) ||
6167 ParseScopeAndOrdering(isAtomic, Scope, Ordering) ||
6189 ParseScopeAndOrdering(isAtomic, SSID, Ordering) ||
61686190 ParseOptionalCommaAlign(Alignment, AteExtraComma))
61696191 return true;
61706192
61806202 Ordering == AtomicOrdering::AcquireRelease)
61816203 return Error(Loc, "atomic store cannot use Acquire ordering");
61826204
6183 Inst = new StoreInst(Val, Ptr, isVolatile, Alignment, Ordering, Scope);
6205 Inst = new StoreInst(Val, Ptr, isVolatile, Alignment, Ordering, SSID);
61846206 return AteExtraComma ? InstExtraComma : InstNormal;
61856207 }
61866208
61926214 bool AteExtraComma = false;
61936215 AtomicOrdering SuccessOrdering = AtomicOrdering::NotAtomic;
61946216 AtomicOrdering FailureOrdering = AtomicOrdering::NotAtomic;
6195 SynchronizationScope Scope = CrossThread;
6217 SyncScope::ID SSID = SyncScope::System;
61966218 bool isVolatile = false;
61976219 bool isWeak = false;
61986220
62076229 ParseTypeAndValue(Cmp, CmpLoc, PFS) ||
62086230 ParseToken(lltok::comma, "expected ',' after cmpxchg cmp operand") ||
62096231 ParseTypeAndValue(New, NewLoc, PFS) ||
6210 ParseScopeAndOrdering(true /*Always atomic*/, Scope, SuccessOrdering) ||
6232 ParseScopeAndOrdering(true /*Always atomic*/, SSID, SuccessOrdering) ||
62116233 ParseOrdering(FailureOrdering))
62126234 return true;
62136235
62306252 if (!New->getType()->isFirstClassType())
62316253 return Error(NewLoc, "cmpxchg operand must be a first class value");
62326254 AtomicCmpXchgInst *CXI = new AtomicCmpXchgInst(
6233 Ptr, Cmp, New, SuccessOrdering, FailureOrdering, Scope);
6255 Ptr, Cmp, New, SuccessOrdering, FailureOrdering, SSID);
62346256 CXI->setVolatile(isVolatile);
62356257 CXI->setWeak(isWeak);
62366258 Inst = CXI;
62446266 Value *Ptr, *Val; LocTy PtrLoc, ValLoc;
62456267 bool AteExtraComma = false;
62466268 AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
6247 SynchronizationScope Scope = CrossThread;
6269 SyncScope::ID SSID = SyncScope::System;
62486270 bool isVolatile = false;
62496271 AtomicRMWInst::BinOp Operation;
62506272
62706292 if (ParseTypeAndValue(Ptr, PtrLoc, PFS) ||
62716293 ParseToken(lltok::comma, "expected ',' after atomicrmw address") ||
62726294 ParseTypeAndValue(Val, ValLoc, PFS) ||
6273 ParseScopeAndOrdering(true /*Always atomic*/, Scope, Ordering))
6295 ParseScopeAndOrdering(true /*Always atomic*/, SSID, Ordering))
62746296 return true;
62756297
62766298 if (Ordering == AtomicOrdering::Unordered)
62876309 " integer");
62886310
62896311 AtomicRMWInst *RMWI =
6290 new AtomicRMWInst(Operation, Ptr, Val, Ordering, Scope);
6312 new AtomicRMWInst(Operation, Ptr, Val, Ordering, SSID);
62916313 RMWI->setVolatile(isVolatile);
62926314 Inst = RMWI;
62936315 return AteExtraComma ? InstExtraComma : InstNormal;
62976319 /// ::= 'fence' 'singlethread'? AtomicOrdering
62986320 int LLParser::ParseFence(Instruction *&Inst, PerFunctionState &PFS) {
62996321 AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
6300 SynchronizationScope Scope = CrossThread;
6301 if (ParseScopeAndOrdering(true /*Always atomic*/, Scope, Ordering))
6322 SyncScope::ID SSID = SyncScope::System;
6323 if (ParseScopeAndOrdering(true /*Always atomic*/, SSID, Ordering))
63026324 return true;
63036325
63046326 if (Ordering == AtomicOrdering::Unordered)
63066328 if (Ordering == AtomicOrdering::Monotonic)
63076329 return TokError("fence cannot be monotonic");
63086330
6309 Inst = new FenceInst(Context, Ordering, Scope);
6331 Inst = new FenceInst(Context, Ordering, SSID);
63106332 return InstNormal;
63116333 }
63126334
240240 bool ParseOptionalCallingConv(unsigned &CC);
241241 bool ParseOptionalAlignment(unsigned &Alignment);
242242 bool ParseOptionalDerefAttrBytes(lltok::Kind AttrKind, uint64_t &Bytes);
243 bool ParseScopeAndOrdering(bool isAtomic, SynchronizationScope &Scope,
243 bool ParseScopeAndOrdering(bool isAtomic, SyncScope::ID &SSID,
244244 AtomicOrdering &Ordering);
245 bool ParseScope(SyncScope::ID &SSID);
245246 bool ParseOrdering(AtomicOrdering &Ordering);
246247 bool ParseOptionalStackAlignment(unsigned &Alignment);
247248 bool ParseOptionalCommaAlign(unsigned &Alignment, bool &AteExtraComma);
9292 kw_release,
9393 kw_acq_rel,
9494 kw_seq_cst,
95 kw_singlethread,
95 kw_syncscope,
9696 kw_nnan,
9797 kw_ninf,
9898 kw_nsz,
512512 TBAAVerifier TBAAVerifyHelper;
513513
514514 std::vector<std::string> BundleTags;
515 SmallVector<SyncScope::ID, 8> SSIDs;
515516
516517 public:
517518 BitcodeReader(BitstreamCursor Stream, StringRef Strtab,
647648 Error parseTypeTable();
648649 Error parseTypeTableBody();
649650 Error parseOperandBundleTags();
651 Error parseSyncScopeNames();
650652
651653 Expected<Value *> recordValue(SmallVectorImpl<uint64_t> &Record,
652654 unsigned NameIndex, Triple &TT);
667669 Error findFunctionInStream(
668670 Function *F,
669671 DenseMap<Function *, uint64_t>::iterator DeferredFunctionInfoIterator);
672
673 SyncScope::ID getDecodedSyncScopeID(unsigned Val);
670674 };
671675
672676 /// Class to manage reading and parsing function summary index bitcode
994998 case bitc::ORDERING_ACQREL: return AtomicOrdering::AcquireRelease;
995999 default: // Map unknown orderings to sequentially-consistent.
9961000 case bitc::ORDERING_SEQCST: return AtomicOrdering::SequentiallyConsistent;
997 }
998 }
999
1000 static SynchronizationScope getDecodedSynchScope(unsigned Val) {
1001 switch (Val) {
1002 case bitc::SYNCHSCOPE_SINGLETHREAD: return SingleThread;
1003 default: // Map unknown scopes to cross-thread.
1004 case bitc::SYNCHSCOPE_CROSSTHREAD: return CrossThread;
10051001 }
10061002 }
10071003
17441740 }
17451741 }
17461742
1743 Error BitcodeReader::parseSyncScopeNames() {
1744 if (Stream.EnterSubBlock(bitc::SYNC_SCOPE_NAMES_BLOCK_ID))
1745 return error("Invalid record");
1746
1747 if (!SSIDs.empty())
1748 return error("Invalid multiple synchronization scope names blocks");
1749
1750 SmallVector<uint64_t, 64> Record;
1751 while (true) {
1752 BitstreamEntry Entry = Stream.advanceSkippingSubblocks();
1753 switch (Entry.Kind) {
1754 case BitstreamEntry::SubBlock: // Handled for us already.
1755 case BitstreamEntry::Error:
1756 return error("Malformed block");
1757 case BitstreamEntry::EndBlock:
1758 if (SSIDs.empty())
1759 return error("Invalid empty synchronization scope names block");
1760 return Error::success();
1761 case BitstreamEntry::Record:
1762 // The interesting case.
1763 break;
1764 }
1765
1766 // Synchronization scope names are implicitly mapped to synchronization
1767 // scope IDs by their order.
1768
1769 if (Stream.readRecord(Entry.ID, Record) != bitc::SYNC_SCOPE_NAME)
1770 return error("Invalid record");
1771
1772 SmallString<16> SSN;
1773 if (convertToString(Record, 0, SSN))
1774 return error("Invalid record");
1775
1776 SSIDs.push_back(Context.getOrInsertSyncScopeID(SSN));
1777 Record.clear();
1778 }
1779 }
1780
17471781 /// Associate a value with its name from the given index in the provided record.
17481782 Expected<Value *> BitcodeReader::recordValue(SmallVectorImpl<uint64_t> &Record,
17491783 unsigned NameIndex, Triple &TT) {
31313165 if (Error Err = parseOperandBundleTags())
31323166 return Err;
31333167 break;
3168 case bitc::SYNC_SCOPE_NAMES_BLOCK_ID:
3169 if (Error Err = parseSyncScopeNames())
3170 return Err;
3171 break;
31343172 }
31353173 continue;
31363174
42034241 break;
42044242 }
42054243 case bitc::FUNC_CODE_INST_LOADATOMIC: {
4206 // LOADATOMIC: [opty, op, align, vol, ordering, synchscope]
4244 // LOADATOMIC: [opty, op, align, vol, ordering, ssid]
42074245 unsigned OpNum = 0;
42084246 Value *Op;
42094247 if (getValueTypePair(Record, OpNum, NextValueNo, Op) ||
42254263 return error("Invalid record");
42264264 if (Ordering != AtomicOrdering::NotAtomic && Record[OpNum] == 0)
42274265 return error("Invalid record");
4228 SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 3]);
4266 SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 3]);
42294267
42304268 unsigned Align;
42314269 if (Error Err = parseAlignmentValue(Record[OpNum], Align))
42324270 return Err;
4233 I = new LoadInst(Op, "", Record[OpNum+1], Align, Ordering, SynchScope);
4271 I = new LoadInst(Op, "", Record[OpNum+1], Align, Ordering, SSID);
42344272
42354273 InstructionList.push_back(I);
42364274 break;
42594297 }
42604298 case bitc::FUNC_CODE_INST_STOREATOMIC:
42614299 case bitc::FUNC_CODE_INST_STOREATOMIC_OLD: {
4262 // STOREATOMIC: [ptrty, ptr, val, align, vol, ordering, synchscope]
4300 // STOREATOMIC: [ptrty, ptr, val, align, vol, ordering, ssid]
42634301 unsigned OpNum = 0;
42644302 Value *Val, *Ptr;
42654303 if (getValueTypePair(Record, OpNum, NextValueNo, Ptr) ||
42794317 Ordering == AtomicOrdering::Acquire ||
42804318 Ordering == AtomicOrdering::AcquireRelease)
42814319 return error("Invalid record");
4282 SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 3]);
4320 SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 3]);
42834321 if (Ordering != AtomicOrdering::NotAtomic && Record[OpNum] == 0)
42844322 return error("Invalid record");
42854323
42864324 unsigned Align;
42874325 if (Error Err = parseAlignmentValue(Record[OpNum], Align))
42884326 return Err;
4289 I = new StoreInst(Val, Ptr, Record[OpNum+1], Align, Ordering, SynchScope);
4327 I = new StoreInst(Val, Ptr, Record[OpNum+1], Align, Ordering, SSID);
42904328 InstructionList.push_back(I);
42914329 break;
42924330 }
42934331 case bitc::FUNC_CODE_INST_CMPXCHG_OLD:
42944332 case bitc::FUNC_CODE_INST_CMPXCHG: {
4295 // CMPXCHG:[ptrty, ptr, cmp, new, vol, successordering, synchscope,
4333 // CMPXCHG:[ptrty, ptr, cmp, new, vol, successordering, ssid,
42964334 // failureordering?, isweak?]
42974335 unsigned OpNum = 0;
42984336 Value *Ptr, *Cmp, *New;
43094347 if (SuccessOrdering == AtomicOrdering::NotAtomic ||
43104348 SuccessOrdering == AtomicOrdering::Unordered)
43114349 return error("Invalid record");
4312 SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 2]);
4350 SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 2]);
43134351
43144352 if (Error Err = typeCheckLoadStoreInst(Cmp->getType(), Ptr->getType()))
43154353 return Err;
43214359 FailureOrdering = getDecodedOrdering(Record[OpNum + 3]);
43224360
43234361 I = new AtomicCmpXchgInst(Ptr, Cmp, New, SuccessOrdering, FailureOrdering,
4324 SynchScope);
4362 SSID);
43254363 cast<AtomicCmpXchgInst>(I)->setVolatile(Record[OpNum]);
43264364
43274365 if (Record.size() < 8) {
43384376 break;
43394377 }
43404378 case bitc::FUNC_CODE_INST_ATOMICRMW: {
4341 // ATOMICRMW:[ptrty, ptr, val, op, vol, ordering, synchscope]
4379 // ATOMICRMW:[ptrty, ptr, val, op, vol, ordering, ssid]
43424380 unsigned OpNum = 0;
43434381 Value *Ptr, *Val;
43444382 if (getValueTypePair(Record, OpNum, NextValueNo, Ptr) ||
43554393 if (Ordering == AtomicOrdering::NotAtomic ||
43564394 Ordering == AtomicOrdering::Unordered)
43574395 return error("Invalid record");
4358 SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 3]);
4359 I = new AtomicRMWInst(Operation, Ptr, Val, Ordering, SynchScope);
4396 SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 3]);
4397 I = new AtomicRMWInst(Operation, Ptr, Val, Ordering, SSID);
43604398 cast<AtomicRMWInst>(I)->setVolatile(Record[OpNum+1]);
43614399 InstructionList.push_back(I);
43624400 break;
43634401 }
4364 case bitc::FUNC_CODE_INST_FENCE: { // FENCE:[ordering, synchscope]
4402 case bitc::FUNC_CODE_INST_FENCE: { // FENCE:[ordering, ssid]
43654403 if (2 != Record.size())
43664404 return error("Invalid record");
43674405 AtomicOrdering Ordering = getDecodedOrdering(Record[0]);
43694407 Ordering == AtomicOrdering::Unordered ||
43704408 Ordering == AtomicOrdering::Monotonic)
43714409 return error("Invalid record");
4372 SynchronizationScope SynchScope = getDecodedSynchScope(Record[1]);
4373 I = new FenceInst(Context, Ordering, SynchScope);
4410 SyncScope::ID SSID = getDecodedSyncScopeID(Record[1]);
4411 I = new FenceInst(Context, Ordering, SSID);
43744412 InstructionList.push_back(I);
43754413 break;
43764414 }
45644602 return Err;
45654603 }
45664604 return Error::success();
4605 }
4606
4607 SyncScope::ID BitcodeReader::getDecodedSyncScopeID(unsigned Val) {
4608 if (Val == SyncScope::SingleThread || Val == SyncScope::System)
4609 return SyncScope::ID(Val);
4610 if (Val >= SSIDs.size())
4611 return SyncScope::System; // Map unknown synchronization scopes to system.
4612 return SSIDs[Val];
45674613 }
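The decode-side mapping above can be sketched as a small standalone function. This is a hypothetical simplification (names and the fallback value are assumptions modeled on getDecodedSyncScopeID): the two pre-defined IDs decode to themselves, a valid index selects the scope registered while reading SYNC_SCOPE_NAMES_BLOCK, and anything out of range conservatively falls back to the system scope.

```cpp
#include <cstdint>
#include <vector>

// Pre-defined synchronization scope IDs, modeled on SyncScope::SingleThread
// and SyncScope::System.
enum : uint8_t { SingleThreadID = 0, SystemID = 1 };

// Map a bitcode value to a synchronization scope ID.
uint8_t decodeSyncScopeID(unsigned Val,
                          const std::vector<uint8_t> &DecodedIDs) {
  if (Val == SingleThreadID || Val == SystemID)
    return static_cast<uint8_t>(Val); // pre-defined IDs are always stable
  if (Val >= DecodedIDs.size())
    return SystemID; // unknown scope: fall back to the system-wide scope
  return DecodedIDs[Val]; // target-specific scope registered while reading
}
```

Falling back to the system scope is the conservative choice: treating an unknown scope as the widest one can only over-synchronize, never under-synchronize.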
45684614
45694615 //===----------------------------------------------------------------------===//
265265 const GlobalObject &GO);
266266 void writeModuleMetadataKinds();
267267 void writeOperandBundleTags();
268 void writeSyncScopeNames();
268269 void writeConstants(unsigned FirstVal, unsigned LastVal, bool isGlobal);
269270 void writeModuleConstants();
270271 bool pushValueAndType(const Value *V, unsigned InstID,
315316 return VE.getValueID(VI.getValue());
316317 }
317318 std::map<GlobalValue::GUID, unsigned> &valueIds() { return GUIDToValueIdMap; }
319
320 unsigned getEncodedSyncScopeID(SyncScope::ID SSID) {
321 return unsigned(SSID);
322 }
318323 };
319324
320325 /// Class to manage the bitcode writing for a combined index.
482487 case AtomicOrdering::SequentiallyConsistent: return bitc::ORDERING_SEQCST;
483488 }
484489 llvm_unreachable("Invalid ordering");
485 }
486
487 static unsigned getEncodedSynchScope(SynchronizationScope SynchScope) {
488 switch (SynchScope) {
489 case SingleThread: return bitc::SYNCHSCOPE_SINGLETHREAD;
490 case CrossThread: return bitc::SYNCHSCOPE_CROSSTHREAD;
491 }
492 llvm_unreachable("Invalid synch scope");
493490 }
494491
495492 static void writeStringRecord(BitstreamWriter &Stream, unsigned Code,
20412038 Stream.ExitBlock();
20422039 }
20432040
2041 void ModuleBitcodeWriter::writeSyncScopeNames() {
2042 SmallVector<StringRef, 8> SSNs;
2043 M.getContext().getSyncScopeNames(SSNs);
2044 if (SSNs.empty())
2045 return;
2046
2047 Stream.EnterSubblock(bitc::SYNC_SCOPE_NAMES_BLOCK_ID, 2);
2048
2049 SmallVector<unsigned, 64> Record;
2050 for (auto SSN : SSNs) {
2051 Record.append(SSN.begin(), SSN.end());
2052 Stream.EmitRecord(bitc::SYNC_SCOPE_NAME, Record, 0);
2053 Record.clear();
2054 }
2055
2056 Stream.ExitBlock();
2057 }
2058
20442059 static void emitSignedInt64(SmallVectorImpl<uint64_t> &Vals, uint64_t V) {
20452060 if ((int64_t)V >= 0)
20462061 Vals.push_back(V << 1);
26572672 Vals.push_back(cast<LoadInst>(I).isVolatile());
26582673 if (cast<LoadInst>(I).isAtomic()) {
26592674 Vals.push_back(getEncodedOrdering(cast<LoadInst>(I).getOrdering()));
2660 Vals.push_back(getEncodedSynchScope(cast<LoadInst>(I).getSynchScope()));
2675 Vals.push_back(getEncodedSyncScopeID(cast<LoadInst>(I).getSyncScopeID()));
26612676 }
26622677 break;
26632678 case Instruction::Store:
26712686 Vals.push_back(cast<StoreInst>(I).isVolatile());
26722687 if (cast<StoreInst>(I).isAtomic()) {
26732688 Vals.push_back(getEncodedOrdering(cast<StoreInst>(I).getOrdering()));
2674 Vals.push_back(getEncodedSynchScope(cast<StoreInst>(I).getSynchScope()));
2689 Vals.push_back(
2690 getEncodedSyncScopeID(cast<StoreInst>(I).getSyncScopeID()));
26752691 }
26762692 break;
26772693 case Instruction::AtomicCmpXchg:
26832699 Vals.push_back(
26842700 getEncodedOrdering(cast<AtomicCmpXchgInst>(I).getSuccessOrdering()));
26852701 Vals.push_back(
2686 getEncodedSynchScope(cast<AtomicCmpXchgInst>(I).getSynchScope()));
2702 getEncodedSyncScopeID(cast<AtomicCmpXchgInst>(I).getSyncScopeID()));
26872703 Vals.push_back(
26882704 getEncodedOrdering(cast<AtomicCmpXchgInst>(I).getFailureOrdering()));
26892705 Vals.push_back(cast<AtomicCmpXchgInst>(I).isWeak());
26972713 Vals.push_back(cast<AtomicRMWInst>(I).isVolatile());
26982714 Vals.push_back(getEncodedOrdering(cast<AtomicRMWInst>(I).getOrdering()));
26992715 Vals.push_back(
2700 getEncodedSynchScope(cast<AtomicRMWInst>(I).getSynchScope()));
2716 getEncodedSyncScopeID(cast<AtomicRMWInst>(I).getSyncScopeID()));
27012717 break;
27022718 case Instruction::Fence:
27032719 Code = bitc::FUNC_CODE_INST_FENCE;
27042720 Vals.push_back(getEncodedOrdering(cast<FenceInst>(I).getOrdering()));
2705 Vals.push_back(getEncodedSynchScope(cast<FenceInst>(I).getSynchScope()));
2721 Vals.push_back(getEncodedSyncScopeID(cast<FenceInst>(I).getSyncScopeID()));
27062722 break;
27072723 case Instruction::Call: {
27082724 const CallInst &CI = cast(I);
37153731 writeUseListBlock(nullptr);
37163732
37173733 writeOperandBundleTags();
3734 writeSyncScopeNames();
37183735
37193736 // Emit function bodies.
37203737 DenseMap<const Function *, uint64_t> FunctionToBitcodeIndex;
360360 auto *NewLI = Builder.CreateLoad(NewAddr);
361361 NewLI->setAlignment(LI->getAlignment());
362362 NewLI->setVolatile(LI->isVolatile());
363 NewLI->setAtomic(LI->getOrdering(), LI->getSynchScope());
363 NewLI->setAtomic(LI->getOrdering(), LI->getSyncScopeID());
364364 DEBUG(dbgs() << "Replaced " << *LI << " with " << *NewLI << "\n");
365365
366366 Value *NewVal = Builder.CreateBitCast(NewLI, LI->getType());
443443 StoreInst *NewSI = Builder.CreateStore(NewVal, NewAddr);
444444 NewSI->setAlignment(SI->getAlignment());
445445 NewSI->setVolatile(SI->isVolatile());
446 NewSI->setAtomic(SI->getOrdering(), SI->getSynchScope());
446 NewSI->setAtomic(SI->getOrdering(), SI->getSyncScopeID());
447447 DEBUG(dbgs() << "Replaced " << *SI << " with " << *NewSI << "\n");
448448 SI->eraseFromParent();
449449 return NewSI;
800800 Value *FullWord_Cmp = Builder.CreateOr(Loaded_MaskOut, Cmp_Shifted);
801801 AtomicCmpXchgInst *NewCI = Builder.CreateAtomicCmpXchg(
802802 PMV.AlignedAddr, FullWord_Cmp, FullWord_NewVal, CI->getSuccessOrdering(),
803 CI->getFailureOrdering(), CI->getSynchScope());
803 CI->getFailureOrdering(), CI->getSyncScopeID());
804804 NewCI->setVolatile(CI->isVolatile());
805805 // When we're building a strong cmpxchg, we need a loop, so you
806806 // might think we could use a weak cmpxchg inside. But, using strong
923923 auto *NewCI = Builder.CreateAtomicCmpXchg(NewAddr, NewCmp, NewNewVal,
924924 CI->getSuccessOrdering(),
925925 CI->getFailureOrdering(),
926 CI->getSynchScope());
926 CI->getSyncScopeID());
927927 NewCI->setVolatile(CI->isVolatile());
928928 NewCI->setWeak(CI->isWeak());
929929 DEBUG(dbgs() << "Replaced " << *CI << " with " << *NewCI << "\n");
344344 *MF->getMachineMemOperand(MachinePointerInfo(LI.getPointerOperand()),
345345 Flags, DL->getTypeStoreSize(LI.getType()),
346346 getMemOpAlignment(LI), AAMDNodes(), nullptr,
347 LI.getSynchScope(), LI.getOrdering()));
347 LI.getSyncScopeID(), LI.getOrdering()));
348348 return true;
349349 }
350350
362362 *MF->getMachineMemOperand(
363363 MachinePointerInfo(SI.getPointerOperand()), Flags,
364364 DL->getTypeStoreSize(SI.getValueOperand()->getType()),
365 getMemOpAlignment(SI), AAMDNodes(), nullptr, SI.getSynchScope(),
365 getMemOpAlignment(SI), AAMDNodes(), nullptr, SI.getSyncScopeID(),
366366 SI.getOrdering()));
367367 return true;
368368 }
364364 return lexName(C, Token, MIToken::NamedIRValue, Rule.size(), ErrorCallback);
365365 }
366366
367 static Cursor maybeLexStringConstant(Cursor C, MIToken &Token,
368 ErrorCallbackType ErrorCallback) {
369 if (C.peek() != '"')
370 return None;
371 return lexName(C, Token, MIToken::StringConstant, /*PrefixLength=*/0,
372 ErrorCallback);
373 }
374
367375 static Cursor lexVirtualRegister(Cursor C, MIToken &Token) {
368376 auto Range = C;
369377 C.advance(); // Skip '%'
629637 return R.remaining();
630638 if (Cursor R = maybeLexEscapedIRValue(C, Token, ErrorCallback))
631639 return R.remaining();
640 if (Cursor R = maybeLexStringConstant(C, Token, ErrorCallback))
641 return R.remaining();
632642
633643 Token.reset(MIToken::Error, C.remaining());
634644 ErrorCallback(C.location(),
126126 NamedIRValue,
127127 IRValue,
128128 QuotedIRValue, // `<constant value>`
129 SubRegisterIndex
129 SubRegisterIndex,
130 StringConstant
130131 };
131132
132133 private:
228228 bool parseMemoryOperandFlag(MachineMemOperand::Flags &Flags);
229229 bool parseMemoryPseudoSourceValue(const PseudoSourceValue *&PSV);
230230 bool parseMachinePointerInfo(MachinePointerInfo &Dest);
231 bool parseOptionalScope(LLVMContext &Context, SyncScope::ID &SSID);
231232 bool parseOptionalAtomicOrdering(AtomicOrdering &Order);
232233 bool parseMachineMemoryOperand(MachineMemOperand *&Dest);
233234
317318 ///
318319 /// Return true if the name isn't a name of a bitmask target flag.
319320 bool getBitmaskTargetFlag(StringRef Name, unsigned &Flag);
321
322 /// parseStringConstant
323 /// ::= StringConstant
324 bool parseStringConstant(std::string &Result);
320325 };
321326
322327 } // end anonymous namespace
21342139 return false;
21352140 }
21362141
2142 bool MIParser::parseOptionalScope(LLVMContext &Context,
2143 SyncScope::ID &SSID) {
2144 SSID = SyncScope::System;
2145 if (Token.is(MIToken::Identifier) && Token.stringValue() == "syncscope") {
2146 lex();
2147 if (expectAndConsume(MIToken::lparen))
2148 return error("expected '(' in syncscope");
2149
2150 std::string SSN;
2151 if (parseStringConstant(SSN))
2152 return true;
2153
2154 SSID = Context.getOrInsertSyncScopeID(SSN);
2155 if (expectAndConsume(MIToken::rparen))
2156 return error("expected ')' in syncscope");
2157 }
2158
2159 return false;
2160 }
2161
21372162 bool MIParser::parseOptionalAtomicOrdering(AtomicOrdering &Order) {
21382163 Order = AtomicOrdering::NotAtomic;
21392164 if (Token.isNot(MIToken::Identifier))
21732198 Flags |= MachineMemOperand::MOStore;
21742199 lex();
21752200
2176 // Optional "singlethread" scope.
2177 SynchronizationScope Scope = SynchronizationScope::CrossThread;
2178 if (Token.is(MIToken::Identifier) && Token.stringValue() == "singlethread") {
2179 Scope = SynchronizationScope::SingleThread;
2180 lex();
2181 }
2201 // Optional synchronization scope.
2202 SyncScope::ID SSID;
2203 if (parseOptionalScope(MF.getFunction()->getContext(), SSID))
2204 return true;
21822205
21832206 // Up to two atomic orderings (cmpxchg provides guarantees on failure).
21842207 AtomicOrdering Order, FailureOrder;
22432266 if (expectAndConsume(MIToken::rparen))
22442267 return true;
22452268 Dest = MF.getMachineMemOperand(Ptr, Flags, Size, BaseAlignment, AAInfo, Range,
2246 Scope, Order, FailureOrder);
2269 SSID, Order, FailureOrder);
22472270 return false;
22482271 }
22492272
24562479 return false;
24572480 }
24582481
2482 bool MIParser::parseStringConstant(std::string &Result) {
2483 if (Token.isNot(MIToken::StringConstant))
2484 return error("expected string constant");
2485 Result = Token.stringValue();
2486 lex();
2487 return false;
2488 }
2489
24592490 bool llvm::parseMachineBasicBlockDefinitions(PerFunctionMIParsingState &PFS,
24602491 StringRef Src,
24612492 SMDiagnostic &Error) {
1717 #include "llvm/ADT/SmallPtrSet.h"
1818 #include "llvm/ADT/SmallVector.h"
1919 #include "llvm/ADT/STLExtras.h"
20 #include "llvm/ADT/StringExtras.h"
2021 #include "llvm/ADT/StringRef.h"
2122 #include "llvm/ADT/Twine.h"
2223 #include "llvm/CodeGen/GlobalISel/RegisterBank.h"
138139 ModuleSlotTracker &MST;
139140 const DenseMap<const uint32_t *, unsigned> &RegisterMaskIds;
140141 const DenseMap<int, FrameIndexOperand> &StackObjectOperandMapping;
142 /// Synchronization scope names registered with LLVMContext.
143 SmallVector<StringRef, 8> SSNs;
141144
142145 bool canPredictBranchProbabilities(const MachineBasicBlock &MBB) const;
143146 bool canPredictSuccessors(const MachineBasicBlock &MBB) const;
161164 void print(const MachineOperand &Op, const TargetRegisterInfo *TRI,
162165 unsigned I, bool ShouldPrintRegisterTies,
163166 LLT TypeToPrint, bool IsDef = false);
164 void print(const MachineMemOperand &Op);
167 void print(const LLVMContext &Context, const MachineMemOperand &Op);
168 void printSyncScope(const LLVMContext &Context, SyncScope::ID SSID);
165169
166170 void print(const MCCFIInstruction &CFI, const TargetRegisterInfo *TRI);
167171 };
730734
731735 if (!MI.memoperands_empty()) {
732736 OS << " :: ";
737 const LLVMContext &Context = MF->getFunction()->getContext();
733738 bool NeedComma = false;
734739 for (const auto *Op : MI.memoperands()) {
735740 if (NeedComma)
736741 OS << ", ";
737 print(*Op);
742 print(Context, *Op);
738743 NeedComma = true;
739744 }
740745 }
10301035 }
10311036 }
10321037
1033 void MIPrinter::print(const MachineMemOperand &Op) {
1038 void MIPrinter::print(const LLVMContext &Context, const MachineMemOperand &Op) {
10341039 OS << '(';
10351040 // TODO: Print operand's target specific flags.
10361041 if (Op.isVolatile())
10481053 OS << "store ";
10491054 }
10501055
1051 if (Op.getSynchScope() == SynchronizationScope::SingleThread)
1052 OS << "singlethread ";
1056 printSyncScope(Context, Op.getSyncScopeID());
10531057
10541058 if (Op.getOrdering() != AtomicOrdering::NotAtomic)
10551059 OS << toIRString(Op.getOrdering()) << ' ';
11181122 OS << ')';
11191123 }
11201124
1125 void MIPrinter::printSyncScope(const LLVMContext &Context, SyncScope::ID SSID) {
1126 switch (SSID) {
1127 case SyncScope::System: {
1128 break;
1129 }
1130 default: {
1131 if (SSNs.empty())
1132 Context.getSyncScopeNames(SSNs);
1133
1134 OS << "syncscope(\"";
1135 PrintEscapedString(SSNs[SSID], OS);
1136 OS << "\") ";
1137 break;
1138 }
1139 }
1140 }
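The printing rule in printSyncScope can be sketched as a pure function. This is a hypothetical simplification (the function name and signature are assumptions): the system scope is the default and prints as nothing, while every other ID, including the pre-defined single-thread scope, prints as a `syncscope("<name>")` prefix looked up from the context's name table, where names are ordered by increasing ID so the ID indexes the table directly.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Format a synchronization scope ID for MIR/IR output.
// Names must be ordered by increasing scope ID; SSID is assumed in range.
std::string formatSyncScope(uint8_t SSID,
                            const std::vector<std::string> &Names) {
  const uint8_t SystemID = 1; // modeled on SyncScope::System
  if (SSID == SystemID)
    return ""; // default scope is left implicit in the printed form
  return "syncscope(\"" + Names[SSID] + "\") ";
}
```

This matches the new syntax described in the commit message: unscoped atomics default to the system scope, and only non-default scopes appear in the output.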
1141
11211142 static void printCFIRegister(unsigned DwarfReg, raw_ostream &OS,
11221143 const TargetRegisterInfo *TRI) {
11231144 int Reg = TRI->getLLVMRegNum(DwarfReg, true);
304304 MachineMemOperand *MachineFunction::getMachineMemOperand(
305305 MachinePointerInfo PtrInfo, MachineMemOperand::Flags f, uint64_t s,
306306 unsigned base_alignment, const AAMDNodes &AAInfo, const MDNode *Ranges,
307 SynchronizationScope SynchScope, AtomicOrdering Ordering,
307 SyncScope::ID SSID, AtomicOrdering Ordering,
308308 AtomicOrdering FailureOrdering) {
309309 return new (Allocator)
310310 MachineMemOperand(PtrInfo, f, s, base_alignment, AAInfo, Ranges,
311 SynchScope, Ordering, FailureOrdering);
311 SSID, Ordering, FailureOrdering);
312312 }
313313
314314 MachineMemOperand *
319319 MachineMemOperand(MachinePointerInfo(MMO->getValue(),
320320 MMO->getOffset()+Offset),
321321 MMO->getFlags(), Size, MMO->getBaseAlignment(),
322 AAMDNodes(), nullptr, MMO->getSynchScope(),
322 AAMDNodes(), nullptr, MMO->getSyncScopeID(),
323323 MMO->getOrdering(), MMO->getFailureOrdering());
324324 return new (Allocator)
325325 MachineMemOperand(MachinePointerInfo(MMO->getPseudoValue(),
326326 MMO->getOffset()+Offset),
327327 MMO->getFlags(), Size, MMO->getBaseAlignment(),
328 AAMDNodes(), nullptr, MMO->getSynchScope(),
328 AAMDNodes(), nullptr, MMO->getSyncScopeID(),
329329 MMO->getOrdering(), MMO->getFailureOrdering());
330330 }
331331
358358 (*I)->getFlags() & ~MachineMemOperand::MOStore,
359359 (*I)->getSize(), (*I)->getBaseAlignment(),
360360 (*I)->getAAInfo(), nullptr,
361 (*I)->getSynchScope(), (*I)->getOrdering(),
361 (*I)->getSyncScopeID(), (*I)->getOrdering(),
362362 (*I)->getFailureOrdering());
363363 Result[Index] = JustLoad;
364364 }
392392 (*I)->getFlags() & ~MachineMemOperand::MOLoad,
393393 (*I)->getSize(), (*I)->getBaseAlignment(),
394394 (*I)->getAAInfo(), nullptr,
395 (*I)->getSynchScope(), (*I)->getOrdering(),
395 (*I)->getSyncScopeID(), (*I)->getOrdering(),
396396 (*I)->getFailureOrdering());
397397 Result[Index] = JustStore;
398398 }
613613 uint64_t s, unsigned int a,
614614 const AAMDNodes &AAInfo,
615615 const MDNode *Ranges,
616 SynchronizationScope SynchScope,
616 SyncScope::ID SSID,
617617 AtomicOrdering Ordering,
618618 AtomicOrdering FailureOrdering)
619619 : PtrInfo(ptrinfo), Size(s), FlagVals(f), BaseAlignLog2(Log2_32(a) + 1),
624624 assert(getBaseAlignment() == a && "Alignment is not a power of 2!");
625625 assert((isLoad() || isStore()) && "Not a load/store!");
626626
627 AtomicInfo.SynchScope = static_cast<unsigned>(SynchScope);
628 assert(getSynchScope() == SynchScope && "Value truncated");
627 AtomicInfo.SSID = static_cast<unsigned>(SSID);
628 assert(getSyncScopeID() == SSID && "Value truncated");
629629 AtomicInfo.Ordering = static_cast(Ordering);
630630 assert(getOrdering() == Ordering && "Value truncated");
631631 AtomicInfo.FailureOrdering = static_cast(FailureOrdering);
54425442 unsigned Opcode, const SDLoc &dl, EVT MemVT, SDVTList VTs, SDValue Chain,
54435443 SDValue Ptr, SDValue Cmp, SDValue Swp, MachinePointerInfo PtrInfo,
54445444 unsigned Alignment, AtomicOrdering SuccessOrdering,
5445 AtomicOrdering FailureOrdering, SynchronizationScope SynchScope) {
5445 AtomicOrdering FailureOrdering, SyncScope::ID SSID) {
54465446 assert(Opcode == ISD::ATOMIC_CMP_SWAP ||
54475447 Opcode == ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS);
54485448 assert(Cmp.getValueType() == Swp.getValueType() && "Invalid Atomic Op Types");
54585458 MachineMemOperand::MOStore;
54595459 MachineMemOperand *MMO =
54605460 MF.getMachineMemOperand(PtrInfo, Flags, MemVT.getStoreSize(), Alignment,
5461 AAMDNodes(), nullptr, SynchScope, SuccessOrdering,
5461 AAMDNodes(), nullptr, SSID, SuccessOrdering,
54625462 FailureOrdering);
54635463
54645464 return getAtomicCmpSwap(Opcode, dl, MemVT, VTs, Chain, Ptr, Cmp, Swp, MMO);
54805480 SDValue Chain, SDValue Ptr, SDValue Val,
54815481 const Value *PtrVal, unsigned Alignment,
54825482 AtomicOrdering Ordering,
5483 SynchronizationScope SynchScope) {
5483 SyncScope::ID SSID) {
54845484 if (Alignment == 0) // Ensure that codegen never sees alignment 0
54855485 Alignment = getEVTAlignment(MemVT);
54865486
55005500 MachineMemOperand *MMO =
55015501 MF.getMachineMemOperand(MachinePointerInfo(PtrVal), Flags,
55025502 MemVT.getStoreSize(), Alignment, AAMDNodes(),
5503 nullptr, SynchScope, Ordering);
5503 nullptr, SSID, Ordering);
55045504
55055505 return getAtomic(Opcode, dl, MemVT, Chain, Ptr, Val, MMO);
55065506 }
39893989 SDLoc dl = getCurSDLoc();
39903990 AtomicOrdering SuccessOrder = I.getSuccessOrdering();
39913991 AtomicOrdering FailureOrder = I.getFailureOrdering();
3992 SynchronizationScope Scope = I.getSynchScope();
3992 SyncScope::ID SSID = I.getSyncScopeID();
39933993
39943994 SDValue InChain = getRoot();
39953995
39993999 ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS, dl, MemVT, VTs, InChain,
40004000 getValue(I.getPointerOperand()), getValue(I.getCompareOperand()),
40014001 getValue(I.getNewValOperand()), MachinePointerInfo(I.getPointerOperand()),
4002 /*Alignment=*/ 0, SuccessOrder, FailureOrder, Scope);
4002 /*Alignment=*/ 0, SuccessOrder, FailureOrder, SSID);
40034003
40044004 SDValue OutChain = L.getValue(2);
40054005
40254025 case AtomicRMWInst::UMin: NT = ISD::ATOMIC_LOAD_UMIN; break;
40264026 }
40274027 AtomicOrdering Order = I.getOrdering();
4028 SynchronizationScope Scope = I.getSynchScope();
4028 SyncScope::ID SSID = I.getSyncScopeID();
40294029
40304030 SDValue InChain = getRoot();
40314031
40364036 getValue(I.getPointerOperand()),
40374037 getValue(I.getValOperand()),
40384038 I.getPointerOperand(),
4039 /* Alignment=*/ 0, Order, Scope);
4039 /* Alignment=*/ 0, Order, SSID);
40404040
40414041 SDValue OutChain = L.getValue(1);
40424042
40514051 Ops[0] = getRoot();
40524052 Ops[1] = DAG.getConstant((unsigned)I.getOrdering(), dl,
40534053 TLI.getFenceOperandTy(DAG.getDataLayout()));
4054 Ops[2] = DAG.getConstant(I.getSynchScope(), dl,
4054 Ops[2] = DAG.getConstant(I.getSyncScopeID(), dl,
40554055 TLI.getFenceOperandTy(DAG.getDataLayout()));
40564056 DAG.setRoot(DAG.getNode(ISD::ATOMIC_FENCE, dl, MVT::Other, Ops));
40574057 }
40594059 void SelectionDAGBuilder::visitAtomicLoad(const LoadInst &I) {
40604060 SDLoc dl = getCurSDLoc();
40614061 AtomicOrdering Order = I.getOrdering();
4062 SynchronizationScope Scope = I.getSynchScope();
4062 SyncScope::ID SSID = I.getSyncScopeID();
40634063
40644064 SDValue InChain = getRoot();
40654065
40774077 VT.getStoreSize(),
40784078 I.getAlignment() ? I.getAlignment() :
40794079 DAG.getEVTAlignment(VT),
4080 AAMDNodes(), nullptr, Scope, Order);
4080 AAMDNodes(), nullptr, SSID, Order);
40814081
40824082 InChain = TLI.prepareVolatileOrAtomicLoad(InChain, dl, DAG);
40834083 SDValue L =
40944094 SDLoc dl = getCurSDLoc();
40954095
40964096 AtomicOrdering Order = I.getOrdering();
4097 SynchronizationScope Scope = I.getSynchScope();
4097 SyncScope::ID SSID = I.getSyncScopeID();
40984098
40994099 SDValue InChain = getRoot();
41004100
41114111 getValue(I.getPointerOperand()),
41124112 getValue(I.getValueOperand()),
41134113 I.getPointerOperand(), I.getAlignment(),
4114 Order, Scope);
4114 Order, SSID);
41154115
41164116 DAG.setRoot(OutChain);
41174117 }
21182118 bool ShouldPreserveUseListOrder;
21192119 UseListOrderStack UseListOrders;
21202120 SmallVector<StringRef, 8> MDNames;
2121 /// Synchronization scope names registered with LLVMContext.
2122 SmallVector<StringRef, 8> SSNs;
21212123
21222124 public:
21232125 /// Construct an AssemblyWriter with an external SlotTracker
21332135 void writeOperand(const Value *Op, bool PrintType);
21342136 void writeParamOperand(const Value *Operand, AttributeSet Attrs);
21352137 void writeOperandBundles(ImmutableCallSite CS);
2136 void writeAtomic(AtomicOrdering Ordering, SynchronizationScope SynchScope);
2137 void writeAtomicCmpXchg(AtomicOrdering SuccessOrdering,
2138 void writeSyncScope(const LLVMContext &Context,
2139 SyncScope::ID SSID);
2140 void writeAtomic(const LLVMContext &Context,
2141 AtomicOrdering Ordering,
2142 SyncScope::ID SSID);
2143 void writeAtomicCmpXchg(const LLVMContext &Context,
2144 AtomicOrdering SuccessOrdering,
21382145 AtomicOrdering FailureOrdering,
2139 SynchronizationScope SynchScope);
2146 SyncScope::ID SSID);
21402147
21412148 void writeAllMDNodes();
21422149 void writeMDNode(unsigned Slot, const MDNode *Node);
21982205 WriteAsOperandInternal(Out, Operand, &TypePrinter, &Machine, TheModule);
21992206 }
22002207
2201 void AssemblyWriter::writeAtomic(AtomicOrdering Ordering,
2202 SynchronizationScope SynchScope) {
2208 void AssemblyWriter::writeSyncScope(const LLVMContext &Context,
2209 SyncScope::ID SSID) {
2210 switch (SSID) {
2211 case SyncScope::System: {
2212 break;
2213 }
2214 default: {
2215 if (SSNs.empty())
2216 Context.getSyncScopeNames(SSNs);
2217
2218 Out << " syncscope(\"";
2219 PrintEscapedString(SSNs[SSID], Out);
2220 Out << "\")";
2221 break;
2222 }
2223 }
2224 }
2225
2226 void AssemblyWriter::writeAtomic(const LLVMContext &Context,
2227 AtomicOrdering Ordering,
2228 SyncScope::ID SSID) {
22032229 if (Ordering == AtomicOrdering::NotAtomic)
22042230 return;
22052231
2206 switch (SynchScope) {
2207 case SingleThread: Out << " singlethread"; break;
2208 case CrossThread: break;
2209 }
2210
2232 writeSyncScope(Context, SSID);
22112233 Out << " " << toIRString(Ordering);
22122234 }
22132235
2214 void AssemblyWriter::writeAtomicCmpXchg(AtomicOrdering SuccessOrdering,
2236 void AssemblyWriter::writeAtomicCmpXchg(const LLVMContext &Context,
2237 AtomicOrdering SuccessOrdering,
22152238 AtomicOrdering FailureOrdering,
2216 SynchronizationScope SynchScope) {
2239 SyncScope::ID SSID) {
22172240 assert(SuccessOrdering != AtomicOrdering::NotAtomic &&
22182241 FailureOrdering != AtomicOrdering::NotAtomic);
22192242
2220 switch (SynchScope) {
2221 case SingleThread: Out << " singlethread"; break;
2222 case CrossThread: break;
2223 }
2224
2243 writeSyncScope(Context, SSID);
22252244 Out << " " << toIRString(SuccessOrdering);
22262245 Out << " " << toIRString(FailureOrdering);
22272246 }
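The writeSyncScope/writeAtomic printers above emit the new textual form described in the commit message. A hand-written sketch of the resulting IR; the "agent" scope is a hypothetical target-specific name used only for illustration, not one this change defines:

```llvm
; System (cross-thread) scope is the default and prints no syncscope at all.
%v0 = load atomic i32, i32* %p seq_cst, align 4
; The pre-defined single-thread scope replaces the old "singlethread" keyword.
%v1 = load atomic i32, i32* %p syncscope("singlethread") seq_cst, align 4
; Any other string names a target-specific scope ("agent" is hypothetical).
%v2 = load atomic i32, i32* %p syncscope("agent") seq_cst, align 4
fence syncscope("singlethread") acquire
```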
32143233 // Print atomic ordering/alignment for memory operations
32153234 if (const LoadInst *LI = dyn_cast<LoadInst>(&I)) {
32163235 if (LI->isAtomic())
3217 writeAtomic(LI->getOrdering(), LI->getSynchScope());
3236 writeAtomic(LI->getContext(), LI->getOrdering(), LI->getSyncScopeID());
32183237 if (LI->getAlignment())
32193238 Out << ", align " << LI->getAlignment();
32203239 } else if (const StoreInst *SI = dyn_cast<StoreInst>(&I)) {
32213240 if (SI->isAtomic())
3222 writeAtomic(SI->getOrdering(), SI->getSynchScope());
3241 writeAtomic(SI->getContext(), SI->getOrdering(), SI->getSyncScopeID());
32233242 if (SI->getAlignment())
32243243 Out << ", align " << SI->getAlignment();
32253244 } else if (const AtomicCmpXchgInst *CXI = dyn_cast<AtomicCmpXchgInst>(&I)) {
3226 writeAtomicCmpXchg(CXI->getSuccessOrdering(), CXI->getFailureOrdering(),
3227 CXI->getSynchScope());
3245 writeAtomicCmpXchg(CXI->getContext(), CXI->getSuccessOrdering(),
3246 CXI->getFailureOrdering(), CXI->getSyncScopeID());
32283247 } else if (const AtomicRMWInst *RMWI = dyn_cast<AtomicRMWInst>(&I)) {
3229 writeAtomic(RMWI->getOrdering(), RMWI->getSynchScope());
3248 writeAtomic(RMWI->getContext(), RMWI->getOrdering(),
3249 RMWI->getSyncScopeID());
32303250 } else if (const FenceInst *FI = dyn_cast<FenceInst>(&I)) {
3231 writeAtomic(FI->getOrdering(), FI->getSynchScope());
3251 writeAtomic(FI->getContext(), FI->getOrdering(), FI->getSyncScopeID());
32323252 }
32333253
32343254 // Print Metadata info.
27552755 llvm_unreachable("Invalid AtomicOrdering value!");
27562756 }
27572757
2758 // TODO: Should this and other atomic instructions support building with
2759 // "syncscope"?
27582760 LLVMValueRef LLVMBuildFence(LLVMBuilderRef B, LLVMAtomicOrdering Ordering,
27592761 LLVMBool isSingleThread, const char *Name) {
27602762 return wrap(
27612763 unwrap(B)->CreateFence(mapFromLLVMOrdering(Ordering),
2762 isSingleThread ? SingleThread : CrossThread,
2764 isSingleThread ? SyncScope::SingleThread
2765 : SyncScope::System,
27632766 Name));
27642767 }
27652768
30413044 case LLVMAtomicRMWBinOpUMin: intop = AtomicRMWInst::UMin; break;
30423045 }
30433046 return wrap(unwrap(B)->CreateAtomicRMW(intop, unwrap(PTR), unwrap(Val),
3044 mapFromLLVMOrdering(ordering), singleThread ? SingleThread : CrossThread));
3047 mapFromLLVMOrdering(ordering), singleThread ? SyncScope::SingleThread
3048 : SyncScope::System));
30453049 }
30463050
30473051 LLVMValueRef LLVMBuildAtomicCmpXchg(LLVMBuilderRef B, LLVMValueRef Ptr,
30533057 return wrap(unwrap(B)->CreateAtomicCmpXchg(unwrap(Ptr), unwrap(Cmp),
30543058 unwrap(New), mapFromLLVMOrdering(SuccessOrdering),
30553059 mapFromLLVMOrdering(FailureOrdering),
3056 singleThread ? SingleThread : CrossThread));
3060 singleThread ? SyncScope::SingleThread : SyncScope::System));
30573061 }
30583062
30593063
30613065 Value *P = unwrap(AtomicInst);
30623066
30633067 if (AtomicRMWInst *I = dyn_cast<AtomicRMWInst>(P))
3064 return I->getSynchScope() == SingleThread;
3065 return cast<FenceInst>(P)->getSynchScope() == SingleThread;
3068 return I->getSyncScopeID() == SyncScope::SingleThread;
3069 return cast<FenceInst>(P)->getSyncScopeID() ==
3070 SyncScope::SingleThread;
30663071 }
30673072
30683073 void LLVMSetAtomicSingleThread(LLVMValueRef AtomicInst, LLVMBool NewValue) {
30693074 Value *P = unwrap(AtomicInst);
3070 SynchronizationScope Sync = NewValue ? SingleThread : CrossThread;
3075 SyncScope::ID SSID = NewValue ? SyncScope::SingleThread : SyncScope::System;
30713076
30723077 if (AtomicRMWInst *I = dyn_cast<AtomicRMWInst>(P))
3073 return I->setSynchScope(Sync);
3074 return cast<FenceInst>(P)->setSynchScope(Sync);
3078 return I->setSyncScopeID(SSID);
3079 return cast<FenceInst>(P)->setSyncScopeID(SSID);
30753080 }
30763081
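The C API above keeps its boolean single-thread view of scopes even after this change. A minimal standalone sketch of that two-way mapping, with illustrative names standing in for the LLVM types (the enum values mirror the pre-defined IDs, where any target-specific scope reads back as "not single thread"):

```cpp
#include <cassert>

// Sketch of the mapping used by LLVMBuildFence / LLVMIsAtomicSingleThread:
// the C API's boolean flag selects one of the two pre-defined scope IDs.
enum SyncScopeID : unsigned { SingleThread = 0, System = 1 };

// Boolean flag -> pre-defined scope ID (as in LLVMSetAtomicSingleThread).
inline SyncScopeID scopeFromBool(bool IsSingleThread) {
  return IsSingleThread ? SingleThread : System;
}

// Scope ID -> boolean view; target-specific scopes (ID >= 2) are not
// single-threaded, so only the reserved SingleThread ID maps to true.
inline bool isSingleThread(unsigned SSID) {
  return SSID == SingleThread;
}
```

This is why `LLVMIsAtomicSingleThread` can compare against `SyncScope::SingleThread` alone: every other ID, pre-defined or target-specific, is treated as cross-thread.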
30773082 LLVMAtomicOrdering LLVMGetCmpXchgSuccessOrdering(LLVMValueRef CmpXchgInst) {
361361 (LI->getAlignment() == cast<LoadInst>(I2)->getAlignment() ||
362362 IgnoreAlignment) &&
363363 LI->getOrdering() == cast<LoadInst>(I2)->getOrdering() &&
364 LI->getSynchScope() == cast<LoadInst>(I2)->getSynchScope();
364 LI->getSyncScopeID() == cast<LoadInst>(I2)->getSyncScopeID();
365365 if (const StoreInst *SI = dyn_cast<StoreInst>(I1))
366366 return SI->isVolatile() == cast<StoreInst>(I2)->isVolatile() &&
367367 (SI->getAlignment() == cast<StoreInst>(I2)->getAlignment() ||
368368 IgnoreAlignment) &&
369369 SI->getOrdering() == cast<StoreInst>(I2)->getOrdering() &&
370 SI->getSynchScope() == cast<StoreInst>(I2)->getSynchScope();
370 SI->getSyncScopeID() == cast<StoreInst>(I2)->getSyncScopeID();
371371 if (const CmpInst *CI = dyn_cast<CmpInst>(I1))
372372 return CI->getPredicate() == cast<CmpInst>(I2)->getPredicate();
373373 if (const CallInst *CI = dyn_cast<CallInst>(I1))
385385 return EVI->getIndices() == cast<ExtractValueInst>(I2)->getIndices();
386386 if (const FenceInst *FI = dyn_cast<FenceInst>(I1))
387387 return FI->getOrdering() == cast<FenceInst>(I2)->getOrdering() &&
388 FI->getSynchScope() == cast<FenceInst>(I2)->getSynchScope();
388 FI->getSyncScopeID() == cast<FenceInst>(I2)->getSyncScopeID();
389389 if (const AtomicCmpXchgInst *CXI = dyn_cast<AtomicCmpXchgInst>(I1))
390390 return CXI->isVolatile() == cast<AtomicCmpXchgInst>(I2)->isVolatile() &&
391391 CXI->isWeak() == cast<AtomicCmpXchgInst>(I2)->isWeak() &&
393393 cast<AtomicCmpXchgInst>(I2)->getSuccessOrdering() &&
394394 CXI->getFailureOrdering() ==
395395 cast<AtomicCmpXchgInst>(I2)->getFailureOrdering() &&
396 CXI->getSynchScope() == cast<AtomicCmpXchgInst>(I2)->getSynchScope();
396 CXI->getSyncScopeID() ==
397 cast<AtomicCmpXchgInst>(I2)->getSyncScopeID();
397398 if (const AtomicRMWInst *RMWI = dyn_cast<AtomicRMWInst>(I1))
398399 return RMWI->getOperation() == cast<AtomicRMWInst>(I2)->getOperation() &&
399400 RMWI->isVolatile() == cast<AtomicRMWInst>(I2)->isVolatile() &&
400401 RMWI->getOrdering() == cast<AtomicRMWInst>(I2)->getOrdering() &&
401 RMWI->getSynchScope() == cast<AtomicRMWInst>(I2)->getSynchScope();
402 RMWI->getSyncScopeID() == cast<AtomicRMWInst>(I2)->getSyncScopeID();
402403
403404 return true;
404405 }
13031303 LoadInst::LoadInst(Type *Ty, Value *Ptr, const Twine &Name, bool isVolatile,
13041304 unsigned Align, Instruction *InsertBef)
13051305 : LoadInst(Ty, Ptr, Name, isVolatile, Align, AtomicOrdering::NotAtomic,
1306 CrossThread, InsertBef) {}
1306 SyncScope::System, InsertBef) {}
13071307
13081308 LoadInst::LoadInst(Value *Ptr, const Twine &Name, bool isVolatile,
13091309 unsigned Align, BasicBlock *InsertAE)
13101310 : LoadInst(Ptr, Name, isVolatile, Align, AtomicOrdering::NotAtomic,
1311 CrossThread, InsertAE) {}
1311 SyncScope::System, InsertAE) {}
13121312
13131313 LoadInst::LoadInst(Type *Ty, Value *Ptr, const Twine &Name, bool isVolatile,
13141314 unsigned Align, AtomicOrdering Order,
1315 SynchronizationScope SynchScope, Instruction *InsertBef)
1315 SyncScope::ID SSID, Instruction *InsertBef)
13161316 : UnaryInstruction(Ty, Load, Ptr, InsertBef) {
13171317 assert(Ty == cast<PointerType>(Ptr->getType())->getElementType());
13181318 setVolatile(isVolatile);
13191319 setAlignment(Align);
1320 setAtomic(Order, SynchScope);
1320 setAtomic(Order, SSID);
13211321 AssertOK();
13221322 setName(Name);
13231323 }
13241324
13251325 LoadInst::LoadInst(Value *Ptr, const Twine &Name, bool isVolatile,
13261326 unsigned Align, AtomicOrdering Order,
1327 SynchronizationScope SynchScope,
1327 SyncScope::ID SSID,
13281328 BasicBlock *InsertAE)
13291329 : UnaryInstruction(cast<PointerType>(Ptr->getType())->getElementType(),
13301330 Load, Ptr, InsertAE) {
13311331 setVolatile(isVolatile);
13321332 setAlignment(Align);
1333 setAtomic(Order, SynchScope);
1333 setAtomic(Order, SSID);
13341334 AssertOK();
13351335 setName(Name);
13361336 }
14181418 StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile, unsigned Align,
14191419 Instruction *InsertBefore)
14201420 : StoreInst(val, addr, isVolatile, Align, AtomicOrdering::NotAtomic,
1421 CrossThread, InsertBefore) {}
1421 SyncScope::System, InsertBefore) {}
14221422
14231423 StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile, unsigned Align,
14241424 BasicBlock *InsertAtEnd)
14251425 : StoreInst(val, addr, isVolatile, Align, AtomicOrdering::NotAtomic,
1426 CrossThread, InsertAtEnd) {}
1426 SyncScope::System, InsertAtEnd) {}
14271427
14281428 StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile,
14291429 unsigned Align, AtomicOrdering Order,
1430 SynchronizationScope SynchScope,
1430 SyncScope::ID SSID,
14311431 Instruction *InsertBefore)
14321432 : Instruction(Type::getVoidTy(val->getContext()), Store,
14331433 OperandTraits<StoreInst>::op_begin(this),
14371437 Op<1>() = addr;
14381438 setVolatile(isVolatile);
14391439 setAlignment(Align);
1440 setAtomic(Order, SynchScope);
1440 setAtomic(Order, SSID);
14411441 AssertOK();
14421442 }
14431443
14441444 StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile,
14451445 unsigned Align, AtomicOrdering Order,
1446 SynchronizationScope SynchScope,
1446 SyncScope::ID SSID,
14471447 BasicBlock *InsertAtEnd)
14481448 : Instruction(Type::getVoidTy(val->getContext()), Store,
14491449 OperandTraits<StoreInst>::op_begin(this),
14531453 Op<1>() = addr;
14541454 setVolatile(isVolatile);
14551455 setAlignment(Align);
1456 setAtomic(Order, SynchScope);
1456 setAtomic(Order, SSID);
14571457 AssertOK();
14581458 }
14591459
14731473 void AtomicCmpXchgInst::Init(Value *Ptr, Value *Cmp, Value *NewVal,
14741474 AtomicOrdering SuccessOrdering,
14751475 AtomicOrdering FailureOrdering,
1476 SynchronizationScope SynchScope) {
1476 SyncScope::ID SSID) {
14771477 Op<0>() = Ptr;
14781478 Op<1>() = Cmp;
14791479 Op<2>() = NewVal;
14801480 setSuccessOrdering(SuccessOrdering);
14811481 setFailureOrdering(FailureOrdering);
1482 setSynchScope(SynchScope);
1482 setSyncScopeID(SSID);
14831483
14841484 assert(getOperand(0) && getOperand(1) && getOperand(2) &&
14851485 "All operands must be non-null!");
15061506 AtomicCmpXchgInst::AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
15071507 AtomicOrdering SuccessOrdering,
15081508 AtomicOrdering FailureOrdering,
1509 SynchronizationScope SynchScope,
1509 SyncScope::ID SSID,
15101510 Instruction *InsertBefore)
15111511 : Instruction(
15121512 StructType::get(Cmp->getType(), Type::getInt1Ty(Cmp->getContext())),
15131513 AtomicCmpXchg, OperandTraits<AtomicCmpXchgInst>::op_begin(this),
15141514 OperandTraits<AtomicCmpXchgInst>::operands(this), InsertBefore) {
1515 Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SynchScope);
1515 Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SSID);
15161516 }
15171517
15181518 AtomicCmpXchgInst::AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
15191519 AtomicOrdering SuccessOrdering,
15201520 AtomicOrdering FailureOrdering,
1521 SynchronizationScope SynchScope,
1521 SyncScope::ID SSID,
15221522 BasicBlock *InsertAtEnd)
15231523 : Instruction(
15241524 StructType::get(Cmp->getType(), Type::getInt1Ty(Cmp->getContext())),
15251525 AtomicCmpXchg, OperandTraits::op_begin(this),
15261526 OperandTraits::operands(this), InsertAtEnd) {
1527 Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SynchScope);
1527 Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SSID);
15281528 }
15291529
15301530 //===----------------------------------------------------------------------===//
15331533
15341534 void AtomicRMWInst::Init(BinOp Operation, Value *Ptr, Value *Val,
15351535 AtomicOrdering Ordering,
1536 SynchronizationScope SynchScope) {
1536 SyncScope::ID SSID) {
15371537 Op<0>() = Ptr;
15381538 Op<1>() = Val;
15391539 setOperation(Operation);
15401540 setOrdering(Ordering);
1541 setSynchScope(SynchScope);
1541 setSyncScopeID(SSID);
15421542
15431543 assert(getOperand(0) && getOperand(1) &&
15441544 "All operands must be non-null!");
15531553
15541554 AtomicRMWInst::AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
15551555 AtomicOrdering Ordering,
1556 SynchronizationScope SynchScope,
1556 SyncScope::ID SSID,
15571557 Instruction *InsertBefore)
15581558 : Instruction(Val->getType(), AtomicRMW,
15591559 OperandTraits<AtomicRMWInst>::op_begin(this),
15601560 OperandTraits<AtomicRMWInst>::operands(this),
15611561 InsertBefore) {
1562 Init(Operation, Ptr, Val, Ordering, SynchScope);
1562 Init(Operation, Ptr, Val, Ordering, SSID);
15631563 }
15641564
15651565 AtomicRMWInst::AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
15661566 AtomicOrdering Ordering,
1567 SynchronizationScope SynchScope,
1567 SyncScope::ID SSID,
15681568 BasicBlock *InsertAtEnd)
15691569 : Instruction(Val->getType(), AtomicRMW,
15701570 OperandTraits<AtomicRMWInst>::op_begin(this),
15711571 OperandTraits<AtomicRMWInst>::operands(this),
15721572 InsertAtEnd) {
1573 Init(Operation, Ptr, Val, Ordering, SynchScope);
1573 Init(Operation, Ptr, Val, Ordering, SSID);
15741574 }
15751575
15761576 //===----------------------------------------------------------------------===//
15781578 //===----------------------------------------------------------------------===//
15791579
15801580 FenceInst::FenceInst(LLVMContext &C, AtomicOrdering Ordering,
1581 SynchronizationScope SynchScope,
1581 SyncScope::ID SSID,
15821582 Instruction *InsertBefore)
15831583 : Instruction(Type::getVoidTy(C), Fence, nullptr, 0, InsertBefore) {
15841584 setOrdering(Ordering);
1585 setSynchScope(SynchScope);
1585 setSyncScopeID(SSID);
15861586 }
15871587
15881588 FenceInst::FenceInst(LLVMContext &C, AtomicOrdering Ordering,
1589 SynchronizationScope SynchScope,
1589 SyncScope::ID SSID,
15901590 BasicBlock *InsertAtEnd)
15911591 : Instruction(Type::getVoidTy(C), Fence, nullptr, 0, InsertAtEnd) {
15921592 setOrdering(Ordering);
1593 setSynchScope(SynchScope);
1593 setSyncScopeID(SSID);
15941594 }
15951595
15961596 //===----------------------------------------------------------------------===//
37943794
37953795 LoadInst *LoadInst::cloneImpl() const {
37963796 return new LoadInst(getOperand(0), Twine(), isVolatile(),
3797 getAlignment(), getOrdering(), getSynchScope());
3797 getAlignment(), getOrdering(), getSyncScopeID());
37983798 }
37993799
38003800 StoreInst *StoreInst::cloneImpl() const {
38013801 return new StoreInst(getOperand(0), getOperand(1), isVolatile(),
3802 getAlignment(), getOrdering(), getSynchScope());
3802 getAlignment(), getOrdering(), getSyncScopeID());
38033803
38043804 }
38053805
38073807 AtomicCmpXchgInst *Result =
38083808 new AtomicCmpXchgInst(getOperand(0), getOperand(1), getOperand(2),
38093809 getSuccessOrdering(), getFailureOrdering(),
3810 getSynchScope());
3810 getSyncScopeID());
38113811 Result->setVolatile(isVolatile());
38123812 Result->setWeak(isWeak());
38133813 return Result;
38153815
38163816 AtomicRMWInst *AtomicRMWInst::cloneImpl() const {
38173817 AtomicRMWInst *Result =
3818 new AtomicRMWInst(getOperation(),getOperand(0), getOperand(1),
3819 getOrdering(), getSynchScope());
3818 new AtomicRMWInst(getOperation(), getOperand(0), getOperand(1),
3819 getOrdering(), getSyncScopeID());
38203820 Result->setVolatile(isVolatile());
38213821 return Result;
38223822 }
38233823
38243824 FenceInst *FenceInst::cloneImpl() const {
3825 return new FenceInst(getContext(), getOrdering(), getSynchScope());
3825 return new FenceInst(getContext(), getOrdering(), getSyncScopeID());
38263826 }
38273827
38283828 TruncInst *TruncInst::cloneImpl() const {
8080 assert(GCTransitionEntry->second == LLVMContext::OB_gc_transition &&
8181 "gc-transition operand bundle id drifted!");
8282 (void)GCTransitionEntry;
83
84 SyncScope::ID SingleThreadSSID =
85 pImpl->getOrInsertSyncScopeID("singlethread");
86 assert(SingleThreadSSID == SyncScope::SingleThread &&
87 "singlethread synchronization scope ID drifted!");
88
89 SyncScope::ID SystemSSID =
90 pImpl->getOrInsertSyncScopeID("");
91 assert(SystemSSID == SyncScope::System &&
92 "system synchronization scope ID drifted!");
8393 }
8494
8595 LLVMContext::~LLVMContext() { delete pImpl; }
254264 return pImpl->getOperandBundleTagID(Tag);
255265 }
256266
267 SyncScope::ID LLVMContext::getOrInsertSyncScopeID(StringRef SSN) {
268 return pImpl->getOrInsertSyncScopeID(SSN);
269 }
270
271 void LLVMContext::getSyncScopeNames(SmallVectorImpl<StringRef> &SSNs) const {
272 pImpl->getSyncScopeNames(SSNs);
273 }
274
257275 void LLVMContext::setGC(const Function &Fn, std::string GCName) {
258276 auto It = pImpl->GCNames.find(&Fn);
259277
204204 return I->second;
205205 }
206206
207 SyncScope::ID LLVMContextImpl::getOrInsertSyncScopeID(StringRef SSN) {
208 auto NewSSID = SSC.size();
209 assert(NewSSID < std::numeric_limits<SyncScope::ID>::max() &&
210 "Hit the maximum number of synchronization scopes allowed!");
211 return SSC.insert(std::make_pair(SSN, SyncScope::ID(NewSSID))).first->second;
212 }
213
214 void LLVMContextImpl::getSyncScopeNames(
215 SmallVectorImpl<StringRef> &SSNs) const {
216 SSNs.resize(SSC.size());
217 for (const auto &SSE : SSC)
218 SSNs[SSE.second] = SSE.first();
219 }
220
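The interning scheme implemented by `getOrInsertSyncScopeID` and `getSyncScopeNames` above can be sketched as a standalone class. This is an illustration only: `std::map` stands in for LLVM's `StringMap`, the class name is invented, and the constructor mirrors the pre-registration done in the LLVMContext constructor so that SingleThread and System get IDs 0 and 1:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Standalone sketch of the name -> ID interning used for sync scopes.
class SyncScopeTable {
  std::map<std::string, unsigned> SSC; // scope name -> scope ID
public:
  SyncScopeTable() {
    // IDs 0 and 1 are reserved so known scopes compare without string lookups.
    unsigned SingleThreadID = getOrInsert("singlethread");
    unsigned SystemID = getOrInsert("");
    assert(SingleThreadID == 0 && "singlethread scope ID drifted");
    assert(SystemID == 1 && "system scope ID drifted");
    (void)SingleThreadID;
    (void)SystemID;
  }

  // A new name gets the next sequential ID; a repeated name returns the
  // ID assigned on first registration (emplace does not overwrite).
  unsigned getOrInsert(const std::string &Name) {
    auto It = SSC.emplace(Name, static_cast<unsigned>(SSC.size()));
    return It.first->second;
  }

  // Names ordered by increasing ID, as getSyncScopeNames promises.
  std::vector<std::string> names() const {
    std::vector<std::string> Out(SSC.size());
    for (const auto &E : SSC)
      Out[E.second] = E.first;
    return Out;
  }
};
```

Because IDs are dense and assigned in registration order, the printer can lazily fetch all names once and index them directly by ID, which is exactly what `writeSyncScope` does with its cached `SSNs` vector.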
207221 /// Singleton instance of the OptBisect class.
208222 ///
209223 /// This singleton is accessed via the LLVMContext::getOptBisect() function. It
12961296 void getOperandBundleTags(SmallVectorImpl<StringRef> &Tags) const;
12971297 uint32_t getOperandBundleTagID(StringRef Tag) const;
12981298
1299 /// A set of interned synchronization scopes. The StringMap maps
1300 /// synchronization scope names to their respective synchronization scope IDs.
1301 StringMap<SyncScope::ID> SSC;
1302
1303 /// getOrInsertSyncScopeID - Maps synchronization scope name to
1304 /// synchronization scope ID. Every synchronization scope registered with
1305 /// LLVMContext has unique ID except pre-defined ones.
1306 SyncScope::ID getOrInsertSyncScopeID(StringRef SSN);
1307
1308 /// getSyncScopeNames - Populates client supplied SmallVector with
1309 /// synchronization scope names registered with LLVMContext. Synchronization
1310 /// scope names are ordered by increasing synchronization scope IDs.
1311 void getSyncScopeNames(SmallVectorImpl<StringRef> &SSNs) const;
1312
12991313 /// Maintain the GC name for each function.
13001314 ///
13011315 /// This saves allocating an additional word in Function for programs which
31073107 ElTy, &LI);
31083108 checkAtomicMemAccessSize(ElTy, &LI);
31093109 } else {
3110 Assert(LI.getSynchScope() == CrossThread,
3110 Assert(LI.getSyncScopeID() == SyncScope::System,
31113111 "Non-atomic load cannot have SynchronizationScope specified", &LI);
31123112 }
31133113
31363136 ElTy, &SI);
31373137 checkAtomicMemAccessSize(ElTy, &SI);
31383138 } else {
3139 Assert(SI.getSynchScope() == CrossThread,
3139 Assert(SI.getSyncScopeID() == SyncScope::System,
31403140 "Non-atomic store cannot have SynchronizationScope specified", &SI);
31413141 }
31423142 visitInstruction(SI);
33973397 static SDValue LowerATOMIC_FENCE(SDValue Op, SelectionDAG &DAG,
33983398 const ARMSubtarget *Subtarget) {
33993399 SDLoc dl(Op);
3400 ConstantSDNode *ScopeN = cast<ConstantSDNode>(Op.getOperand(2));
3401 auto Scope = static_cast<SynchronizationScope>(ScopeN->getZExtValue());
3402 if (Scope == SynchronizationScope::SingleThread)
3400 ConstantSDNode *SSIDNode = cast<ConstantSDNode>(Op.getOperand(2));
3401 auto SSID = static_cast<SyncScope::ID>(SSIDNode->getZExtValue());
3402 if (SSID == SyncScope::SingleThread)
34033403 return Op;
34043404
34053405 if (!Subtarget->hasDataBarrier()) {
31813181 SDLoc DL(Op);
31823182 AtomicOrdering FenceOrdering = static_cast<AtomicOrdering>(
31833183 cast<ConstantSDNode>(Op.getOperand(1))->getZExtValue());
3184 SynchronizationScope FenceScope = static_cast<SynchronizationScope>(
3184 SyncScope::ID FenceSSID = static_cast<SyncScope::ID>(
31853185 cast<ConstantSDNode>(Op.getOperand(2))->getZExtValue());
31863186
31873187 // The only fence that needs an instruction is a sequentially-consistent
31883188 // cross-thread fence.
31893189 if (FenceOrdering == AtomicOrdering::SequentiallyConsistent &&
3190 FenceScope == CrossThread) {
3190 FenceSSID == SyncScope::System) {
31913191 return SDValue(DAG.getMachineNode(SystemZ::Serialize, DL, MVT::Other,
31923192 Op.getOperand(0)),
31933193 0);
2284922849
2285022850 auto Builder = IRBuilder<>(AI);
2285122851 Module *M = Builder.GetInsertBlock()->getParent()->getParent();
22852 auto SynchScope = AI->getSynchScope();
22852 auto SSID = AI->getSyncScopeID();
2285322853 // We must restrict the ordering to avoid generating loads with Release or
2285422854 // ReleaseAcquire orderings.
2285522855 auto Order = AtomicCmpXchgInst::getStrongestFailureOrdering(AI->getOrdering());
2287122871 // otherwise, we might be able to be more aggressive on relaxed idempotent
2287222872 // rmw. In practice, they do not look useful, so we don't try to be
2287322873 // especially clever.
22874 if (SynchScope == SingleThread)
22874 if (SSID == SyncScope::SingleThread)
2287522875 // FIXME: we could just insert an X86ISD::MEMBARRIER here, except we are at
2287622876 // the IR level, so we must wrap it in an intrinsic.
2287722877 return nullptr;
2289022890 // Finally we can emit the atomic load.
2289122891 LoadInst *Loaded = Builder.CreateAlignedLoad(Ptr,
2289222892 AI->getType()->getPrimitiveSizeInBits());
22893 Loaded->setAtomic(Order, SynchScope);
22893 Loaded->setAtomic(Order, SSID);
2289422894 AI->replaceAllUsesWith(Loaded);
2289522895 AI->eraseFromParent();
2289622896 return Loaded;
2290122901 SDLoc dl(Op);
2290222902 AtomicOrdering FenceOrdering = static_cast<AtomicOrdering>(
2290322903 cast<ConstantSDNode>(Op.getOperand(1))->getZExtValue());
22904 SynchronizationScope FenceScope = static_cast<SynchronizationScope>(
22904 SyncScope::ID FenceSSID = static_cast<SyncScope::ID>(
2290522905 cast<ConstantSDNode>(Op.getOperand(2))->getZExtValue());
2290622906
2290722907 // The only fence that needs an instruction is a sequentially-consistent
2290822908 // cross-thread fence.
2290922909 if (FenceOrdering == AtomicOrdering::SequentiallyConsistent &&
22910 FenceScope == CrossThread) {
22910 FenceSSID == SyncScope::System) {
2291122911 if (Subtarget.hasMFence())
2291222912 return DAG.getNode(X86ISD::MFENCE, dl, MVT::Other, Op.getOperand(0));
2291322913
836836 if (StoreInst *SI = dyn_cast<StoreInst>(GV->user_back())) {
837837 // The global is initialized when the store to it occurs.
838838 new StoreInst(ConstantInt::getTrue(GV->getContext()), InitBool, false, 0,
839 SI->getOrdering(), SI->getSynchScope(), SI);
839 SI->getOrdering(), SI->getSyncScopeID(), SI);
840840 SI->eraseFromParent();
841841 continue;
842842 }
853853 // Replace the cmp X, 0 with a use of the bool value.
854854 // Sink the load to where the compare was, if atomic rules allow us to.
855855 Value *LV = new LoadInst(InitBool, InitBool->getName()+".val", false, 0,
856 LI->getOrdering(), LI->getSynchScope(),
856 LI->getOrdering(), LI->getSyncScopeID(),
857857 LI->isUnordered() ? (Instruction*)ICI : LI);
858858 InitBoolUsed = true;
859859 switch (ICI->getPredicate()) {
16041604 assert(LI->getOperand(0) == GV && "Not a copy!");
16051605 // Insert a new load, to preserve the saved value.
16061606 StoreVal = new LoadInst(NewGV, LI->getName()+".b", false, 0,
1607 LI->getOrdering(), LI->getSynchScope(), LI);
1607 LI->getOrdering(), LI->getSyncScopeID(), LI);
16081608 } else {
16091609 assert((isa(StoredVal) || isa(StoredVal)) &&
16101610 "This is not a form that we understand!");
16131613 }
16141614 }
16151615 new StoreInst(StoreVal, NewGV, false, 0,
1616 SI->getOrdering(), SI->getSynchScope(), SI);
1616 SI->getOrdering(), SI->getSyncScopeID(), SI);
16171617 } else {
16181618 // Change the load into a load of bool then a select.
16191619 LoadInst *LI = cast<LoadInst>(UI);
16201620 LoadInst *NLI = new LoadInst(NewGV, LI->getName()+".b", false, 0,
1621 LI->getOrdering(), LI->getSynchScope(), LI);
1621 LI->getOrdering(), LI->getSyncScopeID(), LI);
16221622 Value *NSI;
16231623 if (IsOneZero)
16241624 NSI = new ZExtInst(NLI, LI->getType(), "", LI);
460460 LoadInst *NewLoad = IC.Builder.CreateAlignedLoad(
461461 IC.Builder.CreateBitCast(Ptr, NewTy->getPointerTo(AS)),
462462 LI.getAlignment(), LI.isVolatile(), LI.getName() + Suffix);
463 NewLoad->setAtomic(LI.getOrdering(), LI.getSynchScope());
463 NewLoad->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
464464 MDBuilder MDB(NewLoad->getContext());
465465 for (const auto &MDPair : MD) {
466466 unsigned ID = MDPair.first;
520520 StoreInst *NewStore = IC.Builder.CreateAlignedStore(
521521 V, IC.Builder.CreateBitCast(Ptr, V->getType()->getPointerTo(AS)),
522522 SI.getAlignment(), SI.isVolatile());
523 NewStore->setAtomic(SI.getOrdering(), SI.getSynchScope());
523 NewStore->setAtomic(SI.getOrdering(), SI.getSyncScopeID());
524524 for (const auto &MDPair : MD) {
525525 unsigned ID = MDPair.first;
526526 MDNode *N = MDPair.second;
10241024 SI->getOperand(2)->getName()+".val");
10251025 assert(LI.isUnordered() && "implied by above");
10261026 V1->setAlignment(Align);
1027 V1->setAtomic(LI.getOrdering(), LI.getSynchScope());
1027 V1->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
10281028 V2->setAlignment(Align);
1029 V2->setAtomic(LI.getOrdering(), LI.getSynchScope());
1029 V2->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
10301030 return SelectInst::Create(SI->getCondition(), V1, V2);
10311031 }
10321032
15391539 SI.isVolatile(),
15401540 SI.getAlignment(),
15411541 SI.getOrdering(),
1542 SI.getSynchScope());
1542 SI.getSyncScopeID());
15431543 InsertNewInstBefore(NewSI, *BBI);
15441544 // The debug locations of the original instructions might differ; merge them.
15451545 NewSI->setDebugLoc(DILocation::getMergedLocation(SI.getDebugLoc(),
378378 }
379379
380380 static bool isAtomic(Instruction *I) {
381 // TODO: Ask TTI whether synchronization scope is between threads.
381382 if (LoadInst *LI = dyn_cast<LoadInst>(I))
382 return LI->isAtomic() && LI->getSynchScope() == CrossThread;
383 return LI->isAtomic() && LI->getSyncScopeID() != SyncScope::SingleThread;
383384 if (StoreInst *SI = dyn_cast<StoreInst>(I))
384 return SI->isAtomic() && SI->getSynchScope() == CrossThread;
385 return SI->isAtomic() && SI->getSyncScopeID() != SyncScope::SingleThread;
385386 if (isa<AtomicRMWInst>(I))
386387 return true;
387388 if (isa<AtomicCmpXchgInst>(I))
675676 I->eraseFromParent();
676677 } else if (FenceInst *FI = dyn_cast<FenceInst>(I)) {
677678 Value *Args[] = {createOrdering(&IRB, FI->getOrdering())};
678 Function *F = FI->getSynchScope() == SingleThread ?
679 Function *F = FI->getSyncScopeID() == SyncScope::SingleThread ?
679680 TsanAtomicSignalFence : TsanAtomicThreadFence;
680681 CallInst *C = CallInst::Create(F, Args);
681682 ReplaceInstWithInst(I, C);
11651165
11661166 auto *NewLoad = new LoadInst(LoadPtr, LI->getName()+".pre",
11671167 LI->isVolatile(), LI->getAlignment(),
1168 LI->getOrdering(), LI->getSynchScope(),
1168 LI->getOrdering(), LI->getSyncScopeID(),
11691169 UnavailablePred->getTerminator());
11701170
11711171 // Transfer the old load's AA tags to the new load.
12111211 LoadInst *NewVal = new LoadInst(
12121212 LoadedPtr->DoPHITranslation(LoadBB, UnavailablePred),
12131213 LI->getName() + ".pr", false, LI->getAlignment(), LI->getOrdering(),
1214 LI->getSynchScope(), UnavailablePred->getTerminator());
1214 LI->getSyncScopeID(), UnavailablePred->getTerminator());
12151215 NewVal->setDebugLoc(LI->getDebugLoc());
12161216 if (AATags)
12171217 NewVal->setAAMetadata(AATags);
23972397 LoadInst *NewLI = IRB.CreateAlignedLoad(&NewAI, NewAI.getAlignment(),
23982398 LI.isVolatile(), LI.getName());
23992399 if (LI.isVolatile())
2400 NewLI->setAtomic(LI.getOrdering(), LI.getSynchScope());
2400 NewLI->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
24012401
24022402 // Any !nonnull metadata or !range metadata on the old load is also valid
24032403 // on the new load. This is even true in some cases even when the loads
24322432 getSliceAlign(TargetTy),
24332433 LI.isVolatile(), LI.getName());
24342434 if (LI.isVolatile())
2435 NewLI->setAtomic(LI.getOrdering(), LI.getSynchScope());
2435 NewLI->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
24362436
24372437 V = NewLI;
24382438 IsPtrAdjusted = true;
25752575 }
25762576 NewSI->copyMetadata(SI, LLVMContext::MD_mem_parallel_loop_access);
25772577 if (SI.isVolatile())
2578 NewSI->setAtomic(SI.getOrdering(), SI.getSynchScope());
2578 NewSI->setAtomic(SI.getOrdering(), SI.getSyncScopeID());
25792579 Pass.DeadInsts.insert(&SI);
25802580 deleteIfTriviallyDead(OldOp);
25812581
512512 if (int Res =
513513 cmpOrderings(LI->getOrdering(), cast<LoadInst>(R)->getOrdering()))
514514 return Res;
515 if (int Res =
516 cmpNumbers(LI->getSynchScope(), cast<LoadInst>(R)->getSynchScope()))
515 if (int Res = cmpNumbers(LI->getSyncScopeID(),
516 cast<LoadInst>(R)->getSyncScopeID()))
517517 return Res;
518518 return cmpRangeMetadata(LI->getMetadata(LLVMContext::MD_range),
519519 cast<LoadInst>(R)->getMetadata(LLVMContext::MD_range));
528528 if (int Res =
529529 cmpOrderings(SI->getOrdering(), cast<StoreInst>(R)->getOrdering()))
530530 return Res;
531 return cmpNumbers(SI->getSynchScope(), cast<StoreInst>(R)->getSynchScope());
531 return cmpNumbers(SI->getSyncScopeID(),
532 cast<StoreInst>(R)->getSyncScopeID());
532533 }
533534 if (const CmpInst *CI = dyn_cast<CmpInst>(L))
534535 return cmpNumbers(CI->getPredicate(), cast<CmpInst>(R)->getPredicate());
583584 if (int Res =
584585 cmpOrderings(FI->getOrdering(), cast<FenceInst>(R)->getOrdering()))
585586 return Res;
586 return cmpNumbers(FI->getSynchScope(), cast<FenceInst>(R)->getSynchScope());
587 return cmpNumbers(FI->getSyncScopeID(),
588 cast<FenceInst>(R)->getSyncScopeID());
587589 }
588590 if (const AtomicCmpXchgInst *CXI = dyn_cast<AtomicCmpXchgInst>(L)) {
589591 if (int Res = cmpNumbers(CXI->isVolatile(),
600602 cmpOrderings(CXI->getFailureOrdering(),
601603 cast<AtomicCmpXchgInst>(R)->getFailureOrdering()))
602604 return Res;
603 return cmpNumbers(CXI->getSynchScope(),
604 cast<AtomicCmpXchgInst>(R)->getSynchScope());
605 return cmpNumbers(CXI->getSyncScopeID(),
606 cast<AtomicCmpXchgInst>(R)->getSyncScopeID());
605607 }
606608 if (const AtomicRMWInst *RMWI = dyn_cast<AtomicRMWInst>(L)) {
607609 if (int Res = cmpNumbers(RMWI->getOperation(),
613615 if (int Res = cmpOrderings(RMWI->getOrdering(),
614616 cast<AtomicRMWInst>(R)->getOrdering()))
615617 return Res;
616 return cmpNumbers(RMWI->getSynchScope(),
617 cast<AtomicRMWInst>(R)->getSynchScope());
618 return cmpNumbers(RMWI->getSyncScopeID(),
619 cast<AtomicRMWInst>(R)->getSyncScopeID());
618620 }
619621 if (const PHINode *PNL = dyn_cast<PHINode>(L)) {
620622 const PHINode *PNR = cast<PHINode>(R);
44 define void @f(i32* %x) {
55 ; CHECK: load atomic i32, i32* %x unordered, align 4
66 load atomic i32, i32* %x unordered, align 4
7 ; CHECK: load atomic volatile i32, i32* %x singlethread acquire, align 4
8 load atomic volatile i32, i32* %x singlethread acquire, align 4
7 ; CHECK: load atomic volatile i32, i32* %x syncscope("singlethread") acquire, align 4
8 load atomic volatile i32, i32* %x syncscope("singlethread") acquire, align 4
9 ; CHECK: load atomic volatile i32, i32* %x syncscope("agent") acquire, align 4
10 load atomic volatile i32, i32* %x syncscope("agent") acquire, align 4
911 ; CHECK: store atomic i32 3, i32* %x release, align 4
1012 store atomic i32 3, i32* %x release, align 4
11 ; CHECK: store atomic volatile i32 3, i32* %x singlethread monotonic, align 4
12 store atomic volatile i32 3, i32* %x singlethread monotonic, align 4
13 ; CHECK: cmpxchg i32* %x, i32 1, i32 0 singlethread monotonic monotonic
14 cmpxchg i32* %x, i32 1, i32 0 singlethread monotonic monotonic
13 ; CHECK: store atomic volatile i32 3, i32* %x syncscope("singlethread") monotonic, align 4
14 store atomic volatile i32 3, i32* %x syncscope("singlethread") monotonic, align 4
15 ; CHECK: store atomic volatile i32 3, i32* %x syncscope("workgroup") monotonic, align 4
16 store atomic volatile i32 3, i32* %x syncscope("workgroup") monotonic, align 4
17 ; CHECK: cmpxchg i32* %x, i32 1, i32 0 syncscope("singlethread") monotonic monotonic
18 cmpxchg i32* %x, i32 1, i32 0 syncscope("singlethread") monotonic monotonic
19 ; CHECK: cmpxchg i32* %x, i32 1, i32 0 syncscope("workitem") monotonic monotonic
20 cmpxchg i32* %x, i32 1, i32 0 syncscope("workitem") monotonic monotonic
1521 ; CHECK: cmpxchg volatile i32* %x, i32 0, i32 1 acq_rel acquire
1622 cmpxchg volatile i32* %x, i32 0, i32 1 acq_rel acquire
1723 ; CHECK: cmpxchg i32* %x, i32 42, i32 0 acq_rel monotonic
2228 atomicrmw add i32* %x, i32 10 seq_cst
2329 ; CHECK: atomicrmw volatile xchg i32* %x, i32 10 monotonic
2430 atomicrmw volatile xchg i32* %x, i32 10 monotonic
25 ; CHECK: fence singlethread release
26 fence singlethread release
31 ; CHECK: atomicrmw volatile xchg i32* %x, i32 10 syncscope("agent") monotonic
32 atomicrmw volatile xchg i32* %x, i32 10 syncscope("agent") monotonic
33 ; CHECK: fence syncscope("singlethread") release
34 fence syncscope("singlethread") release
2735 ; CHECK: fence seq_cst
2836 fence seq_cst
37 ; CHECK: fence syncscope("device") seq_cst
38 fence syncscope("device") seq_cst
2939 ret void
3040 }
0 ; RUN: llvm-dis -o - %s.bc | FileCheck %s
1
2 ; Backwards compatibility test: make sure we can process bitcode without
3 ; synchronization scope names encoded in it.
4
5 ; CHECK: load atomic i32, i32* %x unordered, align 4
6 ; CHECK: load atomic volatile i32, i32* %x syncscope("singlethread") acquire, align 4
7 ; CHECK: store atomic i32 3, i32* %x release, align 4
8 ; CHECK: store atomic volatile i32 3, i32* %x syncscope("singlethread") monotonic, align 4
9 ; CHECK: cmpxchg i32* %x, i32 1, i32 0 syncscope("singlethread") monotonic monotonic
10 ; CHECK: cmpxchg volatile i32* %x, i32 0, i32 1 acq_rel acquire
11 ; CHECK: cmpxchg i32* %x, i32 42, i32 0 acq_rel monotonic
12 ; CHECK: cmpxchg weak i32* %x, i32 13, i32 0 seq_cst monotonic
13 ; CHECK: atomicrmw add i32* %x, i32 10 seq_cst
14 ; CHECK: atomicrmw volatile xchg i32* %x, i32 10 monotonic
15 ; CHECK: fence syncscope("singlethread") release
16 ; CHECK: fence seq_cst
1010 cmpxchg weak i32* %addr, i32 %desired, i32 %new acq_rel acquire
1111 ; CHECK: cmpxchg weak i32* %addr, i32 %desired, i32 %new acq_rel acquire
1212
13 cmpxchg weak volatile i32* %addr, i32 %desired, i32 %new singlethread release monotonic
14 ; CHECK: cmpxchg weak volatile i32* %addr, i32 %desired, i32 %new singlethread release monotonic
13 cmpxchg weak volatile i32* %addr, i32 %desired, i32 %new syncscope("singlethread") release monotonic
14 ; CHECK: cmpxchg weak volatile i32* %addr, i32 %desired, i32 %new syncscope("singlethread") release monotonic
1515
1616 ret void
1717 }
550550 ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
551551 %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
552552 ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
553 %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
554 ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
553 %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
554 ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
555555 %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
556556 ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
557557 %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
570570 ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
571571 %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
572572 ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
573 %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
574 ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
575 %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
576 ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
573 %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
574 ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
575 %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
576 ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
577577 fence acquire
578578 ; CHECK: fence acquire
579579 fence release
580580 ; CHECK: fence release
581581 fence acq_rel
582582 ; CHECK: fence acq_rel
583 fence singlethread seq_cst
584 ; CHECK: fence singlethread seq_cst
583 fence syncscope("singlethread") seq_cst
584 ; CHECK: fence syncscope("singlethread") seq_cst
585585
586586 ; XXX: The parser spits out the load type here.
587587 %ld.1 = load atomic i32* %word monotonic, align 4
588588 ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
589589 %ld.2 = load atomic volatile i32* %word acquire, align 8
590590 ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
591 %ld.3 = load atomic volatile i32* %word singlethread seq_cst, align 16
592 ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
591 %ld.3 = load atomic volatile i32* %word syncscope("singlethread") seq_cst, align 16
592 ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
593593
594594 store atomic i32 23, i32* %word monotonic, align 4
595595 ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
596596 store atomic volatile i32 24, i32* %word monotonic, align 4
597597 ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
598 store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
599 ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
598 store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
599 ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
600600 ret void
601601 }
602602
595595 ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
596596 %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
597597 ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
598 %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
599 ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
598 %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
599 ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
600600 %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
601601 ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
602602 %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
615615 ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
616616 %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
617617 ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
618 %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
619 ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
620 %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
621 ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
618 %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
619 ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
620 %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
621 ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
622622 fence acquire
623623 ; CHECK: fence acquire
624624 fence release
625625 ; CHECK: fence release
626626 fence acq_rel
627627 ; CHECK: fence acq_rel
628 fence singlethread seq_cst
629 ; CHECK: fence singlethread seq_cst
628 fence syncscope("singlethread") seq_cst
629 ; CHECK: fence syncscope("singlethread") seq_cst
630630
631631 %ld.1 = load atomic i32, i32* %word monotonic, align 4
632632 ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
633633 %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
634634 ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
635 %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
636 ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
635 %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
636 ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
637637
638638 store atomic i32 23, i32* %word monotonic, align 4
639639 ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
640640 store atomic volatile i32 24, i32* %word monotonic, align 4
641641 ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
642 store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
643 ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
642 store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
643 ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
644644 ret void
645645 }
646646
626626 ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
627627 %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
628628 ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
629 %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
630 ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
629 %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
630 ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
631631 %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
632632 ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
633633 %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
646646 ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
647647 %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
648648 ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
649 %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
650 ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
651 %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
652 ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
649 %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
650 ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
651 %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
652 ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
653653 fence acquire
654654 ; CHECK: fence acquire
655655 fence release
656656 ; CHECK: fence release
657657 fence acq_rel
658658 ; CHECK: fence acq_rel
659 fence singlethread seq_cst
660 ; CHECK: fence singlethread seq_cst
659 fence syncscope("singlethread") seq_cst
660 ; CHECK: fence syncscope("singlethread") seq_cst
661661
662662 %ld.1 = load atomic i32, i32* %word monotonic, align 4
663663 ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
664664 %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
665665 ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
666 %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
667 ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
666 %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
667 ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
668668
669669 store atomic i32 23, i32* %word monotonic, align 4
670670 ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
671671 store atomic volatile i32 24, i32* %word monotonic, align 4
672672 ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
673 store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
674 ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
673 store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
674 ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
675675 ret void
676676 }
677677
697697 ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
698698 %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
699699 ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
700 %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
701 ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
700 %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
701 ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
702702 %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
703703 ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
704704 %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
717717 ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
718718 %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
719719 ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
720 %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
721 ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
722 %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
723 ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
720 %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
721 ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
722 %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
723 ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
724724 fence acquire
725725 ; CHECK: fence acquire
726726 fence release
727727 ; CHECK: fence release
728728 fence acq_rel
729729 ; CHECK: fence acq_rel
730 fence singlethread seq_cst
731 ; CHECK: fence singlethread seq_cst
730 fence syncscope("singlethread") seq_cst
731 ; CHECK: fence syncscope("singlethread") seq_cst
732732
733733 %ld.1 = load atomic i32, i32* %word monotonic, align 4
734734 ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
735735 %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
736736 ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
737 %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
738 ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
737 %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
738 ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
739739
740740 store atomic i32 23, i32* %word monotonic, align 4
741741 ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
742742 store atomic volatile i32 24, i32* %word monotonic, align 4
743743 ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
744 store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
745 ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
744 store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
745 ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
746746 ret void
747747 }
748748
697697 ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
698698 %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
699699 ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
700 %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
701 ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
700 %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
701 ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
702702 %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
703703 ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
704704 %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
717717 ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
718718 %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
719719 ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
720 %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
721 ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
722 %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
723 ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
720 %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
721 ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
722 %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
723 ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
724724 fence acquire
725725 ; CHECK: fence acquire
726726 fence release
727727 ; CHECK: fence release
728728 fence acq_rel
729729 ; CHECK: fence acq_rel
730 fence singlethread seq_cst
731 ; CHECK: fence singlethread seq_cst
730 fence syncscope("singlethread") seq_cst
731 ; CHECK: fence syncscope("singlethread") seq_cst
732732
733733 %ld.1 = load atomic i32, i32* %word monotonic, align 4
734734 ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
735735 %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
736736 ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
737 %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
738 ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
737 %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
738 ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
739739
740740 store atomic i32 23, i32* %word monotonic, align 4
741741 ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
742742 store atomic volatile i32 24, i32* %word monotonic, align 4
743743 ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
744 store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
745 ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
744 store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
745 ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
746746 ret void
747747 }
748748
704704 ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
705705 %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
706706 ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
707 %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
708 ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
707 %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
708 ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
709709 %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
710710 ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
711711 %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
724724 ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
725725 %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
726726 ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
727 %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
728 ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
729 %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
730 ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
727 %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
728 ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
729 %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
730 ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
731731 fence acquire
732732 ; CHECK: fence acquire
733733 fence release
734734 ; CHECK: fence release
735735 fence acq_rel
736736 ; CHECK: fence acq_rel
737 fence singlethread seq_cst
738 ; CHECK: fence singlethread seq_cst
737 fence syncscope("singlethread") seq_cst
738 ; CHECK: fence syncscope("singlethread") seq_cst
739739
740740 %ld.1 = load atomic i32, i32* %word monotonic, align 4
741741 ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
742742 %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
743743 ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
744 %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
745 ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
744 %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
745 ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
746746
747747 store atomic i32 23, i32* %word monotonic, align 4
748748 ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
749749 store atomic volatile i32 24, i32* %word monotonic, align 4
750750 ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
751 store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
752 ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
751 store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
752 ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
753753 ret void
754754 }
755755
106106 ; CHECK-NEXT: %res8 = load atomic volatile i8, i8* %ptr1 seq_cst, align 1
107107 %res8 = load atomic volatile i8, i8* %ptr1 seq_cst, align 1
108108
109 ; CHECK-NEXT: %res9 = load atomic i8, i8* %ptr1 singlethread unordered, align 1
110 %res9 = load atomic i8, i8* %ptr1 singlethread unordered, align 1
111
112 ; CHECK-NEXT: %res10 = load atomic i8, i8* %ptr1 singlethread monotonic, align 1
113 %res10 = load atomic i8, i8* %ptr1 singlethread monotonic, align 1
114
115 ; CHECK-NEXT: %res11 = load atomic i8, i8* %ptr1 singlethread acquire, align 1
116 %res11 = load atomic i8, i8* %ptr1 singlethread acquire, align 1
117
118 ; CHECK-NEXT: %res12 = load atomic i8, i8* %ptr1 singlethread seq_cst, align 1
119 %res12 = load atomic i8, i8* %ptr1 singlethread seq_cst, align 1
120
121 ; CHECK-NEXT: %res13 = load atomic volatile i8, i8* %ptr1 singlethread unordered, align 1
122 %res13 = load atomic volatile i8, i8* %ptr1 singlethread unordered, align 1
123
124 ; CHECK-NEXT: %res14 = load atomic volatile i8, i8* %ptr1 singlethread monotonic, align 1
125 %res14 = load atomic volatile i8, i8* %ptr1 singlethread monotonic, align 1
126
127 ; CHECK-NEXT: %res15 = load atomic volatile i8, i8* %ptr1 singlethread acquire, align 1
128 %res15 = load atomic volatile i8, i8* %ptr1 singlethread acquire, align 1
129
130 ; CHECK-NEXT: %res16 = load atomic volatile i8, i8* %ptr1 singlethread seq_cst, align 1
131 %res16 = load atomic volatile i8, i8* %ptr1 singlethread seq_cst, align 1
109 ; CHECK-NEXT: %res9 = load atomic i8, i8* %ptr1 syncscope("singlethread") unordered, align 1
110 %res9 = load atomic i8, i8* %ptr1 syncscope("singlethread") unordered, align 1
111
112 ; CHECK-NEXT: %res10 = load atomic i8, i8* %ptr1 syncscope("singlethread") monotonic, align 1
113 %res10 = load atomic i8, i8* %ptr1 syncscope("singlethread") monotonic, align 1
114
115 ; CHECK-NEXT: %res11 = load atomic i8, i8* %ptr1 syncscope("singlethread") acquire, align 1
116 %res11 = load atomic i8, i8* %ptr1 syncscope("singlethread") acquire, align 1
117
118 ; CHECK-NEXT: %res12 = load atomic i8, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
119 %res12 = load atomic i8, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
120
121 ; CHECK-NEXT: %res13 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") unordered, align 1
122 %res13 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") unordered, align 1
123
124 ; CHECK-NEXT: %res14 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") monotonic, align 1
125 %res14 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") monotonic, align 1
126
127 ; CHECK-NEXT: %res15 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") acquire, align 1
128 %res15 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") acquire, align 1
129
130 ; CHECK-NEXT: %res16 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
131 %res16 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
132132
133133 ret void
134134 }
192192 ; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 seq_cst, align 1
193193 store atomic volatile i8 2, i8* %ptr1 seq_cst, align 1
194194
195 ; CHECK-NEXT: store atomic i8 2, i8* %ptr1 singlethread unordered, align 1
196 store atomic i8 2, i8* %ptr1 singlethread unordered, align 1
197
198 ; CHECK-NEXT: store atomic i8 2, i8* %ptr1 singlethread monotonic, align 1
199 store atomic i8 2, i8* %ptr1 singlethread monotonic, align 1
200
201 ; CHECK-NEXT: store atomic i8 2, i8* %ptr1 singlethread release, align 1
202 store atomic i8 2, i8* %ptr1 singlethread release, align 1
203
204 ; CHECK-NEXT: store atomic i8 2, i8* %ptr1 singlethread seq_cst, align 1
205 store atomic i8 2, i8* %ptr1 singlethread seq_cst, align 1
206
207 ; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 singlethread unordered, align 1
208 store atomic volatile i8 2, i8* %ptr1 singlethread unordered, align 1
209
210 ; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 singlethread monotonic, align 1
211 store atomic volatile i8 2, i8* %ptr1 singlethread monotonic, align 1
212
213 ; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 singlethread release, align 1
214 store atomic volatile i8 2, i8* %ptr1 singlethread release, align 1
215
216 ; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 singlethread seq_cst, align 1
217 store atomic volatile i8 2, i8* %ptr1 singlethread seq_cst, align 1
195 ; CHECK-NEXT: store atomic i8 2, i8* %ptr1 syncscope("singlethread") unordered, align 1
196 store atomic i8 2, i8* %ptr1 syncscope("singlethread") unordered, align 1
197
198 ; CHECK-NEXT: store atomic i8 2, i8* %ptr1 syncscope("singlethread") monotonic, align 1
199 store atomic i8 2, i8* %ptr1 syncscope("singlethread") monotonic, align 1
200
201 ; CHECK-NEXT: store atomic i8 2, i8* %ptr1 syncscope("singlethread") release, align 1
202 store atomic i8 2, i8* %ptr1 syncscope("singlethread") release, align 1
203
204 ; CHECK-NEXT: store atomic i8 2, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
205 store atomic i8 2, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
206
207 ; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") unordered, align 1
208 store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") unordered, align 1
209
210 ; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") monotonic, align 1
211 store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") monotonic, align 1
212
213 ; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") release, align 1
214 store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") release, align 1
215
216 ; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
217 store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
218218
219219 ret void
220220 }
231231 ; CHECK-NEXT: %res2 = extractvalue { i32, i1 } [[TMP]], 0
232232 %res2 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new monotonic monotonic
233233
234 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread monotonic monotonic
234 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") monotonic monotonic
235235 ; CHECK-NEXT: %res3 = extractvalue { i32, i1 } [[TMP]], 0
236 %res3 = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread monotonic monotonic
237
238 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread monotonic monotonic
236 %res3 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") monotonic monotonic
237
238 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") monotonic monotonic
239239 ; CHECK-NEXT: %res4 = extractvalue { i32, i1 } [[TMP]], 0
240 %res4 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread monotonic monotonic
240 %res4 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") monotonic monotonic
241241
242242
243243 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new acquire acquire
248248 ; CHECK-NEXT: %res6 = extractvalue { i32, i1 } [[TMP]], 0
249249 %res6 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new acquire acquire
250250
251 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread acquire acquire
251 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acquire acquire
252252 ; CHECK-NEXT: %res7 = extractvalue { i32, i1 } [[TMP]], 0
253 %res7 = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread acquire acquire
254
255 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread acquire acquire
253 %res7 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acquire acquire
254
255 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acquire acquire
256256 ; CHECK-NEXT: %res8 = extractvalue { i32, i1 } [[TMP]], 0
257 %res8 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread acquire acquire
257 %res8 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acquire acquire
258258
259259
260260 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new release monotonic
265265 ; CHECK-NEXT: %res10 = extractvalue { i32, i1 } [[TMP]], 0
266266 %res10 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new release monotonic
267267
268 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread release monotonic
268 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") release monotonic
269269 ; CHECK-NEXT: %res11 = extractvalue { i32, i1 } [[TMP]], 0
270 %res11 = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread release monotonic
271
272 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread release monotonic
270 %res11 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") release monotonic
271
272 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") release monotonic
273273 ; CHECK-NEXT: %res12 = extractvalue { i32, i1 } [[TMP]], 0
274 %res12 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread release monotonic
274 %res12 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") release monotonic
275275
276276
277277 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new acq_rel acquire
282282 ; CHECK-NEXT: %res14 = extractvalue { i32, i1 } [[TMP]], 0
283283 %res14 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new acq_rel acquire
284284
285 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread acq_rel acquire
285 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acq_rel acquire
286286 ; CHECK-NEXT: %res15 = extractvalue { i32, i1 } [[TMP]], 0
287 %res15 = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread acq_rel acquire
288
289 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread acq_rel acquire
287 %res15 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acq_rel acquire
288