llvm.org GIT mirror llvm / 767c34a
Fix typos

Summary: This fixes a variety of typos in docs, code and headers.

Subscribers: jholewinski, sanjoy, arsenm, llvm-commits

Differential Revision: http://reviews.llvm.org/D12626

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@247495 91177308-0d34-0410-b5e6-96231b3b80d8

Bruce Mitchener, 4 years ago
15 changed file(s) with 47 addition(s) and 47 deletion(s).
850850 *attr* field of function block ``INST_INVOKE`` and ``INST_CALL`` records.
851851
852852 Entries within ``PARAMATTR_BLOCK`` are constructed to ensure that each is unique
853 (i.e., no two indicies represent equivalent attribute lists).
853 (i.e., no two indices represent equivalent attribute lists).
854854
855855 .. _PARAMATTR_CODE_ENTRY:
856856
903903 constants, metadata, type symbol table entries, or other type operator records.
904904
905905 Entries within ``TYPE_BLOCK`` are constructed to ensure that each entry is
906 unique (i.e., no two indicies represent structurally equivalent types).
906 unique (i.e., no two indices represent structurally equivalent types).
907907
908908 .. _TYPE_CODE_NUMENTRY:
909909 .. _NUMENTRY:
2626 ^^^^^^^^^^^^^^
2727
2828 Metadata is only assigned to the conditional branches. There are two extra
29 operarands for the true and the false branch.
29 operands for the true and the false branch.
3030
3131 .. code-block:: llvm
3232
113113
114114 Branch Weight Metadata is not proof against CFG changes. If terminator operands
115115 are changed, some action should be taken. Otherwise some misoptimizations may
116 occur due to incorrent branch prediction information.
116 occur due to incorrect branch prediction information.
117117
118118 Function Entry Counts
119119 =====================
120120
121 To allow comparing different functions durint inter-procedural analysis and
121 To allow comparing different functions during inter-procedural analysis and
122122 optimization, ``MD_prof`` nodes can also be assigned to a function definition.
123123 The first operand is a string indicating the name of the associated counter.
124124
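The function-level ``MD_prof`` attachment described in this hunk can be sketched as follows (a minimal illustration assembled from the format the text describes, not part of this diff; the count value is hypothetical):

.. code-block:: llvm

    define i32 @test() !prof !1 {
      ...
    }
    !1 = !{!"function_entry_count", i64 1000}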
38743874 """"""""""""""
38753875
38763876 ``DILexicalBlock`` nodes describe nested blocks within a :ref:`subprogram
3877 `. The line number and column numbers are used to dinstinguish
3877 `. The line number and column numbers are used to distinguish
38783878 two lexical blocks at the same depth. They are valid targets for ``scope:``
38793879 fields.
38803880
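A ``DILexicalBlock`` node with its distinguishing line and column looks like this (a sketch assuming ``!1`` is the enclosing subprogram and ``!2`` the file node):

.. code-block:: llvm

    !0 = distinct !DILexicalBlock(scope: !1, file: !2, line: 7, column: 35)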
40594059
40604060 The metadata identifying each domain is itself a list containing one or two
40614061 entries. The first entry is the name of the domain. Note that if the name is a
4062 string then it can be combined accross functions and translation units. A
4062 string then it can be combined across functions and translation units. A
40634063 self-reference can be used to create globally unique domain names. A
40644064 descriptive string may optionally be provided as a second list entry.
40654065
40664066 The metadata identifying each scope is also itself a list containing two or
40674067 three entries. The first entry is the name of the scope. Note that if the name
4068 is a string then it can be combined accross functions and translation units. A
4068 is a string then it can be combined across functions and translation units. A
40694069 self-reference can be used to create globally unique scope names. A metadata
40704070 reference to the scope's domain is the second entry. A descriptive string may
40714071 optionally be provided as a third list entry.
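The two list shapes just described can be sketched as follows (hypothetical node numbers and names):

.. code-block:: llvm

    ; Domain: self-referential name, optional descriptive string.
    !0 = !{!0, !"my domain"}

    ; Scope: self-referential name, its domain, optional descriptive string.
    !1 = !{!1, !0, !"my scope"}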
51605160 control to catch an exception.
51615161 The ``args`` correspond to whatever information the personality
51625162 routine requires to know if this is an appropriate place to catch the
5163 exception. Control is tranfered to the ``exception`` label if the
5163 exception. Control is transferred to the ``exception`` label if the
51645164 ``catchpad`` is not an appropriate handler for the in-flight exception.
51655165 The ``normal`` label should contain the code found in the ``catch``
51665166 portion of a ``try``/``catch`` sequence. The ``resultval`` has the type
1131011310 Semantics:
1131111311 """"""""""
1131211312
11313 The '``llvm.masked.scatter``' intrinsics is designed for writing selected vector elements to arbitrary memory addresses in a single IR operation. The operation may be conditional, when not all bits in the mask are switched on. It is useful for targets that support vector masked scatter and allows vectorizing basic blocks with data and control divergency. Other targets may support this intrinsic differently, for example by lowering it into a sequence of branches that guard scalar store operations.
11313 The '``llvm.masked.scatter``' intrinsic is designed for writing selected vector elements to arbitrary memory addresses in a single IR operation. The operation may be conditional, when not all bits in the mask are switched on. It is useful for targets that support vector masked scatter and allows vectorizing basic blocks with data and control divergence. Other targets may support this intrinsic differently, for example by lowering it into a sequence of branches that guard scalar store operations.
1131411314
1131511315 ::
1131611316
707707 "``a::b::c``". So the name entered in the name table must be demangled in
708708 order to chop it up appropriately and additional names must be manually entered
709709 into the table to make it effective as a name lookup table for debuggers to
710 se.
710 use.
711711
712712 All debuggers currently ignore the "``.debug_pubnames``" table as a result of
713713 its inconsistent and useless public-only name content making it a waste of
5252 loads, merely loads of a particular type (in the original source
5353 language), or none at all.
5454
55 #. Analogously, a store barrier is a code fragement that runs
55 #. Analogously, a store barrier is a code fragment that runs
5656 immediately before the machine store instruction, but after the
5757 computation of the value stored. The most common use of a store
5858 barrier is to update a 'card table' in a generational garbage
159159 of each pointer in turn, we use the ``gc.relocate`` intrinsic with the
160160 appropriate index. Note that both the ``gc.relocate`` and ``gc.result`` are
161161 tied to the statepoint. The combination forms a "statepoint relocation
162 sequence" and represents the entitety of a parseable call or 'statepoint'.
162 sequence" and represents the entirety of a parseable call or 'statepoint'.
163163
164164 When lowered, this example would generate the following x86 assembly:
165165
270270 transitions based on the function symbols involved (e.g. a call from a
271271 function with GC strategy "foo" to a function with GC strategy "bar"),
272272 indirect calls that are also GC transitions must also be supported. This
273 requirement is the driving force behing the decision to require that GC
273 requirement is the driving force behind the decision to require that GC
274274 transitions are explicitly marked.
275275
276276 Let's revisit the sample given above, this time treating the call to ``@foo``
154154 "Attempt to construct index with 0 pointer.");
155155 }
156156
157 /// Returns true if this is a valid index. Invalid indicies do
157 /// Returns true if this is a valid index. Invalid indices do
158158 /// not point into an index table, and cannot be compared.
159159 bool isValid() const {
160160 return lie.getPointer();
285285
286286 void
287287 AsmPrinter::emitDwarfAbbrevs(const std::vector<DIEAbbrev *> &Abbrevs) const {
288 // For each abbrevation.
288 // For each abbreviation.
289289 for (const DIEAbbrev *Abbrev : Abbrevs) {
290 // Emit the abbrevations code (base 1 index.)
290 // Emit the abbreviations code (base 1 index.)
291291 EmitULEB128(Abbrev->getNumber(), "Abbreviation Code");
292292
293293 // Emit the abbreviations data.
896896 if (!MI->getOperand(i).isFI())
897897 continue;
898898
899 // Frame indicies in debug values are encoded in a target independent
899 // Frame indices in debug values are encoded in a target independent
900900 // way with simply the frame index and offset rather than any
901901 // target-specific addressing mode.
902902 if (MI->isDebugValue()) {
903 assert(i == 0 && "Frame indicies can only appear as the first "
903 assert(i == 0 && "Frame indices can only appear as the first "
904904 "operand of a DBG_VALUE machine instruction");
905905 unsigned Reg;
906906 MachineOperand &Offset = MI->getOperand(1);
8282 assert(DstTy && DstTy->isFirstClassType() && "Invalid cast destination type");
8383 assert(CastInst::isCast(opc) && "Invalid cast opcode");
8484
85 // The the types and opcodes for the two Cast constant expressions
85 // The types and opcodes for the two Cast constant expressions
8686 Type *SrcTy = Op->getOperand(0)->getType();
8787 Type *MidTy = Op->getType();
8888 Instruction::CastOps firstOp = Instruction::CastOps(Op->getOpcode());
12761276 }
12771277
12781278 /// IdxCompare - Compare the two constants as though they were getelementptr
1279 /// indices. This allows coersion of the types to be the same thing.
1279 /// indices. This allows coercion of the types to be the same thing.
12801280 ///
1281 /// If the two constants are the "same" (after coersion), return 0. If the
1281 /// If the two constants are the "same" (after coercion), return 0. If the
12821282 /// first is less than the second, return -1, if the second is less than the
12831283 /// first, return 1. If the constants are not integral, return -2.
12841284 ///
19981998 /// \brief Test whether a given ConstantInt is in-range for a SequentialType.
19991999 static bool isIndexInRangeOfSequentialType(SequentialType *STy,
20002000 const ConstantInt *CI) {
2001 // And indicies are valid when indexing along a pointer
2001 // And indices are valid when indexing along a pointer
20022002 if (isa<PointerType>(STy))
20032003 return true;
20042004
16541654 // ISel Patterns
16551655 //===----------------------------------------------------------------------===//
16561656
1657 // CND*_INT Pattterns for f32 True / False values
1657 // CND*_INT Patterns for f32 True / False values
16581658
16591659 class CND_INT_f32 : Pat <
16601660 (selectcc i32:$src0, 0, f32:$src1, f32:$src2, cc),
672672 [(set RO:$rd, (OpNode RO:$rt, GPR32Opnd:$rs))], itin, FrmR,
673673 opstr>;
674674
675 // Load Upper Imediate
675 // Load Upper Immediate
676676 class LoadUpper:
677677 InstSE<(outs RO:$rt), (ins Imm:$imm16), !strconcat(opstr, "\t$rt, $imm16"),
678678 [], II_LUI, FrmI, opstr>, IsAsCheapAsAMove {
356356 }
357357
358358 // consider several special intrinsics in stripping pointer casts, and
359 // provide an option to ignore GEP indicies for find out the base address only
360 // which could be used in simple alias disambigurate.
359 // provide an option to ignore GEP indices to find out the base address only
360 // which could be used in simple alias disambiguation.
361361 const Value *
362362 llvm::skipPointerTransfer(const Value *V, bool ignore_GEP_indices) {
363363 V = V->stripPointerCasts();
378378 }
379379
380380 // consider several special intrinsics in stripping pointer casts, and
381 // - ignore GEP indicies for find out the base address only, and
381 // - ignore GEP indices to find out the base address only, and
382382 // - tracking PHINode
383 // which could be used in simple alias disambigurate.
383 // which could be used in simple alias disambiguation.
384384 const Value *
385385 llvm::skipPointerTransfer(const Value *V, std::set<const Value *> &processed) {
386386 if (processed.find(V) != processed.end())
427427 return V;
428428 }
429429
430 // The following are some useful utilities for debuggung
430 // The following are some useful utilities for debugging
431431
432432 BasicBlock *llvm::getParentBlock(Value *v) {
433433 if (BasicBlock *B = dyn_cast<BasicBlock>(v))
483483 return nullptr;
484484 }
485485
486 // Dump an instruction by nane
486 // Dump an instruction by name
487487 void llvm::dumpInst(Value *base, char *instName) {
488488 Instruction *I = getInst(base, instName);
489489 if (I)
510510 if (!T->isAggregateType())
511511 return nullptr;
512512
513 assert(LI.getAlignment() && "Alignement must be set at this point");
513 assert(LI.getAlignment() && "Alignment must be set at this point");
514514
515515 if (auto *ST = dyn_cast<StructType>(T)) {
516516 // If the struct only has one element, we unpack.
680680 // FIXME: If the GEP is not inbounds, and there are extra indices after the
681681 // one we'll replace, those could cause the address computation to wrap
682682 // (rendering the IsAllNonNegative() check below insufficient). We can do
683 // better, ignoring zero indicies (and other indicies we can prove small
683 // better, ignoring zero indices (and other indices we can prove small
684684 // enough not to wrap).
685685 if (Idx+1 != GEPI->getNumOperands() && !GEPI->isInBounds())
686686 return false;
856856 ///
857857 /// \returns true if the store was successfully combined away. This indicates
858858 /// the caller must erase the store instruction. We have to let the caller erase
859 /// the store instruction sas otherwise there is no way to signal whether it was
859 /// the store instruction as otherwise there is no way to signal whether it was
860860 /// combined or not: IC.EraseInstFromFunction returns a null pointer.
861861 static bool combineStoreToValueType(InstCombiner &IC, StoreInst &SI) {
862862 // FIXME: We could probably with some care handle both volatile and atomic
349349
350350
351351 /// EnforceSmallerThan - 'this' must be a smaller VT than Other. For vectors
352 /// this shoud be based on the element type. Update this and other based on
352 /// this should be based on the element type. Update this and other based on
353353 /// this information.
354354 bool EEVT::TypeSet::EnforceSmallerThan(EEVT::TypeSet &Other, TreePattern &TP) {
355355 if (TP.hasError())
455455 return MadeChange;
456456 }
457457
458 /// EnforceVectorEltTypeIs - 'this' is now constrainted to be a vector type
458 /// EnforceVectorEltTypeIs - 'this' is now constrained to be a vector type
459459 /// whose element is specified by VTOperand.
460460 bool EEVT::TypeSet::EnforceVectorEltTypeIs(MVT::SimpleValueType VT,
461461 TreePattern &TP) {
483483 return MadeChange;
484484 }
485485
486 /// EnforceVectorEltTypeIs - 'this' is now constrainted to be a vector type
486 /// EnforceVectorEltTypeIs - 'this' is now constrained to be a vector type
487487 /// whose element is specified by VTOperand.
488488 bool EEVT::TypeSet::EnforceVectorEltTypeIs(EEVT::TypeSet &VTOperand,
489489 TreePattern &TP) {
529529 return MadeChange;
530530 }
531531
532 /// EnforceVectorSubVectorTypeIs - 'this' is now constrainted to be a
532 /// EnforceVectorSubVectorTypeIs - 'this' is now constrained to be a
533533 /// vector type specified by VTOperand.
534534 bool EEVT::TypeSet::EnforceVectorSubVectorTypeIs(EEVT::TypeSet &VTOperand,
535535 TreePattern &TP) {
610610 return MadeChange;
611611 }
612612
613 /// EnforceVectorSameNumElts - 'this' is now constrainted to
613 /// EnforceVectorSameNumElts - 'this' is now constrained to
614614 /// be a vector with same num elements as VTOperand.
615615 bool EEVT::TypeSet::EnforceVectorSameNumElts(EEVT::TypeSet &VTOperand,
616616 TreePattern &TP) {
28142814
28152815 if (InstInfo.mayLoad != PatInfo.mayLoad && !InstInfo.mayLoad_Unset) {
28162816 // Allow explicitly setting mayLoad = 1, even when the pattern has no loads.
2817 // Some targets translate imediates to loads.
2817 // Some targets translate immediates to loads.
28182818 if (!InstInfo.mayLoad) {
28192819 Error = true;
28202820 PrintError(PatDef->getLoc(), "Pattern doesn't match mayLoad = " +
33463346 if (InstInfo.InferredFrom &&
33473347 InstInfo.InferredFrom != InstInfo.TheDef &&
33483348 InstInfo.InferredFrom != PTM.getSrcRecord())
3349 PrintError(InstInfo.InferredFrom->getLoc(), "inferred from patttern");
3349 PrintError(InstInfo.InferredFrom->getLoc(), "inferred from pattern");
33503350 }
33513351 }
33523352 if (Errors)
35723572 }
35733573
35743574 // Increment indices to the next permutation by incrementing the
3575 // indicies from last index backward, e.g., generate the sequence
3575 // indices from last index backward, e.g., generate the sequence
35763576 // [0, 0], [0, 1], [1, 0], [1, 1].
35773577 int IdxsIdx;
35783578 for (IdxsIdx = Idxs.size() - 1; IdxsIdx >= 0; --IdxsIdx) {
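The odometer-style increment the comment in this hunk describes can be sketched standalone (an illustrative rewrite, not the surrounding TableGen emitter code; `Limit` is an assumed uniform per-index bound):

```cpp
#include <vector>

// Illustrative sketch: advance a vector of indices to the next permutation
// odometer-style, bumping the last index and carrying leftward on wrap. With
// two indices and Limit = 2 this visits [0, 0], [0, 1], [1, 0], [1, 1].
// Returns false once every index has wrapped, i.e. the sequence is exhausted.
static bool nextIndexPermutation(std::vector<unsigned> &Idxs, unsigned Limit) {
  for (int I = static_cast<int>(Idxs.size()) - 1; I >= 0; --I) {
    if (++Idxs[I] < Limit)
      return true; // incremented in place without wrapping
    Idxs[I] = 0;   // wrapped: reset this index and carry to the previous one
  }
  return false;    // carried past the first index
}
```

The real code open-codes the same carry loop in place rather than calling a helper.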
37233723 // operands are the commutative operands, and there might be more operands
37243724 // after those.
37253725 assert(NC >= 3 &&
3726 "Commutative intrinsic should have at least 3 childrean!");
3726 "Commutative intrinsic should have at least 3 children!");
37273727 std::vector > Variants;
37283728 Variants.push_back(ChildVariants[0]); // Intrinsic id.
37293729 Variants.push_back(ChildVariants[2]);
131131 /// this and other based on this information.
132132 bool EnforceSmallerThan(EEVT::TypeSet &Other, TreePattern &TP);
133133
134 /// EnforceVectorEltTypeIs - 'this' is now constrainted to be a vector type
134 /// EnforceVectorEltTypeIs - 'this' is now constrained to be a vector type
135135 /// whose element is VT.
136136 bool EnforceVectorEltTypeIs(EEVT::TypeSet &VT, TreePattern &TP);
137137
138 /// EnforceVectorEltTypeIs - 'this' is now constrainted to be a vector type
138 /// EnforceVectorEltTypeIs - 'this' is now constrained to be a vector type
139139 /// whose element is VT.
140140 bool EnforceVectorEltTypeIs(MVT::SimpleValueType VT, TreePattern &TP);
141141
142 /// EnforceVectorSubVectorTypeIs - 'this' is now constrainted to
142 /// EnforceVectorSubVectorTypeIs - 'this' is now constrained to
143143 /// be a vector type VT.
144144 bool EnforceVectorSubVectorTypeIs(EEVT::TypeSet &VT, TreePattern &TP);
145145
146 /// EnforceVectorSameNumElts - 'this' is now constrainted to
146 /// EnforceVectorSameNumElts - 'this' is now constrained to
147147 /// be a vector with same num elements as VT.
148148 bool EnforceVectorSameNumElts(EEVT::TypeSet &VT, TreePattern &TP);
149149