
Changeset 260323 – WebKit

Source: https://trac.webkit.org/changeset/260323/webkit

Timestamp: Apr 18, 2020 2:45:34 PM
Author: [email protected]
Message:

Redesign how we do for-of iteration for JSArrays
https://bugs.webkit.org/show_bug.cgi?id=175454

JSTests:

Reviewed by Filip Pizlo.

  • microbenchmarks/for-of-iterate-array-entries.js:

(foo):

  • stress/custom-iterators.js:

(catch):
(iter.return):
(iteratorInterfaceErrorTest):
(iteratorInterfaceErrorTestReturn):
(iteratorInterfaceBreakTestReturn):

  • stress/for-of-array-different-globals.js: Added.

(foo):
(array.Symbol.iterator.proto.next):

  • stress/for-of-array-mixed-values.js: Added.

(test):

  • stress/for-of-no-direct-loop-back-edge-osr.js: Added.

(osrIfFinalTier):
(test):

  • stress/generator-containing-for-of-on-map.js: Added.

(find):
(i.let.v.of.find):

  • stress/generator-containing-for-of-on-set.js: Added.

(find):
(set add):

  • stress/osr-from-for-of-done-getter.js: Added.

(i.let.iterable.next.return.get done):
(i.let.iterable.next):
(i.let.iterable.Symbol.iterator):
(i.let.iterable.return):

  • stress/osr-from-for-of-value-getter.js: Added.

(i.let.iterable.next.return.get value):
(i.let.iterable.next):
(i.let.iterable.Symbol.iterator):
(i.let.iterable.return):

  • stress/throw-for-of-next-returns-non-object.js: Added.

(i.let.iterable.next):
(i.let.iterable.Symbol.iterator):
(i.let.iterable.return):
(i.catch):

  • stress/throw-from-done-getter-in-for-of-header.js: Added.

(i.let.iterable.next):
(i.let.iterable.get done):
(i.let.iterable.Symbol.iterator):
(i.let.iterable.return):
(i.catch):

  • stress/throw-from-next-in-for-of-header.js: Added.

(i.let.iterable.next):
(i.let.iterable.Symbol.iterator):
(i.let.iterable.return):
(i.catch):

  • stress/throw-from-value-getter-in-for-of-header.js: Added.

(i.let.iterable.next):
(i.let.iterable.get value):
(i.let.iterable.Symbol.iterator):
(i.let.iterable.return):
(i.catch):

  • stress/webidl-tokenizer-for-of.js: Added.

(tokenise.attemptTokenMatch):
(tokenise):
(Tokeniser):
(Tokeniser.prototype.probe):
(Tokeniser.prototype.consume):
(Tokeniser.prototype.unconsume):

Source/JavaScriptCore:

Reviewed by Filip Pizlo and Saam Barati.

This patch intrinsifies for-of iteration for JSArrays when they are
being iterated with the built-in Symbol.iterator. We do this by
adding two new bytecodes, op_iterator_open and
op_iterator_next. Each bytecode is essentially a fused set of
existing bytecodes, plus a special fast path for the intrinsified
JSArray case. This patch only adds support for these instructions on
64-bit.
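For reference, the loop shape this patch targets looks like the
following sketch (sumArray is an illustrative name, not from the
patch); any loop over a JSArray whose Symbol.iterator is still the
built-in one can take the new path:

```javascript
// A for-of over a JSArray using the default, built-in Symbol.iterator.
// With this patch, the bytecode generator emits op_iterator_open /
// op_iterator_next for it instead of the generic iterator sequence.
function sumArray(array) {
  let sum = 0;
  for (const value of array)
    sum += value;
  return sum;
}

sumArray([1, 2, 3]); // → 6, via the fast path when nothing is overridden
```

Overriding Symbol.iterator on the array (or its prototype chain) falls back to the generic iteration protocol, which the stress tests above exercise.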

The op_iterator_open bytecode is semantically the same as:
iterator = symbolIterator.@call(iterable);
next = iterator.next;

where iterable is the rhs of the for-of and symbolIterator is the
result of evaluating iterable[Symbol.iterator].
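A plain JavaScript sketch of those semantics (the real thing is a
bytecode, and iteratorOpen is a hypothetical name used only for
illustration):

```javascript
// Hypothetical desugaring of op_iterator_open: fetch the iterator
// method, call it on the iterable, then load the resulting iterator's
// "next" property once, up front.
function iteratorOpen(iterable) {
  const symbolIterator = iterable[Symbol.iterator];
  const iterator = symbolIterator.call(iterable);
  const next = iterator.next;
  return { iterator, next };
}
```

Caching `next` once at open time is what makes the later per-iteration step a plain call rather than a fresh property lookup.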

The op_iterator_next bytecode is semantically the same as:
nextResult = next.@call(iterator);
done = nextResult.done;
value = done ? (undefined / bottom) : nextResult.value;

where nextResult is a temporary (the value VirtualRegister in the
LLInt/Baseline and a tmp in the DFG).
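The per-iteration semantics can likewise be sketched in JavaScript
(iteratorNext is a hypothetical name; the non-object check matches the
"Iterator result interface is not an object." TypeError exercised by
stress/throw-for-of-next-returns-non-object.js):

```javascript
// Hypothetical desugaring of op_iterator_next: call next, read done,
// and only read value when the iteration is not done.
function iteratorNext(next, iterator) {
  const nextResult = next.call(iterator);
  if (nextResult === null || typeof nextResult !== "object")
    throw new TypeError("Iterator result interface is not an object.");
  const done = !!nextResult.done;
  const value = done ? undefined : nextResult.value;
  return { done, value };
}
```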

In order to make sure these bytecodes have the same performance as
the existing bytecode sequence, we need to make sure we have the
same profiling data and inline caching. Most of the existing
get_by_id code assumed a particular bytecode member name was the
same in each flavor of get_by_id access. This patch adds
template-specialized functions that vend the correct
Profile/VirtualRegister for the current bytecode/checkpoint. This
means we can have meaningful names for our Bytecode structs and
still use the generic functions.

In the LLInt, most of the logic for calls/get_by_id had to be
factored into helper macros so that we could build bytecodes that
are some combination of those.

The trickiest part of this patch was getting the hand-rolled DFG
IR to work correctly, because we don't have a great way to
express large chunks of DFG graph without manually tracking all
the DFG's invariants, such as:

1) Flushing/Phantoming values at the end of each block.
2) Rolling the BytecodeIndex forwards and backwards when switching
   blocks.
3) Remembering to GetLocal each variable at the top of every block.
4) Ensuring that the JSValue stored to the op_iterator_next.m_value
   local does not cause us to OSR exit at the SetLocal.

(4) is handled by a new function, bottomValueMatchingSpeculation,
on DFGGraph that produces a FrozenValue that is roughly the bottom
for a given speculated type. In a future patch we should make this
more complete, probably by adding a VM::bottomCellForSetLocal that
prediction propagation and AI know how to treat as a true bottom
value. See: https://bugs.webkit.org/show_bug.cgi?id=210694

Lastly, this patch renames the DFG NodeType CheckCell to
CheckIsConstant. CheckIsConstant is equivalent to the == operator
on JSValue: it just checks that the register values are the
same. In order to keep the same perf that we had for CheckCell,
CheckIsConstant supports CellUse.
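In spirit, the guard the array fast path relies on can be sketched in
JavaScript. The engine performs this as a CheckIsConstant against the
global object's cached Array.prototype.values cell (compare
JSGlobalObject::arrayProtoValuesFunctionConcurrently above);
isArrayFastPath is a hypothetical illustration, not engine code:

```javascript
// CheckIsConstant-style guard: the iterator function fetched from the
// array must be the exact built-in Array.prototype.values function;
// any override, on the instance or the prototype, fails the check and
// the loop falls back to the generic iteration path.
const builtinArrayValues = Array.prototype.values;

function isArrayFastPath(iterable) {
  return Array.isArray(iterable)
      && iterable[Symbol.iterator] === builtinArrayValues;
}
```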

  • CMakeLists.txt:
  • JavaScriptCore.xcodeproj/project.pbxproj:
  • assembler/MacroAssemblerARM64.h:

(JSC::MacroAssemblerARM64::or8):
(JSC::MacroAssemblerARM64::store8):

  • assembler/MacroAssemblerX86_64.h:

(JSC::MacroAssemblerX86_64::or8):

  • bytecode/ArrayProfile.h:

(JSC::ArrayProfile::observeStructureID):
(JSC::ArrayProfile::observeStructure):

  • bytecode/BytecodeList.rb:
  • bytecode/BytecodeLivenessAnalysis.cpp:

(JSC::tmpLivenessForCheckpoint):

  • bytecode/BytecodeOperandsForCheckpoint.h: Added.

(JSC::arrayProfileForImpl):
(JSC::hasArrayProfileFor):
(JSC::arrayProfileFor):
(JSC::valueProfileForImpl):
(JSC::hasValueProfileFor):
(JSC::valueProfileFor):
(JSC::destinationFor):
(JSC::calleeFor):
(JSC::argumentCountIncludingThisFor):
(JSC::stackOffsetInRegistersForCall):
(JSC::callLinkInfoFor):

  • bytecode/BytecodeUseDef.cpp:

(JSC::computeUsesForBytecodeIndexImpl):
(JSC::computeDefsForBytecodeIndexImpl):

  • bytecode/CallLinkInfo.cpp:

(JSC::CallLinkInfo::callTypeFor):

  • bytecode/CallLinkStatus.cpp:

(JSC::CallLinkStatus::computeFromLLInt):

  • bytecode/CodeBlock.cpp:

(JSC::CodeBlock::finishCreation):
(JSC::CodeBlock::finalizeLLIntInlineCaches):
(JSC::CodeBlock::tryGetValueProfileForBytecodeIndex):

  • bytecode/CodeBlock.h:

(JSC::CodeBlock::instructionAt const):

  • bytecode/CodeBlockInlines.h:

(JSC::CodeBlock::forEachValueProfile):
(JSC::CodeBlock::forEachArrayProfile):

  • bytecode/GetByStatus.cpp:

(JSC::GetByStatus::computeFromLLInt):

  • bytecode/Instruction.h:

(JSC::BaseInstruction::width const):
(JSC::BaseInstruction::hasCheckpoints const):
(JSC::BaseInstruction::asKnownWidth const):
(JSC::BaseInstruction::wide16 const):
(JSC::BaseInstruction::wide32 const):

  • bytecode/InstructionStream.h:
  • bytecode/IterationModeMetadata.h: Copied from Source/JavaScriptCore/bytecode/SuperSampler.h.
  • bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.cpp:

(JSC::LLIntPrototypeLoadAdaptiveStructureWatchpoint::fireInternal):
(JSC::LLIntPrototypeLoadAdaptiveStructureWatchpoint::clearLLIntGetByIdCache):

  • bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.h:
  • bytecode/Opcode.h:
  • bytecode/SpeculatedType.h:

(JSC::isSubtypeSpeculation):
(JSC::speculationContains):

  • bytecode/SuperSampler.h:

(JSC::SuperSamplerScope::release):

  • bytecompiler/BytecodeGenerator.cpp:

(JSC::BytecodeGenerator::emitGenericEnumeration):
(JSC::BytecodeGenerator::emitEnumeration):
(JSC::BytecodeGenerator::emitIsEmpty):
(JSC::BytecodeGenerator::emitIteratorOpen):
(JSC::BytecodeGenerator::emitIteratorNext):
(JSC::BytecodeGenerator::emitGetGenericIterator):
(JSC::BytecodeGenerator::emitIteratorGenericNext):
(JSC::BytecodeGenerator::emitIteratorGenericNextWithValue):
(JSC::BytecodeGenerator::emitIteratorGenericClose):
(JSC::BytecodeGenerator::emitGetAsyncIterator):
(JSC::BytecodeGenerator::emitDelegateYield):
(JSC::BytecodeGenerator::emitIteratorNextWithValue): Deleted.
(JSC::BytecodeGenerator::emitIteratorClose): Deleted.
(JSC::BytecodeGenerator::emitGetIterator): Deleted.

  • bytecompiler/BytecodeGenerator.h:
  • bytecompiler/NodesCodegen.cpp:

(JSC::ArrayPatternNode::bindValue const):

  • dfg/DFGAbstractInterpreterInlines.h:

(JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
(JSC::DFG::AbstractInterpreter<AbstractStateType>::forAllValues):

  • dfg/DFGAtTailAbstractState.h:

(JSC::DFG::AtTailAbstractState::size const):
(JSC::DFG::AtTailAbstractState::numberOfTmps const):
(JSC::DFG::AtTailAbstractState::atIndex):
(JSC::DFG::AtTailAbstractState::tmp):

  • dfg/DFGByteCodeParser.cpp:

(JSC::DFG::ByteCodeParser::progressToNextCheckpoint):
(JSC::DFG::ByteCodeParser::get):
(JSC::DFG::ByteCodeParser::set):
(JSC::DFG::ByteCodeParser::jsConstant):
(JSC::DFG::ByteCodeParser::weakJSConstant):
(JSC::DFG::ByteCodeParser::addCall):
(JSC::DFG::ByteCodeParser::allocateUntargetableBlock):
(JSC::DFG::ByteCodeParser::handleCall):
(JSC::DFG::ByteCodeParser::emitFunctionChecks):
(JSC::DFG::ByteCodeParser::inlineCall):
(JSC::DFG::ByteCodeParser::handleCallVariant):
(JSC::DFG::ByteCodeParser::handleVarargsInlining):
(JSC::DFG::ByteCodeParser::handleInlining):
(JSC::DFG::ByteCodeParser::handleMinMax):
(JSC::DFG::ByteCodeParser::handleIntrinsicCall):
(JSC::DFG::ByteCodeParser::handleDOMJITCall):
(JSC::DFG::ByteCodeParser::handleIntrinsicGetter):
(JSC::DFG::ByteCodeParser::handleDOMJITGetter):
(JSC::DFG::ByteCodeParser::handleModuleNamespaceLoad):
(JSC::DFG::ByteCodeParser::handleTypedArrayConstructor):
(JSC::DFG::ByteCodeParser::handleConstantInternalFunction):
(JSC::DFG::ByteCodeParser::handleGetById):
(JSC::DFG::ByteCodeParser::parseBlock):
(JSC::DFG::ByteCodeParser::InlineStackEntry::InlineStackEntry):
(JSC::DFG::ByteCodeParser::handlePutByVal):
(JSC::DFG::ByteCodeParser::handleCreateInternalFieldObject):
(JSC::DFG::ByteCodeParser::parse):

  • dfg/DFGCFGSimplificationPhase.cpp:

(JSC::DFG::CFGSimplificationPhase::keepOperandAlive):
(JSC::DFG::CFGSimplificationPhase::jettisonBlock):
(JSC::DFG::CFGSimplificationPhase::mergeBlocks):

  • dfg/DFGCapabilities.cpp:

(JSC::DFG::capabilityLevel):

  • dfg/DFGClobberize.h:

(JSC::DFG::clobberize):

  • dfg/DFGConstantFoldingPhase.cpp:

(JSC::DFG::ConstantFoldingPhase::foldConstants):

  • dfg/DFGDoesGC.cpp:

(JSC::DFG::doesGC):

  • dfg/DFGFixupPhase.cpp:

(JSC::DFG::FixupPhase::fixupNode):
(JSC::DFG::FixupPhase::addStringReplacePrimordialChecks):

  • dfg/DFGForAllKills.h:

(JSC::DFG::forAllKilledOperands):

  • dfg/DFGGraph.cpp:

(JSC::DFG::Graph::bottomValueMatchingSpeculation):

  • dfg/DFGGraph.h:
  • dfg/DFGInPlaceAbstractState.cpp:

(JSC::DFG::InPlaceAbstractState::beginBasicBlock):
(JSC::DFG::InPlaceAbstractState::initialize):
(JSC::DFG::InPlaceAbstractState::endBasicBlock):
(JSC::DFG::InPlaceAbstractState::merge):

  • dfg/DFGInPlaceAbstractState.h:

(JSC::DFG::InPlaceAbstractState::size const):
(JSC::DFG::InPlaceAbstractState::numberOfTmps const):
(JSC::DFG::InPlaceAbstractState::atIndex):
(JSC::DFG::InPlaceAbstractState::operand):
(JSC::DFG::InPlaceAbstractState::local):
(JSC::DFG::InPlaceAbstractState::argument):
(JSC::DFG::InPlaceAbstractState::variableAt): Deleted.

  • dfg/DFGLazyJSValue.h:

(JSC::DFG::LazyJSValue::speculatedType const):

  • dfg/DFGNode.h:

(JSC::DFG::Node::hasConstant):
(JSC::DFG::Node::hasCellOperand):

  • dfg/DFGNodeType.h:
  • dfg/DFGOSRExitCompilerCommon.cpp:

(JSC::DFG::callerReturnPC):

  • dfg/DFGPredictionPropagationPhase.cpp:
  • dfg/DFGSafeToExecute.h:

(JSC::DFG::safeToExecute):

  • dfg/DFGSpeculativeJIT.cpp:

(JSC::DFG::SpeculativeJIT::compileCheckIsConstant):
(JSC::DFG::SpeculativeJIT::compileCheckCell): Deleted.

  • dfg/DFGSpeculativeJIT.h:
  • dfg/DFGSpeculativeJIT32_64.cpp:

(JSC::DFG::SpeculativeJIT::compile):

  • dfg/DFGSpeculativeJIT64.cpp:

(JSC::DFG::SpeculativeJIT::compile):

  • dfg/DFGValidate.cpp:
  • ftl/FTLCapabilities.cpp:

(JSC::FTL::canCompile):

  • ftl/FTLLowerDFGToB3.cpp:

(JSC::FTL::DFG::LowerDFGToB3::compileNode):
(JSC::FTL::DFG::LowerDFGToB3::compileCheckIsConstant):
(JSC::FTL::DFG::LowerDFGToB3::compileCheckCell): Deleted.

  • generator/DSL.rb:
  • generator/Metadata.rb:
  • generator/Section.rb:
  • jit/JIT.cpp:

(JSC::JIT::privateCompileMainPass):
(JSC::JIT::privateCompileSlowCases):

  • jit/JIT.h:
  • jit/JITCall.cpp:

(JSC::JIT::emitPutCallResult):
(JSC::JIT::compileSetupFrame):
(JSC::JIT::compileOpCall):
(JSC::JIT::emit_op_iterator_open):
(JSC::JIT::emitSlow_op_iterator_open):
(JSC::JIT::emit_op_iterator_next):
(JSC::JIT::emitSlow_op_iterator_next):

  • jit/JITCall32_64.cpp:

(JSC::JIT::emit_op_iterator_open):
(JSC::JIT::emitSlow_op_iterator_open):
(JSC::JIT::emit_op_iterator_next):
(JSC::JIT::emitSlow_op_iterator_next):

  • jit/JITInlines.h:

(JSC::JIT::updateTopCallFrame):
(JSC::JIT::advanceToNextCheckpoint):
(JSC::JIT::emitJumpSlowToHotForCheckpoint):
(JSC::JIT::emitValueProfilingSite):

  • jit/JITOperations.cpp:
  • jit/JITOperations.h:
  • llint/LLIntSlowPaths.cpp:

(JSC::LLInt::setupGetByIdPrototypeCache):
(JSC::LLInt::performLLIntGetByID):
(JSC::LLInt::LLINT_SLOW_PATH_DECL):
(JSC::LLInt::genericCall):
(JSC::LLInt::handleIteratorOpenCheckpoint):
(JSC::LLInt::handleIteratorNextCheckpoint):
(JSC::LLInt::slow_path_checkpoint_osr_exit):
(JSC::LLInt::llint_dump_value):

  • llint/LowLevelInterpreter.asm:
  • llint/LowLevelInterpreter32_64.asm:
  • llint/LowLevelInterpreter64.asm:
  • offlineasm/transform.rb:
  • runtime/CommonSlowPaths.cpp:

(JSC::iterator_open_try_fast):
(JSC::iterator_open_try_fast_narrow):
(JSC::iterator_open_try_fast_wide16):
(JSC::iterator_open_try_fast_wide32):
(JSC::iterator_next_try_fast):
(JSC::iterator_next_try_fast_narrow):
(JSC::iterator_next_try_fast_wide16):
(JSC::iterator_next_try_fast_wide32):

  • runtime/CommonSlowPaths.h:
  • runtime/Intrinsic.cpp:

(JSC::interationKindForIntrinsic):

  • runtime/Intrinsic.h:
  • runtime/JSArrayIterator.h:
  • runtime/JSCJSValue.h:
  • runtime/JSCJSValueInlines.h:

(JSC::JSValue::isCallable const):

  • runtime/JSCast.h:
  • runtime/JSGlobalObject.h:

(JSC::JSGlobalObject::arrayProtoValuesFunctionConcurrently const):

  • runtime/OptionsList.h:
  • runtime/Structure.cpp:

(JSC::Structure::dumpBrief const):

Source/WTF:

Reviewed by Filip Pizlo.

  • wtf/EnumClassOperatorOverloads.h:
Location: trunk
Files: 13 added, 83 edited, 1 copied
  • trunk/JSTests/ChangeLog
  • trunk/JSTests/microbenchmarks/for-of-iterate-array-entries.js

        for (var i = 0; i < 25000; i++)
            array.push(i);

        var result = 0;
    -   for (var [key, value] of array.entries())
    +   for (var [key, value] of array.entries()) {
    +       print(key + " " + $vm.indexingMode(array));
            result += key + value + array[key];
    +   }

        return result;
  • trunk/JSTests/stress/custom-iterators.js

    Throughout the file, the catch-block pattern
    -       throw "Error: bad error thrown: " + e;
    +       throw e;
    rethrows the original error instead of wrapping it in a string, and
    -   throw "Error: return is called.";
    +   throw new Error("return was called");
    throws a proper Error object; the one check that matches on this
    message now expects "Error: return was called".
  • trunk/Source/JavaScriptCore/CMakeLists.txt

    +   add_dependencies(JavaScriptCore_CopyPrivateHeaders Bytecodes JSCBuiltins)
  • trunk/Source/JavaScriptCore/ChangeLog
     (JSC::iterator_next_try_fast_narrow): 293        (JSC::iterator_next_try_fast_wide16): 294        (JSC::iterator_next_try_fast_wide32): 295        * runtime/CommonSlowPaths.h: 296        * runtime/Intrinsic.cpp: 297        (JSC::interationKindForIntrinsic): 298        * runtime/Intrinsic.h: 299        * runtime/JSArrayIterator.h: 300        * runtime/JSCJSValue.h: 301        * runtime/JSCJSValueInlines.h: 302        (JSC::JSValue::isCallable const): 303        * runtime/JSCast.h: 304        * runtime/JSGlobalObject.h: 305        (JSC::JSGlobalObject::arrayProtoValuesFunctionConcurrently const): 306        * runtime/OptionsList.h: 307        * runtime/Structure.cpp: 308        (JSC::Structure::dumpBrief const): 309 13102020-04-18  Yusuke Suzuki  <[email protected]> 2311
• trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj (r260273 → r260323)

Added project entries (build files, file references, group members, and Headers-phase entries for the new files):

+ 53D35499240D88BD008950DD /* BytecodeOperandsForCheckpoint.h in Headers */ = {isa = PBXBuildFile; fileRef = 53D35497240D88AD008950DD /* BytecodeOperandsForCheckpoint.h */; };
+ 53D41EC923C0081A00AE984B /* IterationModeMetadata.h in Headers */ = {isa = PBXBuildFile; fileRef = 53D41EC823C0081000AE984B /* IterationModeMetadata.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 53D35497240D88AD008950DD /* BytecodeOperandsForCheckpoint.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = BytecodeOperandsForCheckpoint.h; sourceTree = "<group>"; };
+ 53D35498240D88AD008950DD /* BytecodeUseDef.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = BytecodeUseDef.cpp; sourceTree = "<group>"; };
+ 53D41EC823C0081000AE984B /* IterationModeMetadata.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = IterationModeMetadata.h; sourceTree = "<group>"; };
+ (bytecode group) 53D35497240D88AD008950DD /* BytecodeOperandsForCheckpoint.h */,
+ (bytecode group) 53D35498240D88AD008950DD /* BytecodeUseDef.cpp */,
+ (bytecode group) 53D41EC823C0081000AE984B /* IterationModeMetadata.h */,
+ (Headers phase) 53D35499240D88BD008950DD /* BytecodeOperandsForCheckpoint.h in Headers */,
+ (Headers phase) 53D41EC923C0081A00AE984B /* IterationModeMetadata.h in Headers */,
• trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h (r259556 → r260323)

     }

+    void or8(TrustedImm32 imm, AbsoluteAddress address)
+    {
+        LogicalImmediate logicalImm = LogicalImmediate::create32(imm.m_value);
+        if (logicalImm.isValid()) {
+            load8(address.m_ptr, getCachedDataTempRegisterIDAndInvalidate());
+            m_assembler.orr<32>(dataTempRegister, dataTempRegister, logicalImm);
+            store8(dataTempRegister, address.m_ptr);
+        } else {
+            load8(address.m_ptr, getCachedMemoryTempRegisterIDAndInvalidate());
+            or32(imm, memoryTempRegister, getCachedDataTempRegisterIDAndInvalidate());
+            store8(dataTempRegister, address.m_ptr);
+        }
+    }
+
     void or64(RegisterID src, RegisterID dest)

-    void store8(RegisterID src, void* address)
+    void store8(RegisterID src, const void* address)
     {
         move(TrustedImmPtr(address), getCachedMemoryTempRegisterIDAndInvalidate());

-    void store8(TrustedImm32 imm, void* address)
+    void store8(TrustedImm32 imm, const void* address)
     {
         TrustedImm32 imm8(static_cast<int8_t>(imm.m_value));
• trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86_64.h (r258063 → r260323)

     using MacroAssemblerX86Common::or32;
     using MacroAssemblerX86Common::or16;
+    using MacroAssemblerX86Common::or8;
     using MacroAssemblerX86Common::sub32;
     using MacroAssemblerX86Common::load8;

         move(TrustedImmPtr(address.m_ptr), scratchRegister());
         or16(imm, Address(scratchRegister()));
+    }
+
+    void or8(TrustedImm32 imm, AbsoluteAddress address)
+    {
+        move(TrustedImmPtr(address.m_ptr), scratchRegister());
+        or8(imm, Address(scratchRegister()));
     }
• trunk/Source/JavaScriptCore/bytecode/ArrayProfile.h (r245658 → r260323)

     bool* addressOfOutOfBounds() { return &m_outOfBounds; }

-    void observeStructure(Structure* structure)
-    {
-        m_lastSeenStructureID = structure->id();
-    }
+    void observeStructureID(StructureID structureID) { m_lastSeenStructureID = structureID; }
+    void observeStructure(Structure* structure) { m_lastSeenStructureID = structure->id(); }

     void computeUpdatedPrediction(const ConcurrentJSLocker&, CodeBlock*);
• trunk/Source/JavaScriptCore/bytecode/BytecodeList.rb (r259676 → r260323)

     :GetPutInfo,
     :IndexingType,
+    :IterationModeMetadata,
     :JSCell,
     :JSGlobalLexicalEnvironment,

         dst: VirtualRegister,
         numParametersToSkip: unsigned,
     }
+
+# Semantically, this is iterator = symbolIterator.@call(iterable); next = iterator.next;
+# where symbolIterator the result of iterable[Symbol.iterator] (which is done in a different bytecode).
+# For builtin iterators, however, this has special behavior where next becomes the empty value, which
+# indicates that we are in a known iteration mode to op_iterator_next.
+op :iterator_open,
+    args: {
+        iterator: VirtualRegister,
+        next: VirtualRegister,
+        symbolIterator: VirtualRegister,
+        iterable: VirtualRegister,
+        stackOffset: unsigned,
+    },
+    metadata: {
+        iterationMetadata: IterationModeMetadata,
+        iterableProfile: ValueProfile,
+        callLinkInfo: LLIntCallLinkInfo,
+        iteratorProfile: ValueProfile,
+        modeMetadata: GetByIdModeMetadata,
+        nextProfile: ValueProfile,
+    },
+    checkpoints: {
+        symbolCall: nil,
+        getNext: nil,
+    }
+
+# Semantically, this is nextResult = next.@call(iterator); done = nextResult.done; value = done ? undefined : nextResult.value;
+op :iterator_next,
+    args: {
+        done: VirtualRegister,
+        value: VirtualRegister,
+        iterable: VirtualRegister,
+        next: VirtualRegister,
+        iterator: VirtualRegister,
+        stackOffset: unsigned,
+    },
+    metadata: {
+        iterationMetadata: IterationModeMetadata,
+        iterableProfile: ArrayProfile,
+        callLinkInfo: LLIntCallLinkInfo,
+        nextResultProfile: ValueProfile,
+        doneModeMetadata: GetByIdModeMetadata,
+        doneProfile: ValueProfile,
+        valueModeMetadata: GetByIdModeMetadata,
+        valueProfile: ValueProfile,
+    },
+    tmps: {
+        nextResult: JSValue,
+    },
+    checkpoints: {
+        computeNext: nil,
+        getDone: nil,
+        getValue: nil,
     }

 op :op_put_by_id_return_location
 op :op_put_by_val_return_location
+op :op_iterator_open_return_location
+op :op_iterator_next_return_location
 op :wasm_function_prologue
 op :wasm_function_prologue_no_tls
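The `# Semantically, ...` comments above describe what the two new bytecodes do at the JavaScript level. The following is an illustrative sketch (not JSC source) of that generic protocol; the function names `iteratorOpen`, `iteratorNext`, and `forOf` are invented for this example, and the fast-array mode (where `next` is left empty) is omitted.

```javascript
// Sketch of op_iterator_open: produces the iterator and its cached next method.
function iteratorOpen(iterable) {
    const symbolIterator = iterable[Symbol.iterator]; // fetched by a separate bytecode
    const iterator = symbolIterator.call(iterable);   // checkpoint: symbolCall
    const next = iterator.next;                       // checkpoint: getNext
    return { iterator, next };
}

// Sketch of op_iterator_next: one step of the protocol.
function iteratorNext(iterator, next) {
    const nextResult = next.call(iterator);           // checkpoint: computeNext
    if (typeof nextResult !== "object" || nextResult === null)
        throw new TypeError("Iterator result interface is not an object.");
    const done = nextResult.done;                     // checkpoint: getDone
    const value = done ? undefined : nextResult.value; // checkpoint: getValue
    return { done, value };
}

// A for-of loop desugared in terms of the two ops (ignoring IteratorClose).
function forOf(iterable, body) {
    const { iterator, next } = iteratorOpen(iterable);
    while (true) {
        const { done, value } = iteratorNext(iterator, next);
        if (done)
            break;
        body(value);
    }
}
```

The checkpoints let the VM resume mid-bytecode after an OSR exit: each named step is a separate resumption point, which is why `iterator_next` carries the `nextResult` tmp across its checkpoints.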
• trunk/Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysis.cpp (r255459 → r260323)

         return result;
     }
+    case op_iterator_open: {
+        return result;
+    }
+    case op_iterator_next: {
+        result.set(OpIteratorNext::nextResult);
+        return result;
+    }
     default:
         break;
• trunk/Source/JavaScriptCore/bytecode/BytecodeUseDef.cpp (r254653 → r260323)

     USES(OpYield, generator, argument)
+
+    USES(OpIteratorOpen, symbolIterator, iterable)
+    USES(OpIteratorNext, iterator, next, iterable)

     case op_new_array_with_spread:

     DEFS(OpCatch, exception, thrownValue)

+    DEFS(OpIteratorOpen, iterator, next)
+    DEFS(OpIteratorNext, done, value)
+
     case op_enter: {
         for (unsigned i = numVars; i--;)
• trunk/Source/JavaScriptCore/bytecode/CallLinkInfo.cpp (r254714 → r260323)

 CallLinkInfo::CallType CallLinkInfo::callTypeFor(OpcodeID opcodeID)
 {
-    if (opcodeID == op_call || opcodeID == op_call_eval)
+    switch (opcodeID) {
+    case op_tail_call_varargs:
+    case op_tail_call_forward_arguments:
+        return TailCallVarargs;
+
+    case op_call:
+    case op_call_eval:
+    case op_iterator_open:
+    case op_iterator_next:
         return Call;
-    if (opcodeID == op_call_varargs)
+
+    case op_call_varargs:
         return CallVarargs;
-    if (opcodeID == op_construct)
+
+    case op_construct:
         return Construct;
-    if (opcodeID == op_construct_varargs)
+
+    case op_construct_varargs:
         return ConstructVarargs;
-    if (opcodeID == op_tail_call)
+
+    case op_tail_call:
         return TailCall;
-    ASSERT(opcodeID == op_tail_call_varargs || opcodeID == op_tail_call_forward_arguments);
-    return TailCallVarargs;
+
+    default:
+        break;
+    }
+    RELEASE_ASSERT_NOT_REACHED();
+    return Call;
 }
• trunk/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp (r251468 → r260323)

         callLinkInfo = &instruction->as<OpTailCall>().metadata(profiledBlock).m_callLinkInfo;
         break;
+    case op_iterator_open:
+        callLinkInfo = &instruction->as<OpIteratorOpen>().metadata(profiledBlock).m_callLinkInfo;
+        break;
+    case op_iterator_next:
+        callLinkInfo = &instruction->as<OpIteratorNext>().metadata(profiledBlock).m_callLinkInfo;
+        break;
+
     default:
         return CallLinkStatus();
• trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp (r259835 → r260323)

 #include "BytecodeGenerator.h"
 #include "BytecodeLivenessAnalysis.h"
+#include "BytecodeOperandsForCheckpoint.h"
 #include "BytecodeStructs.h"
 #include "BytecodeUseDef.h"

         LINK(OpProfileControlFlow)

+        case op_iterator_open: {
+            INITIALIZE_METADATA(OpIteratorOpen)
+
+            m_numberOfNonArgumentValueProfiles += 3;
+            break;
+        }
+
+        case op_iterator_next: {
+            INITIALIZE_METADATA(OpIteratorNext)
+
+            m_numberOfNonArgumentValueProfiles += 3;
+            break;
+        }
+
         case op_resolve_scope: {
             INITIALIZE_METADATA(OpResolveScope)

         // We need to add optimizations for op_resolve_scope_for_hoisting_func_decl_in_eval to do link time scope resolution.

-        m_metadata->forEach<OpGetById>([&] (auto& metadata) {
-            if (metadata.m_modeMetadata.mode != GetByIdMode::Default)
+        auto clearIfNeeded = [&] (GetByIdModeMetadata& modeMetadata, ASCIILiteral opName) {
+            if (modeMetadata.mode != GetByIdMode::Default)
                 return;
-            StructureID oldStructureID = metadata.m_modeMetadata.defaultMode.structureID;
+            StructureID oldStructureID = modeMetadata.defaultMode.structureID;
             if (!oldStructureID || vm.heap.isMarked(vm.heap.structureIDTable().get(oldStructureID)))
                 return;
-            dataLogLnIf(Options::verboseOSR(), "Clearing LLInt property access.");
-            LLIntPrototypeLoadAdaptiveStructureWatchpoint::clearLLIntGetByIdCache(metadata);
+            dataLogLnIf(Options::verboseOSR(), "Clearing ", opName, " LLInt property access.");
+            LLIntPrototypeLoadAdaptiveStructureWatchpoint::clearLLIntGetByIdCache(modeMetadata);
+        };
+
+        m_metadata->forEach<OpIteratorOpen>([&] (auto& metadata) {
+            clearIfNeeded(metadata.m_modeMetadata, "iterator open"_s);
+        });
+
+        m_metadata->forEach<OpIteratorNext>([&] (auto& metadata) {
+            clearIfNeeded(metadata.m_doneModeMetadata, "iterator next"_s);
+            clearIfNeeded(metadata.m_valueModeMetadata, "iterator next"_s);
+        });
+
+        m_metadata->forEach<OpGetById>([&] (auto& metadata) {
+            clearIfNeeded(metadata.m_modeMetadata, "get by id"_s);
         });

             auto& instruction = instructions().at(std::get<1>(pair.key));
             OpcodeID opcode = instruction->opcodeID();
-            if (opcode == op_get_by_id) {
+            switch (opcode) {
+            case op_get_by_id: {
                 dataLogLnIf(Options::verboseOSR(), "Clearing LLInt property access.");
-                LLIntPrototypeLoadAdaptiveStructureWatchpoint::clearLLIntGetByIdCache(instruction->as<OpGetById>().metadata(this));
+                LLIntPrototypeLoadAdaptiveStructureWatchpoint::clearLLIntGetByIdCache(instruction->as<OpGetById>().metadata(this).m_modeMetadata);
+                break;
+            }
+            case op_iterator_open: {
+                dataLogLnIf(Options::verboseOSR(), "Clearing LLInt iterator open property access.");
+                LLIntPrototypeLoadAdaptiveStructureWatchpoint::clearLLIntGetByIdCache(instruction->as<OpIteratorOpen>().metadata(this).m_modeMetadata);
+                break;
+            }
+            case op_iterator_next: {
+                dataLogLnIf(Options::verboseOSR(), "Clearing LLInt iterator next property access.");
+                // FIXME: We don't really want to clear both caches here but it's kinda annoying to figure out which one this is referring to...
+                // See: https://bugs.webkit.org/show_bug.cgi?id=210693
+                auto& metadata = instruction->as<OpIteratorNext>().metadata(this);
+                LLIntPrototypeLoadAdaptiveStructureWatchpoint::clearLLIntGetByIdCache(metadata.m_doneModeMetadata);
+                LLIntPrototypeLoadAdaptiveStructureWatchpoint::clearLLIntGetByIdCache(metadata.m_valueModeMetadata);
+                break;
+            }
+            default:
+                break;
             }
             return true;

 #undef CASE

+    case op_iterator_open:
+        return &valueProfileFor(instruction->as<OpIteratorOpen>().metadata(this), bytecodeIndex.checkpoint());
+    case op_iterator_next:
+        return &valueProfileFor(instruction->as<OpIteratorNext>().metadata(this), bytecodeIndex.checkpoint());
+
     default:
         return nullptr;
• trunk/Source/JavaScriptCore/bytecode/CodeBlock.h (r259676 → r260323)

     const InstructionStream& instructions() const { return m_unlinkedCode->instructions(); }
+    const Instruction* instructionAt(BytecodeIndex index) const { return instructions().at(index).ptr(); }

     size_t predictedMachineCodeSize();
• trunk/Source/JavaScriptCore/bytecode/CodeBlockInlines.h (r245658 → r260323)

     if (m_metadata) {
 #define VISIT(__op) \
     m_metadata->forEach<__op>([&] (auto& metadata) { func(metadata.m_profile, false); });

         FOR_EACH_OPCODE_WITH_VALUE_PROFILE(VISIT)

 #undef VISIT
-    }
+
+        m_metadata->forEach<OpIteratorOpen>([&] (auto& metadata) {
+            func(metadata.m_iterableProfile, false);
+            func(metadata.m_iteratorProfile, false);
+            func(metadata.m_nextProfile, false);
+        });
+
+        m_metadata->forEach<OpIteratorNext>([&] (auto& metadata) {
+            func(metadata.m_nextResultProfile, false);
+            func(metadata.m_doneProfile, false);
+            func(metadata.m_valueProfile, false);
+        });
+    }

 }

 #undef VISIT1
 #undef VISIT2
+
+        m_metadata->forEach<OpIteratorNext>([&] (auto& metadata) {
+            func(metadata.m_iterableProfile);
+        });
     }
 }
• trunk/Source/JavaScriptCore/bytecode/GetByStatus.cpp (r259175 → r260323)

     case op_get_by_val:
         return GetByStatus(NoInformation, false);
+
+    case op_iterator_open: {
+        ASSERT(bytecodeIndex.checkpoint() == OpIteratorOpen::getNext);
+        auto& metadata = instruction->as<OpIteratorOpen>().metadata(profiledBlock);
+
+        // FIXME: We should not just bail if we see a get_by_id_proto_load.
+        // https://bugs.webkit.org/show_bug.cgi?id=158039
+        if (metadata.m_modeMetadata.mode != GetByIdMode::Default)
+            return GetByStatus(NoInformation, false);
+        structureID = metadata.m_modeMetadata.defaultMode.structureID;
+        identifier = &vm.propertyNames->next;
+        break;
+    }
+
+    case op_iterator_next: {
+        auto& metadata = instruction->as<OpIteratorNext>().metadata(profiledBlock);
+        if (bytecodeIndex.checkpoint() == OpIteratorNext::getDone) {
+            if (metadata.m_doneModeMetadata.mode != GetByIdMode::Default)
+                return GetByStatus(NoInformation, false);
+            structureID = metadata.m_doneModeMetadata.defaultMode.structureID;
+            identifier = &vm.propertyNames->done;
+        } else {
+            ASSERT(bytecodeIndex.checkpoint() == OpIteratorNext::getValue);
+            if (metadata.m_valueModeMetadata.mode != GetByIdMode::Default)
+                return GetByStatus(NoInformation, false);
+            structureID = metadata.m_valueModeMetadata.defaultMode.structureID;
+            identifier = &vm.propertyNames->value;
+        }
+        break;
+    }

     default: {
• trunk/Source/JavaScriptCore/bytecode/Instruction.h (r254244 → r260323)

 struct JSOpcodeTraits {
     using OpcodeID = ::JSC::OpcodeID;
+    static constexpr OpcodeID numberOfBytecodesWithCheckpoints = static_cast<OpcodeID>(NUMBER_OF_BYTECODE_WITH_CHECKPOINTS);
     static constexpr OpcodeID numberOfBytecodesWithMetadata = static_cast<OpcodeID>(NUMBER_OF_BYTECODE_WITH_METADATA);
+    static_assert(numberOfBytecodesWithCheckpoints <= numberOfBytecodesWithMetadata);
     static constexpr OpcodeID wide16 = op_wide16;
     static constexpr OpcodeID wide32 = op_wide32;

 struct WasmOpcodeTraits {
     using OpcodeID = WasmOpcodeID;
+    static constexpr OpcodeID numberOfBytecodesWithCheckpoints = static_cast<OpcodeID>(NUMBER_OF_WASM_WITH_CHECKPOINTS);
     static constexpr OpcodeID numberOfBytecodesWithMetadata = static_cast<OpcodeID>(NUMBER_OF_WASM_WITH_METADATA);
+    static_assert(numberOfBytecodesWithCheckpoints <= numberOfBytecodesWithMetadata);
     static constexpr OpcodeID wide16 = wasm_wide16;
     static constexpr OpcodeID wide32 = wasm_wide32;

     template<typename Traits = JSOpcodeTraits>
+    OpcodeSize width() const
+    {
+        if (isWide32<Traits>())
+            return OpcodeSize::Wide32;
+        return isWide16<Traits>() ? OpcodeSize::Wide16 : OpcodeSize::Narrow;
+    }
+
+    template<typename Traits = JSOpcodeTraits>
     bool hasMetadata() const
     {
         return opcodeID<Traits>() < Traits::numberOfBytecodesWithMetadata;
+    }
+
+    template<typename Traits = JSOpcodeTraits>
+    bool hasCheckpoints() const
+    {
+        return opcodeID<Traits>() < Traits::numberOfBytecodesWithCheckpoints;
     }

+    template<class T, OpcodeSize width, typename Traits = JSOpcodeTraits>
+    T asKnownWidth() const
+    {
+        ASSERT((is<T, Traits>()));
+        return T(reinterpret_cast<const typename TypeBySize<width>::unsignedType*>(this + (width == OpcodeSize::Narrow ? 1 : 2)));
+    }
+
     template<class T, typename Traits = JSOpcodeTraits>
     T* cast()

         ASSERT(isWide16<Traits>());
-        return reinterpret_cast<const Impl<OpcodeSize::Wide16>*>(bitwise_cast<uintptr_t>(this) + 1);
+        return reinterpret_cast<const Impl<OpcodeSize::Wide16>*>(this + 1);
     }

         ASSERT(isWide32<Traits>());
-        return reinterpret_cast<const Impl<OpcodeSize::Wide32>*>(bitwise_cast<uintptr_t>(this) + 1);
+        return reinterpret_cast<const Impl<OpcodeSize::Wide32>*>(this + 1);
     }
 };

 struct Instruction : public BaseInstruction<OpcodeID> {
 };
+static_assert(sizeof(Instruction) == 1, "So pointer math is the same as byte math");

 struct WasmInstance : public BaseInstruction<WasmOpcodeID> {
 };
+static_assert(sizeof(WasmInstance) == 1, "So pointer math is the same as byte math");

 } // namespace JSC
• trunk/Source/JavaScriptCore/bytecode/InstructionStream.h (r255459 → r260323)

     }

-    bool contains(Instruction*) const;
+    bool contains(Instruction*) const;    (whitespace-only change)

 protected:
• trunk/Source/JavaScriptCore/bytecode/IterationModeMetadata.h (r260322 → r260323)

 /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2020 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without

 namespace JSC {

-class MacroAssembler;
-
-extern JS_EXPORT_PRIVATE volatile uint32_t g_superSamplerCount;
-
-void initializeSuperSampler();
-
-class SuperSamplerScope {
-public:
-    SuperSamplerScope(bool doSample = true)
-        : m_doSample(doSample)
-    {
-        if (m_doSample)
-            g_superSamplerCount++;
-    }
-
-    ~SuperSamplerScope()
-    {
-        if (m_doSample)
-            g_superSamplerCount--;
-    }
-
-private:
-    bool m_doSample;
+enum class IterationMode : uint8_t {
+    Generic = 1 << 0,
+    FastArray = 1 << 1,
 };

-JS_EXPORT_PRIVATE void resetSuperSamplerState();
-JS_EXPORT_PRIVATE void printSuperSamplerState();
+constexpr uint8_t numberOfIterationModes = 2;
+
+OVERLOAD_BITWISE_OPERATORS_FOR_ENUM_CLASS_WITH_INTERGRALS(IterationMode);
+
+struct IterationModeMetadata {
+    uint8_t seenModes { 0 };
+    static_assert(sizeof(decltype(seenModes)) == sizeof(IterationMode));
+};

 } // namespace JSC
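The `seenModes` field above is a bitmask accumulating which `IterationMode`s a given for-of site has used, so the compiler can tell whether a site has only ever iterated fast arrays. Here is an illustrative sketch (not JSC source) of that bookkeeping; `observeMode` and `sawOnlyFastArray` are invented names for this example.

```javascript
// Mirror of the IterationMode bit flags from IterationModeMetadata.h.
const IterationMode = {
    Generic:   1 << 0,
    FastArray: 1 << 1,
};

// Record that an iteration site ran in a given mode (seenModes |= mode).
function observeMode(metadata, mode) {
    metadata.seenModes |= mode;
}

// True only when every observed iteration used the fast-array path.
function sawOnlyFastArray(metadata) {
    return metadata.seenModes === IterationMode.FastArray;
}
```

A site that has only ever seen `FastArray` can be compiled to index directly into the array butterfly; the moment `Generic` is also recorded, both bits are set and the site must keep the generic protocol available.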
• trunk/Source/JavaScriptCore/bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.cpp (r245658 → r260323)

     auto& instruction = m_owner->instructions().at(m_bytecodeOffset.get());
-    clearLLIntGetByIdCache(instruction->as<OpGetById>().metadata(m_owner.get()));
+    switch (instruction->opcodeID()) {
+    case op_get_by_id:
+        clearLLIntGetByIdCache(instruction->as<OpGetById>().metadata(m_owner.get()).m_modeMetadata);
+        break;
+
+    case op_iterator_open:
+        clearLLIntGetByIdCache(instruction->as<OpIteratorOpen>().metadata(m_owner.get()).m_modeMetadata);
+        break;
+
+    case op_iterator_next: {
+        auto& metadata = instruction->as<OpIteratorNext>().metadata(m_owner.get());
+        clearLLIntGetByIdCache(metadata.m_doneModeMetadata);
+        clearLLIntGetByIdCache(metadata.m_valueModeMetadata);
+        break;
+    }
+
+    default:
+        RELEASE_ASSERT_NOT_REACHED();
+        break;
+    }
 }

-void LLIntPrototypeLoadAdaptiveStructureWatchpoint::clearLLIntGetByIdCache(OpGetById::Metadata& metadata)
+void LLIntPrototypeLoadAdaptiveStructureWatchpoint::clearLLIntGetByIdCache(GetByIdModeMetadata& metadata)
 {
-    // Keep hitCountForLLIntCaching value.
-    metadata.m_modeMetadata.clearToDefaultModeWithoutCache();
+    metadata.clearToDefaultModeWithoutCache();
 }

 } // namespace JSC
• trunk/Source/JavaScriptCore/bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.h (r247843 → r260323)

     void install(VM&);

-    static void clearLLIntGetByIdCache(OpGetById::Metadata&);
+    static void clearLLIntGetByIdCache(GetByIdModeMetadata&);

     const ObjectPropertyCondition& key() const { return m_key; }
• trunk/Source/JavaScriptCore/bytecode/Opcode.h (r255459 → r260323)

     macro(OpCallEval) \
     macro(OpConstruct) \
+    macro(OpIteratorOpen) \
+    macro(OpIteratorNext) \

 IGNORE_WARNINGS_BEGIN("type-limits")
• trunk/Source/JavaScriptCore/bytecode/SpeculatedType.h (r252680 → r260323)

 }

+inline bool isSubtypeSpeculation(SpeculatedType value, SpeculatedType category)
+{
+    return !(value & ~category) && value;
+}
+
+inline bool speculationContains(SpeculatedType value, SpeculatedType category)
+{
+    return !!(value & category) && value;
+}
+
 inline bool isCellSpeculation(SpeculatedType value)
 {
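`SpeculatedType` is a bitmask of possible type bits, so the two new helpers above are pure bit tests: a speculation is a *subtype* of a category when it is nonempty and sets no bits outside the category, and it *contains* a category when the two masks overlap. A sketch with the same logic (the `Spec*` mask values below are invented for the example, not JSC's real bit assignments):

```javascript
// value is a nonempty mask entirely inside category.
function isSubtypeSpeculation(value, category) {
    return (value & ~category) === 0 && value !== 0;
}

// value is a nonempty mask that overlaps category at all.
function speculationContains(value, category) {
    return (value & category) !== 0 && value !== 0;
}

// Hypothetical masks for demonstration only.
const SpecInt32 = 1 << 0;
const SpecDouble = 1 << 1;
const SpecFullNumber = SpecInt32 | SpecDouble;
```

The subtype test is what lets a compiler phase ask "is this value definitely a number?" with one AND, while `speculationContains` answers the weaker "could this value be a number?".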
  • trunk/Source/JavaScriptCore/bytecode/SuperSampler.h (r253443 → r260323)

     }
 
+    void release()
+    {
+        ASSERT(m_doSample);
+        g_superSamplerCount--;
+        m_doSample = false;
+    }
+
 private:
     bool m_doSample;
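The new release() lets a sampling scope stop contributing to the global counter before its destructor runs. A toy model of that behavior, where a local counter stands in for g_superSamplerCount and release() is written idempotently for simplicity (the real release() asserts it is still sampling and relies on the destructor for end-of-scope cleanup):

```cpp
// Toy model of SuperSamplerScope's early-release behavior; names and the
// idempotent release() are illustrative, not the real WebKit class.
struct SamplerScope {
    long& counter;
    bool doSample;

    constexpr SamplerScope(long& c, bool sample)
        : counter(c), doSample(sample)
    {
        if (doSample)
            ++counter;
    }

    // Stop sampling before the end of the scope. In the real class the
    // destructor performs the same decrement when still sampling.
    constexpr void release()
    {
        if (doSample) {
            --counter;
            doSample = false;
        }
    }
};

// Returns the counter value observed inside the scope, after an optional
// early release and before end-of-scope cleanup.
constexpr long observedCount(bool releaseEarly)
{
    long count = 0;
    SamplerScope scope(count, true);
    if (releaseEarly)
        scope.release();
    long observed = count;
    scope.release(); // what the destructor would otherwise do
    return observed;
}
```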
  • trunk/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp (r259810 → r260323)

@@ -4017 +4017 @@
 }
 
-void BytecodeGenerator::emitEnumeration(ThrowableExpressionData* node, ExpressionNode* subjectNode, const ScopedLambda<void(BytecodeGenerator&, RegisterID*)>& callBack, ForOfNode* forLoopNode, RegisterID* forLoopSymbolTable)
+void BytecodeGenerator::emitGenericEnumeration(ThrowableExpressionData* node, ExpressionNode* subjectNode, const ScopedLambda<void(BytecodeGenerator&, RegisterID*)>& callBack, ForOfNode* forLoopNode, RegisterID* forLoopSymbolTable)
 {
     bool isForAwait = forLoopNode ? forLoopNode->isForAwait() : false;
-    ASSERT(!isForAwait || (isForAwait && isAsyncFunctionParseMode(parseMode())));
+    auto shouldEmitAwait = isForAwait ? EmitAwait::Yes : EmitAwait::No;
+    ASSERT(!isForAwait || isAsyncFunctionParseMode(parseMode()));
 
     RefPtr<RegisterID> subject = newTemporary();
     emitNode(subject.get(), subjectNode);
-    RefPtr<RegisterID> iterator = isForAwait ? emitGetAsyncIterator(subject.get(), node) : emitGetIterator(subject.get(), node);
+    RefPtr<RegisterID> iterator = isForAwait ? emitGetAsyncIterator(subject.get(), node) : emitGetGenericIterator(subject.get(), node);
     RefPtr<RegisterID> nextMethod = emitGetById(newTemporary(), iterator.get(), propertyNames().next);
 
@@ -4131 +4132 @@
 
         {
-            emitIteratorNext(value.get(), nextMethod.get(), iterator.get(), node, isForAwait ? EmitAwait::Yes : EmitAwait::No);
+            emitIteratorGenericNext(value.get(), nextMethod.get(), iterator.get(), node, shouldEmitAwait);
 
             emitJumpIfTrue(emitGetById(newTemporary(), value.get(), propertyNames().done), loopDone.get());
@@ -4144 +4145 @@
         if (breakLabelIsBound) {
             // IteratorClose sequence for break-ed control flow.
-            emitIteratorClose(iterator.get(), node, isForAwait ? EmitAwait::Yes : EmitAwait::No);
+            emitIteratorGenericClose(iterator.get(), node, shouldEmitAwait);
+        }
+    }
+    emitLabel(loopDone.get());
+}
+
+
+void BytecodeGenerator::emitEnumeration(ThrowableExpressionData* node, ExpressionNode* subjectNode, const ScopedLambda<void(BytecodeGenerator&, RegisterID*)>& callBack, ForOfNode* forLoopNode, RegisterID* forLoopSymbolTable)
+{
+    if (!Options::useIterationIntrinsics() || is32Bit() || (forLoopNode && forLoopNode->isForAwait())) {
+        emitGenericEnumeration(node, subjectNode, callBack, forLoopNode, forLoopSymbolTable);
+        return;
+    }
+
+    RefPtr<RegisterID> iterable = newTemporary();
+    emitNode(iterable.get(), subjectNode);
+
+    RefPtr<RegisterID> iteratorSymbol = emitGetById(newTemporary(), iterable.get(), propertyNames().iteratorSymbol);
+    RefPtr<RegisterID> nextOrIndex = newTemporary();
+    RefPtr<RegisterID> iterator = newTemporary();
+    CallArguments args(*this, nullptr, 0);
+    move(args.thisRegister(), iterable.get());
+    emitIteratorOpen(iterator.get(), nextOrIndex.get(), iteratorSymbol.get(), args, node);
+
+    Ref<Label> loopDone = newLabel();
+    Ref<Label> tryStartLabel = newLabel();
+    Ref<Label> finallyViaThrowLabel = newLabel();
+    Ref<Label> finallyLabel = newLabel();
+    Ref<Label> catchLabel = newLabel();
+    Ref<Label> endCatchLabel = newLabel();
+
+    RefPtr<RegisterID> value = newTemporary();
+    emitLoad(value.get(), jsUndefined());
+
+    // RefPtr<RegisterID> iterator's lifetime must be longer than IteratorCloseContext.
+    FinallyContext finallyContext(*this, finallyLabel.get());
+    pushFinallyControlFlowScope(finallyContext);
+
+    {
+        Ref<LabelScope> scope = newLabelScope(LabelScope::Loop);
+
+        Ref<Label> loopStart = newLabel();
+        emitLabel(loopStart.get());
+        emitLabel(*scope->continueTarget());
+        emitLoopHint();
+
+        if (forLoopNode) {
+            RELEASE_ASSERT(forLoopNode->isForOfNode());
+            prepareLexicalScopeForNextForLoopIteration(forLoopNode, forLoopSymbolTable);
+            emitDebugHook(forLoopNode->lexpr());
+        }
+
+        {
+            RefPtr<RegisterID> done = newTemporary();
+            CallArguments nextArgs(*this, nullptr, 0);
+            move(nextArgs.thisRegister(), iterator.get());
+
+            emitIteratorNext(done.get(), value.get(), iterable.get(), nextOrIndex.get(), nextArgs, node);
+            emitJumpIfTrue(done.get(), loopDone.get());
+        }
+
+        emitLabel(tryStartLabel.get());
+        TryData* tryData = pushTry(tryStartLabel.get(), finallyViaThrowLabel.get(), HandlerType::SynthesizedFinally);
+        callBack(*this, value.get());
+        emitJump(loopStart.get());
+
+        // IteratorClose sequence for abrupt completions.
+        {
+            // Finally block for the enumeration.
+            emitLabel(finallyViaThrowLabel.get());
+            popTry(tryData, finallyViaThrowLabel.get());
+
+            Ref<Label> finallyBodyLabel = newLabel();
+            RefPtr<RegisterID> finallyExceptionRegister = newTemporary();
+
+            emitOutOfLineFinallyHandler(finallyContext.completionValueRegister(), finallyContext.completionTypeRegister(), tryData);
+            move(finallyExceptionRegister.get(), finallyContext.completionValueRegister());
+            emitJump(finallyBodyLabel.get());
+
+            emitLabel(finallyLabel.get());
+            moveEmptyValue(finallyExceptionRegister.get());
+
+            // Finally fall through case.
+            emitLabel(finallyBodyLabel.get());
+            restoreScopeRegister();
+
+            Ref<Label> finallyDone = newLabel();
+
+            RefPtr<RegisterID> returnMethod = emitGetById(newTemporary(), iterator.get(), propertyNames().returnKeyword);
+            emitJumpIfTrue(emitIsUndefined(newTemporary(), returnMethod.get()), finallyDone.get());
+
+            Ref<Label> returnCallTryStart = newLabel();
+            emitLabel(returnCallTryStart.get());
+            TryData* returnCallTryData = pushTry(returnCallTryStart.get(), catchLabel.get(), HandlerType::SynthesizedCatch);
+
+            CallArguments returnArguments(*this, nullptr);
+            move(returnArguments.thisRegister(), iterator.get());
+            emitCall(value.get(), returnMethod.get(), NoExpectedFunction, returnArguments, node->divot(), node->divotStart(), node->divotEnd(), DebuggableCall::No);
+
+            emitJumpIfTrue(emitIsObject(newTemporary(), value.get()), finallyDone.get());
+            emitThrowTypeError("Iterator result interface is not an object."_s);
+
+            emitLabel(finallyDone.get());
+            emitFinallyCompletion(finallyContext, endCatchLabel.get());
+
+            popTry(returnCallTryData, finallyDone.get());
+
+            // Catch block for exceptions that may be thrown while calling the return
+            // handler in the enumeration finally block. The only reason we need this
+            // catch block is because if entered the above finally block due to a thrown
+            // exception, then we want to re-throw the original exception on exiting
+            // the finally block. Otherwise, we'll let any new exception pass through.
+            {
+                emitLabel(catchLabel.get());
+
+                RefPtr<RegisterID> exceptionRegister = newTemporary();
+                emitOutOfLineFinallyHandler(exceptionRegister.get(), finallyContext.completionTypeRegister(), returnCallTryData);
+                // Since this is a synthesized catch block and we're guaranteed to never need
+                // to resolve any symbols from the scope, we can skip restoring the scope
+                // register here.
+
+                Ref<Label> throwLabel = newLabel();
+                emitJumpIfTrue(emitIsEmpty(newTemporary(), finallyExceptionRegister.get()), throwLabel.get());
+                move(exceptionRegister.get(), finallyExceptionRegister.get());
+
+                emitLabel(throwLabel.get());
+                emitThrow(exceptionRegister.get());
+
+                emitLabel(endCatchLabel.get());
+            }
+        }
+
+        bool breakLabelIsBound = scope->breakTargetMayBeBound();
+        if (breakLabelIsBound)
+            emitLabel(scope->breakTarget());
+        popFinallyControlFlowScope();
+        if (breakLabelIsBound) {
+            // IteratorClose sequence for break-ed control flow.
+            emitIteratorGenericClose(iterator.get(), node, EmitAwait::No);
         }
     }
@@ -4265 +4404 @@
     OpIsEmpty::emit(this, dst, src);
     return dst;
-}
-
-RegisterID* BytecodeGenerator::emitIteratorNext(RegisterID* dst, RegisterID* nextMethod, RegisterID* iterator, const ThrowableExpressionData* node, EmitAwait doEmitAwait)
-{
-    {
-        CallArguments nextArguments(*this, nullptr);
-        move(nextArguments.thisRegister(), iterator);
-        emitCall(dst, nextMethod, NoExpectedFunction, nextArguments, node->divot(), node->divotStart(), node->divotEnd(), DebuggableCall::No);
-
-        if (doEmitAwait == EmitAwait::Yes)
-            emitAwait(dst);
-    }
-    {
-        Ref<Label> typeIsObject = newLabel();
-        emitJumpIfTrue(emitIsObject(newTemporary(), dst), typeIsObject.get());
-        emitThrowTypeError("Iterator result interface is not an object."_s);
-        emitLabel(typeIsObject.get());
-    }
-    return dst;
-}
-
-RegisterID* BytecodeGenerator::emitIteratorNextWithValue(RegisterID* dst, RegisterID* nextMethod, RegisterID* iterator, RegisterID* value, const ThrowableExpressionData* node)
-{
-    {
-        CallArguments nextArguments(*this, nullptr, 1);
-        move(nextArguments.thisRegister(), iterator);
-        move(nextArguments.argumentRegister(0), value);
-        emitCall(dst, nextMethod, NoExpectedFunction, nextArguments, node->divot(), node->divotStart(), node->divotEnd(), DebuggableCall::No);
-    }
-
-    return dst;
-}
-
-void BytecodeGenerator::emitIteratorClose(RegisterID* iterator, const ThrowableExpressionData* node, EmitAwait doEmitAwait)
-{
-    Ref<Label> done = newLabel();
-    RefPtr<RegisterID> returnMethod = emitGetById(newTemporary(), iterator, propertyNames().returnKeyword);
-    emitJumpIfTrue(emitIsUndefined(newTemporary(), returnMethod.get()), done.get());
-
-    RefPtr<RegisterID> value = newTemporary();
-    CallArguments returnArguments(*this, nullptr);
-    move(returnArguments.thisRegister(), iterator);
-    emitCall(value.get(), returnMethod.get(), NoExpectedFunction, returnArguments, node->divot(), node->divotStart(), node->divotEnd(), DebuggableCall::No);
-
-    if (doEmitAwait == EmitAwait::Yes)
-        emitAwait(value.get());
-
-    emitJumpIfTrue(emitIsObject(newTemporary(), value.get()), done.get());
-    emitThrowTypeError("Iterator result interface is not an object."_s);
-    emitLabel(done.get());
 }
 
@@ -4556 +4645 @@
 }
 
-RegisterID* BytecodeGenerator::emitGetIterator(RegisterID* argument, ThrowableExpressionData* node)
+void BytecodeGenerator::emitIteratorOpen(RegisterID* iterator, RegisterID* nextOrIndex, RegisterID* symbolIterator, CallArguments& iterable, const ThrowableExpressionData* node)
+{
+    // Reserve space for call frame.
+    Vector<RefPtr<RegisterID>, CallFrame::headerSizeInRegisters, UnsafeVectorOverflow> callFrame;
+    for (int i = 0; i < CallFrame::headerSizeInRegisters; ++i)
+        callFrame.append(newTemporary());
+
+    if (shouldEmitDebugHooks())
+        emitDebugHook(WillExecuteExpression, node->divotStart());
+
+    emitExpressionInfo(node->divot(), node->divotStart(), node->divotEnd());
+    OpIteratorOpen::emit(this, iterator, nextOrIndex, symbolIterator, iterable.thisRegister(), iterable.stackOffset());
+}
+
+void BytecodeGenerator::emitIteratorNext(RegisterID* done, RegisterID* value, RegisterID* iterable, RegisterID* nextOrIndex, CallArguments& iterator, const ThrowableExpressionData* node)
+{
+    // Reserve space for call frame.
+    Vector<RefPtr<RegisterID>, CallFrame::headerSizeInRegisters, UnsafeVectorOverflow> callFrame;
+    for (int i = 0; i < CallFrame::headerSizeInRegisters; ++i)
+        callFrame.append(newTemporary());
+
+    if (shouldEmitDebugHooks())
+        emitDebugHook(WillExecuteExpression, node->divotStart());
+
+    emitExpressionInfo(node->divot(), node->divotStart(), node->divotEnd());
+    OpIteratorNext::emit(this, done, value, iterable, nextOrIndex, iterator.thisRegister(), iterator.stackOffset());
+}
+
+RegisterID* BytecodeGenerator::emitGetGenericIterator(RegisterID* argument, ThrowableExpressionData* node)
 {
     RefPtr<RegisterID> iterator = emitGetById(newTemporary(), argument, propertyNames().iteratorSymbol);
@@ -4563 +4680 @@
     return iterator.get();
 }
+
+RegisterID* BytecodeGenerator::emitIteratorGenericNext(RegisterID* dst, RegisterID* nextMethod, RegisterID* iterator, const ThrowableExpressionData* node, EmitAwait doEmitAwait)
+{
+    {
+        CallArguments nextArguments(*this, nullptr);
+        move(nextArguments.thisRegister(), iterator);
+        emitCall(dst, nextMethod, NoExpectedFunction, nextArguments, node->divot(), node->divotStart(), node->divotEnd(), DebuggableCall::No);
+
+        if (doEmitAwait == EmitAwait::Yes)
+            emitAwait(dst);
+    }
+    {
+        Ref<Label> typeIsObject = newLabel();
+        emitJumpIfTrue(emitIsObject(newTemporary(), dst), typeIsObject.get());
+        emitThrowTypeError("Iterator result interface is not an object."_s);
+        emitLabel(typeIsObject.get());
+    }
+    return dst;
+}
+
+RegisterID* BytecodeGenerator::emitIteratorGenericNextWithValue(RegisterID* dst, RegisterID* nextMethod, RegisterID* iterator, RegisterID* value, const ThrowableExpressionData* node)
+{
+    {
+        CallArguments nextArguments(*this, nullptr, 1);
+        move(nextArguments.thisRegister(), iterator);
+        move(nextArguments.argumentRegister(0), value);
+        emitCall(dst, nextMethod, NoExpectedFunction, nextArguments, node->divot(), node->divotStart(), node->divotEnd(), DebuggableCall::No);
+    }
+
+    return dst;
+}
+
+void BytecodeGenerator::emitIteratorGenericClose(RegisterID* iterator, const ThrowableExpressionData* node, EmitAwait doEmitAwait)
+{
+    Ref<Label> done = newLabel();
+    RefPtr<RegisterID> returnMethod = emitGetById(newTemporary(), iterator, propertyNames().returnKeyword);
+    emitJumpIfTrue(emitIsUndefined(newTemporary(), returnMethod.get()), done.get());
+
+    RefPtr<RegisterID> value = newTemporary();
+    CallArguments returnArguments(*this, nullptr);
+    move(returnArguments.thisRegister(), iterator);
+    emitCall(value.get(), returnMethod.get(), NoExpectedFunction, returnArguments, node->divot(), node->divotStart(), node->divotEnd(), DebuggableCall::No);
+
+    if (doEmitAwait == EmitAwait::Yes)
+        emitAwait(value.get());
+
+    emitJumpIfTrue(emitIsObject(newTemporary(), value.get()), done.get());
+    emitThrowTypeError("Iterator result interface is not an object."_s);
+    emitLabel(done.get());
+}
+
 
 RegisterID* BytecodeGenerator::emitGetAsyncIterator(RegisterID* argument, ThrowableExpressionData* node)
@@ -4576 +4744 @@
     emitLabel(asyncIteratorNotFound.get());
 
-    RefPtr<RegisterID> commonIterator = emitGetIterator(argument, node);
+    RefPtr<RegisterID> commonIterator = emitGetGenericIterator(argument, node);
     move(iterator.get(), commonIterator.get());
@@ -4605 +4773 @@
     RefPtr<RegisterID> value = newTemporary();
     {
-        RefPtr<RegisterID> iterator = parseMode() == SourceParseMode::AsyncGeneratorBodyMode ? emitGetAsyncIterator(argument, node) : emitGetIterator(argument, node);
+        RefPtr<RegisterID> iterator = parseMode() == SourceParseMode::AsyncGeneratorBodyMode ? emitGetAsyncIterator(argument, node) : emitGetGenericIterator(argument, node);
         RefPtr<RegisterID> nextMethod = emitGetById(newTemporary(), iterator.get(), propertyNames().next);
@@ -4643 +4811 @@
 
                     EmitAwait emitAwaitInIteratorClose = parseMode() == SourceParseMode::AsyncGeneratorBodyMode ? EmitAwait::Yes : EmitAwait::No;
-                    emitIteratorClose(iterator.get(), node, emitAwaitInIteratorClose);
+                    emitIteratorGenericClose(iterator.get(), node, emitAwaitInIteratorClose);
 
                     emitThrowTypeError("Delegated generator does not have a 'throw' method."_s);
@@ -4704 +4872 @@
 
             emitLabel(nextElement.get());
-            emitIteratorNextWithValue(value.get(), nextMethod.get(), iterator.get(), value.get(), node);
+            emitIteratorGenericNextWithValue(value.get(), nextMethod.get(), iterator.get(), value.get(), node);
 
             emitLabel(branchOnResult.get());
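The generic path that emitGenericEnumeration still emits follows the ECMAScript iteration protocol: call next() until done is true, and perform IteratorClose (call the iterator's return() method) when the loop exits abruptly via break or a thrown exception, but not on normal exhaustion. A self-contained C++ sketch of that control flow, using toy types rather than WebKit API:

```cpp
#include <functional>
#include <memory>
#include <vector>

// Toy model of the control flow the for-of bytecode encodes:
// next() until done, IteratorClose only on abrupt exit.
struct IterResult { bool done; int value; };

struct Iterator {
    std::function<IterResult()> next;
    std::function<void()> close; // models the iterator's return() method
};

// `body` returns false to model `break`; throwing models an abrupt completion.
template<typename Body>
std::vector<int> forOf(Iterator it, Body body)
{
    std::vector<int> seen;
    while (true) {
        IterResult r = it.next();
        if (r.done)
            return seen; // normal completion: no IteratorClose
        try {
            if (!body(r.value)) {
                if (it.close)
                    it.close(); // IteratorClose for break
                return seen;
            }
        } catch (...) {
            if (it.close)
                it.close(); // IteratorClose for a thrown exception
            throw;
        }
        seen.push_back(r.value);
    }
}

// Helper building an iterator over 0..n-1 that counts close() calls.
inline Iterator countingIterator(int n, int& closes)
{
    auto i = std::make_shared<int>(0);
    return {
        [i, n]() -> IterResult {
            if (*i >= n)
                return { true, 0 };
            return { false, (*i)++ };
        },
        [&closes] { ++closes; }
    };
}
```

The fast path added in this changeset keeps the same observable protocol, but routes the open and next steps through op_iterator_open / op_iterator_next so arrays can skip the generic calls entirely.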
  • trunk/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h (r260181 → r260323)

@@ -829 +829 @@
             RegisterID* valueRegister, RegisterID* getterRegister, RegisterID* setterRegister, unsigned options, const JSTextPosition&);
 
+        void emitGenericEnumeration(ThrowableExpressionData* enumerationNode, ExpressionNode* subjectNode, const ScopedLambda<void(BytecodeGenerator&, RegisterID*)>& callBack, ForOfNode* = nullptr, RegisterID* forLoopSymbolTable = nullptr);
         void emitEnumeration(ThrowableExpressionData* enumerationNode, ExpressionNode* subjectNode, const ScopedLambda<void(BytecodeGenerator&, RegisterID*)>& callBack, ForOfNode* = nullptr, RegisterID* forLoopSymbolTable = nullptr);
@@ -901 +902 @@
         void emitRequireObjectCoercible(RegisterID* value, const String& error);
 
-        RegisterID* emitIteratorNext(RegisterID* dst, RegisterID* nextMethod, RegisterID* iterator, const ThrowableExpressionData* node, JSC::EmitAwait = JSC::EmitAwait::No);
-        RegisterID* emitIteratorNextWithValue(RegisterID* dst, RegisterID* nextMethod, RegisterID* iterator, RegisterID* value, const ThrowableExpressionData* node);
-        void emitIteratorClose(RegisterID* iterator, const ThrowableExpressionData* node, EmitAwait = EmitAwait::No);
+        void emitIteratorOpen(RegisterID* iterator, RegisterID* nextOrIndex, RegisterID* symbolIterator, CallArguments& iterable, const ThrowableExpressionData*);
+        void emitIteratorNext(RegisterID* done, RegisterID* value, RegisterID* iterable, RegisterID* nextOrIndex, CallArguments& iterator, const ThrowableExpressionData*);
+
+        RegisterID* emitGetGenericIterator(RegisterID*, ThrowableExpressionData*);
+        RegisterID* emitGetAsyncIterator(RegisterID*, ThrowableExpressionData*);
+
+        RegisterID* emitIteratorGenericNext(RegisterID* dst, RegisterID* nextMethod, RegisterID* iterator, const ThrowableExpressionData* node, JSC::EmitAwait = JSC::EmitAwait::No);
+        RegisterID* emitIteratorGenericNextWithValue(RegisterID* dst, RegisterID* nextMethod, RegisterID* iterator, RegisterID* value, const ThrowableExpressionData* node);
+        void emitIteratorGenericClose(RegisterID* iterator, const ThrowableExpressionData* node, EmitAwait = EmitAwait::No);
 
         RegisterID* emitRestParameter(RegisterID* result, unsigned numParametersToSkip);
@@ -952 +959 @@
         void emitPushCatchScope(VariableEnvironment&);
         void emitPopCatchScope(VariableEnvironment&);
-
-        RegisterID* emitGetIterator(RegisterID*, ThrowableExpressionData*);
-        RegisterID* emitGetAsyncIterator(RegisterID*, ThrowableExpressionData*);
 
         void emitAwait(RegisterID*);
  • trunk/Source/JavaScriptCore/bytecompiler/NodesCodegen.cpp (r260275 → r260323)

@@ -4821 +4821 @@
 
     if (m_targetPatterns.isEmpty()) {
-        generator.emitIteratorClose(iterator.get(), this);
+        generator.emitIteratorGenericClose(iterator.get(), this);
         return;
     }
@@ -4837 +4837 @@
 
             RefPtr<RegisterID> value = generator.newTemporary();
-            generator.emitIteratorNext(value.get(), nextMethod.get(), iterator.get(), this);
+            generator.emitIteratorGenericNext(value.get(), nextMethod.get(), iterator.get(), this);
             generator.emitGetById(done.get(), value.get(), generator.propertyNames().done);
             generator.emitJumpIfTrue(done.get(), iterationSkipped.get());
@@ -4873 +4873 @@
 
             RefPtr<RegisterID> value = generator.newTemporary();
-            generator.emitIteratorNext(value.get(), nextMethod.get(), iterator.get(), this);
+            generator.emitIteratorGenericNext(value.get(), nextMethod.get(), iterator.get(), this);
             generator.emitGetById(done.get(), value.get(), generator.propertyNames().done);
             generator.emitJumpIfTrue(done.get(), iterationDone.get());
@@ -4891 +4891 @@
     Ref<Label> iteratorClosed = generator.newLabel();
     generator.emitJumpIfTrue(done.get(), iteratorClosed.get());
-    generator.emitIteratorClose(iterator.get(), this);
+    generator.emitIteratorGenericClose(iterator.get(), this);
     generator.emitLabel(iteratorClosed.get());
 }
  • trunk/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h (r260321 → r260323)

@@ -3760 +3760 @@
     }
     
-    case CheckCell: {
+    case CheckIsConstant: {
         JSValue value = forNode(node->child1()).value();
-        if (value == node->cellOperand()->value()) {
+        if (value == node->constant()->value()) {
             m_state.setShouldTryConstantFolding(true);
-            ASSERT(value);
-            break;
-        }
-        filterByValue(node->child1(), *node->cellOperand());
+            ASSERT(value || forNode(node->child1()).m_type == SpecEmpty);
+            break;
+        }
+        filterByValue(node->child1(), *node->constant());
         break;
     }
@@ -4398 +4398 @@
         }
     }
-    for (size_t i = m_state.numberOfArguments(); i--;)
-        functor(m_state.argument(i));
-    for (size_t i = m_state.numberOfLocals(); i--;)
-        functor(m_state.local(i));
+    for (size_t i = m_state.size(); i--;)
+        functor(m_state.atIndex(i));
 }
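The CheckIsConstant rule above either folds the check away (when abstract interpretation has already proven the value equal to the expected constant) or narrows the abstract state to that constant so downstream code may assume the check passed. A minimal model of that fold-or-filter decision; the single-value AbstractValue here is a deliberate simplification of the DFG's real lattice:

```cpp
#include <optional>

// Minimal model of the fold-or-filter logic in the CheckIsConstant case.
// The real DFG AbstractValue tracks types, structures, and more; this toy
// version tracks at most one proven integer value.
struct AbstractValue {
    std::optional<int> proven; // nullopt = value unknown
};

// Returns true when the check can be constant-folded away; otherwise it
// narrows (filters) the abstract value, mirroring filterByValue.
inline bool abstractCheckIsConstant(AbstractValue& v, int expected)
{
    if (v.proven && *v.proven == expected)
        return true; // fold: the check is guaranteed to pass
    v.proven = expected; // filter: later code may assume the check succeeded
    return false;
}
```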
  • trunk/Source/JavaScriptCore/dfg/DFGAtTailAbstractState.h (r254735 → r260323)

     }
     
+    unsigned size() const { return m_block->valuesAtTail.size(); }
     unsigned numberOfArguments() const { return m_block->valuesAtTail.numberOfArguments(); }
     unsigned numberOfLocals() const { return m_block->valuesAtTail.numberOfLocals(); }
+    unsigned numberOfTmps() const { return m_block->valuesAtTail.numberOfTmps(); }
+    AbstractValue& atIndex(size_t index) { return m_block->valuesAtTail.at(index); }
     AbstractValue& operand(Operand operand) { return m_block->valuesAtTail.operand(operand); }
     AbstractValue& local(size_t index) { return m_block->valuesAtTail.local(index); }
     AbstractValue& argument(size_t index) { return m_block->valuesAtTail.argument(index); }
+    AbstractValue& tmp(size_t index) { return m_block->valuesAtTail.tmp(index); }
     
     void clobberStructures()
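The new size()/atIndex() accessors let callers walk arguments, locals, and the checkpoint tmps as one flat index space instead of separate per-kind loops. A sketch of that layout, assuming an [arguments][locals][tmps] ordering (the actual ordering is an internal detail of the Operands container):

```cpp
#include <cstddef>

// Sketch of a flat index space over arguments, locals, and tmps, assuming
// an [arguments][locals][tmps] layout; the real Operands layout is internal.
enum class OperandKind { Argument, Local, Tmp };

struct StateLayout {
    size_t arguments;
    size_t locals;
    size_t tmps;

    constexpr size_t size() const { return arguments + locals + tmps; }

    // Map a flat index back to the kind of slot it addresses.
    constexpr OperandKind kindAtIndex(size_t index) const
    {
        if (index < arguments)
            return OperandKind::Argument;
        if (index < arguments + locals)
            return OperandKind::Local;
        return OperandKind::Tmp;
    }
};
```

With this shape, a single `for (size_t i = state.size(); i--;) functor(state.atIndex(i))` loop visits every value, including tmps that the old argument-plus-local loops would have skipped.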
  • TabularUnified trunk/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp

    r260321 r260323   3535#include "ByValInfo.h" 3636#include "BytecodeGenerator.h" 37#include "BytecodeOperandsForCheckpoint.h" 3738#include "BytecodeUseDef.h" 3839#include "CacheableIdentifierInlines.h"   160161    // Helper for min and max. 161162    template<typename ChecksFunctor> 162    bool handleMinMax(VirtualRegister result, NodeType op, int registerOffset, int argumentCountIncludingThis, const ChecksFunctor& insertChecks); 163    bool handleMinMax(Operand result, NodeType op, int registerOffset, int argumentCountIncludingThis, const ChecksFunctor& insertChecks); 163164    164165    void refineStatically(CallLinkStatus&, Node* callTarget);   177178    enum Terminality { Terminal, NonTerminal }; 178179    Terminality handleCall( 179        VirtualRegister result, NodeType op, InlineCallFrame::Kind, unsigned instructionSize, 180        Operand result, NodeType op, InlineCallFrame::Kind, unsigned instructionSize, 180181        Node* callTarget, int argumentCountIncludingThis, int registerOffset, CallLinkStatus, 181182        SpeculatedType prediction);   191192    unsigned inliningCost(CallVariant, int argumentCountIncludingThis, InlineCallFrame::Kind); // Return UINT_MAX if it's not an inlining candidate. By convention, intrinsics have a cost of 1. 192193    // Handle inlining. Return true if it succeeded, false if we need to plant a call. 
193    bool handleVarargsInlining(Node* callTargetNode, VirtualRegister result, const CallLinkStatus&, int registerOffset, VirtualRegister thisArgument, VirtualRegister argumentsArgument, unsigned argumentsOffset, NodeType callOp, InlineCallFrame::Kind); 194    bool handleVarargsInlining(Node* callTargetNode, Operand result, const CallLinkStatus&, int registerOffset, VirtualRegister thisArgument, VirtualRegister argumentsArgument, unsigned argumentsOffset, NodeType callOp, InlineCallFrame::Kind); 194195    unsigned getInliningBalance(const CallLinkStatus&, CodeSpecializationKind); 195196    enum class CallOptimizationResult { OptimizedToJump, Inlined, DidNothing }; 196    CallOptimizationResult handleCallVariant(Node* callTargetNode, VirtualRegister result, CallVariant, int registerOffset, VirtualRegister thisArgument, int argumentCountIncludingThis, BytecodeIndex nextIndex, InlineCallFrame::Kind, SpeculatedType prediction, unsigned& inliningBalance, BasicBlock* continuationBlock, bool needsToCheckCallee); 197    CallOptimizationResult handleInlining(Node* callTargetNode, VirtualRegister result, const CallLinkStatus&, int registerOffset, VirtualRegister thisArgument, int argumentCountIncludingThis, BytecodeIndex nextIndex, NodeType callOp, InlineCallFrame::Kind, SpeculatedType prediction); 197    CallOptimizationResult handleCallVariant(Node* callTargetNode, Operand result, CallVariant, int registerOffset, VirtualRegister thisArgument, int argumentCountIncludingThis, BytecodeIndex nextIndex, InlineCallFrame::Kind, SpeculatedType prediction, unsigned& inliningBalance, BasicBlock* continuationBlock, bool needsToCheckCallee); 198    CallOptimizationResult handleInlining(Node* callTargetNode, Operand result, const CallLinkStatus&, int registerOffset, VirtualRegister thisArgument, int argumentCountIncludingThis, BytecodeIndex nextIndex, NodeType callOp, InlineCallFrame::Kind, SpeculatedType prediction); 198199    template<typename ChecksFunctor> 199    void 
inlineCall(Node* callTargetNode, VirtualRegister result, CallVariant, int registerOffset, int argumentCountIncludingThis, InlineCallFrame::Kind, BasicBlock* continuationBlock, const ChecksFunctor& insertChecks); 200    void inlineCall(Node* callTargetNode, Operand result, CallVariant, int registerOffset, int argumentCountIncludingThis, InlineCallFrame::Kind, BasicBlock* continuationBlock, const ChecksFunctor& insertChecks); 200201    // Handle intrinsic functions. Return true if it succeeded, false if we need to plant a call. 201202    template<typename ChecksFunctor> 202    bool handleIntrinsicCall(Node* callee, VirtualRegister result, Intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction, const ChecksFunctor& insertChecks); 203    bool handleIntrinsicCall(Node* callee, Operand result, Intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction, const ChecksFunctor& insertChecks); 203204    template<typename ChecksFunctor> 204    bool handleDOMJITCall(Node* callee, VirtualRegister result, const DOMJIT::Signature*, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction, const ChecksFunctor& insertChecks); 205    bool handleDOMJITCall(Node* callee, Operand result, const DOMJIT::Signature*, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction, const ChecksFunctor& insertChecks); 205206    template<typename ChecksFunctor> 206    bool handleIntrinsicGetter(VirtualRegister result, SpeculatedType prediction, const GetByIdVariant& intrinsicVariant, Node* thisNode, const ChecksFunctor& insertChecks); 207    bool handleIntrinsicGetter(Operand result, SpeculatedType prediction, const GetByIdVariant& intrinsicVariant, Node* thisNode, const ChecksFunctor& insertChecks); 207208    template<typename ChecksFunctor> 208    bool handleTypedArrayConstructor(VirtualRegister result, InternalFunction*, int registerOffset, int argumentCountIncludingThis, TypedArrayType, const 
ChecksFunctor& insertChecks); 209    bool handleTypedArrayConstructor(Operand result, InternalFunction*, int registerOffset, int argumentCountIncludingThis, TypedArrayType, const ChecksFunctor& insertChecks); 209210    template<typename ChecksFunctor> 210    bool handleConstantInternalFunction(Node* callTargetNode, VirtualRegister result, InternalFunction*, int registerOffset, int argumentCountIncludingThis, CodeSpecializationKind, SpeculatedType, const ChecksFunctor& insertChecks); 211    bool handleConstantInternalFunction(Node* callTargetNode, Operand result, InternalFunction*, int registerOffset, int argumentCountIncludingThis, CodeSpecializationKind, SpeculatedType, const ChecksFunctor& insertChecks); 211212    Node* handlePutByOffset(Node* base, unsigned identifier, PropertyOffset, Node* value); 212213    Node* handleGetByOffset(SpeculatedType, Node* base, unsigned identifierNumber, PropertyOffset, NodeType = GetByOffset); 213    bool handleDOMJITGetter(VirtualRegister result, const GetByIdVariant&, Node* thisNode, unsigned identifierNumber, SpeculatedType prediction); 214    bool handleDOMJITGetter(Operand result, const GetByIdVariant&, Node* thisNode, unsigned identifierNumber, SpeculatedType prediction); 214215    bool handleModuleNamespaceLoad(VirtualRegister result, SpeculatedType, Node* base, GetByStatus); 215216   297298        // At this point, it's again OK to OSR exit. 
         m_exitOK = true;
-        addToGraph(ExitOK);
-
         processSetLocalQueue();
     }
@@
     }

-    Node* get(VirtualRegister operand)
+    Node* get(Operand operand)
     {
         if (operand.isConstant()) {
-            unsigned constantIndex = operand.toConstantIndex();
+            unsigned constantIndex = operand.virtualRegister().toConstantIndex();
             unsigned oldSize = m_constants.size();
             if (constantIndex >= oldSize || !m_constants[constantIndex]) {
                 const CodeBlock& codeBlock = *m_inlineStackTop->m_codeBlock;
-                JSValue value = codeBlock.getConstant(operand);
-                SourceCodeRepresentation sourceCodeRepresentation = codeBlock.constantSourceCodeRepresentation(operand);
+                JSValue value = codeBlock.getConstant(operand.virtualRegister());
+                SourceCodeRepresentation sourceCodeRepresentation = codeBlock.constantSourceCodeRepresentation(operand.virtualRegister());
                 if (constantIndex >= oldSize) {
                     m_constants.grow(constantIndex + 1);
@@
             if (!inlineCallFrame()->isClosureCall) {
                 JSFunction* callee = inlineCallFrame()->calleeConstant();
-                if (operand.offset() == CallFrameSlot::callee)
+                if (operand == VirtualRegister(CallFrameSlot::callee))
                     return weakJSConstant(callee);
             }
-        } else if (operand.offset() == CallFrameSlot::callee) {
+        } else if (operand == VirtualRegister(CallFrameSlot::callee)) {
             // We have to do some constant-folding here because this enables CreateThis folding. Note
             // that we don't have such watchpoint-based folding for inlined uses of Callee, since in that
@@
     }

-    Node* set(VirtualRegister operand, Node* value, SetMode setMode = NormalSet)
+    Node* set(Operand operand, Node* value, SetMode setMode = NormalSet)
     {
         return setDirect(m_inlineStackTop->remapOperand(operand), value, setMode);
@@
     }

+    Node* jsConstant(FrozenValue* constantValue)
+    {
+        return addToGraph(JSConstant, OpInfo(constantValue));
+    }
+
     // Assumes that the constant should be strongly marked.
     Node* jsConstant(JSValue constantValue)
     {
-        return addToGraph(JSConstant, OpInfo(m_graph.freezeStrong(constantValue)));
+        return jsConstant(m_graph.freezeStrong(constantValue));
     }

     Node* weakJSConstant(JSValue constantValue)
     {
-        return addToGraph(JSConstant, OpInfo(m_graph.freeze(constantValue)));
+        return jsConstant(m_graph.freeze(constantValue));
     }
@@
     Node* addCall(
-        VirtualRegister result, NodeType op, OpInfo opInfo, Node* callee, int argCount, int registerOffset,
+        Operand result, NodeType op, OpInfo opInfo, Node* callee, int argCount, int registerOffset,
         SpeculatedType prediction)
     {
@@
         return getPrediction(m_currentIndex);
     }

     ArrayMode getArrayMode(Array::Action action)
     {
@@
         BasicBlock* m_continuationBlock;

-        VirtualRegister m_returnValue;
+        Operand m_returnValue;

         // Speculations about variable types collected from the profiled code block,
@@
             CodeBlock* profiledBlock,
             JSFunction* callee, // Null if this is a closure call.
-            VirtualRegister returnValueVR,
+            Operand returnValue,
             VirtualRegister inlineCallFrameStart,
             int argumentCountIncludingThis,
@@
     BasicBlock* blockPtr = block.ptr();
     m_graph.appendBlock(WTFMove(block));
+    VERBOSE_LOG("Adding new untargetable block: ", blockPtr->index, "\n");
     return blockPtr;
 }
@@
 {
     auto bytecode = pc->as<CallOp>();
-    Node* callTarget = get(bytecode.m_callee);
-    int registerOffset = -static_cast<int>(bytecode.m_argv);
+    Node* callTarget = get(calleeFor(bytecode, m_currentIndex.checkpoint()));
+    int registerOffset = -static_cast<int>(stackOffsetInRegistersForCall(bytecode, m_currentIndex.checkpoint()));

     CallLinkStatus callLinkStatus = CallLinkStatus::computeFor(
@@
     InlineCallFrame::Kind kind = InlineCallFrame::kindFor(callMode);

-    return handleCall(bytecode.m_dst, op, kind, pc->size(), callTarget,
-        bytecode.m_argc, registerOffset, callLinkStatus, getPrediction());
+    return handleCall(destinationFor(bytecode, m_currentIndex.checkpoint(), JITType::DFGJIT), op, kind, pc->size(), callTarget,
+        argumentCountIncludingThisFor(bytecode, m_currentIndex.checkpoint()), registerOffset, callLinkStatus, getPrediction());
 }
@@
 ByteCodeParser::Terminality ByteCodeParser::handleCall(
-    VirtualRegister result, NodeType op, InlineCallFrame::Kind kind, unsigned instructionSize,
+    Operand result, NodeType op, InlineCallFrame::Kind kind, unsigned instructionSize,
     Node* callTarget, int argumentCountIncludingThis, int registerOffset,
     CallLinkStatus callLinkStatus, SpeculatedType prediction)
@@
     ASSERT(calleeCell);
-    addToGraph(CheckCell, OpInfo(m_graph.freeze(calleeCell)), callTargetForCheck);
+    addToGraph(CheckIsConstant, OpInfo(m_graph.freeze(calleeCell)), callTargetForCheck);
     if (thisArgument)
         addToGraph(Phantom, thisArgument);
@@
 template<typename ChecksFunctor>
-void ByteCodeParser::inlineCall(Node* callTargetNode, VirtualRegister result, CallVariant callee, int registerOffset, int argumentCountIncludingThis, InlineCallFrame::Kind kind, BasicBlock* continuationBlock, const ChecksFunctor& insertChecks)
+void ByteCodeParser::inlineCall(Node* callTargetNode, Operand result, CallVariant callee, int registerOffset, int argumentCountIncludingThis, InlineCallFrame::Kind kind, BasicBlock* continuationBlock, const ChecksFunctor& insertChecks)
 {
     const Instruction* savedCurrentInstruction = m_currentInstruction;
@@
     if (result.isValid())
-        result = m_inlineStackTop->remapOperand(result).virtualRegister();
+        result = m_inlineStackTop->remapOperand(result);

     VariableAccessData* calleeVariable = nullptr;
@@
-ByteCodeParser::CallOptimizationResult ByteCodeParser::handleCallVariant(Node* callTargetNode, VirtualRegister result, CallVariant callee, int registerOffset, VirtualRegister thisArgument, int argumentCountIncludingThis, BytecodeIndex nextIndex, InlineCallFrame::Kind kind, SpeculatedType prediction, unsigned& inliningBalance, BasicBlock* continuationBlock, bool needsToCheckCallee)
+ByteCodeParser::CallOptimizationResult ByteCodeParser::handleCallVariant(Node* callTargetNode, Operand result, CallVariant callee, int registerOffset, VirtualRegister thisArgument, int argumentCountIncludingThis, BytecodeIndex nextIndex, InlineCallFrame::Kind kind, SpeculatedType prediction, unsigned& inliningBalance, BasicBlock* continuationBlock, bool needsToCheckCallee)
 {
     VERBOSE_LOG("    Considering callee ", callee, "\n");
@@
-bool ByteCodeParser::handleVarargsInlining(Node* callTargetNode, VirtualRegister result,
+bool ByteCodeParser::handleVarargsInlining(Node* callTargetNode, Operand result,
     const CallLinkStatus& callLinkStatus, int firstFreeReg, VirtualRegister thisArgument,
     VirtualRegister argumentsArgument, unsigned argumentsOffset,
@@
 ByteCodeParser::CallOptimizationResult ByteCodeParser::handleInlining(
-    Node* callTargetNode, VirtualRegister result, const CallLinkStatus& callLinkStatus,
+    Node* callTargetNode, Operand result, const CallLinkStatus& callLinkStatus,
     int registerOffset, VirtualRegister thisArgument,
     int argumentCountIncludingThis,
@@
 template<typename ChecksFunctor>
-bool ByteCodeParser::handleMinMax(VirtualRegister result, NodeType op, int registerOffset, int argumentCountIncludingThis, const ChecksFunctor& insertChecks)
+bool ByteCodeParser::handleMinMax(Operand result, NodeType op, int registerOffset, int argumentCountIncludingThis, const ChecksFunctor& insertChecks)
 {
     ASSERT(op == ArithMin || op == ArithMax);
@@
 template<typename ChecksFunctor>
-bool ByteCodeParser::handleIntrinsicCall(Node* callee, VirtualRegister result, Intrinsic intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction, const ChecksFunctor& insertChecks)
+bool ByteCodeParser::handleIntrinsicCall(Node* callee, Operand result, Intrinsic intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction, const ChecksFunctor& insertChecks)
 {
     VERBOSE_LOG("       The intrinsic is ", intrinsic, "\n");
@@
             insertChecks();

-            IterationKind kind;
-            switch (intrinsic) {
-            case ArrayValuesIntrinsic:
-            case TypedArrayValuesIntrinsic:
-                kind = IterationKind::Values;
-                break;
-            case ArrayKeysIntrinsic:
-            case TypedArrayKeysIntrinsic:
-                kind = IterationKind::Keys;
-                break;
-            case ArrayEntriesIntrinsic:
-            case TypedArrayEntriesIntrinsic:
-                kind = IterationKind::Entries;
-                break;
-            default:
-                RELEASE_ASSERT_NOT_REACHED();
-                break;
-            }
+            Optional<IterationKind> kind = interationKindForIntrinsic(intrinsic);
+            RELEASE_ASSERT(!!kind);

             // Add the constant before exit becomes invalid because we may want to insert (redundant) checks on it in Fixup.
-            Node* kindNode = jsConstant(jsNumber(static_cast<uint32_t>(kind)));
+            Node* kindNode = jsConstant(jsNumber(static_cast<uint32_t>(*kind)));

             // We don't have an existing error string.
@@
                     // effects of slice require that we perform a Get(array, "constructor") and we can skip
                     // that if we're an original array structure. (We can relax this in the future by using
-                    // TryGetById and CheckCell).
+                    // TryGetById and CheckIsConstant).
                     //
                     // 2. We check that the array we're calling slice on has the same global object as the lexical
@@
                 Node* actualProperty = addToGraph(TryGetById, OpInfo(CacheableIdentifier::createFromImmortalIdentifier(execPropertyID)), OpInfo(SpecFunction), Edge(regExpObject, CellUse));
                 FrozenValue* regExpPrototypeExec = m_graph.freeze(globalObject->regExpProtoExecFunction());
-                addToGraph(CheckCell, OpInfo(regExpPrototypeExec), Edge(actualProperty, CellUse));
+                addToGraph(CheckIsConstant, OpInfo(regExpPrototypeExec), Edge(actualProperty, CellUse));
             }
@@
 template<typename ChecksFunctor>
-bool ByteCodeParser::handleDOMJITCall(Node* callTarget, VirtualRegister result, const DOMJIT::Signature* signature, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction, const ChecksFunctor& insertChecks)
+bool ByteCodeParser::handleDOMJITCall(Node* callTarget, Operand result, const DOMJIT::Signature* signature, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction, const ChecksFunctor& insertChecks)
 {
     if (argumentCountIncludingThis != static_cast<int>(1 + signature->argumentCount))
@@
 template<typename ChecksFunctor>
-bool ByteCodeParser::handleIntrinsicGetter(VirtualRegister result, SpeculatedType prediction, const GetByIdVariant& variant, Node* thisNode, const ChecksFunctor& insertChecks)
+bool ByteCodeParser::handleIntrinsicGetter(Operand result, SpeculatedType prediction, const GetByIdVariant& variant, Node* thisNode, const ChecksFunctor& insertChecks)
 {
     switch (variant.intrinsic()) {
@@
-bool ByteCodeParser::handleDOMJITGetter(VirtualRegister result, const GetByIdVariant& variant, Node* thisNode, unsigned identifierNumber, SpeculatedType prediction)
+bool ByteCodeParser::handleDOMJITGetter(Operand result, const GetByIdVariant& variant, Node* thisNode, unsigned identifierNumber, SpeculatedType prediction)
 {
     if (!variant.domAttribute())
@@
     addToGraph(CheckStructure, OpInfo(m_graph.addStructureSet(variant.structureSet())), thisNode);

-    // We do not need to emit CheckCell thingy here. When the custom accessor is replaced to different one, Structure transition occurs.
+    // We do not need to emit CheckIsConstant thingy here. When the custom accessor is replaced to different one, Structure transition occurs.
     addToGraph(CheckSubClass, OpInfo(domAttribute->classInfo), thisNode);
@@
     if (m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, BadCell))
         return false;
-    addToGraph(CheckCell, OpInfo(m_graph.freeze(getById.moduleNamespaceObject())), Edge(base, CellUse));
+    addToGraph(CheckIsConstant, OpInfo(m_graph.freeze(getById.moduleNamespaceObject())), Edge(base, CellUse));

     addToGraph(FilterGetByStatus, OpInfo(m_graph.m_plan.recordedStatuses().addGetByStatus(currentCodeOrigin(), getById)), base);
@@
 template<typename ChecksFunctor>
 bool ByteCodeParser::handleTypedArrayConstructor(
-    VirtualRegister result, InternalFunction* function, int registerOffset,
+    Operand result, InternalFunction* function, int registerOffset,
     int argumentCountIncludingThis, TypedArrayType type, const ChecksFunctor& insertChecks)
 {
@@
 template<typename ChecksFunctor>
 bool ByteCodeParser::handleConstantInternalFunction(
-    Node* callTargetNode, VirtualRegister result, InternalFunction* function, int registerOffset,
+    Node* callTargetNode, Operand result, InternalFunction* function, int registerOffset,
     int argumentCountIncludingThis, CodeSpecializationKind kind, SpeculatedType prediction, const ChecksFunctor& insertChecks)
 {
@@
     if (handleIntrinsicGetter(destination, prediction, variant, base,
             [&] () {
-                addToGraph(CheckCell, OpInfo(m_graph.freeze(variant.intrinsicFunction())), getter);
+                addToGraph(CheckIsConstant, OpInfo(m_graph.freeze(variant.intrinsicFunction())), getter);
             })) {
         addToGraph(Phantom, base);
@@
                     FrozenValue* frozen = m_graph.freeze(cachedFunction);
-                    addToGraph(CheckCell, OpInfo(frozen), callee);
+                    addToGraph(CheckIsConstant, OpInfo(frozen), callee);

                     function = static_cast<JSFunction*>(cachedFunction);
@@
                         && cachedFunction == (bytecode.m_isInternalPromise ? globalObject->internalPromiseConstructor() : globalObject->promiseConstructor())) {
                         FrozenValue* frozen = m_graph.freeze(cachedFunction);
-                        addToGraph(CheckCell, OpInfo(frozen), callee);
+                        addToGraph(CheckIsConstant, OpInfo(frozen), callee);

                         promiseConstructor = jsCast<JSPromiseConstructor*>(cachedFunction);
@@
                         FrozenValue* frozen = m_graph.freeze(cachedFunction);
-                        addToGraph(CheckCell, OpInfo(frozen), callee);
+                        addToGraph(CheckIsConstant, OpInfo(frozen), callee);

                         function = static_cast<JSFunction*>(cachedFunction);
@@
             // recompilation.
             if (JSObject* commonPrototype = status.commonPrototype()) {
-                addToGraph(CheckCell, OpInfo(m_graph.freeze(commonPrototype)), prototype);
+                addToGraph(CheckIsConstant, OpInfo(m_graph.freeze(commonPrototype)), prototype);

                 bool allOK = true;
@@
                         FrozenValue* frozen = m_graph.freezeStrong(identifier.cell());
                         if (identifier.isSymbolCell())
-                            addToGraph(CheckCell, OpInfo(frozen), property);
+                            addToGraph(CheckIsConstant, OpInfo(frozen), property);
                         else
                             addToGraph(CheckIdent, OpInfo(uid), property);
@@
                         FrozenValue* frozen = m_graph.freezeStrong(identifier.cell());
                         if (identifier.isSymbolCell())
-                            addToGraph(CheckCell, OpInfo(frozen), property);
+                            addToGraph(CheckIsConstant, OpInfo(frozen), property);
                         else
                             addToGraph(CheckIdent, OpInfo(uid), property);
@@
                 LAST_OPCODE(op_tail_call_forward_arguments);
         }

         case op_construct_varargs: {
             handleVarargsCall<OpConstructVarargs>(currentInstruction, ConstructVarargs, CallMode::Construct);
@@
             NEXT_OPCODE(op_construct_varargs);
         }

         case op_call_eval: {
             auto bytecode = currentInstruction->as<OpCallEval>();
@@
             NEXT_OPCODE(op_call_eval);
         }

+        case op_iterator_open: {
+            auto bytecode = currentInstruction->as<OpIteratorOpen>();
+            auto& metadata = bytecode.metadata(codeBlock);
+            uint32_t seenModes = metadata.m_iterationMetadata.seenModes;
+
+            unsigned numberOfRemainingModes = WTF::bitCount(seenModes);
+            ASSERT(numberOfRemainingModes <= numberOfIterationModes);
+            bool generatedCase = false;
+
+            JSGlobalObject* globalObject = m_inlineStackTop->m_codeBlock->globalObjectFor(currentCodeOrigin());
+            BasicBlock* genericBlock = nullptr;
+            BasicBlock* continuation = allocateUntargetableBlock();
+
+            BytecodeIndex startIndex = m_currentIndex;
+
+            Node* symbolIterator = getDirect(bytecode.m_symbolIterator);
+            auto& arrayIteratorProtocolWatchpointSet = globalObject->arrayIteratorProtocolWatchpointSet();
+
+            if (seenModes & IterationMode::FastArray && arrayIteratorProtocolWatchpointSet.isStillValid()) {
+                // First set up the watchpoint conditions we need for correctness.
+                m_graph.watchpoints().addLazily(arrayIteratorProtocolWatchpointSet);
+
+                ASSERT_WITH_MESSAGE(globalObject->arrayProtoValuesFunctionConcurrently(), "The only way we could have seen FastArray is if we saw this function in the LLInt/Baseline so the iterator function should be allocated.");
+                FrozenValue* frozenSymbolIteratorFunction = m_graph.freeze(globalObject->arrayProtoValuesFunctionConcurrently());
+                numberOfRemainingModes--;
+                if (!numberOfRemainingModes)
+                    addToGraph(CheckIsConstant, OpInfo(frozenSymbolIteratorFunction), symbolIterator);
+                else {
+                    BasicBlock* fastArrayBlock = allocateUntargetableBlock();
+                    genericBlock = allocateUntargetableBlock();
+
+                    Node* isKnownIterFunction = addToGraph(CompareEqPtr, OpInfo(frozenSymbolIteratorFunction), symbolIterator);
+                    Node* isArray = addToGraph(IsCellWithType, OpInfo(ArrayType), get(bytecode.m_iterable));
+
+                    BranchData* branchData = m_graph.m_branchData.add();
+                    branchData->taken = BranchTarget(fastArrayBlock);
+                    branchData->notTaken = BranchTarget(genericBlock);
+
+                    Node* andResult = addToGraph(ArithBitAnd, isArray, isKnownIterFunction);
+
+                    // We know the ArithBitAnd cannot have effects so it's ok to exit here.
+                    m_exitOK = true;
+                    addToGraph(ExitOK);
+
+                    addToGraph(Branch, OpInfo(branchData), andResult);
+                    flushForTerminal();
+
+                    m_currentBlock = fastArrayBlock;
+                    clearCaches();
+                }
+
+                Node* kindNode = jsConstant(jsNumber(static_cast<uint32_t>(IterationKind::Values)));
+                Node* next = jsConstant(JSValue());
+                Node* iterator = addToGraph(NewInternalFieldObject, OpInfo(m_graph.registerStructure(globalObject->arrayIteratorStructure())));
+                addToGraph(PutInternalField, OpInfo(static_cast<uint32_t>(JSArrayIterator::Field::IteratedObject)), iterator, get(bytecode.m_iterable));
+                addToGraph(PutInternalField, OpInfo(static_cast<uint32_t>(JSArrayIterator::Field::Kind)), iterator, kindNode);
+                set(bytecode.m_iterator, iterator);
+
+                // Set m_next to JSValue() so if we exit between here and iterator_next instruction it knows we are in the fast case.
+                set(bytecode.m_next, next);
+
+                // Do our set locals. We don't want to exit backwards so move our exit to the next bytecode.
+                m_currentIndex = BytecodeIndex(m_currentIndex.offset() + currentInstruction->size());
+                m_exitOK = true;
+                processSetLocalQueue();
+
+                addToGraph(Jump, OpInfo(continuation));
+                generatedCase = true;
+            }
+
+            m_currentIndex = startIndex;
+
+            if (seenModes & IterationMode::Generic) {
+                ASSERT(numberOfRemainingModes);
+                if (genericBlock) {
+                    ASSERT(generatedCase);
+                    m_currentBlock = genericBlock;
+                    clearCaches();
+                } else
+                    ASSERT(!generatedCase);
+
+                Terminality terminality = handleCall<OpIteratorOpen>(currentInstruction, Call, CallMode::Regular);
+                ASSERT_UNUSED(terminality, terminality == NonTerminal);
+                progressToNextCheckpoint();
+
+                Node* iterator = get(bytecode.m_iterator);
+                BasicBlock* notObjectBlock = allocateUntargetableBlock();
+                BasicBlock* isObjectBlock = allocateUntargetableBlock();
+                BranchData* branchData = m_graph.m_branchData.add();
+                branchData->taken = BranchTarget(isObjectBlock);
+                branchData->notTaken = BranchTarget(notObjectBlock);
+                addToGraph(Branch, OpInfo(branchData), addToGraph(IsObject, iterator));
+
+                {
+                    m_currentBlock = notObjectBlock;
+                    clearCaches();
+                    LazyJSValue errorString = LazyJSValue::newString(m_graph, "Iterator result interface is not an object."_s);
+                    OpInfo info = OpInfo(m_graph.m_lazyJSValues.add(errorString));
+                    Node* errorMessage = addToGraph(LazyJSConstant, info);
+                    addToGraph(ThrowStaticError, OpInfo(ErrorType::TypeError), errorMessage);
+                    flushForTerminal();
+                }
+
+                {
+                    m_currentBlock = isObjectBlock;
+                    clearCaches();
+                    SpeculatedType prediction = getPrediction();
+
+                    Node* base = get(bytecode.m_iterator);
+                    auto* nextImpl = m_vm->propertyNames->next.impl();
+                    unsigned identifierNumber = m_graph.identifiers().ensure(nextImpl);
+
+                    AccessType type = AccessType::GetById;
+                    unsigned opcodeLength = currentInstruction->size();
+
+                    GetByStatus getByStatus = GetByStatus::computeFor(
+                        m_inlineStackTop->m_profiledBlock,
+                        m_inlineStackTop->m_baselineMap, m_icContextStack,
+                        currentCodeOrigin());
+
+                    handleGetById(bytecode.m_next, prediction, base, CacheableIdentifier::createFromImmortalIdentifier(nextImpl), identifierNumber, getByStatus, type, opcodeLength);
+
+                    // Do our set locals. We don't want to run our get_by_id again so we move to the next bytecode.
+                    m_currentIndex = BytecodeIndex(m_currentIndex.offset() + currentInstruction->size());
+                    m_exitOK = true;
+                    processSetLocalQueue();
+
+                    addToGraph(Jump, OpInfo(continuation));
+                }
+                generatedCase = true;
+            }
+
+            if (!generatedCase) {
+                Node* result = jsConstant(JSValue());
+                addToGraph(ForceOSRExit);
+                set(bytecode.m_iterator, result);
+                set(bytecode.m_next, result);
+
+                m_currentIndex = BytecodeIndex(m_currentIndex.offset() + currentInstruction->size());
+                m_exitOK = true;
+                processSetLocalQueue();
+
+                addToGraph(Jump, OpInfo(continuation));
+            }
+
+            m_currentIndex = startIndex;
+            m_currentBlock = continuation;
+            clearCaches();
+
+            NEXT_OPCODE(op_iterator_open);
+        }
+
+        case op_iterator_next: {
+            auto bytecode = currentInstruction->as<OpIteratorNext>();
+            auto& metadata = bytecode.metadata(codeBlock);
+            uint32_t seenModes = metadata.m_iterationMetadata.seenModes;
+
+            unsigned numberOfRemainingModes = WTF::bitCount(seenModes);
+            ASSERT(numberOfRemainingModes <= numberOfIterationModes);
+            bool generatedCase = false;
+
+            BytecodeIndex startIndex = m_currentIndex;
+            JSGlobalObject* globalObject = m_inlineStackTop->m_codeBlock->globalObjectFor(currentCodeOrigin());
+            auto& arrayIteratorProtocolWatchpointSet = globalObject->arrayIteratorProtocolWatchpointSet();
+            BasicBlock* genericBlock = nullptr;
+            BasicBlock* continuation = allocateUntargetableBlock();
+
+            if (seenModes & IterationMode::FastArray && arrayIteratorProtocolWatchpointSet.isStillValid()) {
+                // First set up the watchpoint conditions we need for correctness.
+                m_graph.watchpoints().addLazily(arrayIteratorProtocolWatchpointSet);
+
+                if (numberOfRemainingModes != 1) {
+                    Node* hasNext = addToGraph(IsEmpty, get(bytecode.m_next));
+                    genericBlock = allocateUntargetableBlock();
+                    BasicBlock* fastArrayBlock = allocateUntargetableBlock();
+
+                    BranchData* branchData = m_graph.m_branchData.add();
+                    branchData->taken = BranchTarget(fastArrayBlock);
+                    branchData->notTaken = BranchTarget(genericBlock);
+                    addToGraph(Branch, OpInfo(branchData), hasNext);
+
+                    m_currentBlock = fastArrayBlock;
+                    clearCaches();
+                } else
+                    addToGraph(CheckIsConstant, OpInfo(m_graph.freeze(JSValue())), get(bytecode.m_next));
+
+                Node* iterator = get(bytecode.m_iterator);
+                addToGraph(CheckStructure, OpInfo(m_graph.addStructureSet(globalObject->arrayIteratorStructure())), iterator);
+
+                BasicBlock* isDoneBlock = allocateUntargetableBlock();
+                BasicBlock* notDoneBlock = allocateUntargetableBlock();
+                BasicBlock* doLoadBlock = allocateUntargetableBlock();
+
+                {
+                    Node* doneIndex = jsConstant(jsNumber(-1));
+                    Node* index = addToGraph(GetInternalField, OpInfo(static_cast<uint32_t>(JSArrayIterator::Field::Index)), OpInfo(SpecInt32Only), iterator);
+                    Node* isDone = addToGraph(CompareStrictEq, index, doneIndex);
+
+                    BranchData* isDoneBranchData = m_graph.m_branchData.add();
+                    isDoneBranchData->taken = BranchTarget(isDoneBlock);
+                    isDoneBranchData->notTaken = BranchTarget(notDoneBlock);
+                    addToGraph(Branch, OpInfo(isDoneBranchData), isDone);
+                }
+
+                ArrayMode arrayMode = getArrayMode(metadata.m_iterableProfile, Array::Read);
+                auto prediction = getPredictionWithoutOSRExit(BytecodeIndex(m_currentIndex.offset(), OpIteratorNext::getValue));
+
+                {
+                    m_currentBlock = notDoneBlock;
+                    clearCaches();
+
+                    Node* index = addToGraph(GetInternalField, OpInfo(static_cast<uint32_t>(JSArrayIterator::Field::Index)), OpInfo(SpecInt32Only), get(bytecode.m_iterator));
+                    Node* iterable = get(bytecode.m_iterable);
+                    Node* butterfly = addToGraph(GetButterfly, iterable);
+                    Node* length = addToGraph(GetArrayLength, OpInfo(arrayMode.asWord()), iterable, butterfly);
+                    Node* isDone = addToGraph(CompareGreaterEq, Edge(index, Int32Use), Edge(length, Int32Use));
+                    m_exitOK = true; // The above compare doesn't produce effects since we know index and length are int32s.
+                    addToGraph(ExitOK);
+
+                    BranchData* branchData = m_graph.m_branchData.add();
+                    branchData->taken = BranchTarget(isDoneBlock);
+                    branchData->notTaken = BranchTarget(doLoadBlock);
+                    addToGraph(Branch, OpInfo(branchData), isDone);
+                }
+
+                {
+                    m_currentBlock = doLoadBlock;
+                    clearCaches();
+                    Node* index = addToGraph(GetInternalField, OpInfo(static_cast<uint32_t>(JSArrayIterator::Field::Index)), OpInfo(SpecInt32Only), get(bytecode.m_iterator));
+                    Node* one = jsConstant(jsNumber(1));
+                    Node* newIndex = makeSafe(addToGraph(ArithAdd, index, one));
+                    Node* falseNode = jsConstant(jsBoolean(false));
+
+                    // FIXME: We could consider making this not vararg, since it only uses three child
+                    // slots.
+                    // https://bugs.webkit.org/show_bug.cgi?id=184192
+                    addVarArgChild(get(bytecode.m_iterable));
+                    addVarArgChild(index);
+                    addVarArgChild(0); // Leave room for property storage.
+                    Node* getByVal = addToGraph(Node::VarArg, GetByVal, OpInfo(arrayMode.asWord()), OpInfo(prediction));
+                    set(bytecode.m_value, getByVal);
+                    set(bytecode.m_done, falseNode);
+                    addToGraph(PutInternalField, OpInfo(static_cast<uint32_t>(JSArrayIterator::Field::Index)), get(bytecode.m_iterator), newIndex);
+
+                    // Do our set locals. We don't want to run our getByVal again so we move to the next bytecode.
+                    m_currentIndex = BytecodeIndex(m_currentIndex.offset() + currentInstruction->size());
+                    m_exitOK = true;
+                    processSetLocalQueue();
+
+                    addToGraph(Jump, OpInfo(continuation));
+                }
+
+                // Roll back the checkpoint.
+                m_currentIndex = startIndex;
+
+                {
+                    m_currentBlock = isDoneBlock;
+                    clearCaches();
+                    Node* trueNode = jsConstant(jsBoolean(true));
+                    Node* doneIndex = jsConstant(jsNumber(-1));
+                    Node* bottomNode = jsConstant(m_graph.bottomValueMatchingSpeculation(prediction));
+
+                    set(bytecode.m_value, bottomNode);
+                    set(bytecode.m_done, trueNode);
+                    addToGraph(PutInternalField, OpInfo(static_cast<uint32_t>(JSArrayIterator::Field::Index)), get(bytecode.m_iterator), doneIndex);
+
+                    // Do our set locals. We don't want to run this again so we have to move the exit origin forward.
+                    m_currentIndex = BytecodeIndex(m_currentIndex.offset() + currentInstruction->size());
+                    m_exitOK = true;
+                    processSetLocalQueue();
+
+                    addToGraph(Jump, OpInfo(continuation));
+                }
+
+                m_currentIndex = startIndex;
+                generatedCase = true;
+            }
+
+            if (seenModes & IterationMode::Generic) {
+                if (genericBlock) {
+                    ASSERT(generatedCase);
+                    m_currentBlock = genericBlock;
+                    clearCaches();
+                } else
+                    ASSERT(!generatedCase);
+
+                Terminality terminality = handleCall<OpIteratorNext>(currentInstruction, Call, CallMode::Regular);
+                ASSERT_UNUSED(terminality, terminality == NonTerminal);
+                progressToNextCheckpoint();
+
+                BasicBlock* notObjectBlock = allocateUntargetableBlock();
+                BasicBlock* isObjectBlock = allocateUntargetableBlock();
+                BasicBlock* notDoneBlock = allocateUntargetableBlock();
+
+                Operand nextResult = Operand::tmp(OpIteratorNext::nextResult);
+                {
+                    Node* iteratorResult = get(nextResult);
+                    BranchData* branchData = m_graph.m_branchData.add();
+                    branchData->taken = BranchTarget(isObjectBlock);
+                    branchData->notTaken = BranchTarget(notObjectBlock);
+                    addToGraph(Branch, OpInfo(branchData), addToGraph(IsObject, iteratorResult));
+                }
+
+                {
+                    m_currentBlock = notObjectBlock;
+                    clearCaches();
+                    LazyJSValue errorString = LazyJSValue::newString(m_graph, "Iterator result interface is not an object."_s);
+                    OpInfo info = OpInfo(m_graph.m_lazyJSValues.add(errorString));
+                    Node* errorMessage = addToGraph(LazyJSConstant, info);
+                    addToGraph(ThrowStaticError, OpInfo(ErrorType::TypeError), errorMessage);
+                    flushForTerminal();
+                }
+
+                auto valuePredicition = getPredictionWithoutOSRExit(BytecodeIndex(m_currentIndex.offset(), OpIteratorNext::getValue));
+
+                {
+                    m_exitOK = true;
+                    m_currentBlock = isObjectBlock;
+                    clearCaches();
+                    SpeculatedType prediction = getPrediction();
+                    Node* bottomValue = jsConstant(m_graph.bottomValueMatchingSpeculation(valuePredicition));
+
+                    Node* base = get(nextResult);
+                    auto* doneImpl = m_vm->propertyNames->done.impl();
+                    unsigned identifierNumber = m_graph.identifiers().ensure(doneImpl);
+
+                    AccessType type = AccessType::GetById;
+                    unsigned opcodeLength = currentInstruction->size();
+
+                    GetByStatus getByStatus = GetByStatus::computeFor(
+                        m_inlineStackTop->m_profiledBlock,
+                        m_inlineStackTop->m_baselineMap, m_icContextStack,
+                        currentCodeOrigin());
+
+                    handleGetById(bytecode.m_done, prediction, base, CacheableIdentifier::createFromImmortalIdentifier(doneImpl), identifierNumber, getByStatus, type, opcodeLength);
+                    // Set a value for m_value so we don't exit on it differing from what we expected.
7015                    set(bytecode.m_value, bottomValue); 7016                    progressToNextCheckpoint(); 7017 7018                    BranchData* branchData = m_graph.m_branchData.add(); 7019                    branchData->taken = BranchTarget(continuation); 7020                    branchData->notTaken = BranchTarget(notDoneBlock); 7021                    addToGraph(Branch, OpInfo(branchData), get(bytecode.m_done)); 7022                } 7023 7024                { 7025                    m_currentBlock = notDoneBlock; 7026                    clearCaches(); 7027 7028                    Node* base = get(nextResult); 7029                    auto* valueImpl = m_vm->propertyNames->value.impl(); 7030                    unsigned identifierNumber = m_graph.identifiers().ensure(valueImpl); 7031 7032                    AccessType type = AccessType::GetById; 7033                    unsigned opcodeLength = currentInstruction->size(); 7034 7035                    GetByStatus getByStatus = GetByStatus::computeFor( 7036                        m_inlineStackTop->m_profiledBlock, 7037                        m_inlineStackTop->m_baselineMap, m_icContextStack, 7038                        currentCodeOrigin()); 7039 7040                    handleGetById(bytecode.m_value, valuePredicition, base, CacheableIdentifier::createFromImmortalIdentifier(valueImpl), identifierNumber, getByStatus, type, opcodeLength); 7041                    progressToNextCheckpoint(); 7042 7043                    addToGraph(Jump, OpInfo(continuation)); 7044                } 7045 7046                generatedCase = true; 7047            } 7048 7049            if (!generatedCase) { 7050                Node* result = jsConstant(JSValue()); 7051                addToGraph(ForceOSRExit); 7052                set(bytecode.m_value, result); 7053                set(bytecode.m_done, result); 7054 7055                // Do our set locals. We don't want to run our get by id again so we move to the next bytecode. 
7056                m_currentIndex = BytecodeIndex(m_currentIndex.offset() + currentInstruction->size()); 7057                m_exitOK = true; 7058                processSetLocalQueue(); 7059 7060                addToGraph(Jump, OpInfo(continuation)); 7061            } 7062 7063            m_currentIndex = startIndex; 7064            m_currentBlock = continuation; 7065            clearCaches(); 7066 7067            NEXT_OPCODE(op_iterator_next); 7068        } 7069 66887070        case op_jneq_ptr: { 66897071            auto bytecode = currentInstruction->as<OpJneqPtr>();   66967078                LAST_OPCODE(op_jneq_ptr); 66977079            } 6698            addToGraph(CheckCell, OpInfo(frozenPointer), child); 7080            addToGraph(CheckIsConstant, OpInfo(frozenPointer), child); 66997081            NEXT_OPCODE(op_jneq_ptr); 67007082        }   71757557            // bytecode-level liveness of the scope register. 71767558            auto bytecode = currentInstruction->as<OpGetScope>(); 7177            Node* callee = get(CallFrameSlot::callee); 7559            Node* callee = get(VirtualRegister(CallFrameSlot::callee)); 71787560            Node* result; 71797561            if (JSFunction* function = callee->dynamicCastConstant<JSFunction*>(*m_vm))   75547936    CodeBlock* profiledBlock, 75557937    JSFunction* callee, // Null if this is a closure call. 
7556    VirtualRegister returnValueVR, 7938    Operand returnValue, 75577939    VirtualRegister inlineCallFrameStart, 75587940    int argumentCountIncludingThis,   75637945    , m_profiledBlock(profiledBlock) 75647946    , m_continuationBlock(continuationBlock) 7565    , m_returnValue(returnValueVR) 7947    , m_returnValue(returnValue) 75667948    , m_caller(byteCodeParser->m_inlineStackTop) 75677949{   76298011        ASSERT(codeBlock == byteCodeParser->m_codeBlock); 76308012        ASSERT(!callee); 7631        ASSERT(!returnValueVR.isValid()); 8013        ASSERT(!returnValue.isValid()); 76328014        ASSERT(!inlineCallFrameStart.isValid()); 76338015   77808162 77818163                if (identifier.isSymbolCell()) 7782                    addToGraph(CheckCell, OpInfo(frozen), property); 8164                    addToGraph(CheckIsConstant, OpInfo(frozen), property); 77838165                else { 77848166                    ASSERT(!uid->isSymbol());   78788260 78798261            FrozenValue* frozen = m_graph.freeze(cachedFunction); 7880            addToGraph(CheckCell, OpInfo(frozen), callee); 8262            addToGraph(CheckIsConstant, OpInfo(frozen), callee); 78818263 78828264            function = static_cast<JSFunction*>(cachedFunction);   80448426 80458427                auto nodeAndIndex = block->findTerminal(); 8046                RELEASE_ASSERT(nodeAndIndex.node->op() == Unreachable); 8428                DFG_ASSERT(m_graph, nodeAndIndex.node, nodeAndIndex.node->op() == Unreachable); 80478429                block->resize(nodeAndIndex.index + 1); 80488430                break;
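Taken together, the Generic path above builds exactly the control flow the iterator protocol requires: call next(), throw a TypeError when the result is not an object, branch on .done, and read .value only on the not-done path. A minimal stand-alone sketch of that flow; Counter and IterResult are illustrative stand-ins, not JSC types:

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

// Hypothetical iterator-result shape; isObject models the "result must be
// an object" requirement of the protocol.
struct IterResult {
    bool isObject = true;
    bool done = false;
    int value = 0;
};

// A toy iterator yielding 0 .. limit-1.
struct Counter {
    int i = 0;
    int limit = 0;
    IterResult next()
    {
        if (i < limit)
            return { true, false, i++ };
        return { true, true, 0 };
    }
};

std::vector<int> forOf(Counter it)
{
    std::vector<int> out;
    while (true) {
        IterResult r = it.next();
        if (!r.isObject) // notObjectBlock: ThrowStaticError(TypeError)
            throw std::runtime_error("Iterator result interface is not an object.");
        if (r.done) // isObjectBlock branches on .done to the continuation
            break;
        out.push_back(r.value); // notDoneBlock reads .value
    }
    return out;
}
```

Note that, as in the DFG code, .value is never touched once .done is true, which is why the parser can park a bottom value in the value register on the done edge.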
  • trunk/Source/JavaScriptCore/dfg/DFGCFGSimplificationPhase.cpp (r254735 → r260323)

     }
 
-    void keepOperandAlive(BasicBlock* block, BasicBlock* jettisonedBlock, NodeOrigin nodeOrigin, VirtualRegister operand)
+    void keepOperandAlive(BasicBlock* block, BasicBlock* jettisonedBlock, NodeOrigin nodeOrigin, Operand operand)
     {
         Node* livenessNode = jettisonedBlock->variablesAtHead.operand(operand);
 …
     void jettisonBlock(BasicBlock* block, BasicBlock* jettisonedBlock, NodeOrigin boundaryNodeOrigin)
     {
-        for (size_t i = 0; i < jettisonedBlock->variablesAtHead.numberOfArguments(); ++i)
-            keepOperandAlive(block, jettisonedBlock, boundaryNodeOrigin, virtualRegisterForArgumentIncludingThis(i));
-        for (size_t i = 0; i < jettisonedBlock->variablesAtHead.numberOfLocals(); ++i)
-            keepOperandAlive(block, jettisonedBlock, boundaryNodeOrigin, virtualRegisterForLocal(i));
-
+        for (size_t i = 0; i < jettisonedBlock->variablesAtHead.size(); ++i)
+            keepOperandAlive(block, jettisonedBlock, boundaryNodeOrigin, jettisonedBlock->variablesAtHead.operandForIndex(i));
+
         fixJettisonedPredecessors(block, jettisonedBlock);
     }
 …
             // exit prior to hitting the firstBlock's terminal, and end up going down a
             // different path than secondBlock.
-
-            for (size_t i = 0; i < jettisonedBlock->variablesAtHead.numberOfArguments(); ++i)
-                keepOperandAlive(firstBlock, jettisonedBlock, boundaryNodeOrigin, virtualRegisterForArgumentIncludingThis(i));
-            for (size_t i = 0; i < jettisonedBlock->variablesAtHead.numberOfLocals(); ++i)
-                keepOperandAlive(firstBlock, jettisonedBlock, boundaryNodeOrigin, virtualRegisterForLocal(i));
+            for (size_t i = 0; i < jettisonedBlock->variablesAtHead.size(); ++i)
+                keepOperandAlive(firstBlock, jettisonedBlock, boundaryNodeOrigin, jettisonedBlock->variablesAtHead.operandForIndex(i));
         }
  • trunk/Source/JavaScriptCore/dfg/DFGCapabilities.cpp (r254801 → r260323)

     case op_create_rest:
     case op_get_rest_length:
+    case op_iterator_open:
+    case op_iterator_next:
     case op_log_shadow_chicken_prologue:
     case op_log_shadow_chicken_tail:
 …
     case checkpoint_osr_exit_trampoline:
     case handleUncaughtException:
+    case op_iterator_open_return_location:
+    case op_iterator_next_return_location:
     case op_call_return_location:
     case op_construct_return_location:
  • trunk/Source/JavaScriptCore/dfg/DFGClobberize.h (r260321 → r260323)

         return;
 
-    case CheckCell:
-        def(PureValue(CheckCell, AdjacencyList(AdjacencyList::Fixed, node->child1()), node->cellOperand()));
+    case CheckIsConstant:
+        def(PureValue(CheckIsConstant, AdjacencyList(AdjacencyList::Fixed, node->child1()), node->constant()));
         return;
  • trunk/Source/JavaScriptCore/dfg/DFGConstantFoldingPhase.cpp (r260321 → r260323)

             }
 
-            case CheckCell: {
-                if (m_state.forNode(node->child1()).value() != node->cellOperand()->value())
+            case CheckIsConstant: {
+                if (m_state.forNode(node->child1()).value() != node->constant()->value())
                     break;
                 node->remove(m_graph);
  • trunk/Source/JavaScriptCore/dfg/DFGDoesGC.cpp (r260321 → r260323)

     case GetGlobalLexicalVariable:
     case PutGlobalVariable:
-    case CheckCell:
+    case CheckIsConstant:
     case CheckNotEmpty:
     case AssertNotEmpty:
  • trunk/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp (r260321 → r260323)

         case OverridesHasInstance:
         case CheckStructure:
-        case CheckCell:
         case CreateThis:
         case CreatePromise:
 …
         case GetButterfly: {
             fixEdge<CellUse>(node->child1());
+            break;
+        }
+
+        case CheckIsConstant: {
+            if (node->constant()->value().isCell() && node->constant()->value())
+                fixEdge<CellUse>(node->child1());
             break;
         }
 …
         case IdentityWithProfile: {
-            node->clearFlags(NodeMustGenerate);
+            node->convertToIdentity();
             break;
         }
 …
             m_insertionSet.insertNode(
-                m_indexInBlock, SpecNone, CheckCell, node->origin,
+                m_indexInBlock, SpecNone, CheckIsConstant, node->origin,
                 OpInfo(m_graph.freeze(primordialProperty)), Edge(actualProperty, CellUse));
         };
  • trunk/Source/JavaScriptCore/dfg/DFGForAllKills.h (r254735 → r260323)

         if (after.bytecodeIndex().checkpoint()) {
             ASSERT(before.bytecodeIndex().checkpoint() != after.bytecodeIndex().checkpoint());
-            ASSERT_WITH_MESSAGE(before.bytecodeIndex().offset() == after.bytecodeIndex().offset(), "When the DFG does code motion it should change the forExit origin to match the surrounding bytecodes.");
+            ASSERT_WITH_MESSAGE(before.bytecodeIndex().offset() == after.bytecodeIndex().offset() || nodeAfter->op() == ExitOK || nodeAfter->op() == InvalidationPoint, "When the DFG does code motion it should change the forExit origin to match the surrounding bytecodes.");
 
             auto liveBefore = tmpLivenessForCheckpoint(*codeBlock, before.bytecodeIndex());
 …
         return;
     }
-
-    ASSERT_WITH_MESSAGE(!after.bytecodeIndex().checkpoint(), "Transitioning across a checkpoint but before and after don't share an inlineCallFrame.");
 
     // Detect kills the super conservative way: it is killed if it was live before and dead after.
  • trunk/Source/JavaScriptCore/dfg/DFGGraph.cpp (r259583 → r260323)

 }
 
+FrozenValue* Graph::bottomValueMatchingSpeculation(SpeculatedType prediction)
+{
+    // It probably doesn't matter what we return here.
+    if (prediction == SpecNone)
+        return freeze(JSValue());
+
+    if (speculationContains(prediction, SpecOther))
+        return freeze(jsNull());
+
+    if (speculationContains(prediction, SpecBoolean))
+        return freeze(jsBoolean(true));
+
+    if (speculationContains(prediction, SpecFullNumber))
+        return freeze(jsNumber(0));
+
+    if (speculationContains(prediction, SpecBigInt))
+        return freeze(m_vm.bigIntConstantOne.get());
+
+    if (speculationContains(prediction, SpecString | SpecSymbol))
+        return freeze(m_vm.smallStrings.emptyString());
+
+    if (speculationContains(prediction, SpecCellOther | SpecObject))
+        return freeze(jsNull());
+
+    ASSERT(speculationContains(prediction, SpecEmpty));
+    return freeze(JSValue());
+}
+
 RegisteredStructure Graph::registerStructure(Structure* structure, StructureRegistrationResult& result)
 {
  • trunk/Source/JavaScriptCore/dfg/DFGGraph.h (r259676 → r260323)

     void convertToConstant(Node* node, JSValue value);
     void convertToStrongConstant(Node* node, JSValue value);
+
+    // Use this to produce a value you know won't be accessed but the compiler
+    // might think is live. For example, in our op_iterator_next parsing the
+    // value VirtualRegister is only read if we are not "done". Because the
+    // done control flow is not in the op_iterator_next bytecode this is not
+    // obvious to the compiler.
+    // FIXME: This isn't quite a true bottom value. For example, any object
+    // speculation will now be Object|Other as this returns null. We should
+    // fix this when we can allocate on the compiler thread.
+    // https://bugs.webkit.org/show_bug.cgi?id=210627
+    FrozenValue* bottomValueMatchingSpeculation(SpeculatedType);
 
     RegisteredStructure registerStructure(Structure* structure)
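The selection logic behind bottomValueMatchingSpeculation can be illustrated in isolation. This sketch uses made-up speculation bits and string placeholders instead of real FrozenValues; only the shape of the cascade (first matching speculation wins, with empty as the fallback) mirrors the source:

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Illustrative speculation bits; not JSC's real SpeculatedType values.
enum : uint32_t {
    SpecNone = 0,
    SpecOther = 1 << 0,
    SpecBoolean = 1 << 1,
    SpecFullNumber = 1 << 2,
    SpecString = 1 << 3,
};

// Returns a textual placeholder standing in for the frozen bottom value.
const char* bottomValueFor(uint32_t prediction)
{
    if (prediction == SpecNone)
        return "empty";
    if (prediction & SpecOther)
        return "null"; // overlaps null/undefined speculations
    if (prediction & SpecBoolean)
        return "true";
    if (prediction & SpecFullNumber)
        return "0";
    if (prediction & SpecString)
        return "\"\"";
    return "empty"; // fallback, like the SpecEmpty case in the source
}
```

The point, as the header comment explains, is that the chosen placeholder's type overlaps the predicted type, so a later type check on the never-read value does not spuriously OSR-exit.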
  • trunk/Source/JavaScriptCore/dfg/DFGInPlaceAbstractState.cpp (r254735 → r260323)

     ASSERT(basicBlock->variablesAtTail.numberOfLocals() == basicBlock->valuesAtTail.numberOfLocals());
     ASSERT(basicBlock->variablesAtHead.numberOfLocals() == basicBlock->variablesAtTail.numberOfLocals());
+    ASSERT(basicBlock->variablesAtHead.numberOfTmps() == basicBlock->valuesAtHead.numberOfTmps());
+    ASSERT(basicBlock->variablesAtTail.numberOfTmps() == basicBlock->valuesAtTail.numberOfTmps());
+    ASSERT(basicBlock->variablesAtHead.numberOfTmps() == basicBlock->variablesAtTail.numberOfTmps());
 
     m_abstractValues.resize();
 …
             entrypoint->valuesAtTail.local(i).clear();
         }
+        for (size_t i = 0; i < entrypoint->valuesAtHead.numberOfTmps(); ++i) {
+            entrypoint->valuesAtHead.tmp(i).clear();
+            entrypoint->valuesAtTail.tmp(i).clear();
+        }
     }
 …
         block->cfaStructureClobberStateAtHead = StructuresAreWatched;
         block->cfaStructureClobberStateAtTail = StructuresAreWatched;
-        for (size_t i = 0; i < block->valuesAtHead.numberOfArguments(); ++i) {
-            block->valuesAtHead.argument(i).clear();
-            block->valuesAtTail.argument(i).clear();
-        }
-        for (size_t i = 0; i < block->valuesAtHead.numberOfLocals(); ++i) {
-            block->valuesAtHead.local(i).clear();
-            block->valuesAtTail.local(i).clear();
+        for (size_t i = 0; i < block->valuesAtHead.size(); ++i) {
+            block->valuesAtHead[i].clear();
+            block->valuesAtTail[i].clear();
         }
     }
 …
             case Flush: {
                 // The block transfers the value from head to tail.
-                destination = variableAt(index);
+                destination = atIndex(index);
                 break;
             }
 …
             }
 
-            block->valuesAtTail[i] = variableAt(i);
+            block->valuesAtTail[i] = atIndex(i);
         }
 …
     ASSERT(from->variablesAtTail.numberOfArguments() == to->variablesAtHead.numberOfArguments());
     ASSERT(from->variablesAtTail.numberOfLocals() == to->variablesAtHead.numberOfLocals());
+    ASSERT(from->variablesAtTail.numberOfTmps() == to->variablesAtHead.numberOfTmps());
 
     bool changed = false;
 …
     switch (m_graph.m_form) {
     case ThreadedCPS: {
-        for (size_t argument = 0; argument < from->variablesAtTail.numberOfArguments(); ++argument) {
-            AbstractValue& destination = to->valuesAtHead.argument(argument);
-            changed |= mergeVariableBetweenBlocks(destination, from->valuesAtTail.argument(argument), to->variablesAtHead.argument(argument), from->variablesAtTail.argument(argument));
-        }
-
-        for (size_t local = 0; local < from->variablesAtTail.numberOfLocals(); ++local) {
-            AbstractValue& destination = to->valuesAtHead.local(local);
-            changed |= mergeVariableBetweenBlocks(destination, from->valuesAtTail.local(local), to->variablesAtHead.local(local), from->variablesAtTail.local(local));
+        for (size_t index = 0; index < from->variablesAtTail.size(); ++index) {
+            AbstractValue& destination = to->valuesAtHead.at(index);
+            changed |= mergeVariableBetweenBlocks(destination, from->valuesAtTail.at(index), to->variablesAtHead.at(index), from->variablesAtTail.at(index));
         }
         break;
  • trunk/Source/JavaScriptCore/dfg/DFGInPlaceAbstractState.h (r254735 → r260323)

     Operands<AbstractValue>& variablesForDebugging();
 
+    unsigned size() const { return m_variables.size(); }
     unsigned numberOfArguments() const { return m_variables.numberOfArguments(); }
     unsigned numberOfLocals() const { return m_variables.numberOfLocals(); }
-
-    AbstractValue& variableAt(size_t index)
+    unsigned numberOfTmps() const { return m_variables.numberOfTmps(); }
+
+    AbstractValue& atIndex(size_t index)
     {
         activateVariableIfNecessary(index);
 …
     AbstractValue& operand(Operand operand)
     {
-        return variableAt(m_variables.operandIndex(operand));
+        return atIndex(m_variables.operandIndex(operand));
     }
 
     AbstractValue& local(size_t index)
     {
-        return variableAt(m_variables.localIndex(index));
+        return atIndex(m_variables.localIndex(index));
     }
 
     AbstractValue& argument(size_t index)
     {
-        return variableAt(m_variables.argumentIndex(index));
+        return atIndex(m_variables.argumentIndex(index));
     }
  • trunk/Source/JavaScriptCore/dfg/DFGLazyJSValue.h (r251690 → r260323)

 
     LazinessKind kind() const { return m_kind; }
+    SpeculatedType speculatedType() const { return kind() == KnownValue ? SpecBytecodeTop : SpecString; }
 
     FrozenValue* tryGetValue(Graph&) const
  • trunk/Source/JavaScriptCore/dfg/DFGNode.h (r260321 → r260323)

     {
         switch (op()) {
+        case CheckIsConstant:
         case JSConstant:
         case DoubleConstant:
 …
     {
         switch (op()) {
-        case CheckCell:
+        case CheckIsConstant:
+            return isCell(child1().useKind());
         case OverridesHasInstance:
         case NewFunction:
  • trunk/Source/JavaScriptCore/dfg/DFGNodeType.h (r260321 → r260323)

     macro(MovHint, NodeMustGenerate) \
     macro(ZombieHint, NodeMustGenerate) \
-    macro(ExitOK, NodeMustGenerate) /* Indicates that exit state is intact. */ \
+    macro(ExitOK, NodeMustGenerate) /* Indicates that exit state is intact and it is safe to exit back to the beginning of the exit origin. */ \
     macro(Phantom, NodeMustGenerate) \
     macro(Check, NodeMustGenerate) /* Used if we want just a type check but not liveness. Non-checking uses will be removed. */\
 …
     macro(SetRegExpObjectLastIndex, NodeMustGenerate) \
     macro(RecordRegExpCachedResult, NodeMustGenerate | NodeHasVarArgs) \
-    macro(CheckCell, NodeMustGenerate) \
+    macro(CheckIsConstant, NodeMustGenerate) \
     macro(CheckNotEmpty, NodeMustGenerate) \
     macro(AssertNotEmpty, NodeMustGenerate) \
  • trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp (r259786 → r260323)

 
         switch (trueCallerCallKind) {
-        case InlineCallFrame::Call:
-            jumpTarget = LLINT_RETURN_LOCATION(op_call);
-            break;
+        case InlineCallFrame::Call: {
+            if (callInstruction.opcodeID() == op_call)
+                jumpTarget = LLINT_RETURN_LOCATION(op_call);
+            else if (callInstruction.opcodeID() == op_iterator_open)
+                jumpTarget = LLINT_RETURN_LOCATION(op_iterator_open);
+            else if (callInstruction.opcodeID() == op_iterator_next)
+                jumpTarget = LLINT_RETURN_LOCATION(op_iterator_next);
+            break;
+        }
         case InlineCallFrame::Construct:
             jumpTarget = LLINT_RETURN_LOCATION(op_construct);
 …
     }
 
+    ASSERT(jumpTarget);
     return jumpTarget;
 }
  • trunk/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp (r260321 → r260323)

             break;
         }
+
+        case LazyJSConstant: {
+            setPrediction(m_currentNode->lazyJSValue().speculatedType());
+            break;
+        }
+
         case StringCharAt:
         case CallStringConstructor:
 …
         case SetRegExpObjectLastIndex:
         case RecordRegExpCachedResult:
-        case LazyJSConstant:
         case CallDOM: {
             // This node should never be visible at this stage of compilation.
 …
         case SetFunctionName:
         case CheckStructure:
-        case CheckCell:
+        case CheckIsConstant:
         case CheckNotEmpty:
         case AssertNotEmpty:
  • trunk/Source/JavaScriptCore/dfg/DFGSafeToExecute.h (r260321 → r260323)

     case GetGlobalVar:
     case GetGlobalLexicalVariable:
-    case CheckCell:
+    case CheckIsConstant:
     case CheckNotEmpty:
     case AssertNotEmpty:
  • trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp (r260321 → r260323)

 }
 
-void SpeculativeJIT::compileCheckCell(Node* node)
-{
-    SpeculateCellOperand cell(this, node->child1());
-    speculationCheck(BadCell, JSValueSource::unboxedCell(cell.gpr()), node->child1(), m_jit.branchWeakPtr(JITCompiler::NotEqual, cell.gpr(), node->cellOperand()->cell()));
+void SpeculativeJIT::compileCheckIsConstant(Node* node)
+{
+    if (node->child1().useKind() == CellUse) {
+        SpeculateCellOperand cell(this, node->child1());
+        speculationCheck(BadCell, JSValueSource::unboxedCell(cell.gpr()), node->child1(), m_jit.branchWeakPtr(JITCompiler::NotEqual, cell.gpr(), node->cellOperand()->cell()));
+    } else {
+        ASSERT(!node->constant()->value().isCell() || !node->constant()->value());
+        JSValueOperand operand(this, node->child1());
+        JSValueRegs regs = operand.jsValueRegs();
+
+#if USE(JSVALUE64)
+        speculationCheck(BadCache, regs, node->child1(), m_jit.branch64(JITCompiler::NotEqual, regs.gpr(), TrustedImm64(JSValue::encode(node->constant()->value()))));
+#else
+        speculationCheck(BadCache, regs, node->child1(), m_jit.branch32(JITCompiler::NotEqual, regs.tagGPR(), TrustedImm32(node->constant()->value().tag())));
+        speculationCheck(BadCache, regs, node->child1(), m_jit.branch32(JITCompiler::NotEqual, regs.payloadGPR(), TrustedImm32(node->constant()->value().payload())));
+#endif
+    }
+
     noResult(node);
 }
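The non-cell arm of compileCheckIsConstant works because, on 64-bit, a boxed non-cell constant is fully determined by its encoded bits, so the check is a single 64-bit compare (and a tag/payload pair of compares on 32-bit). A toy sketch of the 64-bit idea; the encoding below is invented for illustration, not JSC's real NaN-boxing:

```cpp
#include <cassert>
#include <cstdint>

// Toy boxed value: high bits tag the type, low bits carry the payload.
struct EncodedValue {
    uint64_t bits;
};

// Hypothetical int32 encoding (illustrative tag, not JSC's).
constexpr EncodedValue encodeInt32(int32_t i)
{
    return { 0xFFFF000000000000ull | static_cast<uint32_t>(i) };
}

// The whole check is one comparison of encoded bits; in the DFG a
// mismatch would trigger an OSR exit (BadCache) instead of returning.
bool checkIsConstant(EncodedValue v, EncodedValue expected)
{
    return v.bits == expected.bits;
}
```

This is why CheckIsConstant can subsume CheckCell: a cell constant still gets the weak-pointer compare under CellUse, while any non-cell constant needs no cell speculation at all.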
  • trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h (r260321 → r260323)

     void compileIsFunction(Node*);
     void compileTypeOf(Node*);
-    void compileCheckCell(Node*);
+    void compileCheckIsConstant(Node*);
     void compileCheckNotEmpty(Node*);
     void compileCheckStructure(Node*);
  • trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp (r260321 → r260323)

     }
 
-    case CheckCell: {
-        compileCheckCell(node);
+    case CheckIsConstant: {
+        compileCheckIsConstant(node);
         break;
     }
  • trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp (r260321 → r260323)

     }
 
-    case CheckCell: {
-        compileCheckCell(node);
+    case CheckIsConstant: {
+        compileCheckIsConstant(node);
         break;
     }
  • trunk/Source/JavaScriptCore/dfg/DFGValidate.cpp (r259583 → r260323)

             dataLog(right); \
             dataLogF(") (%s:%d).\n", __FILE__, __LINE__); \
+            dataLog("\n\n\n"); \
+            m_graph.baselineCodeBlockFor(nullptr)->dumpBytecode(); \
             dumpGraphIfAppropriate(); \
             WTFReportAssertionFailure(__FILE__, __LINE__, WTF_PRETTY_FUNCTION, #left " == " #right); \
 …
             Operands<size_t> setLocalPositions(OperandsLike, block->variablesAtHead);
 
+            for (size_t i = 0; i < block->variablesAtHead.numberOfTmps(); ++i) {
+                VALIDATE((Operand::tmp(i), block), !block->variablesAtHead.tmp(i) || block->variablesAtHead.tmp(i)->accessesStack(m_graph));
+                if (m_graph.m_form == ThreadedCPS)
+                    VALIDATE((Operand::tmp(i), block), !block->variablesAtTail.tmp(i) || block->variablesAtTail.tmp(i)->accessesStack(m_graph));
+
+                getLocalPositions.tmp(i) = notSet;
+                setLocalPositions.tmp(i) = notSet;
+            }
             for (size_t i = 0; i < block->variablesAtHead.numberOfArguments(); ++i) {
                 VALIDATE((virtualRegisterForArgumentIncludingThis(i), block), !block->variablesAtHead.argument(i) || block->variablesAtHead.argument(i)->accessesStack(m_graph));
 …
                 continue;
 
+            for (size_t i = 0; i < block->variablesAtHead.numberOfTmps(); ++i) {
+                checkOperand(
+                    block, getLocalPositions, setLocalPositions, Operand::tmp(i));
+            }
+
             for (size_t i = 0; i < block->variablesAtHead.numberOfArguments(); ++i) {
                 checkOperand(
 …
     void checkOperand(
         BasicBlock* block, Operands<size_t>& getLocalPositions,
-        Operands<size_t>& setLocalPositions, VirtualRegister operand)
+        Operands<size_t>& setLocalPositions, Operand operand)
     {
         if (getLocalPositions.operand(operand) == notSet)
  • trunk/Source/JavaScriptCore/ftl/FTLCapabilities.cpp (r260321 → r260323)

     case InvalidationPoint:
     case StringCharAt:
-    case CheckCell:
+    case CheckIsConstant:
     case CheckBadCell:
     case CheckNotEmpty:
  • trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp (r260321 → r260323)

             compileCheckStructureOrEmpty();
             break;
-        case CheckCell:
-            compileCheckCell();
+        case CheckIsConstant:
+            compileCheckIsConstant();
             break;
         case CheckNotEmpty:
 …
     }
 
-    void compileCheckCell()
+    void compileCheckIsConstant()
     {
         LValue cell = lowCell(m_node->child1());
  • trunk/Source/JavaScriptCore/generator/DSL.rb (r253335 → r260323)

 #include "GetByValHistory.h"
 #include "Instruction.h"
+#include "IterationModeMetadata.h"
 #include "Opcode.h"
 #include "PutByIdStatus.h"
  • trunk/Source/JavaScriptCore/generator/Metadata.rb (r251903 → r260323)

 
     public:
+        #{op.opcodeID}
         Metadata(const #{op.capitalized_name}{" __op" if inits})#{inits} { }
  • trunk/Source/JavaScriptCore/generator/Section.rb (r255459 → r260323)

 
   def sort!
-      @opcodes = @opcodes.sort { |a, b| a.metadata.empty? ? b.metadata.empty? ? 0 : 1 : -1 }
+      @opcodes = @opcodes.sort { |a, b|
+          result = nil
+          if a.checkpoints or b.checkpoints
+              raise "Bytecodes with checkpoints should have metadata: #{a.name}" if a.checkpoints and a.metadata.empty?
+              raise "Bytecodes with checkpoints should have metadata: #{b.name}" if b.checkpoints and b.metadata.empty?
+              result = a.checkpoints ? b.checkpoints ? 0 : -1 : 1
+          elsif
+              result = a.metadata.empty? ? b.metadata.empty? ? 0 : 1 : -1
+          end
+          result
+      }
       @opcodes.each(&:create_id!)
   end
 …
 
       if config[:emit_in_structs_file]
+          i = 0
+          while true
+              if !opcodes[i].checkpoints
+                  out << "\n"
+                  out << "#define NUMBER_OF_#{config[:macro_name_component]}_WITH_CHECKPOINTS #{i}\n"
+                  break
+              end
+
+              i += 1
+          end
+          out << "\n"
+
           out.write("#define FOR_EACH_#{config[:macro_name_component]}_METADATA_SIZE(macro) \\\n")
           i = 0
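The ordering Section#sort! now establishes is: opcodes with checkpoints first (and they must carry metadata), then the remaining opcodes with metadata, then metadata-free opcodes, so that a single prefix count like NUMBER_OF_..._WITH_CHECKPOINTS is well defined. The same ordering expressed as a rank-based comparator, with illustrative Opcode fields:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Illustrative stand-in for the generator's opcode records.
struct Opcode {
    bool checkpoints;
    bool hasMetadata;
};

// Rank 0: has checkpoints; rank 1: metadata only; rank 2: neither.
// Sorting by rank groups all checkpointed opcodes at the front, so the
// count of them is just the length of the leading run.
void sortLikeSection(std::vector<Opcode>& ops)
{
    std::stable_sort(ops.begin(), ops.end(), [](const Opcode& a, const Opcode& b) {
        auto rank = [](const Opcode& o) {
            return o.checkpoints ? 0 : (o.hasMetadata ? 1 : 2);
        };
        return rank(a) < rank(b);
    });
}
```

Note that a rank function gives a strict weak ordering for free, which the original nested-ternary comparator had to encode by hand.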
  • trunk/Source/JavaScriptCore/jit/JIT.cpp (r257399 → r260323)

 
         if (JITInternal::verbose)
-            dataLogLn("Old JIT emitting code for ", m_bytecodeIndex, " at offset ", (long)debugOffset());
+            dataLogLn("Baseline JIT emitting code for ", m_bytecodeIndex, " at offset ", (long)debugOffset());
 
         OpcodeID opcodeID = currentInstruction->opcodeID();
 …
         DEFINE_OP(op_put_internal_field)
 
+        DEFINE_OP(op_iterator_open)
+        DEFINE_OP(op_iterator_next)
+
         DEFINE_OP(op_ret)
         DEFINE_OP(op_rshift)
 …
 
         if (JITInternal::verbose)
-            dataLogLn("Old JIT emitting slow code for ", m_bytecodeIndex, " at offset ", (long)debugOffset());
+            dataLogLn("Baseline JIT emitting slow code for ", m_bytecodeIndex, " at offset ", (long)debugOffset());
 
         if (m_disassembler)
 …
         DEFINE_SLOWCASE_OP(op_put_to_scope)
 
+        DEFINE_SLOWCASE_OP(op_iterator_open)
+        DEFINE_SLOWCASE_OP(op_iterator_next)
+
         DEFINE_SLOWCASE_SLOW_OP(unsigned)
         DEFINE_SLOWCASE_SLOW_OP(inc)
 …
             dataLog("At ", firstTo, " slow: ", iter - m_slowCases.begin(), "\n");
 
-        RELEASE_ASSERT_WITH_MESSAGE(iter == m_slowCases.end() || firstTo != iter->to, "Not enough jumps linked in slow case codegen.");
-        RELEASE_ASSERT_WITH_MESSAGE(firstTo == (iter - 1)->to, "Too many jumps linked in slow case codegen.");
+        RELEASE_ASSERT_WITH_MESSAGE(iter == m_slowCases.end() || firstTo.offset() != iter->to.offset(), "Not enough jumps linked in slow case codegen.");
+        RELEASE_ASSERT_WITH_MESSAGE(firstTo.offset() == (iter - 1)->to.offset(), "Too many jumps linked in slow case codegen.");
 
         if (shouldEmitProfiling())
  • trunk/Source/JavaScriptCore/jit/JIT.h

    r259175 → r260323

         void privateCompileExceptionHandlers();
    +
    +    void advanceToNextCheckpoint();
    +    void emitJumpSlowToHotForCheckpoint(Jump);

         void addSlowCase(Jump);
    @@ ... @@
         void emitSlowCaseCall(const Instruction*, Vector<SlowCaseEntry>::iterator&, SlowPathFunction);

    +    void emit_op_iterator_open(const Instruction*);
    +    void emitSlow_op_iterator_open(const Instruction*, Vector<SlowCaseEntry>::iterator&);
    +    void emit_op_iterator_next(const Instruction*);
    +    void emitSlow_op_iterator_next(const Instruction*, Vector<SlowCaseEntry>::iterator&);
    +
         void emitRightShift(const Instruction*, bool isUnsigned);
         void emitRightShiftSlowCase(const Instruction*, Vector<SlowCaseEntry>::iterator&, bool isUnsigned);
    @@ ... @@
         Vector<CallRecord> m_calls;
         Vector<Label> m_labels;
    +    HashMap<BytecodeIndex, Label> m_checkpointLabels;
         Vector<JITGetByIdGenerator> m_getByIds;
         Vector<JITGetByValGenerator> m_getByVals;
  • trunk/Source/JavaScriptCore/jit/JITCall.cpp

    r259676 → r260323

     #include "JIT.h"

    +#include "BytecodeOperandsForCheckpoint.h"
    +#include "CacheableIdentifierInlines.h"
     #include "CallFrameShuffler.h"
     #include "CodeBlock.h"
    @@ ... @@
     #include "StackAlignment.h"
     #include "ThunkGenerators.h"
    +
     #include <wtf/StringPrintStream.h>

    -
     namespace JSC {
    @@ ... @@
     {
         emitValueProfilingSite(bytecode.metadata(m_codeBlock));
    -    emitPutVirtualRegister(bytecode.m_dst);
    +    emitPutVirtualRegister(destinationFor(bytecode, m_bytecodeIndex.checkpoint()).virtualRegister(), regT0);
     }
    @@ ... @@
     JIT::compileSetupFrame(const Op& bytecode, CallLinkInfo*)
     {
    +    unsigned checkpoint = m_bytecodeIndex.checkpoint();
         auto& metadata = bytecode.metadata(m_codeBlock);
    -    int argCount = bytecode.m_argc;
    -    int registerOffset = -static_cast<int>(bytecode.m_argv);
    +    int argCountIncludingThis = argumentCountIncludingThisFor(bytecode, checkpoint);
    +    int registerOffset = -static_cast<int>(stackOffsetInRegistersForCall(bytecode, checkpoint));

         if (Op::opcodeID == op_call && shouldEmitProfiling()) {
    @@ ... @@
             Jump done = branchIfNotCell(regT0);
             load32(Address(regT0, JSCell::structureIDOffset()), regT0);
    -        store32(regT0, metadata.m_callLinkInfo.m_arrayProfile.addressOfLastSeenStructureID());
    +        store32(regT0, arrayProfileFor(metadata, checkpoint).addressOfLastSeenStructureID());
             done.link(this);
         }

         addPtr(TrustedImm32(registerOffset * sizeof(Register) + sizeof(CallerFrameAndPC)), callFrameRegister, stackPointerRegister);
    -    store32(TrustedImm32(argCount), Address(stackPointerRegister, CallFrameSlot::argumentCountIncludingThis * static_cast<int>(sizeof(Register)) + PayloadOffset - sizeof(CallerFrameAndPC)));
    +    store32(TrustedImm32(argCountIncludingThis), Address(stackPointerRegister, CallFrameSlot::argumentCountIncludingThis * static_cast<int>(sizeof(Register)) + PayloadOffset - sizeof(CallerFrameAndPC)));
     }
    @@ ... @@
         OpcodeID opcodeID = Op::opcodeID;
         auto bytecode = instruction->as<Op>();
    -    VirtualRegister callee = bytecode.m_callee;
    +    VirtualRegister callee = calleeFor(bytecode, m_bytecodeIndex.checkpoint());

         /* Caller always:
    @@ ... @@
         store64(regT0, Address(stackPointerRegister, CallFrameSlot::callee * static_cast<int>(sizeof(Register)) - sizeof(CallerFrameAndPC)));

    -    if (compileCallEval(bytecode)) {
    +    if (compileCallEval(bytecode))
             return;
    -    }

         DataLabelPtr addressOfLinkedFunctionCheck;
    @@ ... @@ (new functions added before the closing namespace)

    void JIT::emit_op_iterator_open(const Instruction* instruction)
    {
        auto bytecode = instruction->as<OpIteratorOpen>();
        auto& metadata = bytecode.metadata(m_codeBlock);
        auto* tryFastFunction = ([&] () {
            switch (instruction->width()) {
            case Narrow: return iterator_open_try_fast_narrow;
            case Wide16: return iterator_open_try_fast_wide16;
            case Wide32: return iterator_open_try_fast_wide32;
            default: RELEASE_ASSERT_NOT_REACHED();
            }
        })();
        setupArguments<decltype(tryFastFunction)>(instruction, &metadata);
        appendCallWithExceptionCheck(tryFastFunction);
        Jump fastCase = branch32(NotEqual, GPRInfo::returnValueGPR2, TrustedImm32(static_cast<uint32_t>(IterationMode::Generic)));

        compileOpCall<OpIteratorOpen>(instruction, m_callLinkInfoIndex++);
        advanceToNextCheckpoint();
        // call result (iterator) is in regT0

        const Identifier* ident = &vm().propertyNames->next;

        emitJumpSlowCaseIfNotJSCell(regT0);

        JITGetByIdGenerator gen(
            m_codeBlock, CodeOrigin(m_bytecodeIndex), CallSiteIndex(BytecodeIndex(m_bytecodeIndex.offset())), RegisterSet::stubUnavailableRegisters(),
            CacheableIdentifier::createFromImmortalIdentifier(ident->impl()), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::GetById);
        gen.generateFastPath(*this);
        addSlowCase(gen.slowPathJump());
        m_getByIds.append(gen);

        emitValueProfilingSite(bytecode.metadata(m_codeBlock));
        emitPutVirtualRegister(bytecode.m_next);

        fastCase.link(this);
    }

    void JIT::emitSlow_op_iterator_open(const Instruction* instruction, Vector<SlowCaseEntry>::iterator& iter)
    {
        linkAllSlowCases(iter);
        compileOpCallSlowCase<OpIteratorOpen>(instruction, iter, m_callLinkInfoIndex++);
        emitJumpSlowToHotForCheckpoint(jump());

        linkAllSlowCases(iter);
        auto bytecode = instruction->as<OpIteratorOpen>();
        VirtualRegister nextVReg = bytecode.m_next;
        UniquedStringImpl* ident = vm().propertyNames->next.impl();

        JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++];

        Label coldPathBegin = label();

        Call call = callOperationWithProfile(bytecode.metadata(m_codeBlock), operationGetByIdOptimize, nextVReg, TrustedImmPtr(m_codeBlock->globalObject()), gen.stubInfo(), regT0, CacheableIdentifier::createFromImmortalIdentifier(ident).rawBits());

        gen.reportSlowPathCall(coldPathBegin, call);
    }

    void JIT::emit_op_iterator_next(const Instruction* instruction)
    {
        auto bytecode = instruction->as<OpIteratorNext>();
        auto& metadata = bytecode.metadata(m_codeBlock);
        auto* tryFastFunction = ([&] () {
            switch (instruction->width()) {
            case Narrow: return iterator_next_try_fast_narrow;
            case Wide16: return iterator_next_try_fast_wide16;
            case Wide32: return iterator_next_try_fast_wide32;
            default: RELEASE_ASSERT_NOT_REACHED();
            }
        })();

        emitGetVirtualRegister(bytecode.m_next, regT0);
        Jump genericCase = branchIfNotEmpty(regT0);
        setupArguments<decltype(tryFastFunction)>(instruction, &metadata);
        appendCallWithExceptionCheck(tryFastFunction);
        Jump fastCase = branch32(NotEqual, GPRInfo::returnValueGPR2, TrustedImm32(static_cast<uint32_t>(IterationMode::Generic)));

        genericCase.link(this);
        or8(TrustedImm32(static_cast<uint8_t>(IterationMode::Generic)), AbsoluteAddress(&metadata.m_iterationMetadata.seenModes));
        compileOpCall<OpIteratorNext>(instruction, m_callLinkInfoIndex++);
        advanceToNextCheckpoint();
        // call result ({ done, value } JSObject) in regT0

        GPRReg valueGPR = regT0;
        GPRReg iterResultGPR = regT2;
        GPRReg doneGPR = regT1;
        // iterResultGPR will get trashed by the first get by id below.
        move(valueGPR, iterResultGPR);

        {
            emitJumpSlowCaseIfNotJSCell(iterResultGPR);

            RegisterSet preservedRegs = RegisterSet::stubUnavailableRegisters();
            preservedRegs.add(valueGPR);
            JITGetByIdGenerator gen(
                m_codeBlock, CodeOrigin(m_bytecodeIndex), CallSiteIndex(BytecodeIndex(m_bytecodeIndex.offset())), preservedRegs,
                CacheableIdentifier::createFromImmortalIdentifier(vm().propertyNames->next.impl()), JSValueRegs(iterResultGPR), JSValueRegs(doneGPR), AccessType::GetById);
            gen.generateFastPath(*this);
            addSlowCase(gen.slowPathJump());
            m_getByIds.append(gen);

            emitValueProfilingSite(metadata);
            emitPutVirtualRegister(bytecode.m_done, doneGPR);
            advanceToNextCheckpoint();
        }

        {
            GPRReg scratch1 = regT2;
            GPRReg scratch2 = regT3;
            const bool shouldCheckMasqueradesAsUndefined = false;
            JumpList iterationDone = branchIfTruthy(vm(), JSValueRegs(doneGPR), scratch1, scratch2, fpRegT0, fpRegT1, shouldCheckMasqueradesAsUndefined, m_codeBlock->globalObject());

            JITGetByIdGenerator gen(
                m_codeBlock, CodeOrigin(m_bytecodeIndex), CallSiteIndex(BytecodeIndex(m_bytecodeIndex.offset())), RegisterSet::stubUnavailableRegisters(),
                CacheableIdentifier::createFromImmortalIdentifier(vm().propertyNames->value.impl()), JSValueRegs(valueGPR), JSValueRegs(valueGPR), AccessType::GetById);
            gen.generateFastPath(*this);
            addSlowCase(gen.slowPathJump());
            m_getByIds.append(gen);

            emitValueProfilingSite(metadata);
            emitPutVirtualRegister(bytecode.m_value, valueGPR);

            iterationDone.link(this);
        }

        fastCase.link(this);
    }

    void JIT::emitSlow_op_iterator_next(const Instruction* instruction, Vector<SlowCaseEntry>::iterator& iter)
    {
        linkAllSlowCases(iter);
        compileOpCallSlowCase<OpIteratorNext>(instruction, iter, m_callLinkInfoIndex++);
        emitJumpSlowToHotForCheckpoint(jump());

        auto bytecode = instruction->as<OpIteratorNext>();
        {
            VirtualRegister doneVReg = bytecode.m_done;
            GPRReg iterResultGPR = regT2;

            linkAllSlowCases(iter);
            JumpList notObject;
            notObject.append(branchIfNotCell(iterResultGPR));
            notObject.append(branchIfNotObject(iterResultGPR));

            UniquedStringImpl* ident = vm().propertyNames->done.impl();
            JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++];

            Label coldPathBegin = label();

            Call call = callOperationWithProfile(bytecode.metadata(m_codeBlock), operationGetByIdOptimize, doneVReg, TrustedImmPtr(m_codeBlock->globalObject()), gen.stubInfo(), iterResultGPR, CacheableIdentifier::createFromImmortalIdentifier(ident).rawBits());

            gen.reportSlowPathCall(coldPathBegin, call);
            emitGetVirtualRegister(doneVReg, regT1);
            emitGetVirtualRegister(bytecode.m_value, regT0);
            emitJumpSlowToHotForCheckpoint(jump());

            notObject.link(this);
            callOperation(operationThrowIteratorResultIsNotObject, TrustedImmPtr(m_codeBlock->globalObject()));
        }

        {
            linkAllSlowCases(iter);
            VirtualRegister valueVReg = bytecode.m_value;
            GPRReg iterResultGPR = regT0;

            UniquedStringImpl* ident = vm().propertyNames->value.impl();
            JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++];

            Label coldPathBegin = label();

            Call call = callOperationWithProfile(bytecode.metadata(m_codeBlock), operationGetByIdOptimize, valueVReg, TrustedImmPtr(m_codeBlock->globalObject()), gen.stubInfo(), iterResultGPR, CacheableIdentifier::createFromImmortalIdentifier(ident).rawBits());

            gen.reportSlowPathCall(coldPathBegin, call);
        }
    }

    } // namespace JSC
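The generic path of `op_iterator_next` above performs, in order: the call to `next`, a check that the result is an object, a cached get of `.done`, a truthiness branch, and a cached get of `.value`. As a hedged JavaScript sketch of that protocol (a model of the observable behavior, not the JIT's actual code):

```javascript
// Model of op_iterator_next's generic path: call next(), require an object
// result, read .done (first get-by-id checkpoint), and read .value only when
// iteration has not finished (second get-by-id checkpoint).
function iteratorNextGeneric(iterator, next) {
  const result = next.call(iterator);
  if (result === null || typeof result !== "object")
    throw new TypeError("Iterator result interface is not an object.");
  const done = Boolean(result.done);
  const value = done ? undefined : result.value;
  return { done, value };
}

const iter = [1, 2][Symbol.iterator]();
const step = iteratorNextGeneric(iter, iter.next);
```
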
  • trunk/Source/JavaScriptCore/jit/JITCall32_64.cpp

    r259715 → r260323

     }

    +void JIT::emit_op_iterator_open(const Instruction*)
    +{
    +    UNREACHABLE_FOR_PLATFORM();
    +}
    +
    +void JIT::emitSlow_op_iterator_open(const Instruction*, Vector<SlowCaseEntry>::iterator&)
    +{
    +    UNREACHABLE_FOR_PLATFORM();
    +}
    +
    +void JIT::emit_op_iterator_next(const Instruction*)
    +{
    +    UNREACHABLE_FOR_PLATFORM();
    +}
    +
    +void JIT::emitSlow_op_iterator_next(const Instruction*, Vector<SlowCaseEntry>::iterator&)
    +{
    +    UNREACHABLE_FOR_PLATFORM();
    +}
    +
     } // namespace JSC
  • trunk/Source/JavaScriptCore/jit/JITInlines.h

    r259676 → r260323

     #if ENABLE(JIT)
    +#include "BytecodeOperandsForCheckpoint.h"
     #include "CommonSlowPathsInlines.h"
    +#include "JIT.h"
     #include "JSCInlines.h"
    @@ ... @@
     ALWAYS_INLINE void JIT::updateTopCallFrame()
     {
    -    uint32_t locationBits = CallSiteIndex(m_bytecodeIndex).bits();
    +    uint32_t locationBits = CallSiteIndex(m_bytecodeIndex.offset()).bits();
         store32(TrustedImm32(locationBits), tagFor(CallFrameSlot::argumentCountIncludingThis));
    @@ ... @@
             return true;
         return false;
     }
    +
    +inline void JIT::advanceToNextCheckpoint()
    +{
    +    ASSERT_WITH_MESSAGE(m_bytecodeIndex, "This method should only be called during hot/cold path generation, so that m_bytecodeIndex is set");
    +    ASSERT(m_codeBlock->instructionAt(m_bytecodeIndex)->hasCheckpoints());
    +    m_bytecodeIndex = BytecodeIndex(m_bytecodeIndex.offset(), m_bytecodeIndex.checkpoint() + 1);
    +
    +    auto result = m_checkpointLabels.add(m_bytecodeIndex, label());
    +    ASSERT_UNUSED(result, result.isNewEntry);
    +}
    +
    +inline void JIT::emitJumpSlowToHotForCheckpoint(Jump jump)
    +{
    +    ASSERT_WITH_MESSAGE(m_bytecodeIndex, "This method should only be called during hot/cold path generation, so that m_bytecodeIndex is set");
    +    ASSERT(m_codeBlock->instructionAt(m_bytecodeIndex)->hasCheckpoints());
    +    m_bytecodeIndex = BytecodeIndex(m_bytecodeIndex.offset(), m_bytecodeIndex.checkpoint() + 1);
    +
    +    auto iter = m_checkpointLabels.find(m_bytecodeIndex);
    +    ASSERT(iter != m_checkpointLabels.end());
    +    jump.linkTo(iter->value, this);
    +}
    @@ ... @@
         if (!shouldEmitProfiling())
             return;
    -    emitValueProfilingSite(metadata.m_profile);
    +    emitValueProfilingSite(valueProfileFor(metadata, m_bytecodeIndex.checkpoint()));
     }
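The checkpoint helpers above pair up: the hot path registers a label keyed by (bytecode offset, checkpoint index) via `advanceToNextCheckpoint`, and the cold path later looks that label up to jump back into the middle of the instruction. A minimal sketch of the bookkeeping, with hypothetical names (the real implementation keys a `HashMap<BytecodeIndex, Label>`):

```javascript
// Model of m_checkpointLabels: record a label per (offset, checkpoint) on the
// hot path, resolve it on the cold path to link the slow-to-hot jump.
class CheckpointLabels {
  constructor() { this.labels = new Map(); }
  key(offset, checkpoint) { return `${offset}:${checkpoint}`; }
  record(offset, checkpoint, label) { // hot path: advanceToNextCheckpoint()
    const k = this.key(offset, checkpoint);
    if (this.labels.has(k)) throw new Error("checkpoint label registered twice");
    this.labels.set(k, label);
  }
  resolve(offset, checkpoint) { // cold path: emitJumpSlowToHotForCheckpoint()
    const label = this.labels.get(this.key(offset, checkpoint));
    if (label === undefined) throw new Error("no label for checkpoint");
    return label;
  }
}

const labels = new CheckpointLabels();
labels.record(10, 1, "hot_10_1"); // label emitted right after checkpoint 1
const target = labels.resolve(10, 1); // slow path jumps back here
```
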
  • trunk/Source/JavaScriptCore/jit/JITOperations.cpp

    r259786 → r260323

     }

    +void JIT_OPERATION operationThrowIteratorResultIsNotObject(JSGlobalObject* globalObject)
    +{
    +    VM& vm = globalObject->vm();
    +    CallFrame* callFrame = DECLARE_CALL_FRAME(vm);
    +    JITOperationPrologueCallFrameTracer tracer(vm, callFrame);
    +    auto scope = DECLARE_THROW_SCOPE(vm);
    +
    +    throwTypeError(globalObject, scope, "Iterator result interface is not an object."_s);
    +}
    +
     int32_t JIT_OPERATION operationCallArityCheck(JSGlobalObject* globalObject)
     {
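The TypeError that `operationThrowIteratorResultIsNotObject` raises corresponds to spec-mandated, observable behavior: an iterator whose `next()` returns a primitive aborts the for-of loop. A small self-contained demonstration:

```javascript
// An iterable whose next() returns a non-object iterator result; the spec
// requires for-of to throw a TypeError in this case.
const badIterable = {
  [Symbol.iterator]() {
    return { next() { return 42; } }; // primitive, not an IteratorResult object
  },
};

let caught = null;
try {
  for (const v of badIterable) { /* never reached */ }
} catch (e) {
  caught = e;
}
```
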
  • trunk/Source/JavaScriptCore/jit/JITOperations.h

    r259676 → r260323

     void JIT_OPERATION operationVMHandleException(VM*) WTF_INTERNAL;
     void JIT_OPERATION operationThrowStackOverflowErrorFromThunk(JSGlobalObject*) WTF_INTERNAL;
    +void JIT_OPERATION operationThrowIteratorResultIsNotObject(JSGlobalObject*) WTF_INTERNAL;

     void JIT_OPERATION operationThrowStackOverflowError(CodeBlock*) WTF_INTERNAL;
  • trunk/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp

    r259676 → r260323

     #include "ArrayConstructor.h"
     #include "BytecodeGenerator.h"
    +#include "BytecodeOperandsForCheckpoint.h"
     #include "CallFrame.h"
     #include "CheckpointOSRExitSideState.h"
    @@ ... @@
         VM& vm = codeBlock->vm(); \
         SlowPathFrameTracer tracer(vm, callFrame); \
    +    dataLogLnIf(LLINT_TRACING && Options::traceLLIntSlowPath(), "Calling slow path ", WTF_PRETTY_FUNCTION); \
         auto throwScope = DECLARE_THROW_SCOPE(vm)
    @@ ... @@
    -static void setupGetByIdPrototypeCache(JSGlobalObject* globalObject, VM& vm, CodeBlock* codeBlock, const Instruction* pc, OpGetById::Metadata& metadata, JSCell* baseCell, PropertySlot& slot, const Identifier& ident)
    +static void setupGetByIdPrototypeCache(JSGlobalObject* globalObject, VM& vm, CodeBlock* codeBlock, const Instruction* pc, GetByIdModeMetadata& metadata, JSCell* baseCell, PropertySlot& slot, const Identifier& ident)
     {
         Structure* structure = baseCell->structure(vm);
    @@ ... @@
         ConcurrentJSLocker locker(codeBlock->m_lock);
         if (slot.isUnset())
    -        metadata.m_modeMetadata.setUnsetMode(structure);
    +        metadata.setUnsetMode(structure);
         else {
             ASSERT(slot.isValue());
    -        metadata.m_modeMetadata.setProtoLoadMode(structure, offset, slot.slotBase());
    +        metadata.setProtoLoadMode(structure, offset, slot.slotBase());
         }
    @@ ... @@ (slow_path_get_by_id's body is hoisted into a reusable helper)
    -LLINT_SLOW_PATH_DECL(slow_path_get_by_id)
    -{
    -    LLINT_BEGIN();
    -    auto bytecode = pc->as<OpGetById>();
    -    auto& metadata = bytecode.metadata(codeBlock);
    -    const Identifier& ident = codeBlock->identifier(bytecode.m_property);
    -    JSValue baseValue = getOperand(callFrame, bytecode.m_base);
    +static JSValue performLLIntGetByID(const Instruction* pc, CodeBlock* codeBlock, JSGlobalObject* globalObject, JSValue baseValue, const Identifier& ident, GetByIdModeMetadata& metadata)
    +{
    +    VM& vm = globalObject->vm();
    +    auto throwScope = DECLARE_THROW_SCOPE(vm);
         PropertySlot slot(baseValue, PropertySlot::PropertySlot::InternalMethodType::Get);

         JSValue result = baseValue.get(globalObject, ident, slot);
    -    LLINT_CHECK_EXCEPTION();
    -    callFrame->uncheckedR(bytecode.m_dst) = result;
    -
    +    RETURN_IF_EXCEPTION(throwScope, { });
    +
         if (!LLINT_ALWAYS_ACCESS_SLOW
             && baseValue.isCell()
    @@ ... @@
         {
             StructureID oldStructureID;
    -        switch (metadata.m_modeMetadata.mode) {
    +        switch (metadata.mode) {
             case GetByIdMode::Default:
    -            oldStructureID = metadata.m_modeMetadata.defaultMode.structureID;
    +            oldStructureID = metadata.defaultMode.structureID;
                 break;
             case GetByIdMode::Unset:
    -            oldStructureID = metadata.m_modeMetadata.unsetMode.structureID;
    +            oldStructureID = metadata.unsetMode.structureID;
                 break;
             case GetByIdMode::ProtoLoad:
    -            oldStructureID = metadata.m_modeMetadata.protoLoadMode.structureID;
    +            oldStructureID = metadata.protoLoadMode.structureID;
                 break;
             default:
    @@ ... @@
             ConcurrentJSLocker locker(codeBlock->m_lock);
             // Start out by clearing out the old cache.
    -        metadata.m_modeMetadata.clearToDefaultModeWithoutCache();
    +        metadata.clearToDefaultModeWithoutCache();

             // Prevent the prototype cache from ever happening.
    -        metadata.m_modeMetadata.hitCountForLLIntCaching = 0;
    +        metadata.hitCountForLLIntCaching = 0;

             if (structure->propertyAccessesAreCacheable() && !structure->needImpurePropertyWatchpoint()) {
    -            metadata.m_modeMetadata.defaultMode.structureID = structure->id();
    -            metadata.m_modeMetadata.defaultMode.cachedOffset = slot.cachedOffset();
    +            metadata.defaultMode.structureID = structure->id();
    +            metadata.defaultMode.cachedOffset = slot.cachedOffset();
                 vm.heap.writeBarrier(codeBlock);
             }
    -    } else if (UNLIKELY(metadata.m_modeMetadata.hitCountForLLIntCaching && slot.isValue())) {
    +    } else if (UNLIKELY(metadata.hitCountForLLIntCaching && slot.isValue())) {
             ASSERT(slot.slotBase() != baseValue);

    -        if (!(--metadata.m_modeMetadata.hitCountForLLIntCaching))
    +        if (!(--metadata.hitCountForLLIntCaching))
                 setupGetByIdPrototypeCache(globalObject, vm, codeBlock, pc, metadata, baseCell, slot, ident);
         }
    @@ ... @@
         {
             ConcurrentJSLocker locker(codeBlock->m_lock);
    -        metadata.m_modeMetadata.setArrayLengthMode();
    -        metadata.m_modeMetadata.arrayLengthMode.arrayProfile.observeStructure(baseValue.asCell()->structure(vm));
    +        metadata.setArrayLengthMode();
    +        metadata.arrayLengthMode.arrayProfile.observeStructure(baseValue.asCell()->structure(vm));
         }
         vm.heap.writeBarrier(codeBlock);
     }

    -    LLINT_PROFILE_VALUE(result);
    -    LLINT_END();
    -}
    +    return result;
    +}
    +
    +LLINT_SLOW_PATH_DECL(slow_path_get_by_id)
    +{
    +    LLINT_BEGIN();
    +    auto bytecode = pc->as<OpGetById>();
    +    auto& metadata = bytecode.metadata(codeBlock);
    +    const Identifier& ident = codeBlock->identifier(bytecode.m_property);
    +    JSValue baseValue = getOperand(callFrame, bytecode.m_base);
    +
    +    JSValue result = performLLIntGetByID(pc, codeBlock, globalObject, baseValue, ident, metadata.m_modeMetadata);
    +    LLINT_RETURN_PROFILED(result);
    +}
    @@ ... @@ (new LLInt slow paths for the iterator bytecodes)

    LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_iterator_open_get_next);
    LLINT_SLOW_PATH_DECL(slow_path_iterator_open_get_next)
    {
        LLINT_BEGIN();

        auto bytecode = pc->as<OpIteratorOpen>();
        auto& metadata = bytecode.metadata(codeBlock);
        JSValue iterator = getOperand(callFrame, bytecode.m_iterator);
        Register& nextRegister = callFrame->uncheckedR(bytecode.m_next);

        JSValue result = performLLIntGetByID(pc, codeBlock, globalObject, iterator, vm.propertyNames->next, metadata.m_modeMetadata);
        LLINT_CHECK_EXCEPTION();
        nextRegister = result;
        bytecode.metadata(codeBlock).m_nextProfile.m_buckets[0] = JSValue::encode(result);
        LLINT_END();
    }

    LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_iterator_next_get_done);
    LLINT_SLOW_PATH_DECL(slow_path_iterator_next_get_done)
    {
        LLINT_BEGIN();

        auto bytecode = pc->as<OpIteratorNext>();
        auto& metadata = bytecode.metadata(codeBlock);
        // We use m_value to hold the iterator return value since it's either not live past this bytecode or it's going to be filled later.
        JSValue iteratorReturn = getOperand(callFrame, bytecode.m_value);
        Register& doneRegister = callFrame->uncheckedR(bytecode.m_done);

        if (!iteratorReturn.isObject())
            LLINT_THROW(createTypeError(globalObject, "Iterator result interface is not an object."_s));

        JSValue result = performLLIntGetByID(pc, codeBlock, globalObject, iteratorReturn, vm.propertyNames->done, metadata.m_doneModeMetadata);
        LLINT_CHECK_EXCEPTION();
        doneRegister = result;
        bytecode.metadata(codeBlock).m_doneProfile.m_buckets[0] = JSValue::encode(result);
        LLINT_END();
    }

    LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_iterator_next_get_value);
    LLINT_SLOW_PATH_DECL(slow_path_iterator_next_get_value)
    {
        LLINT_BEGIN();

        auto bytecode = pc->as<OpIteratorNext>();
        auto& metadata = bytecode.metadata(codeBlock);
        // We use m_value to hold the iterator return value tmp since it's either not live past this bytecode or it's going to be filled later.
        Register& valueRegister = callFrame->uncheckedR(bytecode.m_value);
        JSValue iteratorReturn = valueRegister.jsValue();

        JSValue result = performLLIntGetByID(pc, codeBlock, globalObject, iteratorReturn, vm.propertyNames->value, metadata.m_valueModeMetadata);
        LLINT_CHECK_EXCEPTION();
        valueRegister = result;
        bytecode.metadata(codeBlock).m_valueProfile.m_buckets[0] = JSValue::encode(result);
        LLINT_END();
    }
    @@ ... @@
     template<typename Op>
    -inline SlowPathReturnType genericCall(CodeBlock* codeBlock, CallFrame* callFrame, Op&& bytecode, CodeSpecializationKind kind)
    +inline SlowPathReturnType genericCall(CodeBlock* codeBlock, CallFrame* callFrame, Op&& bytecode, CodeSpecializationKind kind, unsigned checkpointIndex = 0)
     {
         // This needs to:
    @@ ... @@
         // - Return a tuple of machine code address to call and the new call frame.

    -    JSValue calleeAsValue = getOperand(callFrame, bytecode.m_callee);
    -
    -    CallFrame* calleeFrame = callFrame - bytecode.m_argv;
    -
    -    calleeFrame->setArgumentCountIncludingThis(bytecode.m_argc);
    +    JSValue calleeAsValue = getOperand(callFrame, calleeFor(bytecode, checkpointIndex));
    +
    +    CallFrame* calleeFrame = callFrame - stackOffsetInRegistersForCall(bytecode, checkpointIndex);
    +
    +    calleeFrame->setArgumentCountIncludingThis(argumentCountIncludingThisFor(bytecode, checkpointIndex));
         calleeFrame->uncheckedR(VirtualRegister(CallFrameSlot::callee)) = calleeAsValue;
         calleeFrame->setCallerFrame(callFrame);

         auto& metadata = bytecode.metadata(codeBlock);
    -    return setUpCall(calleeFrame, kind, calleeAsValue, &metadata.m_callLinkInfo);
    +    return setUpCall(calleeFrame, kind, calleeAsValue, &callLinkInfoFor(metadata, checkpointIndex));
     }
    @@ ... @@
         UNUSED_PARAM(globalObject);
         RELEASE_AND_RETURN(throwScope, genericCall(codeBlock, callFrame, pc->as<OpConstruct>(), CodeForConstruct));
     }

    +LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_iterator_open_call);
    +LLINT_SLOW_PATH_DECL(slow_path_iterator_open_call)
    +{
    +    LLINT_BEGIN();
    +    UNUSED_PARAM(globalObject);
    +
    +//    dataLogLn("Calling slow open_call");
    +
    +    RELEASE_AND_RETURN(throwScope, genericCall(codeBlock, callFrame, pc->as<OpIteratorOpen>(), CodeForCall, OpIteratorOpen::symbolCall));
    +}
    +
    +LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_iterator_next_call);
    +LLINT_SLOW_PATH_DECL(slow_path_iterator_next_call)
    +{
    +    LLINT_BEGIN();
    +    UNUSED_PARAM(globalObject);
    +
    +//    dataLogLn("Calling slow next_call");
    +
    +    RELEASE_AND_RETURN(throwScope, genericCall(codeBlock, callFrame, pc->as<OpIteratorNext>(), CodeForCall, OpIteratorNext::computeNext));
    +}
    @@ ... @@ (checkpoint OSR-exit handlers added before dispatchToNextInstruction)

    static void handleIteratorOpenCheckpoint(VM& vm, CallFrame* callFrame, JSGlobalObject* globalObject, const OpIteratorOpen& bytecode)
    {
        auto scope = DECLARE_THROW_SCOPE(vm);
        JSValue iterator = callFrame->uncheckedR(bytecode.m_iterator).jsValue();
        if (!iterator.isObject()) {
            throwVMTypeError(globalObject, scope, "Iterator result interface is not an object."_s);
            return;
        }

        JSValue next = iterator.get(globalObject, vm.propertyNames->next);
        RETURN_IF_EXCEPTION(scope, void());
        callFrame->uncheckedR(bytecode.m_next) = next;
    }

    static void handleIteratorNextCheckpoint(VM& vm, CallFrame* callFrame, JSGlobalObject* globalObject, const OpIteratorNext& bytecode, CheckpointOSRExitSideState& sideState)
    {
        auto scope = DECLARE_THROW_SCOPE(vm);
        unsigned checkpointIndex = sideState.bytecodeIndex.checkpoint();

        auto& valueRegister = callFrame->uncheckedR(bytecode.m_value);
        auto iteratorResultObject = sideState.tmps[OpIteratorNext::nextResult];
        auto next = callFrame->uncheckedR(bytecode.m_next).jsValue();

        RELEASE_ASSERT_WITH_MESSAGE(next, "We should not OSR exit to a checkpoint for fast cases.");

        auto& doneRegister = callFrame->uncheckedR(bytecode.m_done);
        if (checkpointIndex == OpIteratorNext::getDone) {
            doneRegister = iteratorResultObject.get(globalObject, vm.propertyNames->done);
            RETURN_IF_EXCEPTION(scope, void());
        }

        scope.release();
        if (doneRegister.jsValue().toBoolean(globalObject))
            valueRegister = jsUndefined();
        else
            valueRegister = iteratorResultObject.get(globalObject, vm.propertyNames->value);
    }

     inline SlowPathReturnType dispatchToNextInstruction(CodeBlock* codeBlock, InstructionStream::Ref pc)
     {
    @@ ... @@
             break;

    +    case op_iterator_open: {
    +        handleIteratorOpenCheckpoint(vm, callFrame, globalObject, pc->as<OpIteratorOpen>());
    +        break;
    +    }
    +    case op_iterator_next: {
    +        handleIteratorNextCheckpoint(vm, callFrame, globalObject, pc->as<OpIteratorNext>(), *sideState.get());
    +        break;
    +    }
    +
         default:
             RELEASE_ASSERT_NOT_REACHED();
    @@ ... @@
     }

    +extern "C" void llint_dump_value(EncodedJSValue value);
    +extern "C" void llint_dump_value(EncodedJSValue value)
    +{
    +    dataLogLn(JSValue::decode(value));
    +}
    +
     extern "C" NO_RETURN_DUE_TO_CRASH void llint_crash()
     {
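`handleIteratorNextCheckpoint` above resumes an `op_iterator_next` that OSR-exited mid-instruction: if the exit landed at the `getDone` checkpoint, `.done` still has to be fetched from the iterator result object; in either case `.value` is only fetched when the loop is not finished. A hedged JavaScript sketch of that resume logic (the string checkpoint names stand in for `OpIteratorNext`'s checkpoint constants):

```javascript
// Model of resuming op_iterator_next at a checkpoint: fetch .done if we
// exited before it was read, then fetch .value only when not done.
function resumeIteratorNext(iterResultObject, checkpoint, doneSoFar) {
  let done = doneSoFar;
  if (checkpoint === "getDone") // mirrors exiting at the done-fetch checkpoint
    done = iterResultObject.done;
  const value = done ? undefined : iterResultObject.value;
  return { done: Boolean(done), value };
}

const resumed = resumeIteratorNext({ done: false, value: "a" }, "getDone", undefined);
```
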
  • trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm

    r259866 → r260323

     end

    -macro callTargetFunction(opcodeName, size, opcodeStruct, dispatch, callee, callPtrTag)
    +macro callTargetFunction(opcodeName, size, opcodeStruct, valueProfileName, dstVirtualRegister, dispatch, callee, callPtrTag)
         if C_LOOP or C_LOOP_WIN
             cloopCallJSFunction callee
    @@ ... @@
             # crashes.
             restoreStackPointerAfterCall()
    -        dispatchAfterCall(size, opcodeStruct, dispatch)
    +        dispatchAfterCall(size, opcodeStruct, valueProfileName, dstVirtualRegister, dispatch)
         end
         defineOSRExitReturnLabel(opcodeName, size)
         restoreStackPointerAfterCall()
    -    dispatchAfterCall(size, opcodeStruct, dispatch)
    +    dispatchAfterCall(size, opcodeStruct, valueProfileName, dstVirtualRegister, dispatch)
     end
    @@ ... @@
    -macro slowPathForCall(opcodeName, size, opcodeStruct, dispatch, slowPath, prepareCall)
    +macro slowPathForCommonCall(opcodeName, size, opcodeStruct, dispatch, slowPath, prepareCall)
    +    slowPathForCall(opcodeName, size, opcodeStruct, m_profile, m_dst, dispatch, slowPath, prepareCall)
    +end
    +
    +macro slowPathForCall(opcodeName, size, opcodeStruct, valueProfileName, dstVirtualRegister, dispatch, slowPath, prepareCall)
         callCallSlowPath(
             slowPath,
    @@ ... @@
                 prepareCall(callee, t2, t3, t4, SlowPathPtrTag)
         .dontUpdateSP:
    -        callTargetFunction(%opcodeName%_slow, size, opcodeStruct, dispatch, callee, SlowPathPtrTag)
    +        callTargetFunction(%opcodeName%_slow, size, opcodeStruct, valueProfileName, dstVirtualRegister, dispatch, callee, SlowPathPtrTag)
             end)
     end
    @@ ... @@
             end
         end
    -    slowPathForCall(opcodeName, size, opcodeStruct, dispatch, slowPath, prepareCall)
    +    slowPathForCommonCall(opcodeName, size, opcodeStruct, dispatch, slowPath, prepareCall)
     end
    @@ ... @@
     _llint_op_call_eval:
    -    slowPathForCall(
    +    slowPathForCommonCall(
             op_call_eval_narrow,
             narrow,
    @@ ... @@
     _llint_op_call_eval_wide16:
    -    slowPathForCall(
    +    slowPathForCommonCall(
             op_call_eval_wide16,
             wide16,
    @@ ... @@
     _llint_op_call_eval_wide32:
    -    slowPathForCall(
    +    slowPathForCommonCall(
             op_call_eval_wide32,
             wide32,
    @@ ... @@
     commonOp(llint_generic_return_point, macro () end, macro (size)
    -    dispatchAfterCall(size, OpCallEval, macro ()
    +    dispatchAfterCall(size, OpCallEval, m_profile, m_dst, macro ()
             dispatchOp(size, op_call_eval)
         end)
  • trunk/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm

    r259866 → r260323

    @@ -68 +68 @@
     
         metadata(t5, t2)
    -    valueProfile(opcodeStruct, t5, t1, t0)
    +    valueProfile(opcodeStruct, m_profile, t5, t1, t0)
         get(m_dst, t2)
         storei t1, TagOffset[cfr, t2, 8]
    @@ -78 +78 @@
     
     # After calling, calling bytecode is claiming input registers are not used.
    -macro dispatchAfterCall(size, opcodeStruct, dispatch)
    +macro dispatchAfterCall(size, opcodeStruct, valueProfileName, dstVirtualRegister, dispatch)
         loadi ArgumentCountIncludingThis + TagOffset[cfr], PC
         loadp CodeBlock[cfr], PB
         loadp CodeBlock::m_instructionsRawPointer[PB], PB
    -    get(size, opcodeStruct, m_dst, t3)
    +    get(size, opcodeStruct, dstVirtualRegister, t3)
         storei r1, TagOffset[cfr, t3, 8]
         storei r0, PayloadOffset[cfr, t3, 8]
         metadata(size, opcodeStruct, t2, t3)
    -    valueProfile(opcodeStruct, t2, r1, r0)
    +    valueProfile(opcodeStruct, valueProfileName, t2, r1, r0)
         dispatch()
    +
     end
     
    @@ -633 +634 @@
     end
     
    -macro valueProfile(opcodeStruct, metadata, tag, payload)
    +macro valueProfile(opcodeStruct, profileName, metadata, tag, payload)
         storei tag, %opcodeStruct%::Metadata::m_profile.m_buckets + TagOffset[metadata]
    -    storei payload, %opcodeStruct%::Metadata::m_profile.m_buckets + PayloadOffset[metadata]
    +    storei payload, %opcodeStruct%::Metadata::%profileName%.m_buckets + PayloadOffset[metadata]
     end
     
    @@ -1379 +1380 @@
         bineq JSCell::m_structureID[t3], t1, .opGetByIdDirectSlow
         loadPropertyAtVariableOffset(t2, t3, t0, t1)
    -    valueProfile(OpGetByIdDirect, t5, t0, t1)
    +    valueProfile(OpGetByIdDirect, m_profile, t5, t0, t1)
         return(t0, t1)
     
    @@ -1401 +1402 @@
         loadp OpGetById::Metadata::m_modeMetadata.protoLoadMode.cachedSlot[t5], t3
         loadPropertyAtVariableOffset(t2, t3, t0, t1)
    -    valueProfile(OpGetById, t5, t0, t1)
    +    valueProfile(OpGetById, m_profile, t5, t0, t1)
         return(t0, t1)
     
    @@ -1414 +1415 @@
         loadi -sizeof IndexingHeader + IndexingHeader::u.lengths.publicLength[t0], t0
         bilt t0, 0, .opGetByIdSlow
    -    valueProfile(OpGetById, t5, Int32Tag, t0)
    +    valueProfile(OpGetById, m_profile, t5, Int32Tag, t0)
         return(Int32Tag, t0)
     
    @@ -1422 +1423 @@
         loadConstantOrVariablePayload(size, t0, CellTag, t3, .opGetByIdSlow)
         bineq JSCell::m_structureID[t3], t1, .opGetByIdSlow
    -    valueProfile(OpGetById, t5, UndefinedTag, 0)
    +    valueProfile(OpGetById, m_profile, t5, UndefinedTag, 0)
         return(UndefinedTag, 0)
     
    @@ -1431 +1432 @@
         bineq JSCell::m_structureID[t3], t1, .opGetByIdSlow
         loadPropertyAtVariableOffset(t2, t3, t0, t1)
    -    valueProfile(OpGetById, t5, t0, t1)
    +    valueProfile(OpGetById, m_profile, t5, t0, t1)
         return(t0, t1)
     
    @@ -1441 +1442 @@
         getterSetterOSRExitReturnPoint(op_get_by_id, size)
         metadata(t2, t3)
    -    valueProfile(OpGetById, t2, r1, r0)
    +    valueProfile(OpGetById, m_profile, t2, r1, r0)
         return(r1, r0)
     
    @@ -1519 +1520 @@
             storei Int32Tag, TagOffset[cfr, scratch, 8]
             storei resultPayload, PayloadOffset[cfr, scratch, 8]
    -        valueProfile(OpGetByVal, t5, Int32Tag, resultPayload)
    +        valueProfile(OpGetByVal, m_profile, t5, Int32Tag, resultPayload)
             dispatch()
         end
    @@ -1528 +1529 @@
             storei scratch3, TagOffset[cfr, scratch1, 8]
             storei scratch2, PayloadOffset[cfr, scratch1, 8]
    -        valueProfile(OpGetByVal, t5, scratch3, scratch2)
    +        valueProfile(OpGetByVal, m_profile, t5, scratch3, scratch2)
             dispatch()
         end
    @@ -1573 +1574 @@
         storei t2, TagOffset[cfr, t0, 8]
         storei t1, PayloadOffset[cfr, t0, 8]
    -    valueProfile(OpGetByVal, t5, t2, t1)
    +    valueProfile(OpGetByVal, m_profile, t5, t2, t1)
         dispatch()
     
    @@ -1586 +1587 @@
         getterSetterOSRExitReturnPoint(op_get_by_val, size)
         metadata(t2, t3)
    -    valueProfile(OpGetByVal, t2, r1, r0)
    +    valueProfile(OpGetByVal, m_profile, t2, r1, r0)
         return(r1, r0)
     
    @@ -1958 +1959 @@
             move t3, sp
             prepareCall(%opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], t2, t3, t4, JSEntryPtrTag)
    -        callTargetFunction(opcodeName, size, opcodeStruct, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], JSEntryPtrTag)
    +        callTargetFunction(opcodeName, size, opcodeStruct, m_profile, m_dst, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], JSEntryPtrTag)
     
         .opCallSlow:
    -        slowPathForCall(opcodeName, size, opcodeStruct, dispatch, slowPath, prepareCall)
    +        slowPathForCall(opcodeName, size, opcodeStruct, m_profile, m_dst, dispatch, slowPath, prepareCall)
         end)
     end
    @@ -2289 +2290 @@
            loadp OpGetFromScope::Metadata::m_operand[t5], t3
            loadPropertyAtVariableOffset(t3, t0, t1, t2)
    -       valueProfile(OpGetFromScope, t5, t1, t2)
    +       valueProfile(OpGetFromScope, m_profile, t5, t1, t2)
            return(t1, t2)
         end
    @@ -2298 +2299 @@
            loadp PayloadOffset[t0], t2
            tdzCheckIfNecessary(t1)
    -       valueProfile(OpGetFromScope, t5, t1, t2)
    +       valueProfile(OpGetFromScope, m_profile, t5, t1, t2)
            return(t1, t2)
         end
    @@ -2306 +2307 @@
            loadp JSLexicalEnvironment_variables + TagOffset[t0, t3, 8], t1
            loadp JSLexicalEnvironment_variables + PayloadOffset[t0, t3, 8], t2
    -       valueProfile(OpGetFromScope, t5, t1, t2)
    +       valueProfile(OpGetFromScope, m_profile, t5, t1, t2)
            return(t1, t2)
         end
    @@ -2595 +2596 @@
     end)
     
    +llintOp(op_iterator_open, OpIteratorOpen, macro (size, get, dispatch)
    +    defineOSRExitReturnLabel(op_iterator_open, size)
    +    break
    +end)
    +
    +llintOp(op_iterator_next, OpIteratorNext, macro (size, get, dispatch)
    +    defineOSRExitReturnLabel(op_iterator_next, size)
    +    break
    +end)
     
     llintOpWithProfile(op_get_internal_field, OpGetInternalField, macro (size, get, dispatch, return)
  • trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm

    r259866 → r260323

    @@ -69 +69 @@
         move value, t3
         metadata(t1, t2)
    -    valueProfile(opcodeStruct, t1, t3)
    +    valueProfile(opcodeStruct, m_profile, t1, t3)
         get(m_dst, t1)
         storeq t3, [cfr, t1, 8]
    @@ -76 +76 @@
     end
     
    -macro valueProfile(opcodeStruct, metadata, value)
    -    storeq value, %opcodeStruct%::Metadata::m_profile.m_buckets[metadata]
    +macro valueProfile(opcodeStruct, profileName, metadata, value)
    +    storeq value, %opcodeStruct%::Metadata::%profileName%.m_buckets[metadata]
     end
     
     # After calling, calling bytecode is claiming input registers are not used.
    -macro dispatchAfterCall(size, opcodeStruct, dispatch)
    +macro dispatchAfterCall(size, opcodeStruct, valueProfileName, dstVirtualRegister, dispatch)
         loadPC()
         loadp CodeBlock[cfr], PB
         loadp CodeBlock::m_instructionsRawPointer[PB], PB
    -    get(size, opcodeStruct, m_dst, t1)
    +    get(size, opcodeStruct, dstVirtualRegister, t1)
         storeq r0, [cfr, t1, 8]
         metadata(size, opcodeStruct, t2, t1)
    -    valueProfile(opcodeStruct, t2, r0)
    +    valueProfile(opcodeStruct, valueProfileName, t2, r0)
         dispatch()
     end
    @@ -1341 +1341 @@
         loadi OpGetByIdDirect::Metadata::m_offset[t2], t1
         loadPropertyAtVariableOffset(t1, t3, t0)
    -    valueProfile(OpGetByIdDirect, t2, t0)
    +    valueProfile(OpGetByIdDirect, m_profile, t2, t0)
         return(t0)
     
    @@ -1349 +1349 @@
     end)
     
    -llintOpWithMetadata(op_get_by_id, OpGetById, macro (size, get, dispatch, metadata, return)
    +# The base object is expected in t3
    +macro performGetByIDHelper(opcodeStruct, modeMetadataName, valueProfileName, slowLabel, size, metadata, return)
         metadata(t2, t1)
    -    loadb OpGetById::Metadata::m_modeMetadata.mode[t2], t1
    -    get(m_base, t0)
    -    loadConstantOrVariableCell(size, t0, t3, .opGetByIdSlow)
    +    loadb %opcodeStruct%::Metadata::%modeMetadataName%.mode[t2], t1
     
     .opGetByIdDefault:
         bbneq t1, constexpr GetByIdMode::Default, .opGetByIdProtoLoad
         loadi JSCell::m_structureID[t3], t1
    -    loadi OpGetById::Metadata::m_modeMetadata.defaultMode.structureID[t2], t0
    -    bineq t0, t1, .opGetByIdSlow
    -    loadis OpGetById::Metadata::m_modeMetadata.defaultMode.cachedOffset[t2], t1
    +    loadi %opcodeStruct%::Metadata::%modeMetadataName%.defaultMode.structureID[t2], t0
    +    bineq t0, t1, slowLabel
    +    loadis %opcodeStruct%::Metadata::%modeMetadataName%.defaultMode.cachedOffset[t2], t1
         loadPropertyAtVariableOffset(t1, t3, t0)
    -    valueProfile(OpGetById, t2, t0)
    +    valueProfile(opcodeStruct, valueProfileName, t2, t0)
         return(t0)
     
    @@ -1368 +1367 @@
         bbneq t1, constexpr GetByIdMode::ProtoLoad, .opGetByIdArrayLength
         loadi JSCell::m_structureID[t3], t1
    -    loadi OpGetById::Metadata::m_modeMetadata.protoLoadMode.structureID[t2], t3
    -    bineq t3, t1, .opGetByIdSlow
    -    loadis OpGetById::Metadata::m_modeMetadata.protoLoadMode.cachedOffset[t2], t1
    -    loadp OpGetById::Metadata::m_modeMetadata.protoLoadMode.cachedSlot[t2], t3
    +    loadi %opcodeStruct%::Metadata::%modeMetadataName%.protoLoadMode.structureID[t2], t3
    +    bineq t3, t1, slowLabel
    +    loadis %opcodeStruct%::Metadata::%modeMetadataName%.protoLoadMode.cachedOffset[t2], t1
    +    loadp %opcodeStruct%::Metadata::%modeMetadataName%.protoLoadMode.cachedSlot[t2], t3
         loadPropertyAtVariableOffset(t1, t3, t0)
    -    valueProfile(OpGetById, t2, t0)
    +    valueProfile(opcodeStruct, valueProfileName, t2, t0)
         return(t0)
     
    @@ -1379 +1378 @@
         bbneq t1, constexpr GetByIdMode::ArrayLength, .opGetByIdUnset
         move t3, t0
    -    arrayProfile(OpGetById::Metadata::m_modeMetadata.arrayLengthMode.arrayProfile, t0, t2, t5)
    -    btiz t0, IsArray, .opGetByIdSlow
    -    btiz t0, IndexingShapeMask, .opGetByIdSlow
    +    arrayProfile(%opcodeStruct%::Metadata::%modeMetadataName%.arrayLengthMode.arrayProfile, t0, t2, t5)
    +    btiz t0, IsArray, slowLabel
    +    btiz t0, IndexingShapeMask, slowLabel
         loadCagedJSValue(JSObject::m_butterfly[t3], t0, t1)
         loadi -sizeof IndexingHeader + IndexingHeader::u.lengths.publicLength[t0], t0
    -    bilt t0, 0, .opGetByIdSlow
    +    bilt t0, 0, slowLabel
         orq numberTag, t0
    -    valueProfile(OpGetById, t2, t0)
    +    valueProfile(opcodeStruct, valueProfileName, t2, t0)
         return(t0)
     
     .opGetByIdUnset:
         loadi JSCell::m_structureID[t3], t1
    -    loadi OpGetById::Metadata::m_modeMetadata.unsetMode.structureID[t2], t0
    -    bineq t0, t1, .opGetByIdSlow
    -    valueProfile(OpGetById, t2, ValueUndefined)
    +    loadi %opcodeStruct%::Metadata::%modeMetadataName%.unsetMode.structureID[t2], t0
    +    bineq t0, t1, slowLabel
    +    valueProfile(opcodeStruct, valueProfileName, t2, ValueUndefined)
         return(ValueUndefined)
    +
    +end
    +
    +llintOpWithMetadata(op_get_by_id, OpGetById, macro (size, get, dispatch, metadata, return)
    +    get(m_base, t0)
    +    loadConstantOrVariableCell(size, t0, t3, .opGetByIdSlow)
    +    performGetByIDHelper(OpGetById, m_modeMetadata, m_profile, .opGetByIdSlow, size, metadata, return)
     
     .opGetByIdSlow:
    @@ -1403 +1409 @@
         getterSetterOSRExitReturnPoint(op_get_by_id, size)
         metadata(t2, t3)
    -    valueProfile(OpGetById, t2, r0)
    +    valueProfile(OpGetById, m_profile, t2, r0)
         return(r0)
     
    @@ -1498 +1504 @@
            get(m_dst, scratch)
            storeq result, [cfr, scratch, 8]
    -       valueProfile(OpGetByVal, t5, result)
    +       valueProfile(OpGetByVal, m_profile, t5, result)
            dispatch()
         end
    @@ -1559 +1565 @@
     .opGetByValDone:
         storeq t2, [cfr, t0, 8]
    -    valueProfile(OpGetByVal, t5, t2)
    +    valueProfile(OpGetByVal, m_profile, t5, t2)
         dispatch()
     
    @@ -1572 +1578 @@
         getterSetterOSRExitReturnPoint(op_get_by_val, size)
         metadata(t5, t2)
    -    valueProfile(OpGetByVal, t5, r0)
    +    valueProfile(OpGetByVal, m_profile, t5, r0)
         return(r0)
     
    @@ -1949 +1955 @@
     end
     
    +macro callHelper(opcodeName, slowPath, opcodeStruct, valueProfileName, dstVirtualRegister, prepareCall, size, dispatch, metadata, getCallee, getArgumentStart, getArgumentCountIncludingThis)
    +    metadata(t5, t0)
    +    getCallee(t0)
    +
    +    loadp %opcodeStruct%::Metadata::m_callLinkInfo.m_calleeOrLastSeenCalleeWithLinkBit[t5], t2
    +    loadConstantOrVariable(size, t0, t3)
    +    bqneq t3, t2, .opCallSlow
    +
    +    getArgumentStart(t3)
    +    lshifti 3, t3
    +    negp t3
    +    addp cfr, t3
    +    storeq t2, Callee[t3]
    +    getArgumentCountIncludingThis(t2)
    +    storePC()
    +    storei t2, ArgumentCountIncludingThis + PayloadOffset[t3]
    +    move t3, sp
    +    prepareCall(%opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], t2, t3, t4, JSEntryPtrTag)
    +    callTargetFunction(opcodeName, size, opcodeStruct, valueProfileName, dstVirtualRegister, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], JSEntryPtrTag)
    +
    +.opCallSlow:
    +    slowPathForCall(opcodeName, size, opcodeStruct, valueProfileName, dstVirtualRegister, dispatch, slowPath, prepareCall)
    +end
    +
     macro commonCallOp(opcodeName, slowPath, opcodeStruct, prepareCall, prologue)
         llintOpWithMetadata(opcodeName, opcodeStruct, macro (size, get, dispatch, metadata, return)
    @@ -1957 +1987 @@
             end, metadata)
     
    -        get(m_callee, t0)
    -        loadp %opcodeStruct%::Metadata::m_callLinkInfo.m_calleeOrLastSeenCalleeWithLinkBit[t5], t2
    -        loadConstantOrVariable(size, t0, t3)
    -        bqneq t3, t2, .opCallSlow
    -        getu(size, opcodeStruct, m_argv, t3)
    -        lshifti 3, t3
    -        negp t3
    -        addp cfr, t3
    -        storeq t2, Callee[t3]
    -        getu(size, opcodeStruct, m_argc, t2)
    -        storePC()
    -        storei t2, ArgumentCountIncludingThis + PayloadOffset[t3]
    -        move t3, sp
    -        prepareCall(%opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], t2, t3, t4, JSEntryPtrTag)
    -        callTargetFunction(opcodeName, size, opcodeStruct, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], JSEntryPtrTag)
    -
    -    .opCallSlow:
    -        slowPathForCall(opcodeName, size, opcodeStruct, dispatch, slowPath, prepareCall)
    +        macro getCallee(dst)
    +            get(m_callee, t0)
    +        end
    +
    +        macro getArgumentStart(dst)
    +            getu(size, opcodeStruct, m_argv, dst)
    +        end
    +
    +        macro getArgumentCount(dst)
    +            getu(size, opcodeStruct, m_argc, dst)
    +        end
    +
    +        callHelper(opcodeName, slowPath, opcodeStruct, m_profile, m_dst, prepareCall, size, dispatch, metadata, getCallee, getArgumentStart, getArgumentCount)
         end)
     end
    @@ -2277 +2302 @@
            loadp OpGetFromScope::Metadata::m_operand[t5], t1
            loadPropertyAtVariableOffset(t1, t0, t2)
    -       valueProfile(OpGetFromScope, t5, t2)
    +       valueProfile(OpGetFromScope, m_profile, t5, t2)
            return(t2)
         end
    @@ -2285 +2310 @@
            loadq [t0], t0
            tdzCheckIfNecessary(t0)
    -       valueProfile(OpGetFromScope, t5, t0)
    +       valueProfile(OpGetFromScope, m_profile, t5, t0)
            return(t0)
         end
    @@ -2292 +2317 @@
            loadp OpGetFromScope::Metadata::m_operand[t5], t1
            loadq JSLexicalEnvironment_variables[t0, t1, 8], t0
    -       valueProfile(OpGetFromScope, t5, t0)
    +       valueProfile(OpGetFromScope, m_profile, t5, t0)
            return(t0)
         end
    @@ -2551 +2576 @@
         dispatch()
     end)
    -
     
     llintOpWithReturn(op_get_rest_length, OpGetRestLength, macro (size, get, dispatch, return)
    @@ -2568 +2592 @@
     
     
    +llintOpWithMetadata(op_iterator_open, OpIteratorOpen, macro (size, get, dispatch, metadata, return)
    +    metadata(a2, t5)
    +    macro fastNarrow()
    +        callSlowPath(_iterator_open_try_fast_narrow)
    +    end
    +    macro fastWide16()
    +        callSlowPath(_iterator_open_try_fast_wide16)
    +    end
    +    macro fastWide32()
    +        callSlowPath(_iterator_open_try_fast_wide32)
    +    end
    +    size(fastNarrow, fastWide16, fastWide32, macro (callOp) callOp() end)
    +
    +    # FIXME: We should do this with inline assembly since it's the "fast" case.
    +    bbeq r1, constexpr IterationMode::Generic, .iteratorOpenGeneric
    +    dispatch()
    +
    +.iteratorOpenGeneric:
    +    macro gotoGetByIdCheckpoint()
    +        jmp .getByIdStart
    +    end
    +
    +    macro getCallee(dst)
    +        get(m_symbolIterator, dst)
    +    end
    +
    +    macro getArgumentIncludingThisStart(dst)
    +        getu(size, OpIteratorOpen, m_stackOffset, dst)
    +    end
    +
    +    macro getArgumentIncludingThisCount(dst)
    +        move 1, dst
    +    end
    +
    +    callHelper(op_iterator_open, _llint_slow_path_iterator_open_call, OpIteratorOpen, m_iteratorProfile, m_iterator, prepareForRegularCall, size, gotoGetByIdCheckpoint, metadata, getCallee, getArgumentIncludingThisStart, getArgumentIncludingThisCount)
    +
    +.getByIdStart:
    +    macro storeNextAndDispatch(value)
    +        move value, t2
    +        get(m_next, t1)
    +        storeq t2, [cfr, t1, 8]
    +        dispatch()
    +    end
    +
    +    loadVariable(get, m_iterator, t3)
    +    btqnz t3, notCellMask, .iteratorOpenGenericGetNextSlow
    +    performGetByIDHelper(OpIteratorOpen, m_modeMetadata, m_nextProfile, .iteratorOpenGenericGetNextSlow, size, metadata, storeNextAndDispatch)
    +
    +.iteratorOpenGenericGetNextSlow:
    +    callSlowPath(_llint_slow_path_iterator_open_get_next)
    +    dispatch()
    +
    +end)
    +
    +llintOpWithMetadata(op_iterator_next, OpIteratorNext, macro (size, get, dispatch, metadata, return)
    +
    +    loadVariable(get, m_next, t0)
    +    btqnz t0, t0, .iteratorNextGeneric
    +    metadata(a2, t5)
    +    macro fastNarrow()
    +        callSlowPath(_iterator_next_try_fast_narrow)
    +    end
    +    macro fastWide16()
    +        callSlowPath(_iterator_next_try_fast_wide16)
    +    end
    +    macro fastWide32()
    +        callSlowPath(_iterator_next_try_fast_wide32)
    +    end
    +    size(fastNarrow, fastWide16, fastWide32, macro (callOp) callOp() end)
    +
    +    # FIXME: We should do this with inline assembly since it's the "fast" case.
    +    bbeq r1, constexpr IterationMode::Generic, .iteratorNextGeneric
    +    dispatch()
    +
    +.iteratorNextGeneric:
    +    macro gotoGetDoneCheckpoint()
    +        jmp .getDoneStart
    +    end
    +
    +    macro getCallee(dst)
    +        get(m_next, dst)
    +    end
    +
    +    macro getArgumentIncludingThisStart(dst)
    +        getu(size, OpIteratorNext, m_stackOffset, dst)
    +    end
    +
    +    macro getArgumentIncludingThisCount(dst)
    +        move 1, dst
    +    end
    +
    +    # Use m_value slot as a tmp since we are going to write to it later.
    +    callHelper(op_iterator_next, _llint_slow_path_iterator_next_call, OpIteratorNext, m_nextResultProfile, m_value, prepareForRegularCall, size, gotoGetDoneCheckpoint, metadata, getCallee, getArgumentIncludingThisStart, getArgumentIncludingThisCount)
    +
    +.getDoneStart:
    +    macro storeDoneAndJmpToGetValue(doneValue)
    +        # use t0 because performGetByIDHelper usually puts the doneValue there and offlineasm will elide the self move.
    +        move doneValue, t0
    +        get(m_done, t1)
    +        storeq t0, [cfr, t1, 8]
    +        jmp .getValueStart
    +    end
    +
    +    loadVariable(get, m_value, t3)
    +    btqnz t3, notCellMask, .getDoneSlow
    +    performGetByIDHelper(OpIteratorNext, m_doneModeMetadata, m_doneProfile, .getDoneSlow, size, metadata, storeDoneAndJmpToGetValue)
    +
    +.getDoneSlow:
    +    callSlowPath(_llint_slow_path_iterator_next_get_done)
    +    branchIfException(_llint_throw_from_slow_path_trampoline)
    +    loadVariable(get, m_done, t0)
    +
    +    # storeDoneAndJmpToGetValue puts the doneValue into t0
    +.getValueStart:
    +    # Branch to slow if not misc primitive.
    +    btqnz t0, ~0xf, .getValueSlow
    +    btiz t0, 0x1, .notDone
    +    dispatch()
    +
    +.notDone:
    +    macro storeValueAndDispatch(v)
    +        move v, t2
    +        get(m_value, t1)
    +        storeq t2, [cfr, t1, 8]
    +        checkStackPointerAlignment(t0, 0xbaddb01e)
    +        dispatch()
    +    end
    +
    +    # Reload the next result tmp since the get_by_id above may have clobbered t3.
    +    loadVariable(get, m_value, t3)
    +    # We don't need to check if the iterator result is a cell here since we will have thrown an error before.
    +    performGetByIDHelper(OpIteratorNext, m_valueModeMetadata, m_valueProfile, .getValueSlow, size, metadata, storeValueAndDispatch)
    +
    +.getValueSlow:
    +    callSlowPath(_llint_slow_path_iterator_next_get_value)
    +    dispatch()
    +end)
    +
    +
     llintOpWithProfile(op_get_internal_field, OpGetInternalField, macro (size, get, dispatch, return)
         loadVariable(get, m_base, t1)
  • trunk/Source/JavaScriptCore/offlineasm/transform.rb

    r251886 → r260323

    @@ -630 +630 @@
     class Node
         def validate
    -        raise "Unresolved #{dump} at #{codeOriginString}"
    +        raise "Unresolved '#{dump}' at #{codeOriginString}"
         end
     
  • trunk/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp

    r259676 → r260323

    @@ -46 +46 @@
     #include "JIT.h"
     #include "JSArrayInlines.h"
    +#include "JSArrayIterator.h"
     #include "JSAsyncGenerator.h"
     #include "JSCInlines.h"
    @@ -154 +155 @@
         RETURN_WITH_PROFILING(value__, PROFILE_VALUE(returnValue__))
     
    -#define PROFILE_VALUE(value) do { \
    -        bytecode.metadata(codeBlock).m_profile.m_buckets[0] = JSValue::encode(value); \
    +#define PROFILE_VALUE(value__) \
    +    PROFILE_VALUE_IN(value__, m_profile)
    +
    +#define PROFILE_VALUE_IN(value, profileName) do { \
    +        bytecode.metadata(codeBlock).profileName.m_buckets[0] = JSValue::encode(value); \
         } while (false)
     
    @@ -966 +970 @@
     }
     
    +template<OpcodeSize width>
    +SlowPathReturnType SLOW_PATH iterator_open_try_fast(CallFrame* callFrame, const Instruction* pc, void* metadataPtr)
    +{
    +    // Don't set PC; we can't throw and it's relatively slow.
    +    BEGIN_NO_SET_PC();
    +
    +    auto bytecode = pc->asKnownWidth<OpIteratorOpen, width>();
    +    auto& metadata = *reinterpret_cast<OpIteratorOpen::Metadata*>(metadataPtr);
    +    JSValue iterable = GET_C(bytecode.m_iterable).jsValue();
    +    PROFILE_VALUE_IN(iterable, m_iterableProfile);
    +    JSValue symbolIterator = GET_C(bytecode.m_symbolIterator).jsValue();
    +    auto& iterator = GET(bytecode.m_iterator);
    +
    +    auto prepareForFastArrayIteration = [&] {
    +        if (!globalObject->arrayIteratorProtocolWatchpointSet().isStillValid())
    +            return IterationMode::Generic;
    +
    +        // This is correct because we just checked the watchpoint is still valid.
    +        JSFunction* symbolIteratorFunction = jsDynamicCast<JSFunction*>(vm, symbolIterator);
    +        if (!symbolIteratorFunction)
    +            return IterationMode::Generic;
    +
    +        // We don't want to allocate the values function just to check if it's the same as our function, so we use the concurrent accessor.
    +        // FIXME: This only works for arrays from the same global object as ourselves but we should be able to support any pairing.
    +        if (globalObject->arrayProtoValuesFunctionConcurrently() != symbolIteratorFunction)
    +            return IterationMode::Generic;
    +
    +        // We should be good to go.
    +        metadata.m_iterationMetadata.seenModes = metadata.m_iterationMetadata.seenModes | IterationMode::FastArray;
    +        GET(bytecode.m_next) = JSValue();
    +        auto* iteratedObject = jsCast<JSObject*>(iterable);
    +        iterator = JSArrayIterator::create(vm, globalObject->arrayIteratorStructure(), iteratedObject, IterationKind::Values);
    +        PROFILE_VALUE_IN(iterator.jsValue(), m_iteratorProfile);
    +        return IterationMode::FastArray;
    +    };
    +
    +    if (iterable.inherits<JSArray>(vm)) {
    +        if (prepareForFastArrayIteration() == IterationMode::FastArray)
    +            return encodeResult(pc, reinterpret_cast<void*>(IterationMode::FastArray));
    +    }
    +
    +    // Return to the bytecode to try in generic mode.
    +    metadata.m_iterationMetadata.seenModes = metadata.m_iterationMetadata.seenModes | IterationMode::Generic;
    +    return encodeResult(pc, reinterpret_cast<void*>(IterationMode::Generic));
    +}
    +
    +SlowPathReturnType SLOW_PATH iterator_open_try_fast_narrow(CallFrame* callFrame, const Instruction* pc, void* metadataPtr)
    +{
    +    return iterator_open_try_fast<Narrow>(callFrame, pc, metadataPtr);
    +}
    +
    +SlowPathReturnType SLOW_PATH iterator_open_try_fast_wide16(CallFrame* callFrame, const Instruction* pc, void* metadataPtr)
    +{
    +    return iterator_open_try_fast<Wide16>(callFrame, pc, metadataPtr);
    +}
    +
    +SlowPathReturnType SLOW_PATH iterator_open_try_fast_wide32(CallFrame* callFrame, const Instruction* pc, void* metadataPtr)
    +{
    +    return iterator_open_try_fast<Wide32>(callFrame, pc, metadataPtr);
    +}
    +
    +template<OpcodeSize width>
    +SlowPathReturnType SLOW_PATH iterator_next_try_fast(CallFrame* callFrame, const Instruction* pc, void* metadataPtr)
    +{
    +    BEGIN();
    +
    +    auto bytecode = pc->asKnownWidth<OpIteratorNext, width>();
    +    auto& metadata = *reinterpret_cast<OpIteratorNext::Metadata*>(metadataPtr);
    +
    +    ASSERT(!GET(bytecode.m_next).jsValue());
    +    JSObject* iterator = jsCast<JSObject*>(GET(bytecode.m_iterator).jsValue());
    +    JSCell* iterable = GET(bytecode.m_iterable).jsValue().asCell();
    +    if (auto arrayIterator = jsDynamicCast<JSArrayIterator*>(vm, iterator)) {
    +        if (auto array = jsDynamicCast<JSArray*>(vm, iterable)) {
    +            metadata.m_iterableProfile.observeStructureID(array->structureID());
    +
    +            metadata.m_iterationMetadata.seenModes = metadata.m_iterationMetadata.seenModes | IterationMode::FastArray;
    +            auto& indexSlot = arrayIterator->internalField(JSArrayIterator::Field::Index);
    +            int64_t index = indexSlot.get().asAnyInt();
    +            ASSERT(0 <= index && index <= maxSafeInteger());
    +
    +            JSValue value;
    +            bool done = index == -1 || index >= array->length();
    +            GET(bytecode.m_done) = jsBoolean(done);
    +            if (!done) {
    +                // No need for a barrier here because we know this is a primitive.
    +                indexSlot.setWithoutWriteBarrier(jsNumber(index + 1));
    +                ASSERT(index == static_cast<unsigned>(index));
    +                value = array->getIndex(globalObject, static_cast<unsigned>(index));
    +                CHECK_EXCEPTION();
    +                PROFILE_VALUE_IN(value, m_valueProfile);
    +            } else {
    +                // No need for a barrier here because we know this is a primitive.
    +                indexSlot.setWithoutWriteBarrier(jsNumber(-1));
    +            }
    +
    +            GET(bytecode.m_value) = value;
    +            return encodeResult(pc, reinterpret_cast<void*>(IterationMode::FastArray));
    +        }
    +    }
    +    RELEASE_ASSERT_NOT_REACHED();
    +
    +    // Return to the bytecode to try in generic mode.
    +    metadata.m_iterationMetadata.seenModes = metadata.m_iterationMetadata.seenModes | IterationMode::Generic;
    +    return encodeResult(pc, reinterpret_cast<void*>(IterationMode::Generic));
    +}
    +
    +SlowPathReturnType SLOW_PATH iterator_next_try_fast_narrow(CallFrame* callFrame, const Instruction* pc, void* metadataPtr)
    +{
    +    return iterator_next_try_fast<Narrow>(callFrame, pc, metadataPtr);
    +}
    +
    +SlowPathReturnType SLOW_PATH iterator_next_try_fast_wide16(CallFrame* callFrame, const Instruction* pc, void* metadataPtr)
    +{
    +    return iterator_next_try_fast<Wide16>(callFrame, pc, metadataPtr);
    +}
    +
    +SlowPathReturnType SLOW_PATH iterator_next_try_fast_wide32(CallFrame* callFrame, const Instruction* pc, void* metadataPtr)
    +{
    +    return iterator_next_try_fast<Wide32>(callFrame, pc, metadataPtr);
    +}
    +
     SLOW_PATH_DECL(slow_path_del_by_val)
     {
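For context, the generic path that the new `op_iterator_open`/`op_iterator_next` bytecodes fall back to is just the standard JS iteration protocol (fetch `@@iterator`, cache `.next`, then repeatedly call it and read `.done`/`.value`). A minimal JavaScript sketch of that protocol; the `iterate` helper is purely illustrative and not a JSC API:

```javascript
// Sketch of the protocol op_iterator_open / op_iterator_next implement.
// iterate() is a hypothetical helper for illustration, not part of JSC.
function iterate(iterable, body) {
    // op_iterator_open: call iterable[Symbol.iterator]() and cache .next.
    const iterator = iterable[Symbol.iterator]();
    const next = iterator.next;
    while (true) {
        // op_iterator_next: invoke the cached next, then read .done and .value.
        const result = next.call(iterator);
        if (typeof result !== "object" || result === null)
            throw new TypeError("Iterator result interface is not an object.");
        if (result.done)
            break;
        body(result.value);
    }
}

const out = [];
iterate([1, 2, 3], v => out.push(v * 2)); // out becomes [2, 4, 6]
```

The fast path above skips all of this for honest `JSArray`s: as long as the array-iterator-protocol watchpoint is valid, it bumps the iterator's internal index directly instead of making the three observable `next()`/`.done`/`.value` accesses.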
  • trunk/Source/JavaScriptCore/runtime/CommonSlowPaths.h

    r255040 → r260323

    @@ -284 +284 @@
     SLOW_PATH_HIDDEN_DECL(slow_path_spread);
     
    +template<OpcodeSize size>
    +extern SlowPathReturnType SLOW_PATH iterator_open_try_fast(CallFrame*, const Instruction* pc, void* metadata);
    +extern "C" SlowPathReturnType SLOW_PATH iterator_open_try_fast_narrow(CallFrame*, const Instruction* pc, void* metadata);
    +extern "C" SlowPathReturnType SLOW_PATH iterator_open_try_fast_wide16(CallFrame*, const Instruction* pc, void* metadata);
    +extern "C" SlowPathReturnType SLOW_PATH iterator_open_try_fast_wide32(CallFrame*, const Instruction* pc, void* metadata);
    +
    +template<OpcodeSize size>
    +extern SlowPathReturnType SLOW_PATH iterator_next_try_fast(CallFrame*, const Instruction* pc, void* metadata);
    +extern "C" SlowPathReturnType SLOW_PATH iterator_next_try_fast_narrow(CallFrame*, const Instruction* pc, void* metadata);
    +extern "C" SlowPathReturnType SLOW_PATH iterator_next_try_fast_wide16(CallFrame*, const Instruction* pc, void* metadata);
    +extern "C" SlowPathReturnType SLOW_PATH iterator_next_try_fast_wide32(CallFrame*, const Instruction* pc, void* metadata);
    +
     using SlowPathFunction = SlowPathReturnType(SLOW_PATH *)(CallFrame*, const Instruction*);
     
  • trunk/Source/JavaScriptCore/runtime/Intrinsic.cpp

    r260181 → r260323

    @@ -341 +341 @@
     }
     
    +Optional<IterationKind> interationKindForIntrinsic(Intrinsic intrinsic)
    +{
    +    switch (intrinsic) {
    +    case ArrayValuesIntrinsic:
    +    case TypedArrayValuesIntrinsic:
    +        return IterationKind::Values;
    +    case ArrayKeysIntrinsic:
    +    case TypedArrayKeysIntrinsic:
    +        return IterationKind::Keys;
    +    case ArrayEntriesIntrinsic:
    +    case TypedArrayEntriesIntrinsic:
    +        return IterationKind::Entries;
    +    default:
    +        return WTF::nullopt;
    +    }
    +}
    +
    +
     } // namespace JSC
     
  • trunk/Source/JavaScriptCore/runtime/Intrinsic.h

    r260181 → r260323

    @@ -193 +193 @@
     };
     
    +Optional<IterationKind> interationKindForIntrinsic(Intrinsic);
    +
     const char* intrinsicName(Intrinsic);
     
  • trunk/Source/JavaScriptCore/runtime/JSArrayIterator.h

    r260181 → r260323

    @@ -69 +69 @@
     
         static JSArrayIterator* create(VM&, Structure*, JSObject* iteratedObject, JSValue kind);
    +    static JSArrayIterator* create(VM& vm, Structure* structure, JSObject* iteratedObject, IterationKind kind)
    +    {
    +        return create(vm, structure, iteratedObject, jsNumber(static_cast<unsigned>(kind)));
    +    }
         static JSArrayIterator* createWithInitialValues(VM&, Structure*);
         static Structure* createStructure(VM&, JSGlobalObject*, JSValue);
  • trunk/Source/JavaScriptCore/runtime/JSCJSValue.h

    r259676 → r260323

    @@ -226 +226 @@
         bool isEmpty() const;
         bool isFunction(VM&) const;
    +    bool isCallable(VM&) const;
         bool isCallable(VM&, CallType&, CallData&) const;
         bool isConstructor(VM&) const;
  • trunk/Source/JavaScriptCore/runtime/JSCJSValueInlines.h

    (r254653 → r260323)
     }

    +inline bool JSValue::isCallable(VM& vm) const
    +{
    +    CallType unusedType;
    +    CallData unusedData;
    +    return isCallable(vm, unusedType, unusedData);
    +}
    +
     inline bool JSValue::isCallable(VM& vm, CallType& callType, CallData& callData) const
     {
  • trunk/Source/JavaScriptCore/runtime/JSCast.h

    (r258387 → r260323)
     #define FOR_EACH_JS_DYNAMIC_CAST_JS_TYPE_OVERLOAD(macro) \
         macro(JSImmutableButterfly, JSType::JSImmutableButterflyType, JSType::JSImmutableButterflyType) \
    -    macro(JSArrayIterator, JSType::JSArrayIteratorType, JSType::JSArrayIteratorType) \
         macro(JSStringIterator, JSType::JSStringIteratorType, JSType::JSStringIteratorType) \
         macro(JSObject, FirstObjectType, LastObjectType) \
    ⋮
         macro(InternalFunction, JSType::InternalFunctionType, JSType::InternalFunctionType) \
         macro(JSArray, JSType::ArrayType, JSType::DerivedArrayType) \
    +    macro(JSArrayIterator, JSType::JSArrayIteratorType, JSType::JSArrayIteratorType) \
         macro(JSArrayBuffer, JSType::ArrayBufferType, JSType::ArrayBufferType) \
         macro(JSArrayBufferView, FirstTypedArrayType, LastTypedArrayType) \
  • trunk/Source/JavaScriptCore/runtime/JSGlobalObject.h

    (r260273 → r260323)
     JSFunction* arrayProtoToStringFunction() const { return m_arrayProtoToStringFunction.get(this); }
     JSFunction* arrayProtoValuesFunction() const { return m_arrayProtoValuesFunction.get(this); }
    +JSFunction* arrayProtoValuesFunctionConcurrently() const { return m_arrayProtoValuesFunction.getConcurrently(); }
     JSFunction* iteratorProtocolFunction() const { return m_iteratorProtocolFunction.get(this); }
     JSFunction* newPromiseCapabilityFunction() const;
  • trunk/Source/JavaScriptCore/runtime/OptionsList.h

    (r260119 → r260323)
     v(Unsigned, shadowChickenLogSize, 1000, Normal, nullptr) \
     v(Unsigned, shadowChickenMaxTailDeletedFramesSize, 128, Normal, nullptr) \
    +\
    +v(Bool, useIterationIntrinsics, true, Normal, nullptr) \
     \
     v(Bool, useOSLog, false, Normal, "Log dataLog()s to os_log instead of stderr") \
  • trunk/Source/JavaScriptCore/runtime/Structure.cpp

    (r259463 → r260323)
     {
         out.print("%", string, ":", classInfo()->className);
    +    if (indexingType() & IndexingShapeMask)
    +        out.print(",", IndexingTypeDump(indexingType()));
     }
  • trunk/Source/WTF/ChangeLog

    (r260311 → r260323)
    +2020-04-18  Keith Miller  <[email protected]>
    +
    +        Redesign how we do for-of iteration for JSArrays
    +        https://bugs.webkit.org/show_bug.cgi?id=175454
    +
    +        Reviewed by Filip Pizlo.
    +
    +        * wtf/EnumClassOperatorOverloads.h:
    +
     2020-04-18  Yusuke Suzuki  <[email protected]>
  • trunk/Source/WTF/wtf/EnumClassOperatorOverloads.h

    (r254735 → r260323)
     #define OVERLOAD_RELATIONAL_OPERATORS_FOR_ENUM_CLASS_WITH_INTEGRALS(enumName) OVERLOAD_RELATIONAL_OPERATORS_FOR_ENUM_CLASS_WHEN(enumName, std::is_integral_v<T>)

    +#define OVERLOAD_BITWISE_OPERATORS_FOR_ENUM_CLASS_WHEN(enumName, enableExpression) \
    +    OVERLOAD_OPERATOR_FOR_ENUM_CLASS_WHEN(enumName, |, enableExpression) \
    +    OVERLOAD_OPERATOR_FOR_ENUM_CLASS_WHEN(enumName, &, enableExpression) \
    +    OVERLOAD_OPERATOR_FOR_ENUM_CLASS_WHEN(enumName, ^, enableExpression) \
    +
    +#define OVERLOAD_BITWISE_OPERATORS_FOR_ENUM_CLASS_WITH_INTERGRALS(enumName) OVERLOAD_BITWISE_OPERATORS_FOR_ENUM_CLASS_WHEN(enumName, std::is_integral_v<T>)
Note: See TracChangeset for help on using the changeset viewer.
