From 160315d9eb8b919b8f72229551b978fd6c4c5540 Mon Sep 17 00:00:00 2001 From: Benjie Gillam Date: Fri, 28 Apr 2023 16:58:49 +0100 Subject: [PATCH 01/28] Extract common logic from ExecuteQuery, ExecuteMutation and ExecuteSubscriptionEvent --- spec/Section 6 -- Execution.md | 44 +++++++++++++++++++++------------- 1 file changed, 27 insertions(+), 17 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 8184f95bb..97c74dde6 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -131,12 +131,8 @@ ExecuteQuery(query, schema, variableValues, initialValue): - Let {queryType} be the root Query type in {schema}. - Assert: {queryType} is an Object type. - Let {selectionSet} be the top level selection set in {query}. -- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, - queryType, initialValue, variableValues)} _normally_ (allowing - parallelization). -- Let {errors} be the list of all _field error_ raised while executing the - selection set. -- Return an unordered map containing {data} and {errors}. +- Return {ExecuteRootSelectionSet(variableValues, initialValue, queryType, + selectionSet)}. ### Mutation @@ -153,11 +149,8 @@ ExecuteMutation(mutation, schema, variableValues, initialValue): - Let {mutationType} be the root Mutation type in {schema}. - Assert: {mutationType} is an Object type. - Let {selectionSet} be the top level selection set in {mutation}. -- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, - mutationType, initialValue, variableValues)} _serially_. -- Let {errors} be the list of all _field error_ raised while executing the - selection set. -- Return an unordered map containing {data} and {errors}. +- Return {ExecuteRootSelectionSet(variableValues, initialValue, mutationType, + selectionSet, true)}. 
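The refactor above reduces ExecuteQuery and ExecuteMutation to one shared routine that differs only in a serial-execution flag. A minimal JavaScript sketch of that shape (the function names and the stubbed `executeSelectionSet` are illustrative, not part of the specification):

```javascript
// Placeholder for ExecuteSelectionSet(): resolves each selection to a value.
// The `serial` flag is accepted but not meaningfully used in this stub.
function executeSelectionSet(selectionSet, objectType, objectValue, variableValues, serial) {
  const data = {};
  for (const field of selectionSet) data[field] = `resolved:${field}`;
  return { data, errors: [] };
}

// The extracted common logic: queries, mutations, and subscription events all
// reduce to this one routine; only the `serial` flag differs.
function executeRootSelectionSet(variableValues, initialValue, objectType, selectionSet, serial = false) {
  const { data, errors } = executeSelectionSet(
    selectionSet, objectType, initialValue, variableValues, serial
  );
  return { data, errors };
}

// Queries execute normally (serial omitted, defaults to false) ...
const executeQuery = (selectionSet) =>
  executeRootSelectionSet({}, null, "Query", selectionSet);
// ... while mutations pass serial = true.
const executeMutation = (selectionSet) =>
  executeRootSelectionSet({}, null, "Mutation", selectionSet, true);
```

The point of the extraction is visible in the last two definitions: the per-operation wrappers carry no execution logic of their own.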
### Subscription @@ -301,12 +294,8 @@ ExecuteSubscriptionEvent(subscription, schema, variableValues, initialValue): - Let {subscriptionType} be the root Subscription type in {schema}. - Assert: {subscriptionType} is an Object type. - Let {selectionSet} be the top level selection set in {subscription}. -- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, - subscriptionType, initialValue, variableValues)} _normally_ (allowing - parallelization). -- Let {errors} be the list of all _field error_ raised while executing the - selection set. -- Return an unordered map containing {data} and {errors}. +- Return {ExecuteRootSelectionSet(variableValues, initialValue, + subscriptionType, selectionSet)}. Note: The {ExecuteSubscriptionEvent()} algorithm is intentionally similar to {ExecuteQuery()} since this is how each event result is produced. @@ -322,6 +311,27 @@ Unsubscribe(responseStream): - Cancel {responseStream}. +## Executing the Root Selection Set + +To execute the root selection set, the object value being evaluated and the +object type need to be known, as well as whether it must be executed serially, +or may be executed in parallel. + +Executing the root selection set works similarly for queries (parallel), +mutations (serial), and subscriptions (where it is executed for each event in +the underlying Source Stream). + +ExecuteRootSelectionSet(variableValues, initialValue, objectType, selectionSet, +serial): + +- If {serial} is not provided, initialize it to {false}. +- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, + objectType, initialValue, variableValues)} _serially_ if {serial} is {true}, + _normally_ (allowing parallelization) otherwise. +- Let {errors} be the list of all _field error_ raised while executing the + selection set. +- Return an unordered map containing {data} and {errors}. 
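The `serial` flag is the only behavioral difference between the root-level cases. The distinction can be illustrated with a small JavaScript sketch using invented resolvers and artificial delays (real executors differ in many details):

```javascript
// Records the order in which resolvers complete.
const order = [];

async function resolveField(name, delayMs) {
  await new Promise((resolve) => setTimeout(resolve, delayMs));
  order.push(name);
  return name.toUpperCase();
}

// Serial execution (mutations): each root field completes before the next
// field's resolver starts, so completion order matches selection order.
async function executeSerially(fields) {
  const data = {};
  for (const [name, delay] of fields) {
    data[name] = await resolveField(name, delay);
  }
  return data;
}

// Normal execution (queries, subscription events): root fields may run
// concurrently, so faster resolvers may complete first.
async function executeNormally(fields) {
  const entries = await Promise.all(
    fields.map(async ([name, delay]) => [name, await resolveField(name, delay)])
  );
  return Object.fromEntries(entries);
}
```

Note that in both modes the *response map* keeps the selection order; only the side-effect ordering of resolvers differs, which is why mutations require serial execution.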
+ ## Executing Selection Sets To execute a _selection set_, the object value being evaluated and the object From c5c33a0508d47bcfad8337f1f1b72f4ce961f5f7 Mon Sep 17 00:00:00 2001 From: Benjie Gillam Date: Fri, 28 Apr 2023 17:20:43 +0100 Subject: [PATCH 02/28] Change ExecuteSelectionSet to ExecuteGroupedFieldSet --- spec/Section 6 -- Execution.md | 49 ++++++++++++++++++++-------------- 1 file changed, 29 insertions(+), 20 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 97c74dde6..5fc42d8fa 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -321,31 +321,34 @@ Executing the root selection set works similarly for queries (parallel), mutations (serial), and subscriptions (where it is executed for each event in the underlying Source Stream). +First, the selection set is turned into a grouped field set; then, we execute +this grouped field set and return the resulting {data} and {errors}. + ExecuteRootSelectionSet(variableValues, initialValue, objectType, selectionSet, serial): - If {serial} is not provided, initialize it to {false}. -- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, +- Let {groupedFieldSet} be the result of {CollectFields(objectType, + selectionSet, variableValues)}. +- Let {data} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet, objectType, initialValue, variableValues)} _serially_ if {serial} is {true}, _normally_ (allowing parallelization) otherwise. - Let {errors} be the list of all _field error_ raised while executing the selection set. - Return an unordered map containing {data} and {errors}. -## Executing Selection Sets +## Executing a Grouped Field Set -To execute a _selection set_, the object value being evaluated and the object +To execute a grouped field set, the object value being evaluated and the object type need to be known, as well as whether it must be executed serially, or may be executed in parallel. 
-First, the selection set is turned into a grouped field set; then, each -represented field in the grouped field set produces an entry into a response -map. +Each represented field in the grouped field set produces an entry into a +response map. -ExecuteSelectionSet(selectionSet, objectType, objectValue, variableValues): +ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, +variableValues): -- Let {groupedFieldSet} be the result of {CollectFields(objectType, - selectionSet, variableValues)}. - Initialize {resultMap} to an empty ordered map. - For each {groupedFieldSet} as {responseKey} and {fields}: - Let {fieldName} be the name of the first entry in {fields}. Note: This value @@ -363,8 +366,8 @@ is explained in greater detail in the Field Collection section below. **Errors and Non-Null Fields** -If during {ExecuteSelectionSet()} a field with a non-null {fieldType} raises a -_field error_ then that error must propagate to this entire selection set, +If during {ExecuteGroupedFieldSet()} a field with a non-null {fieldType} raises +a _field error_ then that error must propagate to this entire selection set, either resolving to {null} if allowed or further propagated to a parent field. If this occurs, any sibling fields which have not yet executed or have not yet @@ -704,8 +707,9 @@ CompleteValue(fieldType, fields, result, variableValues): - Let {objectType} be {fieldType}. - Otherwise if {fieldType} is an Interface or Union type. - Let {objectType} be {ResolveAbstractType(fieldType, result)}. - - Let {subSelectionSet} be the result of calling {MergeSelectionSets(fields)}. - - Return the result of evaluating {ExecuteSelectionSet(subSelectionSet, + - Let {groupedFieldSet} be the result of calling {CollectSubfields(objectType, + fields, variableValues)}. + - Return the result of evaluating {ExecuteGroupedFieldSet(groupedFieldSet, objectType, result, variableValues)} _normally_ (allowing for parallelization). 
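The `ExecuteGroupedFieldSet()` shape introduced above can be sketched in JavaScript. The toy type and resolver representations below are invented for illustration; they are not the specification's data model:

```javascript
// Walks a grouped field set (an ordered map of response key -> list of
// fields) in order, producing one response entry per key.
function executeGroupedFieldSet(groupedFieldSet, objectType, objectValue, variableValues) {
  const resultMap = new Map(); // JS Maps preserve insertion order, like the spec's ordered map
  for (const [responseKey, fields] of groupedFieldSet) {
    // The field name comes from the first entry; it is unaffected by aliases.
    const fieldName = fields[0].name;
    const fieldDef = objectType.fields[fieldName];
    // Fields not defined on the selected type produce no entry at all.
    if (fieldDef === undefined) continue;
    resultMap.set(responseKey, fieldDef.resolve(objectValue, variableValues));
  }
  return resultMap;
}
```

Because the field set was already grouped by `CollectFields()`, this routine no longer needs the selection set itself, which is the point of the rename in this patch.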
@@ -752,9 +756,9 @@ ResolveAbstractType(abstractType, objectValue): **Merging Selection Sets** -When more than one field of the same name is executed in parallel, the -_selection set_ for each of the fields are merged together when completing the -value in order to continue execution of the sub-selection sets. +When more than one field of the same name is executed in parallel, during value +completion their selection sets are collected together to produce a single +grouped field set in order to continue execution of the sub-selection sets. An example operation illustrating parallel fields with the same name with sub-selections. @@ -773,14 +777,19 @@ sub-selections. After resolving the value for `me`, the selection sets are merged together so `firstName` and `lastName` can be resolved for one value. -MergeSelectionSets(fields): +CollectSubfields(objectType, fields, variableValues): -- Let {selectionSet} be an empty list. +- Let {groupedFieldSet} be an empty map. - For each {field} in {fields}: - Let {fieldSelectionSet} be the selection set of {field}. - If {fieldSelectionSet} is null or empty, continue to the next field. - - Append all selections in {fieldSelectionSet} to {selectionSet}. -- Return {selectionSet}. + - Let {subGroupedFieldSet} be the result of {CollectFields(objectType, + fieldSelectionSet, variableValues)}. + - For each {subGroupedFieldSet} as {responseKey} and {subfields}: + - Let {groupForResponseKey} be the list in {groupedFieldSet} for + {responseKey}; if no such list exists, create it as an empty list. + - Append all fields in {subfields} to {groupForResponseKey}. +- Return {groupedFieldSet}. 
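The merging performed by `CollectSubfields()` can be sketched as follows. The `collectFields` helper here is a deliberately reduced stub that only groups plain fields by response key; the real algorithm also handles fragments and directives:

```javascript
// Reduced stand-in for CollectFields(): groups plain fields by response key
// (alias if defined, otherwise field name), preserving encounter order.
function collectFields(objectType, selectionSet, variableValues) {
  const grouped = new Map();
  for (const field of selectionSet) {
    const responseKey = field.alias ?? field.name;
    if (!grouped.has(responseKey)) grouped.set(responseKey, []);
    grouped.get(responseKey).push(field);
  }
  return grouped;
}

// Sub-selections of parallel fields sharing a response key are collected into
// a single grouped field set, so each response key is completed once.
function collectSubfields(objectType, fields, variableValues) {
  const groupedFieldSet = new Map();
  for (const field of fields) {
    const fieldSelectionSet = field.selectionSet;
    if (!fieldSelectionSet || fieldSelectionSet.length === 0) continue;
    const subGrouped = collectFields(objectType, fieldSelectionSet, variableValues);
    for (const [responseKey, subfields] of subGrouped) {
      if (!groupedFieldSet.has(responseKey)) groupedFieldSet.set(responseKey, []);
      groupedFieldSet.get(responseKey).push(...subfields);
    }
  }
  return groupedFieldSet;
}
```

Applied to the `me { firstName } / me { lastName }` example above, both `me` fields contribute their sub-selections to one grouped field set, so `firstName` and `lastName` are resolved against a single value.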
### Handling Field Errors From 3488636235021100675d5eddf5d788447bb068eb Mon Sep 17 00:00:00 2001 From: Benjie Gillam Date: Mon, 21 Aug 2023 12:15:34 +0100 Subject: [PATCH 03/28] Correct reference to MergeSelectionSets --- spec/Section 5 -- Validation.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 5 -- Validation.md b/spec/Section 5 -- Validation.md index 473cf5457..44a7433b9 100644 --- a/spec/Section 5 -- Validation.md +++ b/spec/Section 5 -- Validation.md @@ -463,7 +463,7 @@ unambiguous. Therefore any two field selections which might both be encountered for the same object are only valid if they are equivalent. During execution, the simultaneous execution of fields with the same response -name is accomplished by {MergeSelectionSets()} and {CollectFields()}. +name is accomplished by {CollectSubfields()}. For simple hand-written GraphQL, this rule is obviously a clear developer error, however nested fragments can make this difficult to detect manually. From 0ffed6352a3a7471e4f517217884f90ef43d41bf Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 15 Feb 2024 22:23:30 +0200 Subject: [PATCH 04/28] moves Field Collection section earlier --- spec/Section 6 -- Execution.md | 212 ++++++++++++++++----------------- 1 file changed, 106 insertions(+), 106 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 5fc42d8fa..510142115 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -337,6 +337,112 @@ serial): selection set. - Return an unordered map containing {data} and {errors}. +### Field Collection + +Before execution, the _selection set_ is converted to a grouped field set by +calling {CollectFields()}. Each entry in the grouped field set is a list of +fields that share a response key (the alias if defined, otherwise the field +name). This ensures all fields with the same response key (including those in +referenced fragments) are executed at the same time. 
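The `@skip`/`@include` test applied to each selection during field collection can be sketched in JavaScript; the selection and directive object shapes here are invented for illustration:

```javascript
// Resolves a directive's `if` argument: either a literal boolean or a
// variable reference looked up in variableValues.
function argumentValue(directive, variableValues) {
  const arg = directive.if;
  return typeof arg === "object" && arg !== null && "variable" in arg
    ? variableValues[arg.variable]
    : arg;
}

// A selection is excluded when @skip's `if` is true, or when @include's `if`
// is not true. The two checks commute, so their order does not matter.
function shouldIncludeSelection(selection, variableValues) {
  const skip = selection.directives?.skip;
  if (skip && argumentValue(skip, variableValues) === true) return false;
  const include = selection.directives?.include;
  if (include && argumentValue(include, variableValues) !== true) return false;
  return true;
}
```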
+ +As an example, collecting the fields of this selection set would collect two +instances of the field `a` and one of field `b`: + +```graphql example +{ + a { + subfield1 + } + ...ExampleFragment +} + +fragment ExampleFragment on Query { + a { + subfield2 + } + b +} +``` + +The depth-first-search order of the field groups produced by {CollectFields()} +is maintained through execution, ensuring that fields appear in the executed +response in a stable and predictable order. + +CollectFields(objectType, selectionSet, variableValues, visitedFragments): + +- If {visitedFragments} is not provided, initialize it to the empty set. +- Initialize {groupedFields} to an empty ordered map of lists. +- For each {selection} in {selectionSet}: + - If {selection} provides the directive `@skip`, let {skipDirective} be that + directive. + - If {skipDirective}'s {if} argument is {true} or is a variable in + {variableValues} with the value {true}, continue with the next {selection} + in {selectionSet}. + - If {selection} provides the directive `@include`, let {includeDirective} be + that directive. + - If {includeDirective}'s {if} argument is not {true} and is not a variable + in {variableValues} with the value {true}, continue with the next + {selection} in {selectionSet}. + - If {selection} is a {Field}: + - Let {responseKey} be the response key of {selection} (the alias if + defined, otherwise the field name). + - Let {groupForResponseKey} be the list in {groupedFields} for + {responseKey}; if no such list exists, create it as an empty list. + - Append {selection} to the {groupForResponseKey}. + - If {selection} is a {FragmentSpread}: + - Let {fragmentSpreadName} be the name of {selection}. + - If {fragmentSpreadName} is in {visitedFragments}, continue with the next + {selection} in {selectionSet}. + - Add {fragmentSpreadName} to {visitedFragments}. + - Let {fragment} be the Fragment in the current Document whose name is + {fragmentSpreadName}. 
+ - If no such {fragment} exists, continue with the next {selection} in + {selectionSet}. + - Let {fragmentType} be the type condition on {fragment}. + - If {DoesFragmentTypeApply(objectType, fragmentType)} is {false}, continue + with the next {selection} in {selectionSet}. + - Let {fragmentSelectionSet} be the top-level selection set of {fragment}. + - Let {fragmentGroupedFieldSet} be the result of calling + {CollectFields(objectType, fragmentSelectionSet, variableValues, + visitedFragments)}. + - For each {fragmentGroup} in {fragmentGroupedFieldSet}: + - Let {responseKey} be the response key shared by all fields in + {fragmentGroup}. + - Let {groupForResponseKey} be the list in {groupedFields} for + {responseKey}; if no such list exists, create it as an empty list. + - Append all items in {fragmentGroup} to {groupForResponseKey}. + - If {selection} is an {InlineFragment}: + - Let {fragmentType} be the type condition on {selection}. + - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType, + fragmentType)} is {false}, continue with the next {selection} in + {selectionSet}. + - Let {fragmentSelectionSet} be the top-level selection set of {selection}. + - Let {fragmentGroupedFieldSet} be the result of calling + {CollectFields(objectType, fragmentSelectionSet, variableValues, + visitedFragments)}. + - For each {fragmentGroup} in {fragmentGroupedFieldSet}: + - Let {responseKey} be the response key shared by all fields in + {fragmentGroup}. + - Let {groupForResponseKey} be the list in {groupedFields} for + {responseKey}; if no such list exists, create it as an empty list. + - Append all items in {fragmentGroup} to {groupForResponseKey}. +- Return {groupedFields}. + +DoesFragmentTypeApply(objectType, fragmentType): + +- If {fragmentType} is an Object Type: + - If {objectType} and {fragmentType} are the same type, return {true}, + otherwise return {false}. 
+- If {fragmentType} is an Interface Type: + - If {objectType} is an implementation of {fragmentType}, return {true} + otherwise return {false}. +- If {fragmentType} is a Union: + - If {objectType} is a possible type of {fragmentType}, return {true} + otherwise return {false}. + +Note: The steps in {CollectFields()} evaluating the `@skip` and `@include` +directives may be applied in either order since they apply commutatively. + ## Executing a Grouped Field Set To execute a grouped field set, the object value being evaluated and the object @@ -474,112 +580,6 @@ A correct executor must generate the following result for that _selection set_: } ``` -### Field Collection - -Before execution, the _selection set_ is converted to a grouped field set by -calling {CollectFields()}. Each entry in the grouped field set is a list of -fields that share a response key (the alias if defined, otherwise the field -name). This ensures all fields with the same response key (including those in -referenced fragments) are executed at the same time. - -As an example, collecting the fields of this selection set would collect two -instances of the field `a` and one of field `b`: - -```graphql example -{ - a { - subfield1 - } - ...ExampleFragment -} - -fragment ExampleFragment on Query { - a { - subfield2 - } - b -} -``` - -The depth-first-search order of the field groups produced by {CollectFields()} -is maintained through execution, ensuring that fields appear in the executed -response in a stable and predictable order. - -CollectFields(objectType, selectionSet, variableValues, visitedFragments): - -- If {visitedFragments} is not provided, initialize it to the empty set. -- Initialize {groupedFields} to an empty ordered map of lists. -- For each {selection} in {selectionSet}: - - If {selection} provides the directive `@skip`, let {skipDirective} be that - directive. 
- - If {skipDirective}'s {if} argument is {true} or is a variable in - {variableValues} with the value {true}, continue with the next {selection} - in {selectionSet}. - - If {selection} provides the directive `@include`, let {includeDirective} be - that directive. - - If {includeDirective}'s {if} argument is not {true} and is not a variable - in {variableValues} with the value {true}, continue with the next - {selection} in {selectionSet}. - - If {selection} is a {Field}: - - Let {responseKey} be the response key of {selection} (the alias if - defined, otherwise the field name). - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append {selection} to the {groupForResponseKey}. - - If {selection} is a {FragmentSpread}: - - Let {fragmentSpreadName} be the name of {selection}. - - If {fragmentSpreadName} is in {visitedFragments}, continue with the next - {selection} in {selectionSet}. - - Add {fragmentSpreadName} to {visitedFragments}. - - Let {fragment} be the Fragment in the current Document whose name is - {fragmentSpreadName}. - - If no such {fragment} exists, continue with the next {selection} in - {selectionSet}. - - Let {fragmentType} be the type condition on {fragment}. - - If {DoesFragmentTypeApply(objectType, fragmentType)} is {false}, continue - with the next {selection} in {selectionSet}. - - Let {fragmentSelectionSet} be the top-level selection set of {fragment}. - - Let {fragmentGroupedFieldSet} be the result of calling - {CollectFields(objectType, fragmentSelectionSet, variableValues, - visitedFragments)}. - - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - - Let {responseKey} be the response key shared by all fields in - {fragmentGroup}. - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append all items in {fragmentGroup} to {groupForResponseKey}. 
- - If {selection} is an {InlineFragment}: - - Let {fragmentType} be the type condition on {selection}. - - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType, - fragmentType)} is {false}, continue with the next {selection} in - {selectionSet}. - - Let {fragmentSelectionSet} be the top-level selection set of {selection}. - - Let {fragmentGroupedFieldSet} be the result of calling - {CollectFields(objectType, fragmentSelectionSet, variableValues, - visitedFragments)}. - - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - - Let {responseKey} be the response key shared by all fields in - {fragmentGroup}. - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append all items in {fragmentGroup} to {groupForResponseKey}. -- Return {groupedFields}. - -DoesFragmentTypeApply(objectType, fragmentType): - -- If {fragmentType} is an Object Type: - - If {objectType} and {fragmentType} are the same type, return {true}, - otherwise return {false}. -- If {fragmentType} is an Interface Type: - - If {objectType} is an implementation of {fragmentType}, return {true} - otherwise return {false}. -- If {fragmentType} is a Union: - - If {objectType} is a possible type of {fragmentType}, return {true} - otherwise return {false}. - -Note: The steps in {CollectFields()} evaluating the `@skip` and `@include` -directives may be applied in either order since they apply commutatively. 
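The `DoesFragmentTypeApply()` check can be sketched over a toy type representation (the `kind`/`interfaces`/`possibleTypes` shapes below are invented for illustration, not the specification's type system):

```javascript
// A fragment's type condition applies to an object type if it names that
// exact object type, an interface the type implements, or a union that
// includes the type among its possible types.
function doesFragmentTypeApply(objectType, fragmentType) {
  switch (fragmentType.kind) {
    case "OBJECT":
      return objectType.name === fragmentType.name;
    case "INTERFACE":
      return objectType.interfaces.includes(fragmentType.name);
    case "UNION":
      return fragmentType.possibleTypes.includes(objectType.name);
    default:
      return false;
  }
}
```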
## Executing Fields

Each field requested in the grouped field set that is defined on the selected

From ffbfd3ca043661272adeff3b6ed09f022605238b Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Thu, 15 Feb 2024 22:30:17 +0200
Subject: [PATCH 05/28] Introduce `@defer` directive

---
 spec/Section 6 -- Execution.md | 383 ++++++++++++++++++++++++++++-----
 1 file changed, 332 insertions(+), 51 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 510142115..3028bca7e 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -252,12 +252,13 @@ CreateSourceEventStream(subscription, schema, variableValues, initialValue):
 - Let {groupedFieldSet} be the result of {CollectFields(subscriptionType,
   selectionSet, variableValues)}.
 - If {groupedFieldSet} does not have exactly one entry, raise a _request error_.
-- Let {fields} be the value of the first entry in {groupedFieldSet}.
-- Let {fieldName} be the name of the first entry in {fields}. Note: This value
-  is unaffected if an alias is used.
-- Let {field} be the first entry in {fields}.
+- Let {fieldDetailsList} be the value of the first entry in {groupedFieldSet}.
+- Let {fieldDetails} be the first entry in {fieldDetailsList}.
+- Let {field} be the corresponding entry on {fieldDetails}.
+- Let {fieldName} be the name of {field}. Note: This value is unaffected if an
+  alias is used.
 - Let {argumentValues} be the result of {CoerceArgumentValues(subscriptionType,
-  field, variableValues)}.
+  field, variableValues)}.
 - Let {fieldStream} be the result of running
   {ResolveFieldEventStream(subscriptionType, initialValue, fieldName,
   argumentValues)}.
@@ -328,14 +329,142 @@ ExecuteRootSelectionSet(variableValues, initialValue, objectType, selectionSet,
 serial):

 - If {serial} is not provided, initialize it to {false}.
-- Let {groupedFieldSet} be the result of {CollectFields(objectType,
-  selectionSet, variableValues)}.
-- Let {data} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet, - objectType, initialValue, variableValues)} _serially_ if {serial} is {true}, - _normally_ (allowing parallelization) otherwise. -- Let {errors} be the list of all _field error_ raised while executing the - selection set. -- Return an unordered map containing {data} and {errors}. +- Let {groupedFieldSet} and {newDeferUsages} be the result of + {CollectFields(objectType, selectionSet, variableValues)}. +- Let {fieldPlan} be the result of {BuildFieldPlan(groupedFieldSet)}. +- Let {data} and {incrementalDataRecords} be the result of + {ExecuteFieldPlan(newDeferUsages, fieldPlan, objectType, initialValue, + variableValues, serial)}. +- Let {errors} be the list of all _field error_ raised while completing {data}. +- If {incrementalDataRecords} is empty, return an unordered map containing + {data} and {errors}. +- Let {incrementalResults} be the result of {YieldIncrementalResults(data, + errors, incrementalDataRecords)}. +- Wait for the first result in {incrementalResults} to be available. +- Let {initialResult} be that result. +- Return {initialResult} and {BatchIncrementalResults(incrementalResults)}. + +### Yielding Incremental Results + +The procedure for yielding incremental results is specified by the +{YieldIncrementalResults()} algorithm. + +YieldIncrementalResults(data, errors, incrementalDataRecords): + +- Initialize {graph} to an empty directed acyclic graph. +- For each {incrementalDataRecord} of {incrementalDataRecords}: + - Add {incrementalDataRecord} to {graph} as a new Pending Data node directed + from the {pendingResults} that it completes, adding each of {pendingResults} + to {graph} as new nodes, if necessary, each directed from its {parent}, if + defined, recursively adding each {parent} as necessary. 
+- Prune root nodes of {graph} containing no direct child Incremental Data
+  Records, repeatedly if necessary, promoting any direct child Deferred
+  Fragments of the pruned nodes to root nodes. (This ensures that no empty
+  fragments are reported as pending.)
+- Let {newPendingResults} be the set of root nodes in {graph}.
+- Let {pending} be the result of {GetPending(newPendingResults)}.
+- Let {hasNext} be {true}.
+- Yield an unordered map containing {data}, {errors}, {pending}, and {hasNext}.
+- For each completed child Pending Incremental Data node of a root node in
+  {graph}:
+  - Let {incrementalDataRecord} be the Pending Incremental Data for that node;
+    let {result} be the corresponding completed result.
+  - If {data} on {result} is {null}:
+    - Let {errors} be the corresponding entry on {result}.
+    - Let {parents} be the parent nodes of {incrementalDataRecord}.
+    - Initialize {completed} to an empty list.
+    - For each {pendingResult} of {parents}:
+      - Append {GetCompletedEntry(pendingResult, errors)} to {completed}.
+      - Remove {pendingResult} and all of its descendant nodes from {graph},
+        except for any descendant Incremental Data Record nodes with other
+        parents.
+    - Let {hasNext} be {false} if {graph} is empty.
+    - Yield an unordered map containing {completed} and {hasNext}.
+    - Continue to the next completed child Incremental Data node in {graph}.
+  - Replace {node} in {graph} with a new node corresponding to the Completed
+    Incremental Data for {result}.
+  - Add each {incrementalDataRecord} of {incrementalDataRecords} on {result} to
+    {graph} via the same procedure as above.
+  - Let {completedDeferredFragments} be the set of root nodes in {graph} without
+    any child Pending Data nodes.
+  - Let {completedIncrementalDataNodes} be the set of completed Incremental Data
+    nodes that are children of {completedDeferredFragments}.
+  - If {completedIncrementalDataNodes} is empty, continue to the next completed
+    child Incremental Data node in {graph}.
+ - Initialize {incremental} to an empty list. + - For each {node} of {completedIncrementalDataNodes}: + - Let {incrementalDataRecord} be the corresponding record for {node}. + - Append {GetIncrementalEntry(incrementalDataRecord, graph)} to + {incremental}. + - Remove {node} from {graph}. + - Initialize {completed} to an empty list. + - For each {pendingResult} of {completedDeferredFragments}: + - Append {GetCompletedEntry(pendingResult)} to {completed}. + - Remove {pendingResult} from {graph}, promoting its child nodes to root + nodes. + - Prune root nodes of {graph} containing no direct child Incremental Data + Records, as above. + - Let {hasNext} be {false} if {graph} is empty. + - Let {incrementalResult} be an unordered map containing {hasNext}. + - If {incremental} is not empty, set the corresponding entry on + {incrementalResult} to {incremental}. + - If {completed} is not empty, set the corresponding entry on + {incrementalResult} to {completed}. + - Let {newPendingResults} be the set of new root nodes in {graph}, promoted by + the above steps. + - If {newPendingResults} is not empty: + - Let {pending} be the result of {GetPending(newPendingResults)}. + - Set the corresponding entry on {incrementalResult} to {pending}. + - Yield {incrementalResult}. +- Complete this incremental result stream. + +GetPending(newPendingResults): + +- Initialize {pending} to an empty list. +- For each {newPendingResult} of {newPendingResults}: + - Let {id} be a unique identifier for {newPendingResult}. + - Let {path} and {label} be the corresponding entries on {newPendingResult}. + - Let {pendingEntry} be an unordered map containing {id}, {path}, and {label}. + - Append {pendingEntry} to {pending}. +- Return {pending}. + +GetIncrementalEntry(incrementalDataRecord, graph): + +- Let {deferredFragments} be the Deferred Fragments incrementally completed by + {incrementalDataRecord} at {path}. +- Let {result} be the result of {incrementalDataRecord}. 
+- Let {data} and {errors} be the corresponding entries on {result}. +- Let {releasedDeferredFragments} be the members of {deferredFragments} that are + root nodes in {graph}. +- Let {bestDeferredFragment} be the member of {releasedDeferredFragments} with + the shortest {path} entry. +- Let {subPath} be the portion of {path} not contained by the {path} entry of + {bestDeferredFragment}. +- Let {id} be the unique identifier for {bestDeferredFragment}. +- Return an unordered map containing {id}, {subPath}, {data}, and {errors}. + +GetCompletedEntry(pendingResult, errors): + +- Let {id} be the unique identifier for {pendingResult}. +- Let {completedEntry} be an unordered map containing {id}. +- If {errors} is not empty, set the corresponding entry on {completedEntry} to + {errors}. +- Return {completedEntry}. + +### Batching Incremental Results + +BatchIncrementalResults(incrementalResults): + +- Return a new stream {batchedIncrementalResults} which yields events as + follows: +- While {incrementalResults} is not closed: + - Let {availableIncrementalResults} be a list of one or more Incremental + Results available on {incrementalResults}. + - Let {batchedIncrementalResult} be an unordered map created by merging the + items in {availableIncrementalResults} into a single unordered map, + concatenating list entries as necessary, and setting {hasNext} to the value + of {hasNext} on the final item in the list. + - Yield {batchedIncrementalResult}. ### Field Collection @@ -368,10 +497,12 @@ The depth-first-search order of the field groups produced by {CollectFields()} is maintained through execution, ensuring that fields appear in the executed response in a stable and predictable order. -CollectFields(objectType, selectionSet, variableValues, visitedFragments): +CollectFields(objectType, selectionSet, variableValues, deferUsage, +visitedFragments): - If {visitedFragments} is not provided, initialize it to the empty set. 
- Initialize {groupedFields} to an empty ordered map of lists.
+- Initialize {newDeferUsages} to an empty list.
 - For each {selection} in {selectionSet}:
   - If {selection} provides the directive `@skip`, let {skipDirective} be that
     directive.
     - If {skipDirective}'s {if} argument is {true} or is a variable in
       {variableValues} with the value {true}, continue with the next {selection}
       in {selectionSet}.
   - If {selection} provides the directive `@include`, let {includeDirective} be
     that directive.
     - If {includeDirective}'s {if} argument is not {true} and is not a variable
       in {variableValues} with the value {true}, continue with the next
       {selection} in {selectionSet}.
   - If {selection} is a {Field}:
     - Let {responseKey} be the response key of {selection} (the alias if
       defined, otherwise the field name).
+    - Let {fieldDetails} be a new unordered map containing {deferUsage}.
+    - Set the entry for {field} on {fieldDetails} to {selection}.
     - Let {groupForResponseKey} be the list in {groupedFields} for
       {responseKey}; if no such list exists, create it as an empty list.
-    - Append {selection} to the {groupForResponseKey}.
+    - Append {fieldDetails} to the {groupForResponseKey}.
   - If {selection} is a {FragmentSpread}:
     - Let {fragmentSpreadName} be the name of {selection}.
+    - If {selection} provides the directive `@defer` and its {if} argument is
+      not {false} and is not a variable in {variableValues} with the value
+      {false}:
+      - Let {deferDirective} be that directive.
+      - If this execution is for a subscription operation, raise a _field
+        error_.
+    - If {deferDirective} is not defined:
-    - If {fragmentSpreadName} is in {visitedFragments}, continue with the next
-      {selection} in {selectionSet}.
-    - Add {fragmentSpreadName} to {visitedFragments}.
+      - If {fragmentSpreadName} is in {visitedFragments}, continue with the next
+        {selection} in {selectionSet}.
+      - Add {fragmentSpreadName} to {visitedFragments}.
     - Let {fragment} be the Fragment in the current Document whose name is
       {fragmentSpreadName}.
     - If no such {fragment} exists, continue with the next {selection} in
       {selectionSet}.
     - Let {fragmentType} be the type condition on {fragment}.
     - If {DoesFragmentTypeApply(objectType, fragmentType)} is {false}, continue
       with the next {selection} in {selectionSet}.
    - Let {fragmentSelectionSet} be the top-level selection set of {fragment}.
-    - Let {fragmentGroupedFieldSet} be the result of calling
-      {CollectFields(objectType, fragmentSelectionSet, variableValues,
-      visitedFragments)}.
+    - If {deferDirective} is defined, let {fragmentDeferUsage} be
+      {deferDirective} and append it to {newDeferUsages}.
+    - Otherwise, let {fragmentDeferUsage} be {deferUsage}.
+    - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result
+      of calling {CollectFields(objectType, fragmentSelectionSet,
+      variableValues, fragmentDeferUsage, visitedFragments)}.
    - For each {fragmentGroup} in {fragmentGroupedFieldSet}:
      - Let {responseKey} be the response key shared by all fields in
        {fragmentGroup}.
      - Let {groupForResponseKey} be the list in {groupedFields} for
        {responseKey}; if no such list exists, create it as an empty list.
      - Append all items in {fragmentGroup} to {groupForResponseKey}.
+    - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}.
  - If {selection} is an {InlineFragment}:
    - Let {fragmentType} be the type condition on {selection}.
    - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType,
      fragmentType)} is {false}, continue with the next {selection} in
      {selectionSet}.
    - Let {fragmentSelectionSet} be the top-level selection set of {selection}.
-    - Let {fragmentGroupedFieldSet} be the result of calling
-      {CollectFields(objectType, fragmentSelectionSet, variableValues,
-      visitedFragments)}.
+    - If {selection} provides the directive `@defer` and its {if} argument is
+      not {false} and is not a variable in {variableValues} with the value
+      {false}:
+      - Let {deferDirective} be that directive.
+      - If this execution is for a subscription operation, raise a _field
+        error_.
+    - If {deferDirective} is defined, let {fragmentDeferUsage} be
+      {deferDirective} and append it to {newDeferUsages}.
+    - Otherwise, let {fragmentDeferUsage} be {deferUsage}.
+    - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result
+      of calling {CollectFields(objectType, fragmentSelectionSet,
+      variableValues, fragmentDeferUsage, visitedFragments)}.
    - For each {fragmentGroup} in {fragmentGroupedFieldSet}:
      - Let {responseKey} be the response key shared by all fields in
        {fragmentGroup}.
      - Let {groupForResponseKey} be the list in {groupedFields} for
        {responseKey}; if no such list exists, create it as an empty list.
      - Append all items in {fragmentGroup} to {groupForResponseKey}.
-- Return {groupedFields}.
+    - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}.
+- Return {groupedFields} and {newDeferUsages}.

DoesFragmentTypeApply(objectType, fragmentType):

@@ -443,6 +598,105 @@ DoesFragmentTypeApply(objectType, fragmentType):

Note: The steps in {CollectFields()} evaluating the `@skip` and `@include`
directives may be applied in either order since they apply commutatively.

+### Field Plan Generation
+
+BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages):
+
+- If {parentDeferUsages} is not provided, initialize it to the empty set.
+- Initialize {fieldPlan} to an empty ordered map.
+- For each {responseKey} and {groupForResponseKey} of
+  {originalGroupedFieldSet}:
+  - Let {deferUsageSet} be the result of
+    {GetDeferUsageSet(groupForResponseKey)}.
+  - Let {groupedFieldSet} be the entry in {fieldPlan} for any equivalent set to
+    {deferUsageSet}; if no such map exists, create it as an empty ordered map.
+  - Set the entry for {responseKey} in {groupedFieldSet} to
+    {groupForResponseKey}.
+- Return {fieldPlan}.
+
+GetDeferUsageSet(fieldDetailsList):
+
+- Let {deferUsageSet} be the set containing the {deferUsage} entry from each
+  item in {fieldDetailsList}.
+- For each {deferUsage} of {deferUsageSet}:
+  - Let {ancestors} be the set of {deferUsage} entries that are ancestors of
+    {deferUsage}, collected by recursively following the {parent} entry on
+    {deferUsage}.
+  - If any of {ancestors} is contained by {deferUsageSet}, remove {deferUsage}
+    from {deferUsageSet}.
+- Return {deferUsageSet}.
+
+## Executing a Field Plan
+
+To execute a field plan, the object value being evaluated and the object type
+need to be known, as well as whether the non-deferred grouped field set must be
+executed serially, or may be executed in parallel.
+
+ExecuteFieldPlan(newDeferUsages, fieldPlan, objectType, objectValue,
+variableValues, serial, path, deferUsageSet, deferMap):
+
+- If {path} is not provided, initialize it to an empty list.
+- Let {newDeferMap} be the result of {GetNewDeferMap(newDeferUsages, path,
+  deferMap)}.
+- Let {groupedFieldSet} be the entry in {fieldPlan} for the set equivalent to
+  {deferUsageSet}.
+- Let {newGroupedFieldSets} be the remaining portion of {fieldPlan}.
+- Allowing for parallelization, perform the following steps:
+  - Let {data} and {nestedIncrementalDataRecords} be the result of running
+    {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue,
+    variableValues, path, deferUsageSet, newDeferMap)} _serially_ if {serial} is
+    {true}, _normally_ (allowing parallelization) otherwise.
+  - Let {incrementalDataRecords} be the result of
+    {ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues,
+    newGroupedFieldSets, path, newDeferMap)}.
+- Append all items in {nestedIncrementalDataRecords} to
+  {incrementalDataRecords}.
+- Return {data} and {incrementalDataRecords}.
+
+GetNewDeferMap(newDeferUsages, path, deferMap):
+
+- If {newDeferUsages} is empty, return {deferMap}.
+- Let {newDeferMap} be a new unordered map containing all entries in {deferMap}.
+- For each {deferUsage} in {newDeferUsages}:
+  - Let {parentDeferUsage} and {label} be the corresponding entries on
+    {deferUsage}.
+  - Let {parent} be the entry in {deferMap} for {parentDeferUsage}.
+  - Let {newDeferredFragment} be an unordered map containing {parent}, {path}
+    and {label}.
+  - Set the entry for {deferUsage} in {newDeferMap} to {newDeferredFragment}.
+- Return {newDeferMap}.
+
+ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues,
+newGroupedFieldSets, path, deferMap):
+
+- Initialize {incrementalDataRecords} to an empty list.
+- For each {deferUsageSet} and {groupedFieldSet} in {newGroupedFieldSets}:
+  - Let {deferredFragments} be an empty list.
+  - For each {deferUsage} in {deferUsageSet}:
+    - Let {deferredFragment} be the entry for {deferUsage} in {deferMap}.
+    - Append {deferredFragment} to {deferredFragments}.
+  - Let {incrementalDataRecord} represent the future execution of
+    {ExecuteDeferredGroupedFieldSet(groupedFieldSet, objectType, objectValue,
+    variableValues, deferredFragments, path, deferUsageSet, deferMap)},
+    incrementally completing {deferredFragments} at {path}.
+  - Append {incrementalDataRecord} to {incrementalDataRecords}.
+  - Schedule initiation of execution of {incrementalDataRecord} following any
+    implementation-specific deferral.
+- Return {incrementalDataRecords}.
+
+Note: {incrementalDataRecord} can be safely initiated without blocking
+higher-priority data once any of {deferredFragments} are released as pending.
+
+ExecuteDeferredGroupedFieldSet(groupedFieldSet, objectType, objectValue,
+variableValues, deferredFragments, path, deferUsageSet, deferMap):
+
+- Let {data} and {incrementalDataRecords} be the result of running
+  {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue,
+  variableValues, path, deferUsageSet, deferMap)} _normally_ (allowing
+  parallelization).
+- Let {errors} be the list of all _field error_ raised while completing {data}.
+- Return an unordered map containing {data}, {errors}, and
+  {incrementalDataRecords}.
+
## Executing a Grouped Field Set

To execute a grouped field set, the object value being evaluated and the object
@@ -452,23 +706,27 @@ be executed in parallel. Each represented field in the grouped field set
produces an entry into a response map.
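The defer-map bookkeeping above can also be pictured outside the spec's algorithm notation. The following Python sketch of {GetNewDeferMap} is non-normative; the class names, the dict representation of the "unordered map", and the assumption that {newDeferUsages} lists ancestors before their descendants are all illustrative choices, not part of the specification:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class DeferUsage:
    # Mirrors the spec's DeferUsage: an optional label plus a parent link.
    label: Optional[str] = None
    parent: Optional["DeferUsage"] = None

@dataclass
class DeferredFragment:
    # Mirrors the unordered map containing {parent}, {path} and {label}.
    path: tuple
    label: Optional[str]
    parent: Optional["DeferredFragment"]

def get_new_defer_map(new_defer_usages, path, defer_map=None):
    """Sketch of GetNewDeferMap: map each new DeferUsage to a fresh
    DeferredFragment at the current path, linked to the DeferredFragment
    of its parent usage (assumes ancestors precede descendants)."""
    if not new_defer_usages:
        # Nothing new was deferred at this level; reuse the existing map.
        return defer_map if defer_map is not None else {}
    new_defer_map = dict(defer_map or {})
    for usage in new_defer_usages:
        parent = new_defer_map.get(usage.parent) if usage.parent else None
        new_defer_map[usage] = DeferredFragment(
            path=tuple(path), label=usage.label, parent=parent)
    return new_defer_map
```

A child usage's fragment ends up pointing at its parent's fragment, which is what lets the result stream later release fragments only after their parents.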
-ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue,
-variableValues):
+ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, variableValues,
+path, deferUsageSet, deferMap):

- Initialize {resultMap} to an empty ordered map.
+- Initialize {incrementalDataRecords} to an empty list.
- For each {groupedFieldSet} as {responseKey} and {fields}:
  - Let {fieldName} be the name of the first entry in {fields}. Note: This value
    is unaffected if an alias is used.
  - Let {fieldType} be the return type defined for the field {fieldName} of
    {objectType}.
  - If {fieldType} is defined:
-    - Let {responseValue} be {ExecuteField(objectType, objectValue, fieldType,
-      fields, variableValues)}.
+    - Let {responseValue} and {fieldIncrementalDataRecords} be the result of
+      {ExecuteField(objectType, objectValue, fieldType, fields, variableValues,
+      path, deferUsageSet, deferMap)}.
    - Set {responseValue} as the value for {responseKey} in {resultMap}.
-- Return {resultMap}.
+    - Append all items in {fieldIncrementalDataRecords} to
+      {incrementalDataRecords}.
+- Return {resultMap} and {incrementalDataRecords}.

Note: {resultMap} is ordered by which fields appear first in the operation. This
-is explained in greater detail in the Field Collection section below.
+is explained in greater detail in the Field Collection section above.
+- Append {fieldName} to {path}.
- Let {argumentValues} be the result of {CoerceArgumentValues(objectType,
  field, variableValues)}.
- Let {resolvedValue} be {ResolveFieldValue(objectType, objectValue, fieldName,
  argumentValues)}.
-- Return the result of {CompleteValue(fieldType, fields, resolvedValue,
-  variableValues)}.
+- Return the result of {CompleteValue(fieldType, fieldDetailsList,
+  resolvedValue, variableValues, path, deferUsageSet, deferMap)}.

### Coercing Field Arguments

@@ -684,22 +945,22 @@ After resolving the value for a field, it is completed by ensuring it adheres to
the expected return type. If the return type is another Object type, then the
field execution process continues recursively.

-CompleteValue(fieldType, fields, result, variableValues):
+CompleteValue(fieldType, fieldDetailsList, result, variableValues, path,
+deferUsageSet, deferMap):

- If the {fieldType} is a Non-Null type:
  - Let {innerType} be the inner type of {fieldType}.
-  - Let {completedResult} be the result of calling {CompleteValue(innerType,
-    fields, result, variableValues)}.
+  - Let {completedResult} and {incrementalDataRecords} be the result of calling
+    {CompleteValue(innerType, fieldDetailsList, result, variableValues, path,
+    deferUsageSet, deferMap)}.
  - If {completedResult} is {null}, raise a _field error_.
-  - Return {completedResult}.
+  - Return {completedResult} and {incrementalDataRecords}.
- If {result} is {null} (or another internal value similar to {null} such as
  {undefined}), return {null}.
- If {fieldType} is a List type:
  - If {result} is not a collection of values, raise a _field error_.
  - Let {innerType} be the inner type of {fieldType}.
-  - Return a list where each list item is the result of calling
-    {CompleteValue(innerType, fields, resultItem, variableValues)}, where
-    {resultItem} is each item in {result}.
+  - Return the result of {CompleteListValue(innerType, fieldDetailsList, result,
+    variableValues, path, deferUsageSet, deferMap)}.
- If {fieldType} is a Scalar or Enum type:
  - Return the result of {CoerceResult(fieldType, result)}.
- If {fieldType} is an Object, Interface, or Union type:
@@ -707,11 +968,28 @@ CompleteValue(fieldType, fields, result, variableValues):
    - Let {objectType} be {fieldType}.
  - Otherwise if {fieldType} is an Interface or Union type:
    - Let {objectType} be {ResolveAbstractType(fieldType, result)}.
-  - Let {groupedFieldSet} be the result of calling {CollectSubfields(objectType,
-    fields, variableValues)}.
-  - Return the result of evaluating {ExecuteGroupedFieldSet(groupedFieldSet,
-    objectType, result, variableValues)} _normally_ (allowing for
-    parallelization).
+  - Let {groupedFieldSet} and {newDeferUsages} be the result of calling
+    {CollectSubfields(objectType, fieldDetailsList, variableValues)}.
+  - Let {fieldPlan} be the result of {BuildFieldPlan(groupedFieldSet,
+    deferUsageSet)}.
+  - Return the result of {ExecuteFieldPlan(newDeferUsages, fieldPlan,
+    objectType, result, variableValues, false, path, deferUsageSet, deferMap)}.
+
+CompleteListValue(innerType, fieldDetailsList, result, variableValues, path,
+deferUsageSet, deferMap):
+
+- Initialize {items} and {incrementalDataRecords} to empty lists.
+- Let {index} be {0}.
+- For each {resultItem} of {result}:
+  - Let {itemPath} be {path} with {index} appended.
+  - Let {completedItem} and {itemIncrementalDataRecords} be the result of
+    calling {CompleteValue(innerType, fieldDetailsList, resultItem,
+    variableValues, itemPath, deferUsageSet, deferMap)}.
+  - Append {completedItem} to {items}.
+  - Append all items in {itemIncrementalDataRecords} to
+    {incrementalDataRecords}.
+  - Increment {index} by {1}.
+- Return {items} and {incrementalDataRecords}.

**Coercing Results**

@@ -777,18 +1055,21 @@ sub-selections. After resolving the value for `me`, the
selection sets are merged together so `firstName` and `lastName` can be
resolved for one value.

-CollectSubfields(objectType, fields, variableValues):
+CollectSubfields(objectType, fieldDetailsList, variableValues):

-- Let {groupedFieldSet} be an empty map.
-- For each {field} in {fields}:
+- Initialize {groupedFieldSet} to an empty ordered map of lists.
+- Initialize {newDeferUsages} to an empty list.
+- For each {fieldDetails} in {fieldDetailsList}:
+  - Let {field} and {deferUsage} be the corresponding entries on {fieldDetails}.
  - Let {fieldSelectionSet} be the selection set of {field}.
  - If {fieldSelectionSet} is null or empty, continue to the next field.
-  - Let {subGroupedFieldSet} be the result of {CollectFields(objectType,
-    fieldSelectionSet, variableValues)}.
+  - Let {subGroupedFieldSet} and {subNewDeferUsages} be the result of
+    {CollectFields(objectType, fieldSelectionSet, variableValues, deferUsage)}.
  - For each {subGroupedFieldSet} as {responseKey} and {subfields}:
    - Let {groupForResponseKey} be the list in {groupedFieldSet} for
      {responseKey}; if no such list exists, create it as an empty list.
    - Append all fields in {subfields} to {groupForResponseKey}.
+  - Append all defer usages in {subNewDeferUsages} to {newDeferUsages}.
-- Return {groupedFieldSet}.
+- Return {groupedFieldSet} and {newDeferUsages}.

### Handling Field Errors

From 5ce10a571fd5084222dcb45c06aa59f1e51c5e61 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Thu, 13 Jun 2024 15:04:00 +0300
Subject: [PATCH 06/28] refactor a few lines out of YieldSubsequentResults

---
 spec/Section 6 -- Execution.md | 80 ++++++++++++++++++++++------------
 1 file changed, 51 insertions(+), 29 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 3028bca7e..b5c3c331f 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -356,15 +356,12 @@ YieldIncrementalResults(data, errors, incrementalDataRecords):
  - Add {incrementalDataRecord} to {graph} as a new Pending Data node directed
    from the {pendingResults} that it completes, adding each of {pendingResults}
    to {graph} as new nodes, if necessary, each directed from its {parent}, if
-- Prune root nodes of {graph} containing no direct child Incremental Data
-  Records, repeatedly if necessary, promoting any direct child Deferred
-  Fragments of the pruned nodes to root nodes. (This ensures that no empty
-  fragments are reported as pending).
-- Let {newPendingResults} be the set of root nodes in {graph}.
-- Let {pending} be the result of {GetPending(newPendingResults)}.
-- Let {hasNext} be {true}.
-- Yield an unordered map containing {data}, {errors}, {pending}, and {hasNext}.
+  defined, recursively adding each {parent} as necessary until
+  {incrementalDataRecord} is connected to {graph}.
+- Let {pendingResults} be the result of {GetNonEmptyNewPending(graph)}.
+- Prune root nodes from {graph} not in {pendingResults}, repeating as necessary
+  until all root nodes in {graph} are also in {pendingResults}.
+- Yield the result of {GetInitialResult(data, errors, pendingResults)}.
- For each completed child Pending Incremental Data node of a root node in
  {graph}:
  - Let {incrementalDataRecord} be the Pending Incremental Data for that node;
@@ -380,7 +377,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords):
    parents.
  - Let {hasNext} be {false}, if {graph} is empty.
  - Yield an unordered map containing {completed} and {hasNext}.
-    - Continue to the next completed child Incremental Data node in {graph}.
+    - Continue to the next completed Pending Incremental Data node.
  - Replace {node} in {graph} with a new node corresponding to the Completed
    Incremental Data for {result}.
  - Add each {incrementalDataRecord} of {incrementalDataRecords} on {result} to
@@ -390,7 +387,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords):
  - Let {completedIncrementalDataNodes} be the set of completed Incremental Data
    nodes that are children of {completedDeferredFragments}.
  - If {completedIncrementalDataNodes} is empty, continue to the next completed
-    child Incremental Data node in {graph}.
+    Pending Incremental Data node.
- Initialize {incremental} to an empty list.
- For each {node} of {completedIncrementalDataNodes}:
  - Let {incrementalDataRecord} be the corresponding record for {node}.
@@ -402,32 +399,57 @@ YieldIncrementalResults(data, errors, incrementalDataRecords):
    - Append {GetCompletedEntry(pendingResult)} to {completed}.
    - Remove {pendingResult} from {graph}, promoting its child nodes to root
      nodes.
-  - Prune root nodes of {graph} containing no direct child Incremental Data
-    Records, as above.
-  - Let {hasNext} be {false} if {graph} is empty.
-  - Let {incrementalResult} be an unordered map containing {hasNext}.
-  - If {incremental} is not empty, set the corresponding entry on
-    {incrementalResult} to {incremental}.
-  - If {completed} is not empty, set the corresponding entry on
-    {incrementalResult} to {completed}.
-  - Let {newPendingResults} be the set of new root nodes in {graph}, promoted by
-    the above steps.
-  - If {newPendingResults} is not empty:
-    - Let {pending} be the result of {GetPending(newPendingResults)}.
-    - Set the corresponding entry on {incrementalResult} to {pending}.
-  - Yield {incrementalResult}.
+  - Let {newPendingResults} be a new set containing the result of
+    {GetNonEmptyNewPending(graph, pendingResults)}.
+  - Add all nodes in {newPendingResults} to {pendingResults}.
+  - Prune root nodes from {graph} not in {pendingResults}, repeating as
+    necessary until all root nodes in {graph} are also in {pendingResults}.
+  - Let {pending} be the result of {GetPendingEntry(newPendingResults)}.
+  - Yield the result of {GetIncrementalResult(graph, incremental, completed,
+    pending)}.
- Complete this incremental result stream.

-GetPending(newPendingResults):
+GetNonEmptyNewPending(graph, oldPendingResults):
+
+- If not provided, initialize {oldPendingResults} to the empty set.
+- Let {rootNodes} be the set of root nodes in {graph}.
+- For each {rootNode} of {rootNodes}:
+  - If {rootNode} is in {oldPendingResults}:
+    - Continue to the next {rootNode}.
+  - If {rootNode} has no child Pending Incremental Data nodes:
+    - Let {children} be the set of child Deferred Fragment nodes of {rootNode}.
+    - Remove {rootNode} from {rootNodes}.
+    - Add each of the nodes in {children} to {rootNodes}.
+- Return {rootNodes}.
+
+GetInitialResult(data, errors, pendingResults):
+
+- Let {pending} be the result of {GetPendingEntry(pendingResults)}.
+- Let {hasNext} be {true}.
+- Return an unordered map containing {data}, {errors}, {pending}, and {hasNext}.
+
+GetPendingEntry(pendingResults):

- Initialize {pending} to an empty list.
-- For each {newPendingResult} of {newPendingResults}:
-  - Let {id} be a unique identifier for {newPendingResult}.
-  - Let {path} and {label} be the corresponding entries on {newPendingResult}.
+- For each {pendingResult} of {pendingResults}:
+  - Let {id} be a unique identifier for {pendingResult}.
+  - Let {path} and {label} be the corresponding entries on {pendingResult}.
  - Let {pendingEntry} be an unordered map containing {id}, {path}, and {label}.
  - Append {pendingEntry} to {pending}.
- Return {pending}.

+GetIncrementalResult(graph, incremental, completed, pending):
+
+- Let {hasNext} be {false} if {graph} is empty, otherwise {true}.
+- Let {incrementalResult} be an unordered map containing {hasNext}.
+- If {incremental} is not empty:
+  - Set the corresponding entry on {incrementalResult} to {incremental}.
+- If {completed} is not empty:
+  - Set the corresponding entry on {incrementalResult} to {completed}.
+- If {pending} is not empty:
+  - Set the corresponding entry on {incrementalResult} to {pending}.
+- Return {incrementalResult}.
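For illustration only, the payload helpers introduced above might be written as follows in Python. The dict representation stands in for the spec's "unordered map", and the integer counter is merely a placeholder for "a unique identifier"; neither is mandated by the specification:

```python
import itertools

_next_id = itertools.count(1)  # placeholder for "a unique identifier"

def get_pending_entry(pending_results):
    """Sketch of GetPendingEntry: one {id, path, label} entry per new
    pending result."""
    pending = []
    for pending_result in pending_results:
        pending.append({
            "id": str(next(_next_id)),
            "path": pending_result["path"],
            "label": pending_result["label"],
        })
    return pending

def get_incremental_result(graph, incremental, completed, pending):
    """Sketch of GetIncrementalResult: hasNext reflects whether any nodes
    remain in the graph, and empty lists are omitted entirely."""
    incremental_result = {"hasNext": bool(graph)}
    if incremental:
        incremental_result["incremental"] = incremental
    if completed:
        incremental_result["completed"] = completed
    if pending:
        incremental_result["pending"] = pending
    return incremental_result
```

Omitting the empty `incremental`, `completed`, and `pending` keys mirrors the conditional "If ... is not empty" steps, so clients never receive empty list entries.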
+ GetIncrementalEntry(incrementalDataRecord, graph): - Let {deferredFragments} be the Deferred Fragments incrementally completed by From b9a2500c3d9e14f168577501495b8139369267e5 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Tue, 18 Jun 2024 22:37:22 +0300 Subject: [PATCH 07/28] add a word or two about which child nodes are being promoted --- spec/Section 6 -- Execution.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index b5c3c331f..fadeb0de8 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -397,8 +397,8 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Initialize {completed} to an empty list. - For each {pendingResult} of {completedDeferredFragments}: - Append {GetCompletedEntry(pendingResult)} to {completed}. - - Remove {pendingResult} from {graph}, promoting its child nodes to root - nodes. + - Remove {pendingResult} from {graph}, promoting its child Deferred Fragment + nodes to root nodes. - Let {newPendingResults} be a new set containing the result of {GetNonEmptyNewPending(graph, pendingResults)}. - Add all nodes in {newPendingResults} to {pendingResults}. From c7d5ccdb54159ce5512f81a5e17aae5bb9e0586f Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Tue, 18 Jun 2024 22:58:32 +0300 Subject: [PATCH 08/28] be more graphy --- spec/Section 6 -- Execution.md | 28 ++++++++++++++++------------ 1 file changed, 16 insertions(+), 12 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index fadeb0de8..4776b6e82 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -351,17 +351,10 @@ The procedure for yielding incremental results is specified by the YieldIncrementalResults(data, errors, incrementalDataRecords): -- Initialize {graph} to an empty directed acyclic graph. 
-- For each {incrementalDataRecord} of {incrementalDataRecords}: - - Add {incrementalDataRecord} to {graph} as a new Pending Data node directed - from the {pendingResults} that it completes, adding each of {pendingResults} - to {graph} as new nodes, if necessary, each directed from its {parent}, if - defined, recursively adding each {parent} as necessary until - {incrementalDataRecord} is connected to {graph}. +- Let {graph} be the result of {BuildGraph(incrementalDataRecords)}. - Let {pendingResults} be the result of {GetNonEmptyNewPending(graph)}. -- Prune root nodes from {graph} not in {pendingResults}, repeating as necessary - until all root nodes in {graph} are also in {pendingResults}. -- Yield the result of {GetInitialResult(data, errors, pending)}. +- Update {graph} to the subgraph rooted at nodes in {pendingResults}. +- Yield the result of {GetInitialResult(data, errors, pendingResults)}. - For each completed child Pending Incremental Data node of a root node in {graph}: - Let {incrementalDataRecord} be the Pending Incremental Data for that node; @@ -402,13 +395,24 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Let {newPendingResults} be a new set containing the result of {GetNonEmptyNewPending(graph, pendingResults)}. - Add all nodes in {newPendingResults} to {pendingResults}. - - Prune root nodes from {graph} not in {pendingResults}, repeating as - necessary until all root nodes in {graph} are also in {pendingResults}. + - Update {graph} to the subgraph rooted at nodes in {pendingResults}. - Let {pending} be the result of {GetPendingEntry(newPendingResults)}. - Yield the result of {GetIncrementalResult(graph, incremental, completed, pending)}. - Complete this incremental result stream. +BuildGraph(incrementalDataRecords): + +- Initialize {graph} to an empty directed acyclic graph, where the root nodes + represent the Subsequent Result nodes that have been released as pending. 
+- For each {incrementalDataRecord} of {incrementalDataRecords}: + - Add {incrementalDataRecord} to {graph} as a new Pending Data node directed + from the {pendingResults} that it completes, adding each of {pendingResults} + to {graph} as new nodes, if necessary, each directed from its {parent}, if + defined, recursively adding each {parent} as necessary until + {incrementalDataRecord} is connected to {graph}. +- Return {graph}. + GetNonEmptyNewPending(graph, oldPendingResults): - If not provided, initialize {oldPendingResults} to the empty set. From bfe47f3c09adc2eca3ca49e12720c1a978db759a Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 19 Jun 2024 06:09:52 +0300 Subject: [PATCH 09/28] fix timing --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 4776b6e82..3e18aef98 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -404,7 +404,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): BuildGraph(incrementalDataRecords): - Initialize {graph} to an empty directed acyclic graph, where the root nodes - represent the Subsequent Result nodes that have been released as pending. + represent the pending Subsequent Results. 
- For each {incrementalDataRecord} of {incrementalDataRecords}: - Add {incrementalDataRecord} to {graph} as a new Pending Data node directed from the {pendingResults} that it completes, adding each of {pendingResults} From 587589c322224482aaae39c1d0920c98173dee93 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 19 Jun 2024 06:16:58 +0300 Subject: [PATCH 10/28] reuse function --- spec/Section 6 -- Execution.md | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 3e18aef98..f53b4237f 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -373,8 +373,8 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Continue to the next completed Pending Incremental Data node. - Replace {node} in {graph} with a new node corresponding to the Completed Incremental Data for {result}. - - Add each {incrementalDataRecord} of {incrementalDataRecords} on {result} to - {graph} via the same procedure as above. + - Let {resultIncrementalDataRecords} be {incrementalDataRecords} on {result}. + - Update {graph} to {BuildGraph(resultIncrementalDataRecords, graph)}. - Let {completedDeferredFragments} be the set of root nodes in {graph} without any child Pending Data nodes. - Let {completedIncrementalDataNodes} be the set of completed Incremental Data @@ -401,17 +401,17 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): pending)}. - Complete this incremental result stream. -BuildGraph(incrementalDataRecords): +BuildGraph(incrementalDataRecords, graph): -- Initialize {graph} to an empty directed acyclic graph, where the root nodes - represent the pending Subsequent Results. +- Let {newGraph} be a new directed acyclic graph containing all of the nodes and + edges in {graph}. 
- For each {incrementalDataRecord} of {incrementalDataRecords}: - - Add {incrementalDataRecord} to {graph} as a new Pending Data node directed - from the {pendingResults} that it completes, adding each of {pendingResults} - to {graph} as new nodes, if necessary, each directed from its {parent}, if - defined, recursively adding each {parent} as necessary until - {incrementalDataRecord} is connected to {graph}. -- Return {graph}. + - Add {incrementalDataRecord} to {newGraph} as a new Pending Data node + directed from the {pendingResults} that it completes, adding each of + {pendingResults} to {newGraph} as new nodes, if necessary, each directed + from its {parent}, if defined, recursively adding each {parent} as necessary + until {incrementalDataRecord} is connected to {newGraph}. +- Return {newGraph}. GetNonEmptyNewPending(graph, oldPendingResults): From e8368ed6cd24003368f473e19ef8e5742e919f8e Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 19 Jun 2024 06:21:19 +0300 Subject: [PATCH 11/28] fix --- spec/Section 6 -- Execution.md | 16 +++++++--------- 1 file changed, 7 insertions(+), 9 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index f53b4237f..5095f4ee7 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -392,8 +392,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Append {GetCompletedEntry(pendingResult)} to {completed}. - Remove {pendingResult} from {graph}, promoting its child Deferred Fragment nodes to root nodes. - - Let {newPendingResults} be a new set containing the result of - {GetNonEmptyNewPending(graph, pendingResults)}. + - Let {newPendingResults} be the result of {GetNonEmptyNewPending(graph)}. - Add all nodes in {newPendingResults} to {pendingResults}. - Update {graph} to the subgraph rooted at nodes in {pendingResults}. - Let {pending} be the result of {GetPendingEntry(newPendingResults)}. 
@@ -413,18 +412,17 @@ BuildGraph(incrementalDataRecords, graph): until {incrementalDataRecord} is connected to {newGraph}. - Return {newGraph}. -GetNonEmptyNewPending(graph, oldPendingResults): +GetNonEmptyNewPending(graph): -- If not provided, initialize {oldPendingResults} to the empty set. -- Let {rootNodes} be the set of root nodes in {graph}. +- Initialize {newPendingResults} to the empty set. +- Initialize {rootNodes} to the set of root nodes in {graph}. - For each {rootNode} of {rootNodes}: - - If {rootNodes} is in {oldPendingResults}: - - Continue to the next {rootNode}. - If {rootNode} has no children Pending Incremental Data nodes: - Let {children} be the set of child Deferred Fragment nodes of {rootNode}. - - Remove {rootNode} from {rootNodes}. - Add each of the nodes in {children} to {rootNodes}. -- Return {rootNodes}. + - Continue to the next {rootNode} of {rootNodes}. + - Add {rootNode} to {newPendingResults}. +- Return {newPendingResults}. GetInitialResult(data, errors, pendingResults): From 7d8b9d085299d7cc10c0c8b83fd412a4b6403540 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 19 Jun 2024 06:23:15 +0300 Subject: [PATCH 12/28] rename BuildGraph to GraphFromRecords --- spec/Section 6 -- Execution.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 5095f4ee7..91b8f0179 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -351,7 +351,7 @@ The procedure for yielding incremental results is specified by the YieldIncrementalResults(data, errors, incrementalDataRecords): -- Let {graph} be the result of {BuildGraph(incrementalDataRecords)}. +- Let {graph} be the result of {GraphFromRecords(incrementalDataRecords)}. - Let {pendingResults} be the result of {GetNonEmptyNewPending(graph)}. - Update {graph} to the subgraph rooted at nodes in {pendingResults}. - Yield the result of {GetInitialResult(data, errors, pendingResults)}. 
@@ -374,7 +374,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): - Replace {node} in {graph} with a new node corresponding to the Completed Incremental Data for {result}. - Let {resultIncrementalDataRecords} be {incrementalDataRecords} on {result}. - - Update {graph} to {BuildGraph(resultIncrementalDataRecords, graph)}. + - Update {graph} to {GraphFromRecords(resultIncrementalDataRecords, graph)}. - Let {completedDeferredFragments} be the set of root nodes in {graph} without any child Pending Data nodes. - Let {completedIncrementalDataNodes} be the set of completed Incremental Data @@ -400,7 +400,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): pending)}. - Complete this incremental result stream. -BuildGraph(incrementalDataRecords, graph): +GraphFromRecords(incrementalDataRecords, graph): - Let {newGraph} be a new directed acyclic graph containing all of the nodes and edges in {graph}. From a4b506cba72aa411dffc77a216c9b4ec12216ecf Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 19 Jun 2024 06:25:31 +0300 Subject: [PATCH 13/28] reword recursive abort case --- spec/Section 6 -- Execution.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 91b8f0179..d6094e06d 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -407,9 +407,9 @@ GraphFromRecords(incrementalDataRecords, graph): - For each {incrementalDataRecord} of {incrementalDataRecords}: - Add {incrementalDataRecord} to {newGraph} as a new Pending Data node directed from the {pendingResults} that it completes, adding each of - {pendingResults} to {newGraph} as new nodes, if necessary, each directed - from its {parent}, if defined, recursively adding each {parent} as necessary - until {incrementalDataRecord} is connected to {newGraph}. 
+ {pendingResults} to {newGraph} as a new node directed from its {parent}, + recursively adding each {parent} until {incrementalDataRecord} is connected + to {newGraph}, or the {parent} is not defined. - Return {newGraph}. GetNonEmptyNewPending(graph): From c796f03eaf54948ddbe7f638292eb221106c4d4b Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 17 Jul 2024 22:51:33 +0300 Subject: [PATCH 14/28] bring BuildFieldPlan in line with implementation --- spec/Section 6 -- Execution.md | 55 +++++++++++++++++++++------------- 1 file changed, 34 insertions(+), 21 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index d6094e06d..cdc8d9295 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -627,27 +627,41 @@ directives may be applied in either order since they apply commutatively. BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages): - If {parentDeferUsages} is not provided, initialize it to the empty set. -- Initialize {fieldPlan} to an empty ordered map. +- Initialize {groupedFieldSet} to an empty ordered map. +- Initialize {newGroupedFieldSets} to an empty unordered map. +- Let {fieldPlan} be an unordered map containing {groupedFieldSet} and + {newGroupedFieldSets}. - For each {responseKey} and {groupForResponseKey} of {groupedFieldSet}: - - Let {deferUsageSet} be the result of - {GetDeferUsageSet(groupForResponseKey)}. - - Let {groupedFieldSet} be the entry in {fieldPlan} for any equivalent set to - {deferUsageSet}; if no such map exists, create it as an empty ordered map. - - Set the entry for {responseKey} in {groupedFieldSet} to - {groupForResponseKey}. + - Let {filteredDeferUsageSet} be the result of + {GetFilteredDeferUsageSet(groupForResponseKey)}. + - If {filteredDeferUsageSet} is the equivalent set to {parentDeferUsages}: + - Set the entry for {responseKey} in {groupedFieldSet} to + {groupForResponseKey}. 
+  - Otherwise:
+    - Let {newGroupedFieldSet} be the entry in {newGroupedFieldSets} for any
+      equivalent set to {filteredDeferUsageSet}; if no such map exists, create
+      it as an empty ordered map.
+    - Set the entry for {responseKey} in {newGroupedFieldSet} to
+      {groupForResponseKey}.
 - Return {fieldPlan}.
 
-GetDeferUsageSet(fieldDetailsList):
-
-- Let {deferUsageSet} be the set containing the {deferUsage} entry from each
-  item in {fieldDetailsList}.
-- For each {deferUsage} of {deferUsageSet}:
-  - Let {ancestors} be the set of {deferUsage} entries that are ancestors of
-    {deferUsage}, collected by recursively following the {parent} entry on
-    {deferUsage}.
-  - If any of {ancestors} is contained by {deferUsageSet}, remove {deferUsage}
-    from {deferUsageSet}.
-- Return {deferUsageSet}.
+GetFilteredDeferUsageSet(fieldGroup):
+
+- Initialize {filteredDeferUsageSet} to the empty set.
+- For each {fieldDetails} of {fieldGroup}:
+  - Let {deferUsage} be the corresponding entry on {fieldDetails}.
+  - If {deferUsage} is not defined:
+    - Remove all entries from {filteredDeferUsageSet}.
+    - Return {filteredDeferUsageSet}.
+  - Add {deferUsage} to {filteredDeferUsageSet}.
+- For each {deferUsage} in {filteredDeferUsageSet}:
+  - Let {parentDeferUsage} be the corresponding entry on {deferUsage}.
+  - While {parentDeferUsage} is defined:
+    - If {parentDeferUsage} is contained by {filteredDeferUsageSet}:
+      - Remove {deferUsage} from {filteredDeferUsageSet}.
+      - Continue to the next {deferUsage} in {filteredDeferUsageSet}.
+    - Reset {parentDeferUsage} to the corresponding entry on {parentDeferUsage}.
+- Return {filteredDeferUsageSet}.
 
 ## Executing a Field Plan
 
 To execute a field plan, the object value being evaluated and the object type
 need to be known, as well as whether the non-deferred grouped field set must be
 executed serially, or may be executed in parallel.
 
 ExecuteFieldPlan(newDeferUsages, fieldPlan, objectType, objectValue,
 variableValues, serial, path, deferUsageSet, deferMap):
 
 - If {path} is not provided, initialize it to an empty list.
 - Let {newDeferMap} be the result of {GetNewDeferMap(newDeferUsages, path,
   deferMap)}.
-- Let {groupedFieldSet} be the entry in {fieldPlan} for the set equivalent to
-  {deferUsageSet}.
-- Let {newGroupedFieldSets} be the remaining portion of {fieldPlan}. +- Let {groupedFieldSet} and {newGroupedFieldSets} be the corresponding entries + on {fieldPlan}. - Allowing for parallelization, perform the following steps: - Let {data} and {nestedIncrementalDataRecords} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, From f0ebc12ab1f5e31778a7f306878f05456ec3a343 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 17 Jul 2024 23:01:18 +0300 Subject: [PATCH 15/28] rename "deferred grouped field set record" to "execution group" --- spec/Section 6 -- Execution.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index cdc8d9295..bc3113d7c 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -361,7 +361,7 @@ YieldIncrementalResults(data, errors, incrementalDataRecords): let {result} be the corresponding completed result. - If {data} on {result} is {null}: - Initialize {completed} to an empty list. - - Let {parents} be the parent nodes of {deferredGroupedFieldSetRecord}. + - Let {parents} be the parent nodes of {executionGroup}. - Initialize {completed} to an empty list. - For each {pendingResult} of {parents}: - Append {GetCompletedEntry(parent, errors)} to {completed}. @@ -683,7 +683,7 @@ variableValues, serial, path, deferUsageSet, deferMap): variableValues, path, deferUsageSet, newDeferMap)} _serially_ if {serial} is {true}, _normally_ (allowing parallelization) otherwise. - Let {incrementalDataRecords} be the result of - {ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues, + {ExecuteExecutionGroups(objectType, objectValue, variableValues, newGroupedFieldSets, path, newDeferMap)}. - Append all items in {nestedIncrementalDataRecords} to {incrementalDataRecords}. 
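Note: the {GetNewDeferMap()} algorithm used above can be sketched in TypeScript as follows. The record shapes are assumptions for illustration only; the specification does not fix a concrete representation for Defer Usages or Deferred Fragments.

```typescript
// Sketch of GetNewDeferMap(): each new Defer Usage is mapped to a freshly
// created Deferred Fragment at the current path, on top of the existing map.
interface DeferUsage {
  label?: string;
  parentDeferUsage?: DeferUsage;
}

interface DeferredFragment {
  label?: string;
  path: ReadonlyArray<string | number>;
}

function getNewDeferMap(
  newDeferUsages: DeferUsage[],
  path: ReadonlyArray<string | number> = [],
  deferMap: ReadonlyMap<DeferUsage, DeferredFragment> = new Map(),
): Map<DeferUsage, DeferredFragment> {
  // Start from the existing map so enclosing defer usages stay resolvable.
  const newDeferMap = new Map(deferMap);
  for (const deferUsage of newDeferUsages) {
    newDeferMap.set(deferUsage, { label: deferUsage.label, path });
  }
  return newDeferMap;
}
```

A single "abstract" Defer Usage may map to different "concrete" Deferred Fragments over time, since this function is re-invoked with a new {path} for each list item.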
@@ -702,7 +702,7 @@ GetNewDeferMap(newDeferUsages, path, deferMap): - Set the entry for {deferUsage} in {newDeferMap} to {newDeferredFragment}. - Return {newDeferMap}. -ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues, +ExecuteExecutionGroups(objectType, objectValue, variableValues, newGroupedFieldSets, path, deferMap): - Initialize {incrementalDataRecords} to an empty list. @@ -712,7 +712,7 @@ newGroupedFieldSets, path, deferMap): - Let {deferredFragment} be the entry for {deferUsage} in {deferMap}. - Append {deferredFragment} to {deferredFragments}. - Let {incrementalDataRecord} represent the future execution of - {ExecuteDeferredGroupedFieldSet(groupedFieldSet, objectType, objectValue, + {ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues, deferredFragments, path, deferUsageSet, deferMap)}, incrementally completing {deferredFragments} at {path}. - Append {incrementalDataRecord} to {incrementalDataRecords}. @@ -723,8 +723,8 @@ newGroupedFieldSets, path, deferMap): Note: {incrementalDataRecord} can be safely initiated without blocking higher-priority data once any of {deferredFragments} are released as pending. 
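Note: the collection of execution groups described above can be sketched in TypeScript as follows. The types here are deliberately reduced (grouped field sets as maps of response keys); the thunk-wrapping of the deferred work illustrates why an execution group "can be safely initiated without blocking higher-priority data", but none of these shapes are mandated by the specification.

```typescript
// Sketch of CollectExecutionGroups(): one future "execution group" is created
// per deferred grouped field set; its actual execution is wrapped in a thunk
// so it can be initiated later, once its deferred fragments are released.
type ToyGroupedFieldSet = Map<string, string[]>; // responseKey -> field names

interface ExecutionGroup {
  deferUsageLabels: string[];
  execute: () => string[]; // deferred work, run once the group is released
}

function collectExecutionGroups(
  newGroupedFieldSets: Map<string, ToyGroupedFieldSet>, // keyed by defer-usage set
): ExecutionGroup[] {
  const executionGroups: ExecutionGroup[] = [];
  for (const [deferUsageKey, groupedFieldSet] of newGroupedFieldSets) {
    executionGroups.push({
      deferUsageLabels: deferUsageKey.split(","),
      execute: () => [...groupedFieldSet.keys()],
    });
  }
  return executionGroups;
}
```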
-ExecuteDeferredGroupedFieldSet(groupedFieldSet, objectType, objectValue,
-variableValues, path, deferUsageSet, deferMap):
+ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues,
+path, deferUsageSet, deferMap):
 
 - Let {data} and {incrementalDataRecords} be the result of running
   {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue,

From 4b862500c2c11f25adb322d928f6c1a3ffcecf56 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski <yaacovCR@gmail.com>
Date: Wed, 17 Jul 2024 23:02:43 +0300
Subject: [PATCH 16/28] rename ExecuteExecutionGroup to CollectExecutionGroup

---
 spec/Section 6 -- Execution.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index bc3113d7c..0371cfa52 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -683,7 +683,7 @@ variableValues, serial, path, deferUsageSet, deferMap):
   variableValues, path, deferUsageSet, newDeferMap)} _serially_ if {serial} is
   {true}, _normally_ (allowing parallelization) otherwise.
 - Let {incrementalDataRecords} be the result of
-  {ExecuteExecutionGroups(objectType, objectValue, variableValues,
+  {CollectExecutionGroups(objectType, objectValue, variableValues,
   newGroupedFieldSets, path, newDeferMap)}.
 - Append all items in {nestedIncrementalDataRecords} to
   {incrementalDataRecords}.
@@ -702,7 +702,7 @@ GetNewDeferMap(newDeferUsages, path, deferMap):
 - Set the entry for {deferUsage} in {newDeferMap} to {newDeferredFragment}.
 - Return {newDeferMap}.
 
-ExecuteExecutionGroups(objectType, objectValue, variableValues,
+CollectExecutionGroups(objectType, objectValue, variableValues,
 newGroupedFieldSets, path, deferMap):
 
 - Initialize {incrementalDataRecords} to an empty list.
@@ -712,7 +712,7 @@ newGroupedFieldSets, path, deferMap):
 - Let {deferredFragment} be the entry for {deferUsage} in {deferMap}.
 - Append {deferredFragment} to {deferredFragments}.
 - Let {incrementalDataRecord} represent the future execution of
-  {ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue,
+  {CollectExecutionGroup(groupedFieldSet, objectType, objectValue,
   variableValues, deferredFragments, path, deferUsageSet, deferMap)},
   incrementally completing {deferredFragments} at {path}.
 - Append {incrementalDataRecord} to {incrementalDataRecords}.
@@ -723,7 +723,7 @@ newGroupedFieldSets, path, deferMap):
 
 Note: {incrementalDataRecord} can be safely initiated without blocking
 higher-priority data once any of {deferredFragments} are released as pending.
 
-ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues,
-path, deferUsageSet, deferMap):
+CollectExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues,
+path, deferUsageSet, deferMap):
 
 - Let {data} and {incrementalDataRecords} be the result of running
   {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue,

From db54ad8cefbd9c5a8ee2cc732c404b2b12828bb5 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski <yaacovCR@gmail.com>
Date: Thu, 18 Jul 2024 17:26:26 +0300
Subject: [PATCH 17/28] properly initialize deferUsages with their parents

---
 spec/Section 6 -- Execution.md | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 0371cfa52..a53183f0b 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -567,8 +567,11 @@ visitedFragments):
   - If {DoesFragmentTypeApply(objectType, fragmentType)} is {false}, continue
     with the next {selection} in {selectionSet}.
   - Let {fragmentSelectionSet} be the top-level selection set of {fragment}.
-  - If {deferDirective} is defined, let {fragmentDeferUsage} be
-    {deferDirective} and append it to {newDeferUsages}.
+  - If {deferDirective} is defined:
+    - Let {label} be the corresponding entry on {deferDirective}.
+    - Let {parentDeferUsage} be {deferUsage}.
+    - Let {fragmentDeferUsage} be an unordered map containing {label} and
+      {parentDeferUsage}.
+    - Append {fragmentDeferUsage} to {newDeferUsages}.
   - Otherwise, let {fragmentDeferUsage} be {deferUsage}.
   - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result
     of calling {CollectFields(objectType, fragmentSelectionSet,
@@ -592,8 +595,11 @@ visitedFragments):
       - Let {deferDirective} be that directive.
       - If this execution is for a subscription operation, raise a _field
         error_.
-  - If {deferDirective} is defined, let {fragmentDeferUsage} be
-    {deferDirective} and append it to {newDeferUsages}.
+  - If {deferDirective} is defined:
+    - Let {label} be the corresponding entry on {deferDirective}.
+    - Let {parentDeferUsage} be {deferUsage}.
+    - Let {fragmentDeferUsage} be an unordered map containing {label} and
+      {parentDeferUsage}.
+    - Append {fragmentDeferUsage} to {newDeferUsages}.
   - Otherwise, let {fragmentDeferUsage} be {deferUsage}.
   - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result
     of calling {CollectFields(objectType, fragmentSelectionSet,

From a2516e2891861364c107298685cb233e69bb1513 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski <yaacovCR@gmail.com>
Date: Thu, 18 Jul 2024 17:27:41 +0300
Subject: [PATCH 18/28] move Field Collection back to where it was mostly to
 reduce the diff.

---
 spec/Section 6 -- Execution.md | 358 ++++++++++++++++-----------------
 1 file changed, 179 insertions(+), 179 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index a53183f0b..942c24416 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -490,185 +490,6 @@ BatchIncrementalResults(incrementalResults):
   of {hasNext} on the final item in the list.
 - Yield {batchedIncrementalResult}.
 
-### Field Collection
-
-Before execution, the _selection set_ is converted to a grouped field set by
-calling {CollectFields()}. Each entry in the grouped field set is a list of
-fields that share a response key (the alias if defined, otherwise the field
-name). This ensures all fields with the same response key (including those in
-referenced fragments) are executed at the same time.
- -As an example, collecting the fields of this selection set would collect two -instances of the field `a` and one of field `b`: - -```graphql example -{ - a { - subfield1 - } - ...ExampleFragment -} - -fragment ExampleFragment on Query { - a { - subfield2 - } - b -} -``` - -The depth-first-search order of the field groups produced by {CollectFields()} -is maintained through execution, ensuring that fields appear in the executed -response in a stable and predictable order. - -CollectFields(objectType, selectionSet, variableValues, deferUsage, -visitedFragments): - -- If {visitedFragments} is not provided, initialize it to the empty set. -- Initialize {groupedFields} to an empty ordered map of lists. -- Initialize {newDeferUsages} to an empty list. -- For each {selection} in {selectionSet}: - - If {selection} provides the directive `@skip`, let {skipDirective} be that - directive. - - If {skipDirective}'s {if} argument is {true} or is a variable in - {variableValues} with the value {true}, continue with the next {selection} - in {selectionSet}. - - If {selection} provides the directive `@include`, let {includeDirective} be - that directive. - - If {includeDirective}'s {if} argument is not {true} and is not a variable - in {variableValues} with the value {true}, continue with the next - {selection} in {selectionSet}. - - If {selection} is a {Field}: - - Let {responseKey} be the response key of {selection} (the alias if - defined, otherwise the field name). - - Let {fieldDetails} be a new unordered map containing {deferUsage}. - - Set the entry for {field} on {fieldDetails} to {selection}. and - {deferUsage}. - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append {fieldDetails} to the {groupForResponseKey}. - - If {selection} is a {FragmentSpread}: - - Let {fragmentSpreadName} be the name of {selection}. 
- - If {fragmentSpreadName} provides the directive `@defer` and its {if} - argument is not {false} and is not a variable in {variableValues} with the - value {false}: - - Let {deferDirective} be that directive. - - If this execution is for a subscription operation, raise a _field - error_. - - If {deferDirective} is not defined: - - If {fragmentSpreadName} is in {visitedFragments}, continue with the next - {selection} in {selectionSet}. - - Add {fragmentSpreadName} to {visitedFragments}. - - Let {fragment} be the Fragment in the current Document whose name is - {fragmentSpreadName}. - - If no such {fragment} exists, continue with the next {selection} in - {selectionSet}. - - Let {fragmentType} be the type condition on {fragment}. - - If {DoesFragmentTypeApply(objectType, fragmentType)} is {false}, continue - with the next {selection} in {selectionSet}. - - Let {fragmentSelectionSet} be the top-level selection set of {fragment}. - - If {deferDirective} is defined: - - Let {path} be the corresponding entry on {deferDirective}. - - Let {parentDeferUsage} be {deferUsage}. - - Let {fragmentDeferUsage} be an unordered map containing {path} and - {parentDeferUsage}. - - Otherwise, let {fragmentDeferUsage} be {deferUsage}. - - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result - of calling {CollectFields(objectType, fragmentSelectionSet, - variableValues, fragmentDeferUsage, visitedFragments)}. - - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - - Let {responseKey} be the response key shared by all fields in - {fragmentGroup}. - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append all items in {fragmentGroup} to {groupForResponseKey}. - - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}. - - If {selection} is an {InlineFragment}: - - Let {fragmentType} be the type condition on {selection}. 
- - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType, - fragmentType)} is {false}, continue with the next {selection} in - {selectionSet}. - - Let {fragmentSelectionSet} be the top-level selection set of {selection}. - - If {InlineFragment} provides the directive `@defer` and its {if} argument - is not {false} and is not a variable in {variableValues} with the value - {false}: - - Let {deferDirective} be that directive. - - If this execution is for a subscription operation, raise a _field - error_. - - If {deferDirective} is defined: - - Let {path} be the corresponding entry on {deferDirective}. - - Let {parentDeferUsage} be {deferUsage}. - - Let {fragmentDeferUsage} be an unordered map containing {path} and - {parentDeferUsage}. - - Otherwise, let {fragmentDeferUsage} be {deferUsage}. - - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result - of calling {CollectFields(objectType, fragmentSelectionSet, - variableValues, fragmentDeferUsage, visitedFragments)}. - - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - - Let {responseKey} be the response key shared by all fields in - {fragmentGroup}. - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append all items in {fragmentGroup} to {groupForResponseKey}. - - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}. -- Return {groupedFields} and {newDeferUsages}. - -DoesFragmentTypeApply(objectType, fragmentType): - -- If {fragmentType} is an Object Type: - - If {objectType} and {fragmentType} are the same type, return {true}, - otherwise return {false}. -- If {fragmentType} is an Interface Type: - - If {objectType} is an implementation of {fragmentType}, return {true} - otherwise return {false}. -- If {fragmentType} is a Union: - - If {objectType} is a possible type of {fragmentType}, return {true} - otherwise return {false}. 
- -Note: The steps in {CollectFields()} evaluating the `@skip` and `@include` -directives may be applied in either order since they apply commutatively. - -### Field Plan Generation - -BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages): - -- If {parentDeferUsages} is not provided, initialize it to the empty set. -- Initialize {groupedFieldSet} to an empty ordered map. -- Initialize {newGroupedFieldSets} to an empty unordered map. -- Let {fieldPlan} be an unordered map containing {groupedFieldSet} and - {newGroupedFieldSets}. -- For each {responseKey} and {groupForResponseKey} of {groupedFieldSet}: - - Let {filteredDeferUsageSet} be the result of - {GetFilteredDeferUsageSet(groupForResponseKey)}. - - If {filteredDeferUsageSet} is the equivalent set to {parentDeferUsages}: - - Set the entry for {responseKey} in {groupedFieldSet} to - {groupForResponseKey}. - - Otherwise: - - Let {newGroupedFieldSet} be the entry in {newGroupedFieldSets} for any - equivalent set to {deferUsageSet}; if no such map exists, create it as an - empty ordered map. - - Set the entry for {responseKey} in {newGroupedFieldSet} to - {groupForResponseKey}. -- Return {fieldPlan}. - -GetFilteredDeferUsageSet(fieldGroup): - -- Initialize {filteredDeferUsageSet} to the empty set. -- For each {fieldDetails} of {fieldGroup}: - - Let {deferUsage} be the corresponding entry on {fieldDetails}. - - If {deferUsage} is not defined: - - Remove all entries from {filteredDeferUsageSet}. - - Return {filteredDeferUsageSet}. - - Add {deferUsage} to {filteredDeferUsageSet}. -- For each {deferUsage} in {filteredDeferUsageSet}: - - Let {parentDeferUsage} be the corresponding entry on {deferUsage}. - - While {parentDeferUsage} is defined: - - If {parentDeferUsage} is contained by {filteredDeferUsageSet}: - - Remove {deferUsage} from {filteredDeferUsageSet}. - - Continue to the next {deferUsage} in {filteredDeferUsageSet}. - - Reset {parentDeferUsage} to the corresponding entry on {parentDeferUsage}. 
-- Return {filteredDeferUsageSet}. - ## Executing a Field Plan To execute a field plan, the object value being evaluated and the object type @@ -881,6 +702,185 @@ A correct executor must generate the following result for that _selection set_: } ``` +### Field Collection + +Before execution, the _selection set_ is converted to a grouped field set by +calling {CollectFields()}. Each entry in the grouped field set is a list of +fields that share a response key (the alias if defined, otherwise the field +name). This ensures all fields with the same response key (including those in +referenced fragments) are executed at the same time. + +As an example, collecting the fields of this selection set would collect two +instances of the field `a` and one of field `b`: + +```graphql example +{ + a { + subfield1 + } + ...ExampleFragment +} + +fragment ExampleFragment on Query { + a { + subfield2 + } + b +} +``` + +The depth-first-search order of the field groups produced by {CollectFields()} +is maintained through execution, ensuring that fields appear in the executed +response in a stable and predictable order. + +CollectFields(objectType, selectionSet, variableValues, deferUsage, +visitedFragments): + +- If {visitedFragments} is not provided, initialize it to the empty set. +- Initialize {groupedFields} to an empty ordered map of lists. +- Initialize {newDeferUsages} to an empty list. +- For each {selection} in {selectionSet}: + - If {selection} provides the directive `@skip`, let {skipDirective} be that + directive. + - If {skipDirective}'s {if} argument is {true} or is a variable in + {variableValues} with the value {true}, continue with the next {selection} + in {selectionSet}. + - If {selection} provides the directive `@include`, let {includeDirective} be + that directive. + - If {includeDirective}'s {if} argument is not {true} and is not a variable + in {variableValues} with the value {true}, continue with the next + {selection} in {selectionSet}. 
+  - If {selection} is a {Field}:
+    - Let {responseKey} be the response key of {selection} (the alias if
+      defined, otherwise the field name).
+    - Let {fieldDetails} be a new unordered map containing {deferUsage}.
+    - Set the entry for {field} on {fieldDetails} to {selection}.
+    - Let {groupForResponseKey} be the list in {groupedFields} for
+      {responseKey}; if no such list exists, create it as an empty list.
+    - Append {fieldDetails} to {groupForResponseKey}.
+  - If {selection} is a {FragmentSpread}:
+    - Let {fragmentSpreadName} be the name of {selection}.
+    - If {fragmentSpreadName} provides the directive `@defer` and its {if}
+      argument is not {false} and is not a variable in {variableValues} with the
+      value {false}:
+      - Let {deferDirective} be that directive.
+      - If this execution is for a subscription operation, raise a _field
+        error_.
+    - If {deferDirective} is not defined:
+      - If {fragmentSpreadName} is in {visitedFragments}, continue with the next
+        {selection} in {selectionSet}.
+      - Add {fragmentSpreadName} to {visitedFragments}.
+    - Let {fragment} be the Fragment in the current Document whose name is
+      {fragmentSpreadName}.
+    - If no such {fragment} exists, continue with the next {selection} in
+      {selectionSet}.
+    - Let {fragmentType} be the type condition on {fragment}.
+    - If {DoesFragmentTypeApply(objectType, fragmentType)} is {false}, continue
+      with the next {selection} in {selectionSet}.
+    - Let {fragmentSelectionSet} be the top-level selection set of {fragment}.
+    - If {deferDirective} is defined:
+      - Let {label} be the corresponding entry on {deferDirective}.
+      - Let {parentDeferUsage} be {deferUsage}.
+      - Let {fragmentDeferUsage} be an unordered map containing {label} and
+        {parentDeferUsage}.
+      - Append {fragmentDeferUsage} to {newDeferUsages}.
+    - Otherwise, let {fragmentDeferUsage} be {deferUsage}.
+    - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result
+      of calling {CollectFields(objectType, fragmentSelectionSet,
+      variableValues, fragmentDeferUsage, visitedFragments)}.
+    - For each {fragmentGroup} in {fragmentGroupedFieldSet}:
+      - Let {responseKey} be the response key shared by all fields in
+        {fragmentGroup}.
+      - Let {groupForResponseKey} be the list in {groupedFields} for
+        {responseKey}; if no such list exists, create it as an empty list.
+      - Append all items in {fragmentGroup} to {groupForResponseKey}.
+    - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}.
+  - If {selection} is an {InlineFragment}:
+    - Let {fragmentType} be the type condition on {selection}.
+    - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType,
+      fragmentType)} is {false}, continue with the next {selection} in
+      {selectionSet}.
+    - Let {fragmentSelectionSet} be the top-level selection set of {selection}.
+    - If {InlineFragment} provides the directive `@defer` and its {if} argument
+      is not {false} and is not a variable in {variableValues} with the value
+      {false}:
+      - Let {deferDirective} be that directive.
+      - If this execution is for a subscription operation, raise a _field
+        error_.
+    - If {deferDirective} is defined:
+      - Let {label} be the corresponding entry on {deferDirective}.
+      - Let {parentDeferUsage} be {deferUsage}.
+      - Let {fragmentDeferUsage} be an unordered map containing {label} and
+        {parentDeferUsage}.
+      - Append {fragmentDeferUsage} to {newDeferUsages}.
+    - Otherwise, let {fragmentDeferUsage} be {deferUsage}.
+    - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result
+      of calling {CollectFields(objectType, fragmentSelectionSet,
+      variableValues, fragmentDeferUsage, visitedFragments)}.
+    - For each {fragmentGroup} in {fragmentGroupedFieldSet}:
+      - Let {responseKey} be the response key shared by all fields in
+        {fragmentGroup}.
+      - Let {groupForResponseKey} be the list in {groupedFields} for
+        {responseKey}; if no such list exists, create it as an empty list.
+      - Append all items in {fragmentGroup} to {groupForResponseKey}.
+    - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}.
+- Return {groupedFields} and {newDeferUsages}.
+
+DoesFragmentTypeApply(objectType, fragmentType):
+
+- If {fragmentType} is an Object Type:
+  - If {objectType} and {fragmentType} are the same type, return {true},
+    otherwise return {false}.
+- If {fragmentType} is an Interface Type:
+  - If {objectType} is an implementation of {fragmentType}, return {true}
+    otherwise return {false}.
+- If {fragmentType} is a Union:
+  - If {objectType} is a possible type of {fragmentType}, return {true}
+    otherwise return {false}.
+
+Note: The steps in {CollectFields()} evaluating the `@skip` and `@include`
+directives may be applied in either order since they apply commutatively.
+
+### Field Plan Generation
+
+BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages):
+
+- If {parentDeferUsages} is not provided, initialize it to the empty set.
+- Initialize {groupedFieldSet} to an empty ordered map.
+- Initialize {newGroupedFieldSets} to an empty unordered map.
+- Let {fieldPlan} be an unordered map containing {groupedFieldSet} and
+  {newGroupedFieldSets}.
+- For each {responseKey} and {groupForResponseKey} of
+  {originalGroupedFieldSet}:
+  - Let {filteredDeferUsageSet} be the result of
+    {GetFilteredDeferUsageSet(groupForResponseKey)}.
+  - If {filteredDeferUsageSet} is the equivalent set to {parentDeferUsages}:
+    - Set the entry for {responseKey} in {groupedFieldSet} to
+      {groupForResponseKey}.
+  - Otherwise:
+    - Let {newGroupedFieldSet} be the entry in {newGroupedFieldSets} for any
+      equivalent set to {filteredDeferUsageSet}; if no such map exists, create
+      it as an empty ordered map.
+    - Set the entry for {responseKey} in {newGroupedFieldSet} to
+      {groupForResponseKey}.
+- Return {fieldPlan}.
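Note: the partition performed by {BuildFieldPlan()} can be sketched in TypeScript as follows. For brevity, this sketch represents each field's filtered defer-usage set directly as a list of labels and elides the ancestor filtering of {GetFilteredDeferUsageSet()}; these simplifications, like all names here, are assumptions for illustration only.

```typescript
// Sketch of BuildFieldPlan(): entries whose defer-usage set matches the
// parent's stay in the non-deferred grouped field set; all other entries are
// grouped by equivalent defer-usage set into new grouped field sets.
function buildFieldPlan(
  originalGroupedFieldSet: Map<string, string[]>, // responseKey -> defer-usage labels
  parentDeferUsages: Set<string> = new Set(),
) {
  const groupedFieldSet = new Map<string, string[]>();
  const newGroupedFieldSets = new Map<string, Map<string, string[]>>();
  const setKey = (labels: Iterable<string>) => [...labels].sort().join(",");
  const parentKey = setKey(parentDeferUsages);
  for (const [responseKey, deferUsageLabels] of originalGroupedFieldSet) {
    const key = setKey(deferUsageLabels);
    if (key === parentKey) {
      groupedFieldSet.set(responseKey, deferUsageLabels);
    } else {
      // Entries with equivalent defer-usage sets share one grouped field set.
      let target = newGroupedFieldSets.get(key);
      if (target === undefined) {
        target = new Map();
        newGroupedFieldSets.set(key, target);
      }
      target.set(responseKey, deferUsageLabels);
    }
  }
  return { groupedFieldSet, newGroupedFieldSets };
}
```

Grouping by equivalent set is what lets two fields deferred by the same `@defer` be delivered together in one incremental payload.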
+ +GetFilteredDeferUsageSet(fieldGroup): + +- Initialize {filteredDeferUsageSet} to the empty set. +- For each {fieldDetails} of {fieldGroup}: + - Let {deferUsage} be the corresponding entry on {fieldDetails}. + - If {deferUsage} is not defined: + - Remove all entries from {filteredDeferUsageSet}. + - Return {filteredDeferUsageSet}. + - Add {deferUsage} to {filteredDeferUsageSet}. +- For each {deferUsage} in {filteredDeferUsageSet}: + - Let {parentDeferUsage} be the corresponding entry on {deferUsage}. + - While {parentDeferUsage} is defined: + - If {parentDeferUsage} is contained by {filteredDeferUsageSet}: + - Remove {deferUsage} from {filteredDeferUsageSet}. + - Continue to the next {deferUsage} in {filteredDeferUsageSet}. + - Reset {parentDeferUsage} to the corresponding entry on {parentDeferUsage}. +- Return {filteredDeferUsageSet}. + ## Executing Fields Each field requested in the grouped field set that is defined on the selected From 4b19cf5ab7775cf099f1a0fb7bc5ccfee9209824 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 18 Jul 2024 17:28:37 +0300 Subject: [PATCH 19/28] f --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 942c24416..b54f12674 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -590,7 +590,7 @@ path, deferUsageSet, deferMap): - Return {resultMap} and {incrementalDataRecords}. Note: {resultMap} is ordered by which fields appear first in the operation. This -is explained in greater detail in the Field Collection section above. +is explained in greater detail in the Field Collection section below. 
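Note: the {GetFilteredDeferUsageSet()} algorithm can be sketched in TypeScript as follows. The record shapes are illustrative assumptions; the two behaviors shown — a non-deferred occurrence emptying the set, and ancestors subsuming descendants — follow the algorithm's steps.

```typescript
// Sketch of GetFilteredDeferUsageSet(): if any occurrence of the field is not
// deferred, no defer usages apply; otherwise a defer usage is dropped when one
// of its ancestors is already present in the set.
interface DeferUsage {
  label: string;
  parentDeferUsage?: DeferUsage;
}

interface FieldDetails {
  deferUsage?: DeferUsage;
}

function getFilteredDeferUsageSet(fieldDetailsList: FieldDetails[]): Set<DeferUsage> {
  const filtered = new Set<DeferUsage>();
  for (const { deferUsage } of fieldDetailsList) {
    if (deferUsage === undefined) {
      // The field also appears outside any @defer: deliver with the parent.
      return new Set();
    }
    filtered.add(deferUsage);
  }
  for (const deferUsage of filtered) {
    let parent = deferUsage.parentDeferUsage;
    while (parent !== undefined) {
      if (filtered.has(parent)) {
        // An ancestor subsumes this usage; keep only the ancestor.
        filtered.delete(deferUsage);
        break;
      }
      parent = parent.parentDeferUsage;
    }
  }
  return filtered;
}
```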
**Errors and Non-Null Fields** From 313aaa65042ab6542bb21457ff99ff13281dac40 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 18 Jul 2024 17:30:44 +0300 Subject: [PATCH 20/28] use fieldDetailsList consistently instead of sometimes fieldGroup, for consistency and so as to remove another "Group" term --- spec/Section 6 -- Execution.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index b54f12674..47cbc84df 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -863,10 +863,10 @@ BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages): {groupForResponseKey}. - Return {fieldPlan}. -GetFilteredDeferUsageSet(fieldGroup): +GetFilteredDeferUsageSet(fieldDetailsList): - Initialize {filteredDeferUsageSet} to the empty set. -- For each {fieldDetails} of {fieldGroup}: +- For each {fieldDetails} of {fieldDetailsList}: - Let {deferUsage} be the corresponding entry on {fieldDetails}. - If {deferUsage} is not defined: - Remove all entries from {filteredDeferUsageSet}. From 4571da97d25c12a9adf8feae3ba08a870b40c690 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 18 Jul 2024 23:12:42 +0300 Subject: [PATCH 21/28] add info re: data structures --- spec/Section 6 -- Execution.md | 38 ++++++++++++++++++++++++++++++++++ 1 file changed, 38 insertions(+) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 47cbc84df..68662c638 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -733,6 +733,40 @@ The depth-first-search order of the field groups produced by {CollectFields()} is maintained through execution, ensuring that fields appear in the executed response in a stable and predictable order. +The {CollectFields()} algorithm makes use of the following data types: + +Defer Usage Records are unordered maps representing the usage of a `@defer` +directive within a given operation. 
Defer Usages are "abstract" in that they
+include information about the `@defer` directive from the AST of the GraphQL
+document. A single Defer Usage may be used to create many "concrete" Delivery
+Groups when a `@defer` is included within a list type.
+
+Defer Usages contain the following information:
+
+- {label}: the `label` argument provided by the given `@defer` directive, if
+  any, otherwise {undefined}.
+- {parentDeferUsage}: a Defer Usage corresponding to the `@defer` directive
+  enclosing this `@defer` directive, if any, otherwise {undefined}.
+
+The {parentDeferUsage} entry is used to build distinct Execution Groups as
+discussed within the Field Plan Generation section below.
+
+Field Details Records are unordered maps containing the following entries:
+
+- {field}: the Field selection.
+- {deferUsage}: the Defer Usage enclosing the selection, if any, otherwise
+  {undefined}.
+
+A Grouped Field Set is an ordered map of keys to lists of Field Details. The
+keys are the same as those of the response: the alias for the field, if
+defined, otherwise the field name.
+
+The {CollectFields()} algorithm returns:
+
+- {groupedFieldSet}: the Grouped Field Set for the fields in the selection set.
+- {newDeferUsages}: a list of new Defer Usages encountered during this field
+  collection.
+
 CollectFields(objectType, selectionSet, variableValues, deferUsage,
 visitedFragments):

@@ -840,6 +874,10 @@ DoesFragmentTypeApply(objectType, fragmentType):
 Note: The steps in {CollectFields()} evaluating the `@skip` and `@include`
 directives may be applied in either order since they apply commutatively.

+Note: When completing a List field, the {CollectFields()} algorithm is invoked
+with the same arguments for each element of the list. GraphQL Services may
+choose to memoize their implementations of {CollectFields()}.
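The memoization suggested in the note above might be sketched as follows. This is a hypothetical illustration, not part of the specification; the `SelectionSet` and `GroupedFieldSet` shapes are simplified stand-ins for the spec's data structures, and a real service would key the cache on its actual `(objectType, selectionSet, deferUsage)` inputs.

```typescript
// Simplified stand-ins for the spec's data structures (assumptions for
// illustration only).
type SelectionSet = { selections: ReadonlyArray<string> };
type GroupedFieldSet = Map<string, Array<string>>;

// Uncached field collection: group each response key's occurrences together.
function collectFieldsUncached(selectionSet: SelectionSet): GroupedFieldSet {
  const grouped: GroupedFieldSet = new Map();
  for (const responseKey of selectionSet.selections) {
    const group = grouped.get(responseKey);
    if (group === undefined) {
      grouped.set(responseKey, [responseKey]);
    } else {
      group.push(responseKey);
    }
  }
  return grouped;
}

// Because completing a list invokes field collection with the same arguments
// for every element, caching by selection-set identity performs the work once
// per selection set rather than once per list item.
const collectFieldsCache = new WeakMap<SelectionSet, GroupedFieldSet>();

function collectFields(selectionSet: SelectionSet): GroupedFieldSet {
  let grouped = collectFieldsCache.get(selectionSet);
  if (grouped === undefined) {
    grouped = collectFieldsUncached(selectionSet);
    collectFieldsCache.set(selectionSet, grouped);
  }
  return grouped;
}
```

Repeated calls with the same selection set then return the identical grouped field set object, which is what makes the per-list-element invocations cheap.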
+

### Field Plan Generation

BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages):

From 3556851e332f779114840825730e63558246f5e7 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Sat, 20 Jul 2024 21:43:11 +0300
Subject: [PATCH 22/28] rename FieldPlan to ExecutionPlan

---
 spec/Section 6 -- Execution.md | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 68662c638..6bae848ec 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -331,9 +331,9 @@ serial):

- If {serial} is not provided, initialize it to {false}.
- Let {groupedFieldSet} and {newDeferUsages} be the result of
  {CollectFields(objectType, selectionSet, variableValues)}.
-- Let {fieldPlan} be the result of {BuildFieldPlan(groupedFieldSet)}.
+- Let {executionPlan} be the result of {BuildExecutionPlan(groupedFieldSet)}.
- Let {data} and {incrementalDataRecords} be the result of
-  {ExecuteFieldPlan(newDeferUsages, fieldPlan, objectType, initialValue,
+  {ExecuteExecutionPlan(newDeferUsages, executionPlan, objectType, initialValue,
  variableValues, serial)}.
- Let {errors} be the list of all _field error_ raised while completing {data}.
- If {incrementalDataRecords} is empty, return an unordered map containing
@@ -490,20 +490,20 @@ BatchIncrementalResults(incrementalResults):
  of {hasNext} on the final item in the list.
- Yield {batchedIncrementalResult}.

-## Executing a Field Plan
+## Executing an Execution Plan

-To execute a field plan, the object value being evaluated and the object type
-need to be known, as well as whether the non-deferred grouped field set must be
-executed serially, or may be executed in parallel.
+To execute an execution plan, the object value being evaluated and the object
+type need to be known, as well as whether the non-deferred grouped field set
+must be executed serially, or may be executed in parallel.
-ExecuteFieldPlan(newDeferUsages, fieldPlan, objectType, objectValue, +ExecuteExecutionPlan(newDeferUsages, executionPlan, objectType, objectValue, variableValues, serial, path, deferUsageSet, deferMap): - If {path} is not provided, initialize it to an empty list. - Let {newDeferMap} be the result of {GetNewDeferMap(newDeferUsages, path, deferMap)}. - Let {groupedFieldSet} and {newGroupedFieldSets} be the corresponding entries - on {fieldPlan}. + on {executionPlan}. - Allowing for parallelization, perform the following steps: - Let {data} and {nestedIncrementalDataRecords} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, @@ -749,7 +749,7 @@ Defer Usages contain the following information: enclosing this `@defer` directive, if any, otherwise {undefined}. The {parentDeferUsage} entry is used to build distinct Execution Groups as -discussed within the Field Plan Generation section below. +discussed within the Execution Plan Generation section below. Field Details Records are unordered maps containing the following entries: @@ -878,14 +878,14 @@ Note: When completing a List field, the {CollectFields} algorithm is invoked with the same arguments for each element of the list. GraphQL Services may choose to memoize their implementations of {CollectFields}. -### Field Plan Generation +### Execution Plan Generation -BuildFieldPlan(originalGroupedFieldSet, parentDeferUsages): +BuildExecutionPlan(originalGroupedFieldSet, parentDeferUsages): - If {parentDeferUsages} is not provided, initialize it to the empty set. - Initialize {groupedFieldSet} to an empty ordered map. - Initialize {newGroupedFieldSets} to an empty unordered map. -- Let {fieldPlan} be an unordered map containing {groupedFieldSet} and +- Let {executionPlan} be an unordered map containing {groupedFieldSet} and {newGroupedFieldSets}. 
- For each {responseKey} and {groupForResponseKey} of {originalGroupedFieldSet}:
  - Let {filteredDeferUsageSet} be the result of
    {GetFilteredDeferUsageSet(groupForResponseKey)}.
  - If {filteredDeferUsageSet} is equivalent to {parentDeferUsages}:
    - Set the entry for {responseKey} in {groupedFieldSet} to
      {groupForResponseKey}.
  - Otherwise:
    - Let {newGroupedFieldSet} be the entry in {newGroupedFieldSets} for
      {filteredDeferUsageSet}; if no such map exists, create it as an empty
      ordered map.
    - Set the entry for {responseKey} in {newGroupedFieldSet} to
      {groupForResponseKey}.
-- Return {fieldPlan}.
+- Return {executionPlan}.

GetFilteredDeferUsageSet(fieldDetailsList):

From a950a968d02e437b3c0071e741389b53ee7ebd54 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Wed, 24 Jul 2024 20:31:30 +0300
Subject: [PATCH 23/28] path => label

---
 spec/Section 6 -- Execution.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 6bae848ec..3c03f94cb 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -814,9 +814,9 @@ visitedFragments):
    with the next {selection} in {selectionSet}.
  - Let {fragmentSelectionSet} be the top-level selection set of {fragment}.
  - If {deferDirective} is defined:
-    - Let {path} be the corresponding entry on {deferDirective}.
+    - Let {label} be the corresponding entry on {deferDirective}.
    - Let {parentDeferUsage} be {deferUsage}.
- - Let {fragmentDeferUsage} be an unordered map containing {path} and + - Let {fragmentDeferUsage} be an unordered map containing {label} and {parentDeferUsage}. - Otherwise, let {fragmentDeferUsage} be {deferUsage}. - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result @@ -842,9 +842,9 @@ visitedFragments): - If this execution is for a subscription operation, raise a _field error_. - If {deferDirective} is defined: - - Let {path} be the corresponding entry on {deferDirective}. + - Let {label} be the corresponding entry on {deferDirective}. - Let {parentDeferUsage} be {deferUsage}. - - Let {fragmentDeferUsage} be an unordered map containing {path} and + - Let {fragmentDeferUsage} be an unordered map containing {label} and {parentDeferUsage}. - Otherwise, let {fragmentDeferUsage} be {deferUsage}. - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result From afacc0ac85016bba6f0d7173704dd5a1937d2ab3 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 25 Jul 2024 19:46:19 +0300 Subject: [PATCH 24/28] add missing arguments --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 3c03f94cb..955e2344c 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -583,7 +583,7 @@ path, deferUsageSet, deferMap): - If {fieldType} is defined: - Let {responseValue} and {fieldIncrementalDataRecords} be the result of {ExecuteField(objectType, objectValue, fieldType, fields, variableValues, - path)}. + path, deferUsageSet, deferMap)}. - Set {responseValue} as the value for {responseKey} in {resultMap}. - Append all items in {fieldIncrementalDataRecords} to {incrementalDataRecords}. 
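The ancestor filtering performed by the {GetFilteredDeferUsageSet()} algorithm shown earlier in this series might be sketched as follows. This is an illustrative TypeScript sketch under simplified assumptions (the record shapes below are stand-ins, not the spec's concrete types): if any field in the list is not deferred, the response key belongs to the initial result, and otherwise only the shallowest Defer Usages are kept.

```typescript
// Simplified stand-ins for Defer Usage and Field Details records (assumptions
// for illustration only).
interface DeferUsage {
  label?: string;
  parentDeferUsage?: DeferUsage;
}

interface FieldDetails {
  field: string;
  deferUsage?: DeferUsage;
}

function getFilteredDeferUsageSet(
  fieldDetailsList: ReadonlyArray<FieldDetails>,
): Set<DeferUsage> {
  const filtered = new Set<DeferUsage>();
  for (const details of fieldDetailsList) {
    if (details.deferUsage === undefined) {
      // The field is also selected without @defer, so it is part of the
      // initial result: return the empty set.
      return new Set();
    }
    filtered.add(details.deferUsage);
  }
  // Drop any Defer Usage whose ancestor is also present, since the enclosing
  // @defer already covers it. (Deleting the current entry while iterating a
  // Set is safe in JavaScript.)
  for (const deferUsage of filtered) {
    let parent = deferUsage.parentDeferUsage;
    while (parent !== undefined) {
      if (filtered.has(parent)) {
        filtered.delete(deferUsage);
        break;
      }
      parent = parent.parentDeferUsage;
    }
  }
  return filtered;
}
```

The result is the minimal set of Defer Usages under which the response key should be grouped when building an execution plan.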
From 8677044c54ce88d934eac1080a1dcfbb99eab11e Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 25 Jul 2024 19:50:45 +0300 Subject: [PATCH 25/28] add missing return value --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 955e2344c..2eb4a9ed2 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -1151,7 +1151,7 @@ CollectSubfields(objectType, fieldDetailsList, variableValues): {responseKey}; if no such list exists, create it as an empty list. - Append all fields in {subfields} to {groupForResponseKey}. - Append all defer usages in {subNewDeferUsages} to {newDeferUsages}. -- Return {groupedFieldSet}. +- Return {groupedFieldSet} and {newDeferUsages}. ### Handling Field Errors From 5e9ea96e44f3233ec6f28b40121b4e1a22d78eae Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 25 Jul 2024 20:16:26 +0300 Subject: [PATCH 26/28] fix some renaming around CollectExecutionGroups and ExecuteExecutionGroup --- spec/Section 6 -- Execution.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 2eb4a9ed2..ebfff81bd 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -510,7 +510,7 @@ variableValues, serial, path, deferUsageSet, deferMap): variableValues, path, deferUsageSet, newDeferMap)} _serially_ if {serial} is {true}, _normally_ (allowing parallelization) otherwise. - Let {incrementalDataRecords} be the result of - {CollectExecutionGroup(objectType, objectValue, variableValues, + {CollectExecutionGroups(objectType, objectValue, variableValues, newGroupedFieldSets, path, newDeferMap)}. - Append all items in {nestedIncrementalDataRecords} to {incrementalDataRecords}. @@ -539,7 +539,7 @@ newGroupedFieldSets, path, deferMap): - Let {deferredFragment} be the entry for {deferUsage} in {deferMap}. 
- Append {deferredFragment} to {deferredFragments}. - Let {incrementalDataRecord} represent the future execution of - {CollectExecutionGroup(groupedFieldSet, objectType, objectValue, + {ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues, deferredFragments, path, deferUsageSet, deferMap)}, incrementally completing {deferredFragments} at {path}. - Append {incrementalDataRecord} to {incrementalDataRecords}. @@ -550,7 +550,7 @@ newGroupedFieldSets, path, deferMap): Note: {incrementalDataRecord} can be safely initiated without blocking higher-priority data once any of {deferredFragments} are released as pending. -CollectExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues, +ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues, path, deferUsageSet, deferMap): - Let {data} and {incrementalDataRecords} be the result of running From b40c76b3877c994ce2d71f91526a8d7cae0475e6 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 25 Jul 2024 20:47:01 +0300 Subject: [PATCH 27/28] lazily create Deferred Fragment Records corresponds to https://github.com/graphql/graphql-js/pull/4153 --- spec/Section 6 -- Execution.md | 147 ++++++++++++++++----------------- 1 file changed, 69 insertions(+), 78 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index ebfff81bd..aa31b9997 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -329,12 +329,12 @@ ExecuteRootSelectionSet(variableValues, initialValue, objectType, selectionSet, serial): - If {serial} is not provided, initialize it to {false}. -- Let {groupedFieldSet} and {newDeferUsages} be the result of - {CollectFields(objectType, selectionSet, variableValues)}. +- Let {groupedFieldSet} be the result of {CollectFields(objectType, + selectionSet, variableValues)}. - Let {executionPlan} be the result of {BuildExecutionPlan(groupedFieldSet)}. 
- Let {data} and {incrementalDataRecords} be the result of
-  {ExecuteExecutionPlan(newDeferUsages, executionPlan, objectType, initialValue,
-  variableValues, serial)}.
+  {ExecuteExecutionPlan(executionPlan, objectType, initialValue, variableValues,
+  serial)}.
- Let {errors} be the list of all _field error_ raised while completing {data}.
- If {incrementalDataRecords} is empty, return an unordered map containing
  {data} and {errors}.
@@ -405,11 +405,28 @@ GraphFromRecords(incrementalDataRecords, graph):

- Let {newGraph} be a new directed acyclic graph containing all of the nodes
  and edges in {graph}.
- For each {incrementalDataRecord} of {incrementalDataRecords}:
+  - Let {deferUsageSet} be the Defer Usages incrementally completed by
+    {incrementalDataRecord} at {path}.
+  - For each {deferUsage} of {deferUsageSet}:
+    - If {newGraph} does not contain a Deferred Fragment node representing the
+      completion of {deferUsage} at {path}, reset {newGraph} to the result of
+      {GraphWithDeferredFragmentRecord(deferUsage, path, newGraph)}.
  - Add {incrementalDataRecord} to {newGraph} as a new Pending Data node
-    directed from the {pendingResults} that it completes, adding each of
-    {pendingResults} to {newGraph} as a new node directed from its {parent},
-    recursively adding each {parent} until {incrementalDataRecord} is connected
-    to {newGraph}, or the {parent} is not defined.
+    directed from the {deferredFragments} that it completes.
+- Return {newGraph}.
+
+GraphWithDeferredFragmentRecord(deferUsage, path, graph):
+
+- Let {parentDeferUsage} and {label} be the corresponding entries on
+  {deferUsage}.
+- If {parentDeferUsage} is defined and {graph} does not contain a Deferred
+  Fragment node representing the completion of {parentDeferUsage} at {path}, let
+  {newGraph} be the result of {GraphWithDeferredFragmentRecord(parentDeferUsage,
+  path, graph)}; otherwise, let {newGraph} be a new directed acyclic graph
+  containing all of the nodes and edges in {graph}.
- Let {deferredFragment} be a new unordered map containing {path} and {label}.
- Add {deferredFragment} to {newGraph} as a new Deferred Fragment node directed
  from the Deferred Fragment node representing the completion of
  {parentDeferUsage} at {path}, if {parentDeferUsage} is defined.
- Return {newGraph}.

GetNonEmptyNewPending(graph):
@@ -454,8 +471,8 @@ GetIncrementalResult(graph, incremental, completed, pending):

GetIncrementalEntry(incrementalDataRecord, graph):

-- Let {deferredFragments} be the Deferred Fragments incrementally completed by
-  {incrementalDataRecord} at {path}.
+- Let {deferredFragments} be the Deferred Fragment nodes within {graph}
+  incrementally completed by {incrementalDataRecord} at {path}.
- Let {result} be the result of {incrementalDataRecord}.
- Let {data} and {errors} be the corresponding entries on {result}.
- Let {releasedDeferredFragments} be the members of {deferredFragments} that are
@@ -496,19 +513,17 @@
To execute an execution plan, the object value being evaluated and the object
type need to be known, as well as whether the non-deferred grouped field set
must be executed serially, or may be executed in parallel.

-ExecuteExecutionPlan(newDeferUsages, executionPlan, objectType, objectValue,
-variableValues, serial, path, deferUsageSet, deferMap):
+ExecuteExecutionPlan(executionPlan, objectType, objectValue, variableValues,
+serial, path, deferUsageSet):

- If {path} is not provided, initialize it to an empty list.
-- Let {newDeferMap} be the result of {GetNewDeferMap(newDeferUsages, path,
-  deferMap)}.
- Let {groupedFieldSet} and {newGroupedFieldSets} be the corresponding entries
  on {executionPlan}.
- Allowing for parallelization, perform the following steps:
  - Let {data} and {nestedIncrementalDataRecords} be the result of running
    {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue,
-    variableValues, path, deferUsageSet, newDeferMap)} _serially_ if {serial} is
-    {true}, _normally_ (allowing parallelization) otherwise.
+    variableValues, path, deferUsageSet)} _serially_ if {serial} is {true},
+    _normally_ (allowing parallelization) otherwise.
  - Let {incrementalDataRecords} be the result of
    {CollectExecutionGroups(objectType, objectValue, variableValues,
-    newGroupedFieldSets, path, newDeferMap)}.
+    newGroupedFieldSets, path)}.
  - Append all items in {nestedIncrementalDataRecords} to
    {incrementalDataRecords}.
- Return {data} and {incrementalDataRecords}.

-GetNewDeferMap(newDeferUsages, path, deferMap):
-
-- If {newDeferUsages} is empty, return {deferMap}:
-- Let {newDeferMap} be a new unordered map containing all entries in {deferMap}.
-- For each {deferUsage} in {newDeferUsages}:
-  - Let {parentDeferUsage} and {label} be the corresponding entries on
-    {deferUsage}.
-  - Let {parent} be the entry in {deferMap} for {parentDeferUsage}.
-  - Let {newDeferredFragment} be an unordered map containing {parent}, {path}
-    and {label}.
-  - Set the entry for {deferUsage} in {newDeferMap} to {newDeferredFragment}.
-- Return {newDeferMap}.
-
CollectExecutionGroups(objectType, objectValue, variableValues,
-newGroupedFieldSets, path, deferMap):
+newGroupedFieldSets, path):

- Initialize {incrementalDataRecords} to an empty list.
- For each {deferUsageSet} and {groupedFieldSet} in {newGroupedFieldSets}:
-  - Let {deferredFragments} be an empty list.
-  - For each {deferUsage} in {deferUsageSet}:
-    - Let {deferredFragment} be the entry for {deferUsage} in {deferMap}.
-    - Append {deferredFragment} to {deferredFragments}.
  - Let {incrementalDataRecord} represent the future execution of
    {ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue,
-    variableValues, deferredFragments, path, deferUsageSet, deferMap)},
-    incrementally completing {deferredFragments} at {path}.
+    variableValues, path, deferUsageSet)}, incrementally completing
+    {deferUsageSet} at {path}.
  - Append {incrementalDataRecord} to {incrementalDataRecords}.
- Schedule initiation of execution of {incrementalDataRecord} following any implementation specific deferral. - Return {incrementalDataRecords}. Note: {incrementalDataRecord} can be safely initiated without blocking -higher-priority data once any of {deferredFragments} are released as pending. +higher-priority data once any of {deferUsageSet} at {path} are released as +pending. ExecuteExecutionGroup(groupedFieldSet, objectType, objectValue, variableValues, -path, deferUsageSet, deferMap): +path, deferUsageSet): - Let {data} and {incrementalDataRecords} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, - variableValues, path, deferUsageSet, deferMap)} _normally_ (allowing - parallelization). + variableValues, path, deferUsageSet)} _normally_ (allowing parallelization). - Let {errors} be the list of all _field error_ raised while completing {data}. - Return an unordered map containing {data}, {errors}, and {incrementalDataRecords}. @@ -571,7 +569,7 @@ Each represented field in the grouped field set produces an entry into a response map. ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, variableValues, -path, deferUsageSet, deferMap): +path, deferUsageSet): - Initialize {resultMap} to an empty ordered map. - Initialize {incrementalDataRecords} to an empty list. @@ -583,7 +581,7 @@ path, deferUsageSet, deferMap): - If {fieldType} is defined: - Let {responseValue} and {fieldIncrementalDataRecords} be the result of {ExecuteField(objectType, objectValue, fieldType, fields, variableValues, - path, deferUsageSet, deferMap)}. + path, deferUsageSet)}. - Set {responseValue} as the value for {responseKey} in {resultMap}. - Append all items in {fieldIncrementalDataRecords} to {incrementalDataRecords}. @@ -747,6 +745,8 @@ Defer Usages contain the following information: any, otherwise {undefined}. 
- {parentDeferUsage}: a Defer Usage corresponding to the `@defer` directive enclosing this `@defer` directive, if any, otherwise {undefined}. +- {depth}: the depth within the overall result corresponding to the deferred + fields. The {parentDeferUsage} entry is used to build distinct Execution Groups as discussed within the Execution Plan Generation section below. @@ -761,18 +761,12 @@ A Grouped Field Set is an ordered map of keys to lists of Field Details. The keys are the same as that of the response, the alias for the field, if defined, otherwise the field name. -The {CollectFields()} algorithm returns: - -- {groupedFieldSet}: the Grouped Field Set for the fields in the selection set. -- {newDeferUsages}: a list of new Defer Usages encountered during this field - collection. - -CollectFields(objectType, selectionSet, variableValues, deferUsage, +CollectFields(objectType, selectionSet, variableValues, deferUsage, depth, visitedFragments): +- If {depth} is not provided, initialize it to {0}. - If {visitedFragments} is not provided, initialize it to the empty set. - Initialize {groupedFields} to an empty ordered map of lists. -- Initialize {newDeferUsages} to an empty list. - For each {selection} in {selectionSet}: - If {selection} provides the directive `@skip`, let {skipDirective} be that directive. @@ -816,19 +810,18 @@ visitedFragments): - If {deferDirective} is defined: - Let {label} be the corresponding entry on {deferDirective}. - Let {parentDeferUsage} be {deferUsage}. - - Let {fragmentDeferUsage} be an unordered map containing {label} and - {parentDeferUsage}. + - Let {fragmentDeferUsage} be an unordered map containing {label}, + {parentDeferUsage}, and {depth}. - Otherwise, let {fragmentDeferUsage} be {deferUsage}. - - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result - of calling {CollectFields(objectType, fragmentSelectionSet, - variableValues, fragmentDeferUsage, visitedFragments)}. 
+ - Let {fragmentGroupedFieldSet} be the result of calling + {CollectFields(objectType, fragmentSelectionSet, variableValues, + fragmentDeferUsage, depth, visitedFragments)}. - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - Let {responseKey} be the response key shared by all fields in {fragmentGroup}. - Let {groupForResponseKey} be the list in {groupedFields} for {responseKey}; if no such list exists, create it as an empty list. - Append all items in {fragmentGroup} to {groupForResponseKey}. - - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}. - If {selection} is an {InlineFragment}: - Let {fragmentType} be the type condition on {selection}. - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType, @@ -844,20 +837,19 @@ visitedFragments): - If {deferDirective} is defined: - Let {label} be the corresponding entry on {deferDirective}. - Let {parentDeferUsage} be {deferUsage}. - - Let {fragmentDeferUsage} be an unordered map containing {label} and - {parentDeferUsage}. + - Let {fragmentDeferUsage} be an unordered map containing {label}, + {parentDeferUsage}, and {depth}. - Otherwise, let {fragmentDeferUsage} be {deferUsage}. - - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result - of calling {CollectFields(objectType, fragmentSelectionSet, - variableValues, fragmentDeferUsage, visitedFragments)}. + - Let {fragmentGroupedFieldSet} be the result of calling + {CollectFields(objectType, fragmentSelectionSet, variableValues, + fragmentDeferUsage, depth, visitedFragments)}. - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - Let {responseKey} be the response key shared by all fields in {fragmentGroup}. - Let {groupForResponseKey} be the list in {groupedFields} for {responseKey}; if no such list exists, create it as an empty list. - Append all items in {fragmentGroup} to {groupForResponseKey}. - - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}. 
-- Return {groupedFields} and {newDeferUsages}. +- Return {groupedFields}. DoesFragmentTypeApply(objectType, fragmentType): @@ -928,7 +920,7 @@ finally completes that value either by recursively executing another selection set or coercing a scalar value. ExecuteField(objectType, objectValue, fieldType, fieldDetailsList, -variableValues, path, deferUsageSet, deferMap): +variableValues, path, deferUsageSet): - Let {fieldDetails} be the first entry in {fieldDetailsList}. - Let {field} be the corresponding entry on {fieldDetails}. @@ -939,7 +931,7 @@ variableValues, path, deferUsageSet, deferMap): - Let {resolvedValue} be {ResolveFieldValue(objectType, objectValue, fieldName, argumentValues)}. - Return the result of {CompleteValue(fieldType, fields, resolvedValue, - variableValues, path, deferUsageSet, deferMap)}. + variableValues, path, deferUsageSet)}. ### Coercing Field Arguments @@ -1027,7 +1019,7 @@ the expected return type. If the return type is another Object type, then the field execution process continues recursively. CompleteValue(fieldType, fieldDetailsList, result, variableValues, path, -deferUsageSet, deferMap): +deferUsageSet): - If the {fieldType} is a Non-Null type: - Let {innerType} be the inner type of {fieldType}. @@ -1041,7 +1033,7 @@ deferUsageSet, deferMap): - If {result} is not a collection of values, raise a _field error_. - Let {innerType} be the inner type of {fieldType}. - Return the result of {CompleteListValue(innerType, fieldDetailsList, result, - variableValues, path, deferUsageSet, deferMap)}. + variableValues, path, deferUsageSet)}. - If {fieldType} is a Scalar or Enum type: - Return the result of {CoerceResult(fieldType, result)}. - If {fieldType} is an Object, Interface, or Union type: @@ -1049,15 +1041,16 @@ deferUsageSet, deferMap): - Let {objectType} be {fieldType}. - Otherwise if {fieldType} is an Interface or Union type. - Let {objectType} be {ResolveAbstractType(fieldType, result)}. 
- - Let {groupedFieldSet} and {newDeferUsages} be the result of calling - {CollectSubfields(objectType, fieldDetailsList, variableValues)}. + - Let {depth} be the length of {path}. + - Let {groupedFieldSet} be the result of calling {CollectSubfields(objectType, + fieldDetailsList, variableValues, depth)}. - Let {executionPlan} be the result of {BuildExecutionPlan(groupedFieldSet, deferUsageSet)}. - - Return the result of {ExecuteExecutionPlan(newDeferUsages, executionPlan, - objectType, result, variableValues, false, path, deferUsageSet, deferMap)}. + - Return the result of {ExecuteExecutionPlan(executionPlan, objectType, + result, variableValues, false, path, deferUsageSet)}. CompleteListValue(innerType, fieldDetailsList, result, variableValues, path, -deferUsageSet, deferMap): +deferUsageSet): - Initialize {items} and {incrementalDataRecords} to empty lists. - Let {index} be {0}. @@ -1136,22 +1129,20 @@ sub-selections. After resolving the value for `me`, the selection sets are merged together so `firstName` and `lastName` can be resolved for one value. -CollectSubfields(objectType, fieldDetailsList, variableValues): +CollectSubfields(objectType, fieldDetailsList, variableValues, depth): - Initialize {groupedFieldSet} to an empty ordered map of lists. -- Initialize {newDeferUsages} to an empty list. - For each {fieldDetails} in {fieldDetailsList}: - Let {field} and {deferUsage} be the corresponding entries on {fieldDetails}. - Let {fieldSelectionSet} be the selection set of {field}. - If {fieldSelectionSet} is null or empty, continue to the next field. - - Let {subGroupedFieldSet} and {subNewDeferUsages} be the result of - {CollectFields(objectType, fieldSelectionSet, variableValues, deferUsage)}. + - Let {subGroupedFieldSet} be the result of {CollectFields(objectType, + fieldSelectionSet, variableValues, deferUsage, depth)}. 
- For each {subGroupedFieldSet} as {responseKey} and {subfields}: - Let {groupForResponseKey} be the list in {groupedFieldSet} for {responseKey}; if no such list exists, create it as an empty list. - Append all fields in {subfields} to {groupForResponseKey}. - - Append all defer usages in {subNewDeferUsages} to {newDeferUsages}. -- Return {groupedFieldSet} and {newDeferUsages}. +- Return {groupedFieldSet}. ### Handling Field Errors From 57a673b04f3873779b6d25de80c91cea12501ea8 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Fri, 26 Jul 2024 07:54:06 +0300 Subject: [PATCH 28/28] add missing piece --- spec/Section 6 -- Execution.md | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index aa31b9997..aa4603239 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -408,9 +408,12 @@ GraphFromRecords(incrementalDataRecords, graph): - Let {deferUsageSet} be the Defer Usages incrementally completed by {incrementalDataRecord} at {path}. - For each {deferUsage} of {deferUsageSet}: + - Let {depth} be the corresponding entry on {deferUsage}. + - Let {deferUsagePath} be the initial {depth} segments of {path}. - If {newGraph} does not contain a Deferred Fragment node representing the - completion of {deferUsage} at {path}, reset {newGraph} to the result of - {GraphWithDeferredFragmentRecord(deferUsage, path, newGraph)}. + completion of {deferUsage} at {deferUsagePath}, reset {newGraph} to the + result of {GraphWithDeferredFragmentRecord(deferUsage, deferUsagePath, + newGraph)}. - Add {incrementalDataRecord} to {newGraph} as a new Pending Data node directed from the {deferredFragments} that it completes. - Return {newGraph}.
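The derivation of {deferUsagePath} as the initial {depth} segments of {path} in the step above might be sketched as follows; the `DeferUsage` shape and `PathSegment` type here are simplified assumptions for illustration:

```typescript
// A response path segment is a field's response key or a list index.
type PathSegment = string | number;

// Simplified stand-in for a Defer Usage record carrying the {depth} entry
// introduced above (an assumption, not the spec's concrete type).
interface DeferUsage {
  label: string | undefined;
  depth: number; // depth within the overall result of the deferred fields
}

function deferUsagePath(
  deferUsage: DeferUsage,
  path: ReadonlyArray<PathSegment>,
): Array<PathSegment> {
  // The Deferred Fragment node for this Defer Usage is keyed by the prefix of
  // the record's path at the Defer Usage's recorded depth, so records produced
  // deeper in the result (for example, within list items) still map to the
  // same node.
  return path.slice(0, deferUsage.depth);
}
```

Truncating to the recorded depth is what lets one "abstract" Defer Usage lazily yield a distinct "concrete" Deferred Fragment node per list item path while records beneath each item share that node.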