

Multiplayer Session and Transport Flow

  • Roots: multiplayer_window_init_globals at 0x004efe80, multiplayer_window_service_loop at 0x004f03f0, the Multiplayer.win session-event callback table built by multiplayer_register_session_event_callbacks at 0x0046a900, the active session-event transport object at 0x006cd970, the Multiplayer preview dataset object at 0x006cd8d8, and the lower pending-template and text-stream helpers around 0x00597880..0x0059caf0.
  • Trigger/Cadence: shell-owned Multiplayer.win frame service plus event-driven session callbacks and repeated transport pump steps.
  • Key Dispatchers: session-event wrappers for actions 1, 2, 4, 7, 8; multiplayer_register_session_event_callbacks; multiplayer_dispatch_requested_action; multiplayer_reset_preview_dataset_and_request_action; multiplayer_preview_dataset_service_frame; multiplayer_load_selected_map_preview_surface; multiplayer_flush_session_event_transport; multiplayer_transport_service_frame; multiplayer_transport_service_worker_once; multiplayer_transport_service_route_callback_tables; multiplayer_transport_service_status_and_live_routes; multiplayer_transport_text_stream_service_io; multiplayer_transport_dispatch_pending_template_node; multiplayer_transport_service_pending_template_dispatch_store.
  • State Anchors: live session globals at 0x006d40d0, active session-event transport object at 0x006cd970, status latch at 0x006cd974, session-event mode latch at 0x006cd978, retry counter at 0x006cd984, preview dataset object at 0x006cd8d8, preview-valid flag at 0x006ce9bc, staged preview strings at 0x006ce670 and [0x006cd8d8+0x8f48], Multiplayer.win backing block at 0x006d1270, selector-view store root at [transport+0xab4], selector-view generation counters at [transport+0xab8]..[transport+0xac0], selector-view backing arrays around [transport+0xad0]..[transport+0xae4], selector-view entry probe-request id at [entry+0x50], live-state gate at [entry+0x58], third-slot flag word at [entry+0x64], probe-schedule tick at [entry+0x68], last-success tick at [entry+0x6c], pending-probe latch at [entry+0x74], success generation at [entry+0x78], consecutive-failure counter at [entry+0x7c], rolling average at [entry+0x80], recent sample window at [entry+0x84..], bounded sample-count at [entry+0x94], total-success count at [entry+0x98], pending-template list at [transport+0x550], dispatch store at [worker+0x54c], and text-stream buffers rooted under [worker+0x1c].
  • Subsystem Handoffs: the Multiplayer.win initializer seeds the backing block at 0x006d1270, later reset paths construct the separate preview dataset at 0x006cd8d8, and the shell-owned active-mode frame services that dataset every frame through multiplayer_preview_dataset_service_frame. That preview side publishes roster and status controls through the Multiplayer window control paths, loads .gmt preview surfaces through multiplayer_load_selected_map_preview_surface, and is even reused by the map/save coordinators' mode-11 .gmt path when the dataset already exists. The preview owner now has a tighter internal request split too. The fixed selector-descriptor list at [0x006cd8d8+0x8f28] is built through multiplayer_preview_dataset_append_named_callback_descriptor 0x00469930, whose current grounded owner is the fixed registration block multiplayer_preview_dataset_register_fixed_named_callback_descriptors_1_to_10 0x00473a60; the variable-size named UI-request list at [0x006cd8d8+0x8f2c] is built through multiplayer_preview_dataset_append_named_ui_request_record 0x00469a50; and the staged profile/path strings live at [0x006cd8d8+0x8e10] and [0x006cd8d8+0x8f48], with the broader action-2 staging path now bounded by multiplayer_preview_dataset_stage_profile_text_selected_path_and_sync_session_state 0x0046a030. The companion action-3 commit path is tighter too: multiplayer_preview_dataset_match_named_field_slot_copy_profile_text_and_probe_selection 0x0046a110 now sits directly above the lazily initialized 0x80 * 0x11c field-slot table at [0x006cd8d8+0x10], whose owner is multiplayer_preview_dataset_ensure_field_slot_table_initialized 0x0046a1a0.
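That lazily initialized field-slot table has a simple guarded-allocation shape. A minimal C sketch, assuming only the 0x80 * 0x11c geometry named in the text; the slot contents, the zero-fill policy, and every identifier here are reconstructions, not symbols from the binary:

```c
#include <stdlib.h>

#define FIELD_SLOT_COUNT  0x80   /* entry count from the atlas text */
#define FIELD_SLOT_STRIDE 0x11c  /* per-slot stride from the atlas text */

typedef struct PreviewDataset {
    unsigned char *field_slots;  /* models the pointer at [dataset+0x10] */
} PreviewDataset;

/* Mirrors the ensure-initialized guard (0x0046a1a0): allocate and
 * zero the table on first use, return it unchanged afterward. */
static unsigned char *ensure_field_slot_table(PreviewDataset *d)
{
    if (d->field_slots == NULL)
        d->field_slots = calloc(FIELD_SLOT_COUNT, FIELD_SLOT_STRIDE);
    return d->field_slots;
}

/* Fixed-stride slot accessor over the guarded table. */
static unsigned char *field_slot(PreviewDataset *d, unsigned idx)
{
    unsigned char *t = ensure_field_slot_table(d);
    if (t == NULL || idx >= FIELD_SLOT_COUNT)
        return NULL;
    return t + (size_t)idx * FIELD_SLOT_STRIDE;
}
```

The point of the sketch is only the guard-then-index shape: every action-3 caller can go through the accessor without caring whether the table exists yet.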
The shared submitter above those lists is now explicit too: multiplayer_preview_dataset_submit_transport_request 0x00469d30 accepts the caller's request-id, selector, payload pointer and length, flag word, and optional auxiliary pointer, optionally allocates one sidecar object, and then either sends the request directly through the session-event transport or takes the slower packed branch through 0x00553000/0x00552ff0 into 0x00521dc0. The constructor and teardown side are tighter now too. multiplayer_preview_dataset_construct_reset_globals_and_seed_callback_owners 0x0046be80 is the real reset owner above the action-2/3 branches in multiplayer_dispatch_requested_action: it re-enters the paired release body multiplayer_preview_dataset_release_owned_lists_transients_and_session_side_state 0x0046bc40, clears the surrounding launcher globals, allocates the field-slot table and the keyed request/descriptor owners, seeds the render target and staging buffers, and then calls multiplayer_preview_dataset_reset_global_callback_state_and_register_selector_handlers 0x00473280 plus the smaller fixed-name block multiplayer_preview_dataset_register_fixed_named_callback_descriptors_1_to_10 0x00473a60. That broader registration pass 0x00473280 now bounds the selector-handler family too: it zeroes the global scratch bands 0x006ce808..0x006ce994 and 0x006ce290, seeds default callback root [0x006cd8d8+0x08] with 0x0046f4a0, and populates the keyed selector table at [0x006cd8d8+0x04] through multiplayer_preview_dataset_register_selector_callback_if_absent 0x00468b10 for selectors 1..0x83. The concrete callback bodies under that table are tighter now too. The small split is real rather than guessed: 0x0046c390 is the direct control-0x109 publish leaf through 0x00469d30; 0x0046c3c0 is the fixed-session-token plus launch-latch arm path; and the large sibling 0x0046c420 is the real apply-owner for one incoming session payload slab copied into 0x006cec74 before the shell-refresh follow-ons.
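The insert-if-absent registration pass over selectors 1..0x83 can be sketched as a keyed table where the first registration wins and unresolved selectors fall through to the default root. Everything beyond that behavior, including the flat-array model and the names, is an assumption:

```c
#include <stddef.h>

#define MAX_SELECTOR 0x84   /* selectors 1..0x83 from the atlas text */

typedef void (*selector_cb)(void *dataset, const void *payload);

typedef struct SelectorRegistry {
    selector_cb table[MAX_SELECTOR]; /* models the keyed table at [dataset+0x04] */
    selector_cb default_root;        /* models [dataset+0x08], the 0x0046f4a0 analog */
} SelectorRegistry;

/* tiny no-op callbacks used for demonstration only */
static void cb_demo_a(void *d, const void *p) { (void)d; (void)p; }
static void cb_demo_b(void *d, const void *p) { (void)d; (void)p; }

/* Mirrors multiplayer_preview_dataset_register_selector_callback_if_absent
 * (0x00468b10): only the first registration for a selector id sticks. */
static int register_selector_callback_if_absent(SelectorRegistry *r,
                                                unsigned selector,
                                                selector_cb cb)
{
    if (selector >= MAX_SELECTOR || r->table[selector] != NULL)
        return 0;                  /* already owned: leave it alone */
    r->table[selector] = cb;
    return 1;
}

/* Dispatch-side lookup: unregistered selectors take the default root. */
static selector_cb resolve_selector(const SelectorRegistry *r, unsigned sel)
{
    if (sel < MAX_SELECTOR && r->table[sel] != NULL)
        return r->table[sel];
    return r->default_root;
}
```

The if-absent rule matters for the reset path: re-running the registration pass after a partial teardown cannot clobber a handler that an earlier pass already installed.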
A second callback cluster now owns live session-entry flags instead of just relaying transport payloads: 0x0046c7c0 rewrites the elapsed-pair dwords [entry+0x54/+0x58], 0x0046c840 sets bit 0x08 in [entry+0x5c], 0x0046c870 toggles bit 0x01 from payload byte [msg+0x08], 0x0046cf10 sets bit 0x04, and 0x0046cf70 is the list-wide clear of that same bit-0x01 lane across every live session entry. The broader callback 0x0046c8c0 sits above those single-field leaves and applies either one or many embedded session-field update records before republishing the list. The pending-state side is separated now too: 0x0046cce0, 0x0046cd10, 0x0046ce90, and 0x0046d230 are the current small latch/state owners under 0x006cd91c, 0x006d1280, 0x006d1284, and 0x006ce9c8, while 0x0046cd30 and 0x0046ce10 are the paired current-session string/scalar submit-and-apply handlers over [entry+0x258c/+0x268c/+0x2690/+0x2694]. The same callback table also owns one small fixed transfer-progress family rooted at 0x006ce2e8: 0x0046cfe0 allocates the first free 0x5c slot and optionally formats one label string, 0x0046d090 appends progress payload into the matched slot while publishing one percent string through 0x005387a0, and 0x0046d1d0 clears the finished slot and frees that optional label. On the release side, 0x0046bc40 now clearly owns the request-list, descriptor-list, semicolon-name, pooled-span, render-target, and auxiliary-owner teardown, while multiplayer_shutdown_preview_dataset_session_object_and_global_helper 0x0046c230 is the final wrapper that additionally drops the live session object at 0x006d40d0 and the shared helper at 0x00d973b4. 
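The fixed transfer-progress family reads naturally as a small fixed slot pool: allocate the first free 0x5c slot, append progress into the matched slot while deriving the published percent, and clear the finished slot. A hedged sketch; only the three operations and the 0x5c stride are from the text, while the slot layout, slot count, and names are assumptions:

```c
#include <string.h>

#define SLOT_COUNT 8              /* assumed pool capacity */

typedef struct ProgressSlot {     /* stands in for one 0x5c-byte slot */
    int      in_use;              /* assumed free/used marker */
    unsigned key;                 /* matches incoming progress payloads */
    unsigned done, total;         /* accumulated progress payload */
    char     label[0x40];         /* optional formatted label */
} ProgressSlot;

static ProgressSlot g_slots[SLOT_COUNT];  /* stands in for 0x006ce2e8 */

/* 0x0046cfe0 analog: claim the first free slot, optionally label it. */
static ProgressSlot *progress_alloc(unsigned key, const char *label)
{
    for (int i = 0; i < SLOT_COUNT; i++) {
        if (!g_slots[i].in_use) {
            g_slots[i].in_use = 1;
            g_slots[i].key = key;
            g_slots[i].done = g_slots[i].total = 0;
            if (label)
                strncpy(g_slots[i].label, label, sizeof g_slots[i].label - 1);
            return &g_slots[i];
        }
    }
    return NULL;
}

/* 0x0046d090 analog: fold payload into the matched slot, return the
 * percent that the real path publishes through 0x005387a0. */
static int progress_append(unsigned key, unsigned done, unsigned total)
{
    for (int i = 0; i < SLOT_COUNT; i++)
        if (g_slots[i].in_use && g_slots[i].key == key) {
            g_slots[i].done = done;
            g_slots[i].total = total;
            return total ? (int)(100u * done / total) : 0;
        }
    return -1;                    /* no matched slot */
}

/* 0x0046d1d0 analog: retire the finished slot. */
static void progress_clear(unsigned key)
{
    for (int i = 0; i < SLOT_COUNT; i++)
        if (g_slots[i].in_use && g_slots[i].key == key)
            memset(&g_slots[i], 0, sizeof g_slots[i]);
}
```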
The small tuple staging below that family is bounded too: multiplayer_preview_dataset_touch_current_session_year_bucket_and_return_staged_tuple 0x0046b6d0 now owns the keyed session-bucket touch under [session+0x2580] for the staged tuple at [0x006cd8d8+0x9048], and the companion multiplayer_preview_dataset_stage_optional_selected_token_from_source_ptr 0x0046d610 writes the optional derived token into [0x006cd8d8+0x8f38]. The next service layer under the same owner is tighter too: multiplayer_preview_dataset_prune_session_buckets_below_current_year_key 0x0046b910 now bounds the older keyed-bucket drain under [session+0x2580], while multiplayer_preview_dataset_service_current_session_buckets_and_publish_selector0x67 0x0046b9f0 owns the current-bucket walk, the local counters [+0x987c/+0x9890], and the later selector-0x67 publish branch back through 0x00469d30. Its local batch-publish child multiplayer_preview_dataset_publish_accumulated_selector0x71_record_batch 0x00473bf0 is now bounded too: it packages the fixed-width records at 0x006cd990, prefixes (0, count), sends them as selector 0x71, and then clears the live record count 0x006ce9a0. The immediate local helpers under that same band are now explicit too: 0x0046d260 is the fixed-record append primitive that grows 0x006cd990 up to 0x80 entries, while 0x0046d240 is the paired release of the optional selector-0x71 staging blob at 0x006ce994. The first fixed named-descriptor block under multiplayer_preview_dataset_register_fixed_named_callback_descriptors_1_to_10 0x00473a60 is tighter in the same way now. 
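The selector-0x71 batch lane then reduces to a bounded append plus a prefix-and-flush publish. A sketch assuming a fixed record width; the 0x80 cap, the (0, count) prefix, and the clear-on-publish step are from the text, while the width and the send sink are assumptions:

```c
#include <stddef.h>
#include <string.h>

#define REC_WIDTH 0x10            /* assumed fixed record width */
#define REC_MAX   0x80            /* append cap from the atlas text */

static unsigned char g_records[REC_MAX][REC_WIDTH]; /* ~0x006cd990 */
static unsigned g_record_count;                     /* ~0x006ce9a0 */

/* 0x0046d260 analog: bounded fixed-record append. */
static int record_append(const void *rec)
{
    if (g_record_count >= REC_MAX)
        return 0;                 /* primitive stops at 0x80 entries */
    memcpy(g_records[g_record_count++], rec, REC_WIDTH);
    return 1;
}

/* 0x00473bf0 analog: prefix (0, count), hand the band to a send hook
 * standing in for the 0x00469d30 submitter, then clear the live count.
 * Returns the number of bytes staged. */
static size_t publish_record_batch(void (*send)(unsigned selector,
                                                const void *p, size_t n))
{
    unsigned header[2] = { 0, g_record_count };     /* (0, count) prefix */
    unsigned char buf[sizeof header + sizeof g_records];
    size_t n = sizeof header + (size_t)g_record_count * REC_WIDTH;
    memcpy(buf, header, sizeof header);
    memcpy(buf + sizeof header, g_records, n - sizeof header);
    if (send)
        send(0x71, buf, n);
    g_record_count = 0;           /* publisher clears the live count */
    return n;
}
```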
Its concrete roots 0x0046db10, 0x0046db50, 0x0046db90, 0x0046dbd0, 0x0046dd10, 0x0046de40, 0x0046de80, 0x0046e030, and 0x0046e250 all return small static records rooted at 0x006ce9dc..0x006cea3c; each record carries a leading tag byte plus the same derived world-year key from [0x006cec78+0x0d], while the heavier siblings add collection-backed totals from 0x0062be10, 0x006ceb9c, 0x0062b26c, 0x006cfca8, or 0x006cfcbc together with local overlay, geometry, or object-metric sample paths. The immediate helper strip beneath that same first named block is tighter now too. 0x0046d980 is the direct 0x006ceb9c name-to-index lookup over [entry+0x08], while 0x0046d9e0 is the heavier companion that first resolves the current live session from 0x006d40d0, matches that session-side string against the same 0x006ceb9c table, and returns the matching entry index or 0xff. Above those, 0x0046f8f0 resolves and validates one 0x0062be10 entry, then keeps it only when its profile-index field [entry+0x3b] equals the live-session-backed 0x006ceb9c index from 0x0046d9e0; and 0x0046f870 is the paired rounded-delta leaf that takes one float plus one collection byte index and writes the rounded positive/negative pair into metric ids 0x0f and 0x0d through 0x0042a040. So this first descriptor band is no longer just a bag of static record roots: it also owns a real local helper family for 0x006ceb9c name matching, live-session profile matching, and one narrow metric-pair apply path beneath the higher callback bodies. The next selector layer above that helper strip is tighter now too. Selector 0x12 is the small validate-or-error wrapper above selector 0x13, and selector 0x13 body 0x004706b0 resolves the same live-session company match, attempts the placed-structure apply path through 0x004197e0, 0x004134d0, 0x0040eba0, 0x0052eb90, and 0x0040ef10, and otherwise falls back to a hashed selector-0x6e publish over the first 0x1c payload bytes. 
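The rounded-delta leaf 0x0046f870 can be read as one sign-splitting metric write. A hedged sketch: which sign lands on metric id 0x0f versus 0x0d is an assumption, as is the writer signature standing in for 0x0042a040; only the (float delta, collection byte index) input shape and the two target metric ids are from the text:

```c
typedef void (*metric_write_fn)(unsigned metric_id, unsigned char index,
                                long value);

/* demo recorder standing in for the 0x0042a040 metric writer */
static unsigned g_last_id;
static long     g_last_val;
static void demo_writer(unsigned id, unsigned char idx, long v)
{ (void)idx; g_last_id = id; g_last_val = v; }

/* Round the float delta to the nearest integer, then write it as a
 * positive magnitude under one of the two metric ids. The assignment
 * of positive deltas to 0x0f and negative ones to 0x0d is one
 * plausible reading, not a confirmed decode. */
static void apply_rounded_delta_pair(float delta, unsigned char index,
                                     metric_write_fn write_metric)
{
    long rounded = (long)(delta >= 0.0f ? delta + 0.5f : delta - 0.5f);
    if (rounded >= 0)
        write_metric(0x0f, index, rounded);
    else
        write_metric(0x0d, index, -rounded);
}
```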
The same pattern appears again one pair later: selector 0x16 is the thin validate-or-error wrapper above selector 0x17, and selector 0x17 consumes a count plus 0x33-stride adjunct record band, resolves one live train-side entry under 0x006cfcbc, re-enters 0x004a77b0, 0x0052e720, 0x0040eba0, 0x0052eb90, and 0x0040ef10, and again falls back to hashed selector 0x6e publish when the live apply path does not land. The later status pair is bounded too: selector 0x19 is another thin wrapper, and selector 0x1a either derives a status code from 0x0046ed30 and current live-session name matching or treats the four-byte payload directly as that status code before publishing localized status text 0x0b7f/0x0b80. The earlier front edge of the same callback table is tighter now too: selector 0x02 compares staged profile text against the shell profile band at [0x006cec7c+0x11], can advance the requested-action fields 0x006d1280/0x006d1284, can queue selector 0x53, and on the success path syncs the larger shell profile block rooted at [0x006cec7c+0x44]. The next small strip is also grounded: selector 0x0a clears [world+0x19], seeds [world+0x66ae], mirrors peer byte [peer+0x2690] into named-profile byte [entry+0x15c], refreshes 0x006ce98c, and optionally republishes one local status path. Selector 0x0b is then the small token-staging wrapper above selector 0x0c, and selector 0x0c itself forwards one signed byte pair into 0x00434680 before adjusting dataset counter 0x006cd8e8. The other small early leaves are now bounded too: selector 0x0f pops one node from the current session queue at [session+0x64], publishes that node through 0x00469d30, and releases it; selector 0x10 looks one payload key up in the session-side store [session+0x60] and forwards the result plus dataset string root [0x006cd8d8+0x8f48] into 0x00521790.
One selector-pair above the metric leaf is now explicit too: selector-0x15 body 0x00470950 consumes the same compact (float delta, company-index byte) payload shape, resolves the matching live-session company entry through 0x0046f8f0, submits selector 0x6b through 0x00469d30, and then immediately re-enters 0x0046f870 for the local apply. The neighboring name-match lane is now explicit too: selector-0x61 body 0x00472700 scans 0x0062be10 for a company-name match against the caller string at [payload+0x08] and then either submits selector 0x62 with the original payload or falls back to the paired error-style 0x21 branch. The next registered band around selectors 0x1c..0x5d is tighter now too. Selector-adjacent helpers 0x00470ed0 and 0x00470fa0 are the paired global preset passes beneath that strip: both walk the guarded named-profile table 0x006ceb9c, add opposite signed integer presets into qword field [profile+0x154] through 0x00476050, then walk 0x0062be10 and write opposite preset scalars into metric id 0x0d through 0x0042a040. Above them, selectors 0x1d, 0x1f, 0x21, 0x23, 0x25, 0x27, 0x29, 0x2b, 0x2d, 0x2f, 0x31, 0x33, 0x35, 0x37, and 0x39 are now explicit token-staging forwarders into selectors 0x1e, 0x20, 0x22, 0x24, 0x26, 0x28, 0x2a, 0x2c, 0x2e, 0x30, 0x32, 0x34, 0x36, 0x38, and 0x3a. The next live-train strip is also grounded: selectors 0x3b, 0x3d, 0x3f, 0x41, and 0x43 resolve live train ids from 0x006cfcbc, stage the current token, and republish into selectors 0x3c, 0x3e, 0x40, 0x42, and 0x44; selectors 0x3c, 0x3e, and 0x40 then dispatch directly into 0x004abd70, 0x004b2f00, and 0x004b3000, while selector 0x42 is the heavier train-adjunct branch through 0x004b2b70, 0x004b3160, 0x004b2c10, 0x004a9460, and 0x004ab980. 
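The odd-to-even forwarder band has a mechanical shape worth pinning down. A sketch of the mapping and the stage-then-republish step; the token slot and publish sink are modeled abstractly, and the names are reconstructions:

```c
/* Selectors 0x1d, 0x1f, ..., 0x39 each stage the current token and
 * republish into the even sibling one above (0x1e, 0x20, ..., 0x3a).
 * Returns 0 for selectors outside the forwarder band. */
static unsigned forwarder_target(unsigned selector)
{
    if (selector < 0x1d || selector > 0x39 || (selector & 1) == 0)
        return 0;
    return selector + 1;      /* each odd selector feeds its even sibling */
}

static unsigned g_staged_token;    /* stands in for the current token slot */
static unsigned g_last_published;  /* stands in for the republish sink */

/* Stage the caller's token, then republish into the paired selector. */
static int forward_staged_token(unsigned selector, unsigned current_token)
{
    unsigned target = forwarder_target(selector);
    if (!target)
        return 0;
    g_staged_token = current_token;
    g_last_published = target;
    return 1;
}
```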
The prompt or bitmap cluster is tighter too: selector 0x48 consumes one 12-byte record, either marks the current-session bit directly or opens localized prompt 0x0b81 through 0x00469a50 and callback 0x004719c0, and then republishes selector 0x49; selector 0x49 turns that 12-byte result into one keyed bitset object and republishes selector 0x47; selector 0x47 consumes the resulting ten-slot masks and drops straight into 0x004eb230 plus shell_resolve_merger_vote_and_commit_outcome 0x004ebd10. The same pattern repeats one size smaller for selectors 0x4c, 0x4d, and 0x4b: selector 0x4c consumes one 10-byte record, either marks the current-session bit directly or opens localized prompt 0x0b82 through 0x00469a50 and callback 0x00471d50, selector 0x4d folds that 10-byte result into the keyed bitset object, and selector 0x4b turns the resulting masks into one ten-dword cache setter at 0x0050c4e0 rooted at 0x006d1a08 plus the paired outcome resolver 0x0050c940. The direct setter strip after that is explicit too: selector 0x4f republishes selector 0x6f when the live session object exists and dataset gate 0x006cd91c is clear, selector 0x50 copies [dataset+0x9058] into [dataset+0x9054], selector 0x51 derives one small session-status code and either republishes selector 0x52 or shell control 0x109, selectors 0x55, 0x56, and 0x57 directly store dataset field 0x9860, a 0x006ceb9c inline name, and guard field [entry+0x1e1], selector 0x58 flushes the deferred 16-slot named-profile clear queue [dataset+0x9864..+0x9873], selector 0x59 derives one roster or capacity status and republishes selector 0x5a, selector 0x5b is another token-staging forwarder into selector 0x5c, selector 0x5c gates a 0x00493960 dispatch plus optional local 0x0046f870 apply, and selector 0x5d validates one payload string before republishing selector 0x5e. The next registered band around selectors 0x5e..0x7d is tighter too. 
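The keyed bitset object behind the 0x49/0x4d folds can be sketched as ten-slot masks. The two-mask yes/no split below is an assumption; only the ten-slot width and the fold-then-consume flow are taken from the text:

```c
#include <stdint.h>

typedef struct VoteBitset {
    unsigned key;            /* prompt/session key for the vote round */
    uint16_t yes_mask;       /* one bit per slot, ten slots */
    uint16_t no_mask;
} VoteBitset;

/* Fold one per-session prompt result into the keyed masks
 * (the selector-0x49/0x4d step in the sketch). */
static int vote_fold(VoteBitset *v, unsigned slot, int approved)
{
    if (slot >= 10)
        return 0;
    if (approved)
        v->yes_mask |= (uint16_t)(1u << slot);
    else
        v->no_mask  |= (uint16_t)(1u << slot);
    return 1;
}

/* The final consumer (the selector-0x47/0x4b step) can fire once
 * every one of the ten slots has answered either way. */
static int vote_all_in(const VoteBitset *v)
{
    return ((v->yes_mask | v->no_mask) & 0x3ff) == 0x3ff;
}
```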
Selector 0x5e updates the named-profile side table 0x006ceb9c, mirrors the same string into the resolved live session object, and when the session-side guard is active hashes that string back into [session+0x48] and dataset field [0x006cd8d8+0x8f48]; selector 0x5f then stages the current year-derived token and republishes into selector 0x60, whose body writes one guarded byte field into the same 0x006ceb9c entry family. The 0x62..0x64 strip forms the same kind of pair over 0x0062be10: selector 0x62 copies one fixed 0x32-byte band into the matched company entry, selector 0x63 rejects duplicate field-0x37 values before forwarding, and selector 0x64 applies that same dword field directly into the matched entry or one live fallback owner. The receive-side correction is explicit now too: selector 0x6b is the tiny local metric-apply wrapper 0x00472db0 -> 0x0046f870, selector 0x6c is the separate train-record wrapper 0x00472dc0 -> 0x0046d780, selector 0x6d formats localized status 0x0f4e into the grounded world outcome-text buffer [world+0x4b47], and selector 0x6e walks the current keyed bucket under [session+0x2580] and marks the first matching companion record by payload hash. The later wrappers are cleaner too: selectors 0x71, 0x73, 0x75, 0x77, 0x79, 0x7b, and 0x7d are all token-staging forwarders into selectors 0x72, 0x74, 0x76, 0x78, 0x7a, 0x7c, and 0x7e. Beneath them, selector 0x72 is the heavier counted live-world apply path over 0x0062b2fc, 0x0062b26c, and 0x0062bae0; selector 0x74 dispatches a resolved company-entry id into 0x0062b26c under the small latch 0x006ce9a8; selectors 0x76, 0x7a, and 0x7c resolve one company-style entry and then tail into narrower local handlers; and selector 0x78 is the broader projection-or-notify body over 0x0044b160, 0x00538e00, and the local-session refresh strip. 
The next adjacent owner 0x0046e5c0 is broader but still belongs to the same structural neighborhood: in mode 1 it serializes a dual-collection metric blob from 0x0062be10 and 0x006ceb9c, writing the two row counts into local header bytes and then packing one 0x24-stride band plus one 0x2c-stride band behind a 0x10-byte header; the opposite branch starts validating and applying those packed metrics back into live entries. So that first named block is no longer just a string-name registry; it already includes a real typed static-record family beneath the preview dataset, and it now sits directly beside one broader fixed-record callback-successor strip. Right after the selector-0x71 batch publisher, the local 0x005ce418 fixed-record family becomes explicit: 0x00473ce0 constructs one 0x187-byte record from two copied strings, shell-profile bytes [0x006cec78+0x0d/+0x0f/+0x11], owner field [this+0x04], and monotonic sequence dword [this+0x14] seeded from 0x006cea48; 0x00473dc0, 0x00473e70, 0x00473ee0, and 0x00473f30 are the max-sequence, min-sequence, previous-sequence, and next-sequence queries over the same live collection; 0x00473e20 is the boolean scan for any inactive record with a nonzero owner dword; and 0x00473f80 is the real allocator or constructor owner, trimming the collection down to 0x19 entries, allocating one live record through 0x00518900, and then dispatching the heavier constructor. So the callback registry now leads directly into a concrete preview-side fixed-record collection, not into another anonymous transport helper band. The broader snapshot/apply owner still sits over the same multiplayer collection state. In parallel, multiplayer_register_session_event_callbacks allocates and registers the separate session-event transport object at 0x006cd970. 
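The sequence-query family over that fixed-record collection is small enough to sketch directly. The record is reduced here to its monotonic sequence dword (the [this+0x14] field seeded from 0x006cea48); the zero return for "no such record" is an assumption:

```c
#include <stddef.h>

typedef struct SeqRecord { unsigned seq; } SeqRecord;

/* max-sequence query (the 0x00473dc0 analog) */
static unsigned seq_max(const SeqRecord *r, size_t n)
{
    unsigned best = 0;
    for (size_t i = 0; i < n; i++)
        if (r[i].seq > best) best = r[i].seq;
    return best;
}

/* min-sequence query (the 0x00473e70 analog) */
static unsigned seq_min(const SeqRecord *r, size_t n)
{
    unsigned best = 0;
    for (size_t i = 0; i < n; i++)
        if (best == 0 || r[i].seq < best) best = r[i].seq;
    return best;
}

/* previous-sequence query: largest sequence strictly below `from` */
static unsigned seq_prev(const SeqRecord *r, size_t n, unsigned from)
{
    unsigned best = 0;
    for (size_t i = 0; i < n; i++)
        if (r[i].seq < from && r[i].seq > best) best = r[i].seq;
    return best;
}

/* next-sequence query: smallest sequence strictly above `from` */
static unsigned seq_next(const SeqRecord *r, size_t n, unsigned from)
{
    unsigned best = 0;
    for (size_t i = 0; i < n; i++)
        if (r[i].seq > from && (best == 0 || r[i].seq < best))
            best = r[i].seq;
    return best;
}
```

The allocator side (trim to 0x19 entries, allocate through 0x00518900, dispatch the constructor) sits above these queries and is not modeled here.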
The shell-side bridge into that deeper transport cadence is now tighter: multiplayer_window_service_loop and neighboring reset or initializer branches call multiplayer_flush_session_event_transport, which forces one status flush and then drops into multiplayer_transport_flush_and_maybe_shutdown. That wrapper in turn runs multiplayer_transport_service_frame, the recurring pump that services one worker step through multiplayer_transport_service_worker_once, sweeps the transport-owned callback tables and field caches through multiplayer_transport_service_route_callback_tables, services both the auxiliary status route and the current live route through multiplayer_transport_service_status_and_live_routes, and then descends one layer farther into the shared GameSpy route helper multiplayer_gamespy_route_service_frame at 0x58d040. The transport also owns a separate selector-view sidecar beneath that route cadence. multiplayer_transport_ensure_selector_view_store allocates the keyed selector-view store at [transport+0xab4], multiplayer_transport_find_selector_view_entry_by_name resolves entries from that store, multiplayer_transport_upsert_selector_name_entry marks per-slot activity and flags inside each entry, and multiplayer_transport_mark_selector_slot_views_dirty plus multiplayer_transport_reset_selector_view_entry_runtime_state manage the dirty or refresh fields at +0xa0, +0xa4, and +0xa8. That selector-view maintenance path is now split more cleanly too. The recurring owner is multiplayer_transport_service_selector_view_refresh_cycle, which first runs the fast deferred-probe lane: multiplayer_transport_collect_refreshable_selector_view_entries walks the store through multiplayer_transport_filter_insert_refreshable_selector_view_entry, which now shows that [entry+0x64] is not a generic flag bucket but the third selector-slot flag word, parallel to [entry+0x5c] and [entry+0x60]. 
In that collector, the g and a mode-letter bits produced by multiplayer_transport_parse_selector_mode_letters become mask 0x0c in the slot-local flag words, and any third-slot entry carrying those bits at [entry+0x64] is excluded from the refreshable set. Eligible entries then pass slot-aware retry timing on [entry+0x68], [entry+0x6c], [entry+0x78], and [entry+0x7c], after which the service loop schedules refresh probes through multiplayer_transport_schedule_selector_view_entry_refresh_probe. That fast lane is narrower now too: the entry-side match key [entry+0x50] is no longer just an opaque request field. The profile-key callback lanes feed multiplayer_transport_parse_selector_view_probe_marker, which decodes one local X%sX|%d marker into a probe request id plus displayed version/build integer, and multiplayer_transport_arm_selector_view_probe_tracking stores those into [entry+0x50] and [entry+0x54] before arming the live probe gate at [entry+0x58]. The current-selector callback root at 0x59f8b0 is now bounded as well: it resolves and upserts the active selector name, optionally reuses a cached username marker to arm probe tracking immediately, then submits the same profile-key bundle with selector context and forwards that selector through callback slot 17, with the status-route side able to force route-mode transitions 2 -> 3 -> 4 afterward. One layer lower, multiplayer_transport_handle_profile_key_query_result at 0x596970 now bounds the per-key result path itself. It treats username as the probe-marker field, b_flags as the selector mode-letter field, and (END) as a real sentinel that publishes a zeroed slot-22 payload instead of a marker pair. The same helper also hashes the selector-name, key-name, and resolved value text back into the caller table, so the profile-key bundle now looks like a real bounded handoff rather than an anonymous callback cloud. 
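The X%sX|%d marker decode is concrete enough to sketch. The marker shape itself is from the text; the buffer sizes and the return convention below are assumptions:

```c
#include <stdio.h>
#include <string.h>

/* Sketch of multiplayer_transport_parse_selector_view_probe_marker:
 * a probe request id wrapped in 'X' delimiters, then '|' and the
 * displayed version/build integer. On success the real path stores
 * the id into [entry+0x50] and the integer into [entry+0x54]. */
static int parse_probe_marker(const char *marker,
                              char *request_id, size_t id_cap,
                              int *version_build)
{
    char buf[64];
    int ver = 0;
    /* leading 'X', id up to the closing 'X', then "|<int>" */
    if (sscanf(marker, "X%63[^X]X|%d", buf, &ver) != 2)
        return 0;
    if (strlen(buf) + 1 > id_cap)
        return 0;
    strcpy(request_id, buf);
    *version_build = ver;
    return 1;
}
```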
The deferred callback shim multiplayer_transport_dispatch_selector_view_refresh_probe_result then walks the keyed store through multiplayer_transport_finish_selector_view_refresh_probe_if_matching, which only completes entries whose pending latch [entry+0x74] is still armed and whose parsed marker request id [entry+0x50] matches the finished request. A failed result -1 clears the pending latch and increments the consecutive-failure counter at [entry+0x7c]. A nonfailure result clears the pending latch, increments the success generation at [entry+0x78] and total-success count [entry+0x98], clears [entry+0x7c], stamps the last-success tick at [entry+0x6c], appends the returned sample into the short rolling history at [entry+0x84..], grows the bounded sample-count [entry+0x94] up to four, computes the current average into [entry+0x80], and then publishes that averaged ms sample through multiplayer_transport_enqueue_callback_slot24_record. So the publication boundary is explicit and the request-id ownership is explicit: [entry+0x80] now reads as the averaged millisecond probe sample and [entry+0x54] as the displayed version/build companion integer. The adjacent route-callback side is tighter too, but it is now kept separate: the staged route-callback path at 0x5958e0 and the later compatibility gate at multiplayer_transport_route_binding_matches_route_callback_descriptor_tuple 0x595d00 operate on a compact GameSpy-style server or route descriptor family with a primary endpoint tuple at [descriptor+0x00]/[+0x04], an optional secondary endpoint tuple at [descriptor+0x08]/[+0x0c], string-key lookups such as hostname and gamever, and numeric-key lookups such as numplayers, numservers, numwaiting, and gsi_am_rating. The route-binding side uses that descriptor family's primary dword and host-order port plus the optional secondary tuple against route-binding offsets +0x54/+0x58 and route word +0x30. 
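The success-path bookkeeping above can be restated as one small completion routine. The struct below is an illustrative layout keyed to the documented field roles (+0x50 request id, +0x6c last-success tick, +0x74 pending latch, +0x78 success generation, +0x7c failure count, +0x80 average, +0x84.. sample window, +0x94 bounded count, +0x98 total successes); it is not the binary layout:

```c
typedef struct SelectorViewEntry {
    unsigned request_id;     /* [entry+0x50] parsed marker request id */
    unsigned last_success;   /* [entry+0x6c] last-success tick */
    int      pending;        /* [entry+0x74] pending-probe latch */
    unsigned generation;     /* [entry+0x78] success generation */
    unsigned failures;       /* [entry+0x7c] consecutive failures */
    unsigned average_ms;     /* [entry+0x80] averaged ms probe sample */
    unsigned samples[4];     /* [entry+0x84..] short rolling history */
    unsigned sample_count;   /* [entry+0x94] bounded at four */
    unsigned total_success;  /* [entry+0x98] total successes */
} SelectorViewEntry;

/* Mirrors multiplayer_transport_finish_selector_view_refresh_probe_if_matching:
 * only an armed entry whose request id matches the finished request
 * completes; -1 counts a failure, anything else feeds the average. */
static int finish_probe_if_matching(SelectorViewEntry *e,
                                    unsigned finished_id, int result_ms,
                                    unsigned now_tick)
{
    if (!e->pending || e->request_id != finished_id)
        return 0;
    e->pending = 0;
    if (result_ms == -1) {             /* failure: count it, keep stats */
        e->failures++;
        return 1;
    }
    e->generation++;
    e->total_success++;
    e->failures = 0;
    e->last_success = now_tick;
    if (e->sample_count < 4) {         /* grow the window up to four */
        e->samples[e->sample_count++] = (unsigned)result_ms;
    } else {                           /* then drop the oldest sample */
        e->samples[0] = e->samples[1];
        e->samples[1] = e->samples[2];
        e->samples[2] = e->samples[3];
        e->samples[3] = (unsigned)result_ms;
    }
    unsigned sum = 0;
    for (unsigned i = 0; i < e->sample_count; i++)
        sum += e->samples[i];
    e->average_ms = sum / e->sample_count;  /* published via slot 24 */
    return 1;
}
```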
Current evidence still does not prove that descriptor tuple is the same field family as the selector-view marker companion integer at [entry+0x54]. The higher compact decode owners are tighter now too: 0x5907d0 is the allocate-and-append lane for one self-consistent compact payload, while 0x590d00 is the keyed upsert-by-primary-endpoint lane that reuses an existing descriptor when possible and then notifies the owner callback. The route-callback-table runtime above that decode side is tighter now too: multiplayer_transport_route_callback_table_construct 0x5905e0 seeds one transport-owned table block, multiplayer_transport_route_callback_table_release_decoded_schema_dictionary 0x5906f0 tears down the decoded schema dictionary rooted at [this+0x08], multiplayer_route_callback_runtime_acquire_shared_string_copy 0x590540 and multiplayer_route_callback_runtime_release_shared_string_copy 0x5905a0 now bound the shared string pool used by that decoded schema, and the higher bring-up owner 0x596090 now clearly splits between [transport+0xba4] with owner callback 0x595a40, the local field-cache family [transport+0x1724] seeded through 0x5a08f0/0x595b60, and [transport+0x1164] with owner callback 0x595bc0, while multiplayer_transport_route_callback_table_service_receive_decode_state_machine 0x5908c0 is the current live receive/decode state machine serviced by 0x591290 in table states 2/3. The callback-owner mode split above that runtime is now explicit too: append-notify 0x590370 publishes mode 0, compact upsert 0x590d00 publishes mode 1, remove-notify 0x590430 publishes mode 2, and the live receive/decode path 0x5908c0 publishes modes 6, 5, and 3. 
The route-handle lifecycle above that decode path is tighter now too: 0x590740 cleanly resets one table's live route plus decode-side runtime without destroying the outer object; 0x5907a0 is the broader destroy path that also releases the active descriptor collection; 0x590ed0 opens the live route handle into [this+0x4a0], stages the initial outbound request, and seeds state 3 plus the staged receive buffer; 0x5911e0 is the state-2/3 socket-service wrapper that reads new bytes, grows the staged buffer, and re-enters 0x5908c0; 0x5912c0 is the one-shot send-with-reopen-retry helper; and 0x590ea0 is the shared disconnect publication and reset tail. The recurring service helper 0x591290 is tighter too: it now first clears the staged intrusive descriptor list through 0x590490 before entering the state-driven seed-or-receive branch. The upstream owners are tighter too: 0x5962e0 is now the field-subscription route-table opener above [transport+0xba4], while 0x596530 is the gsi_am_rating reopen path above [transport+0x18bc]. On that latter branch, 0x590dc0 is now bounded as the state-0 raw-endpoint seed pass over the live route handle, repeatedly pulling endpoint tuples through 0x58bc7e record type 0x1f3 before stamping descriptor flag byte 0x15 with 0x11. That makes the remaining source-flag meaning narrower too: current evidence now supports reading byte-0x15 bit 0x1 as a primary-endpoint-seed or endpoint-only marker. In the gsi_am_rating dispatcher, clear-bit descriptors can take the richer direct transition lane, while set-bit descriptors are staged through the queued enrichment path and still suppress that direct transition even after the ready bit arrives. The adjacent capacity-descriptor side is tighter too: 0x595bc0 now clearly publishes a descriptor block from live descriptor properties hostname, numwaiting, maxwaiting, numservers, and numplayers plus three carried sidecar scalars.
That sidecar at [transport+0x1778] is tighter now too: current evidence says it behaves as one cached pointer into the transient work-record family at [transport+0x1780], because every meaningful branch in 0x595bc0 reads the same +0x0c/+0x10/+0x18 metadata triplet and replay modes later consume the pointer through 0x5933a0. The negative result is stronger too: local text-side xrefs still show no direct store to [transport+0x1778], and a wider local sweep also failed to show any obvious lea-based replay-band writer. The transient-request lifecycle tightens that further: 0x593330/0x593370/0x593380/0x5934e0/0x5933a0 now fully bound [transport+0x1780], 0x1784, and 0x1788 without ever touching [transport+0x1778], and the neighboring active-opcode reset helper 0x5929a0 is likewise scoped only to [transport+0x17fc]. A broader constructor and teardown pass tightened that further too: 0x596090, 0x5961b0, and 0x5962e0 all touch the neighboring replay-band fields without ever seeding [transport+0x1778]. A full-binary literal-offset sweep tightens it further still: the only direct 0x1778 hit in RT3.exe is the read in 0x595bc0. One nearby ambiguity is now closed too: the mode-5 mirror path in 0x595a40 and 0x595e10 does not target [transport+0x1778]; it writes [transport+0x54] and mirrors the same staged route companion dword only into queue-side slot [transport+0x1724+0x24] through 0x005a0940. So the sidecar writer remains upstream of this leaf publisher. Mode 0 is now also tied more cleanly to the generic descriptor append-notify lane at 0x590370, while mode 2 stays outside this helper as the separate remove-notify-and-stage path at 0x590430. The opcode-2 payload boundary is tighter too: 0x592ae0 now grounds that payload as a seven-dword block with an owned string slot at +0x08, so live mode supplies a populated payload while modes 3 and 5 deliberately enqueue an all-zero payload and reuse only the wrapper-side sidecar metadata. 
Those two modes are tighter now too: they are the live receive-state owner callbacks emitted by 0x5911e0 -> 0x5908c0, not loose generic replay guesses. So those paths are better read as delayed metadata replays over one cached work record, not over a separate anonymous cache blob. The neighboring capacity-owner split is tighter now too: 0x595bc0 only stages descriptor records for modes 0, 3, and 5; the upstream route-callback-table owner still delivers modes 1, 2, and 6, but those are explicit no-ops in this capacity leaf. So the owner wiring itself is no longer the open edge; only the upstream sidecar producer remains unresolved. The neighboring work queue is tighter too: 0x593330/0x593370/0x593380 now bound [transport+0x1780] as the construct/clear/destroy owner family, while 0x5933a0, 0x5934e0, and 0x593570 ground the remove, allocate, and completion side over that same collection. The small sibling 0x593400 is tighter too: it is a pure work-record uniqueness predicate over field +0x0c. Its caller is tighter now too: 0x58d720 is an immediate-drain quiescence gate over one transport context id, using 0x593400 for the queued work family at [transport+0x1780] and 0x592970 for the active opcode-record collection at [transport+0x17fc]. The strongest current read is that 0x5934c0 seeds that shared drain context id first, then the transport copies it into queued work field +0x0c and active opcode-record field +0x14 before the immediate-drain roots wait on one shared disappearance test rather than on a vague settle loop. The currently grounded roots are 0x58df20, the neighboring formatted selector-text publish path at 0x58dfb0, and callback-table registration at 0x58e200. The active-opcode side is tighter too: 0x5927b0 now bounds the per-record service-and-retire path, 0x592800 the wider context-or-idle sweep, 0x5929a0 the remove-by-opcode-type sweep, and 0x5929f0 the narrower opcode-3 field-snapshot removal keyed by the subscribed callback-pair payload. 
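The immediate-drain quiescence gate reduces to a shared-id disappearance test over two collections. A sketch with the collections modeled as plain arrays; the predicate shape mirrors the 0x593400-style membership test, and all names are reconstructions:

```c
#include <stddef.h>

typedef struct QueuedWork { unsigned context_id; /* models field +0x0c */ } QueuedWork;
typedef struct OpcodeRec  { unsigned context_id; /* models field +0x14 */ } OpcodeRec;

/* Membership predicate over the queued work family ([transport+0x1780]). */
static int context_id_queued(const QueuedWork *q, size_t n, unsigned id)
{
    for (size_t i = 0; i < n; i++)
        if (q[i].context_id == id) return 1;
    return 0;
}

/* Membership predicate over the active opcode records ([transport+0x17fc]). */
static int context_id_active(const OpcodeRec *r, size_t n, unsigned id)
{
    for (size_t i = 0; i < n; i++)
        if (r[i].context_id == id) return 1;
    return 0;
}

/* The 0x58d720-style gate: open once the shared drain context id has
 * disappeared from BOTH families, rather than after a timed settle. */
static int drain_gate_open(const QueuedWork *q, size_t nq,
                           const OpcodeRec *r, size_t nr, unsigned id)
{
    return !context_id_queued(q, nq, id) && !context_id_active(r, nr, id);
}
```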
That also corrects 0x595b80, whose final cleanup is an active field-snapshot purge rather than a queued-work drain. The adjacent route-callback descriptor-table lifecycle is tighter too: 0x590410 now grounds [table+0x5bc] as the staged intrusive descriptor-list head, 0x590430 is the generic remove-notify-and-stage lane, 0x590490 releases the staged list, and 0x5904d0 releases the active descriptor collection before tearing that staged list down. That also makes the earlier 0x5962e0 “release active descriptors” step explicit. The callback-table attach side now constrains the same work-record metadata family a little further too: 0x593650 deliberately duplicates its first caller metadata dword into both fields +0x0c and +0x10, while carrying the second caller metadata dword in +0x18. The lower opcode wrappers are tighter now too: 0x592a40 turned out to be the explicit opcode-1 trigger wrapper whose constructor is a no-op and whose active-side service is 0x5913c0, while 0x592c40 is the real 0x08-byte explicit opcode-5 binding leaf. The earlier opcode-4 read was just the table-indexing mistake in 0x5928a0: selector 4 lands on the row at 0x5e2044, not the row at 0x5e2034. The producer side is tighter too: bound-route requests, selector-text route requests, and the type-9 text fastpath also stage that same triplet through 0x5934e0, and the fastpath shim 0x593d00 now gives the cleanest proof of the callback split by only re-emitting the follow-on lane when +0x10 is nonnull and then forwarding (+0x10, +0x18, +0x0c) into 0x593170 as callback function, callback companion, and trailing drain context. So the replay-side triplet is clearly a broader transport callback-wrapper family, not one fixed route-only tuple. The nearby field-subscription side is tighter too: 0x592b50 now clearly uses [transport+0x1774] as a cached progress percentage under [transport+0xba4], and 0x5962e0 seeds that percentage to 1 just before the first immediate mode-3 snapshot. 
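The fastpath shim's callback split can be sketched the same way (names invented; the grounded behavior is the +0x10 null gate and the (+0x10, +0x18, +0x0c) forwarding order into 0x593170):

```python
def fastpath_forward(record: dict, emit) -> bool:
    """Model of shim 0x593d00: the follow-on lane is only re-emitted when
    the callback fn at +0x10 is non-null, and the triplet is forwarded as
    (callback function, callback companion, trailing drain context)."""
    if record["fn_0x10"]:
        emit(record["fn_0x10"], record["companion_0x18"], record["ctx_0x0c"])
        return True
    return False
```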
The nearby route-callback-table lifecycle is tighter now too: 0x596090 seeds [transport+0xba0] as the callback-plumbing enable latch, clears staged payload slot [transport+0xb50], and constructs the three owner branches rooted at [transport+0xba4], [transport+0x1164], and [transport+0x18bc]; 0x596210 is the recurring service sweep over those same three tables plus the local field cache and queued-descriptor family; 0x596060 is the explicit gsi_am_rating reset; and 0x596530 is the reopen-from-stored-label sibling above that same am-rating table. The matching local cleanup is tighter too: 0x5962c0 is the explicit staged route-callback payload clear on [transport+0xb50], while 0x595ce0 now clearly resets only the capacity-descriptor route callback table at [transport+0x1164], not the field-subscription table at [transport+0xba4]. The remaining gap on the capacity side is therefore narrower: the carried sidecar fields themselves now read more cleanly as the cached callback-wrapper triplet reused elsewhere (drain context id +0x0c, callback fn +0x10, callback companion +0x18), and the negative result is stronger too: nearby replay-band fields [transport+0x176c], [transport+0x1770], [transport+0x1774], [transport+0x177c], [transport+0x1780], and [transport+0x1784] all have direct local owners while [transport+0x1778] still appears only as the single read in 0x595bc0; even the broader callback-owner lifecycle now skips it while seeding, servicing, resetting, reopening, or tearing down those neighboring tables and caches. The constructor now closes that local search further: 0x58dc50 bulk-zeroes the full transport body and still never writes a nonzero value into [transport+0x1778] before later explicit neighbor initialization. 
The callback-binding owner stack now tightens that boundary too: 0x5934e0 stages the shared work-record metadata triplet, 0x593650 binds it into the callback-table worker path, and 0x593570 later consumes and republishes it, while [transport+0x1778] still appears only as the borrowed sidecar read in 0x595bc0. So this edge is now locally closed, and the remaining producer looks like an upstream callback or worker handoff rather than one missing ordinary field store in the local cluster. The adjacent staged-route callback side is tighter too: 0x595860 is now bounded as the submit-result handler beneath 0x5958e0, and the old [transport+0xac0] ambiguity there is now gone. That branch is using the already-grounded third selector-generation counter at [0xac0] together with target [0xb48] to decide whether staged route-callback traffic can push the multiplayer route-mode ladder from 2 into 3 and later 4. The selector-view counter beneath that gate is tighter now too: 0x594e30 counts slot-2 entries whose flag dword carries bit 0x20, optionally filtered by the current transport name buffer. The selector-view mutation family under that same lane is tighter too: 0x594a30 is now the direct keyed-store remover, 0x594fb0 clears one selector-slot ownership pointer plus its slot-local flag dword and drops the whole entry when no slots remain, 0x595010 rekeys one selector-view entry under a new name while preserving the 0x40.. runtime band, and callback root 0x59f9c0 now reads as the sibling lane that clears one named selector-view slot, publishes callback slot 18, and may still re-enter the route-mode setter from the same slot-2 status and generation gates. 
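The selector-view counter beneath that gate is simple enough to sketch (which per-slot flag dword carries the bit is still slightly ambiguous, so the `slot2_flags` field here is an assumption; the 0x20 bit test and the optional transport-name filter are grounded):

```python
def count_slot2_live_entries(entries, name_filter=None) -> int:
    """Model of 0x594e30: count selector-view entries whose slot-2 flag
    dword carries bit 0x20, optionally filtered by the current transport
    name buffer."""
    n = 0
    for e in entries:
        if not (e["slot2_flags"] & 0x20):
            continue
        if name_filter is not None and e["name"] != name_filter:
            continue
        n += 1
    return n
```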
The neighboring callback roots are tighter now too: 0x5950a0 clears one selector slot from every selector-view entry in the keyed store, 0x59fab0 is the rename or relabel sibling above 0x595010, 0x59faf0 updates one selector slot's fixed sample-text buffer and refreshes the active selector object when present, and 0x59fb60 replaces one selector slot's name set, requests the default profile-key bundle for that slot, and publishes callback slot 20. Slot 16 is tighter now too: current grounded caller 0x59f440 forwards the staged route-callback payload handle from [transport+0xb50] through 0x592ea0 just before route mode 5. The last adjacent callback root in that block is tighter now too: 0x59fbd0 is the built-in per-slot profile-key query sibling. It resolves the caller selector name into one slot index, forwards the caller trio into 0x596b90, and then publishes callback slot 28; that lower helper indexes one slot-specific built-in string pair from [transport+0x189c] and [transport+0x18ac], reuses the generic per-key handler 0x596970, and only republishes slot 28 when that lower query path succeeds. The compact-header side is tighter now too: 0x58fe20 and 0x58ff20 now show that compact payloads always carry the primary IPv4 dword and that header bit 0x10 only gates whether the primary port word is inline or inherited from the owner default port. 0x58fe90 now validates the 0x40 inline keyed-property vector against the owner schema, and 0x58fe50 validates the signed-0x80 trailing string-pair tail before decode. 0x58ff60 then grounds bit 0x02 as the inline secondary IPv4 dword branch, bit 0x20 as the paired secondary-port word branch with owner-port fallback, bit 0x08 as one still-unresolved auxiliary dword stored at [descriptor+0x10], bit 0x40 as one inline keyed-property vector decoded through the property-store writers, and signed bit 0x80 as one trailing string-pair tail. 
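Under the assumption that the optional fields decode in ascending bit order (the bit meanings are grounded above; the decode order, the reader interface, and the field names are not), the compact header can be sketched as:

```python
def decode_compact_header(bits: int, rd, owner_default_port):
    """Model of the compact-descriptor header bits grounded at 0x58fe20/
    0x58ff20/0x58ff60. The primary IPv4 dword is always inline; each
    header bit then gates one optional field. `rd(kind)` stands in for
    the real byte-stream reader."""
    d = {"primary_ip": rd("dword")}                     # always present
    d["primary_port"] = rd("word") if bits & 0x10 else owner_default_port
    if bits & 0x02:
        d["secondary_ip"] = rd("dword")                 # inline secondary IPv4
    if bits & 0x20:
        d["secondary_port"] = rd("word")                # paired secondary port
    if bits & 0x08:
        d["aux_0x10"] = rd("dword")                     # unresolved aux dword at [descriptor+0x10]
    if bits & 0x40:
        d["properties"] = rd("kv_vector")               # inline keyed-property vector
    if bits & 0x80:                                     # signed high bit
        d["string_pairs"] = rd("string_pairs")          # trailing string-pair tail
    return d
```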
The descriptor-state side is tighter now too: the shared queue helper at 0x005a09a0 stamps pending state 0x4 for the local field-cache family [transport+0x1724] and pending state 0x8 for the gsi_am_rating queued-descriptor family [transport+0x1e7c], while later service through 0x005a0c80 promotes those pending tags into ready bits 0x1 and 0x2 in descriptor byte [entry+0x14]. That makes the current transport-side tests cleaner: 0x58d1c0 is the field-cache ready gate, 0x58d1d0 is the gsi_am_rating queued-descriptor ready gate, and 0x58d230 is the remaining flag-byte split between direct primary-endpoint handling at [transport+0x18bc] and the queued path at [transport+0x1e7c]. That byte-0x14 story is no longer queue-only either: 0x58ff60 can also OR in bit 0x1 after the inline keyed-property vector and bit 0x2 after the signed string-pair tail. The flag-byte split is no longer purely behavioral either: current evidence now says byte [descriptor+0x15] bit 0x1 is a source-side descriptor header bit, explicitly seeded during the primary-endpoint table refresh around 0x590dc0 and preserved by the compact descriptor decode path at 0x58ff60, rather than a queue-generated runtime state. The gsi_am_rating dispatcher side is tighter too: that same bit no longer just looks like a direct-versus-queued routing split, because 0x595e10 also uses it to suppress the direct 0x595dc0 transition even after queued ready bit 0x2 is present. The descriptor body is tighter too: [descriptor+0x20] is now the intrusive next-link used by the transport-owned primary-endpoint list headed at [table+0x5bc], and [descriptor+0x1c] is now the special numeric scalar behind the current queryid/ping fallback pair. 
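The pending-to-ready promotion can be restated as a tiny flag model (whether the pending tags are cleared at promotion time is an assumption of this sketch; the 0x4→0x1 and 0x8→0x2 pairings are the grounded parts):

```python
PENDING_FIELD_CACHE = 0x4   # stamped by 0x005a09a0 for [transport+0x1724]
PENDING_AM_RATING   = 0x8   # stamped for the gsi_am_rating family [transport+0x1e7c]
READY_FIELD_CACHE   = 0x1   # ready bit in descriptor byte [entry+0x14]
READY_AM_RATING     = 0x2

def promote_ready(flag_0x14: int) -> int:
    """Model of the 0x005a0c80 service step: pending tags become ready
    bits (clearing the pending tag here is an assumption)."""
    if flag_0x14 & PENDING_FIELD_CACHE:
        flag_0x14 = (flag_0x14 & ~PENDING_FIELD_CACHE) | READY_FIELD_CACHE
    if flag_0x14 & PENDING_AM_RATING:
        flag_0x14 = (flag_0x14 & ~PENDING_AM_RATING) | READY_AM_RATING
    return flag_0x14
```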
The compact-only auxiliary dword at [descriptor+0x10] is tighter in a negative way too: local xref scans now only show it being preserved by later generic helpers like generic_record_0x1c_deep_copy_with_owned_string_at_0x08 0x591410 and the adjacent callback-marshaling wrappers 0x591480 and 0x591510, not read through any dedicated semantic accessor yet. The route-event dispatcher side is tighter too: the mode-5 tails in both callback families do not copy a descriptor-local field but instead mirror the transport-staged companion dword at [this+0x490] into [this+0x54] and queue-side slot [this+0x1724+0x24]. The gsi_am_rating maintenance lane is tighter now too: after pruning failed descriptors it sorts the surviving primary-endpoint table through 0x590310 in mode 1 with key gsi_am_rating, then selects the new head through 0x590480 before re-entering the route-transition path. The same service loop also owns a slower sidecar lane keyed off [entry+0xa4]: multiplayer_transport_select_stale_selector_view_progress_entry walks the store through multiplayer_transport_pick_stale_selector_view_progress_entry, picks one stale entry whose progress latch [entry+0x9c] is clear and whose last progress tick [entry+0x70] is old enough, and then hands it to multiplayer_transport_stage_selector_view_progress_snapshot. That helper now looks more bounded too: it rebuilds the core X%sX marker text from [entry+0x50] through multiplayer_transport_format_selector_view_probe_marker_core, formats one PNG %s %d line around that marker and the entry-local averaged millisecond sample at [entry+0x80], appends bounded selector-slot PNG fragments for live overlapping slots, marks progress-snapshot state in flight at [entry+0x9c], and stamps both [entry+0x70] and the transport-wide throttle tick [transport+0xaec]. 
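The stale-entry pick in that slower lane reduces to a small predicate walk (the age threshold and the first-match policy are assumptions of this sketch; the clear +0x9c latch and the old +0x70 tick are the grounded conditions):

```python
def pick_stale_progress_entry(entries, now: int, min_age: int):
    """Model of multiplayer_transport_pick_stale_selector_view_progress_entry:
    return one entry whose in-flight progress latch at +0x9c is clear and
    whose last progress tick at +0x70 is at least min_age old; this sketch
    returns the first qualifying entry."""
    for e in entries:
        if not e["latch_0x9c"] and now - e["tick_0x70"] >= min_age:
            return e
    return None
```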
So the selector-view sidecar no longer looks like one undifferentiated refresh bucket: it has a faster deferred-probe lane plus a slower progress-snapshot lane, both still under the shell-owned multiplayer transport cadence. The two descriptor lanes installed by multiplayer_transport_attach_callback_table_descriptor are now tighter too. The first lane rooted at 0x59f5c0 can arm deferred-close state on the owner transport and then forward through callback slot 23. The second lane is no longer just a loose selector-view bucket: multiplayer_transport_callback_dispatch_selector_name_payload_lane at 0x59f650 classifies selector payloads through multiplayer_transport_is_selector_control_line, routes @@@NFO control lines into multiplayer_transport_sync_selector_view_nfo_r_flag, and otherwise publishes either callback slot 13 or the split token-plus-tail callback slot 14 through multiplayer_transport_split_selector_payload_token_and_tail. That local @@@NFO helper is now bounded more tightly too: it only accepts lines ending in the literal X\ tail, searches for the field marker \$flags$\, and then sets or clears bit 0x2 in the third selector-slot flag word at [entry+0x64] depending on whether that field contains the letter r before the next backslash. The sibling multiplayer_transport_callback_dispatch_current_selector_payload_lane at 0x59f720 first resolves the active selector through 0x5951a0, then handles the current-selector variants of the same control vocabulary: @@@NFO continues into the same local r-flag sync helper, @@@GML plus mode-3/4 payloads feed the shared control-token helper multiplayer_transport_handle_gml_or_png_selector_control, and the remaining non-control payloads publish callback slot 9. 
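That @@@NFO r-flag sync is concrete enough to model directly (the behavior when the `\$flags$\` marker is absent is an assumption, modeled here as leaving the flag word unchanged; the `X\` tail gate, the marker search, and the r-before-backslash test on bit 0x2 are grounded):

```python
def sync_nfo_r_flag(line: str, flags_0x64: int) -> int:
    """Model of multiplayer_transport_sync_selector_view_nfo_r_flag: only
    lines ending in the literal 'X\\' tail are accepted; the field after
    the '\\$flags$\\' marker sets or clears bit 0x2 in the third-slot flag
    word at [entry+0x64] depending on whether it contains 'r' before the
    next backslash."""
    if not line.endswith("X\\"):
        return flags_0x64
    marker = "\\$flags$\\"
    i = line.find(marker)
    if i < 0:
        return flags_0x64               # marker-absent behavior assumed
    field = line[i + len(marker):].split("\\", 1)[0]
    if "r" in field:
        return flags_0x64 | 0x2
    return flags_0x64 & ~0x2
```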
That shared helper now narrows the GML and PNG split materially: when the token is GML and the tail is Disconnected, it requires the active selector-view entry to pass multiplayer_transport_selector_view_entry_has_gml_disconnect_gate, which currently means the entry participates in the third selector slot and has bit 0x20 set in [entry+0x64], before it forces one status-pump pass, emits callback slot 16, and re-enters route mode 5. Its sibling PNG branch resolves a named selector-view entry from the tail text and, when that entry overlaps the active selector-slot ownership, refreshes selector-view runtime state through 0x5948f0 and republishes callback slot 25. Alongside those dispatchers, one grounded branch at 0x59f560 still updates selector-view runtime state through 0x5948f0 and forwards the same selector-name pair through callback slot 25, while 0x59f850 resets selector text state through 0x5954b0, forwards through callback slot 19, and when selector 2 is active in a nonterminal route mode re-enters multiplayer_transport_set_route_mode with mode 1. The low-level route helper still looks like a two-part cycle: multiplayer_gamespy_route_service_retry_and_keepalive_timers handles challenge or retry pressure and periodic outbound control traffic around the master.gamespy.com, PING, natneg, localport, localip%d, and statechanged strings, while multiplayer_gamespy_route_drain_inbound_packets drains inbound datagrams and dispatches semicolon lines, backslash-delimited key bundles, and 0xfe 0xfd GameSpy control packets. The transport-owned callback story is now narrower too. The shared route constructor multiplayer_gamespy_route_construct_and_seed_callback_vector seeds [route+0x88] through [route+0x9c] from the caller-supplied transport callback table, records the owner transport at [route+0x104], and explicitly zeroes [route+0xa0], [route+0xa4], and [route+0xd4] before any later patch-up. 
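The GML Disconnected branch can be sketched as a gate plus a fixed three-step tail (the slot-participation test is modeled here as a non-null ownership pointer, an assumption; the 0x20 bit in [entry+0x64], the single status-pump pass, callback slot 16, and route mode 5 are grounded):

```python
def has_gml_disconnect_gate(entry: dict) -> bool:
    """Model of multiplayer_transport_selector_view_entry_has_gml_disconnect_gate."""
    return entry["slot3_owner"] is not None and bool(entry["flags_0x64"] & 0x20)

def handle_gml_disconnected(entry, pump_status_once, emit_slot16, set_route_mode) -> bool:
    """Model of the GML/'Disconnected' branch in
    multiplayer_transport_handle_gml_or_png_selector_control."""
    if not has_gml_disconnect_gate(entry):
        return False
    pump_status_once()      # force one status-pump pass
    emit_slot16()           # callback slot 16
    set_route_mode(5)       # re-enter route mode 5
    return True
```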
For the transport-owned status route, multiplayer_transport_try_connect_status_route then patches [route+0xa0] through multiplayer_gamespy_route_set_extended_payload_callback to point at multiplayer_transport_forward_validated_extended_route_payload at 0x00597330, which simply forwards the validated payload wrapper into the owner callback at [transport+0x17f4] with context [transport+0x17f8]. The grounded live-route connect path at multiplayer_transport_try_connect_live_route does not currently perform any matching post-construction patch for [route+0xa0], [route+0xa4], or [route+0xd4], and the higher route-mode state machine now looks consistent with that: multiplayer_transport_set_route_mode latches the requested small mode at [this+0x18b8], then uses mode 0 for the direct-versus-queued gsi_am_rating split, mode 1 for the ready-bit plus queued fallback, mode 2 for pending-descriptor cleanup, mode 3 for the empty-table fallback, mode 4 for deferred route-status recovery, and mode 5 for copying the staged route companion dword into [this+0x54] and queue-side slot [this+0x1724+0x24]. The current grounded mode transitions still switch by releasing route objects through multiplayer_gamespy_route_release_and_free and rebuilding them through multiplayer_transport_try_connect_live_route, not by mutating callback slots in place. The parser behavior is now tighter as well: semicolon lines only dispatch when [route+0xd4] is non-null, and the subtype-6 raw fallback only dispatches when [route+0xa4] is non-null. For the currently grounded transport-owned status and live routes, those two branches therefore remain optional and can cleanly no-op instead of implying a hidden mandatory callback path. Inside the packet parser, subtype 4 is no longer an unknown callback hop: after cookie validation it dispatches through [route+0x9c], which the transport-owned status and live routes seed to multiplayer_transport_handle_validated_route_cookie_event at 0x005972c0.
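The mode ladder itself is small enough to restate as a table (a descriptive model only; the real helper is a switch over the latched mode, and the handler strings here merely summarize the grounded behavior of each arm):

```python
# Descriptive model of multiplayer_transport_set_route_mode.
ROUTE_MODES = {
    0: "direct-versus-queued gsi_am_rating split",
    1: "ready-bit check plus queued fallback",
    2: "pending-descriptor cleanup",
    3: "empty-table fallback",
    4: "deferred route-status recovery",
    5: "copy staged route companion dword into +0x54 and [+0x1724+0x24]",
}

def set_route_mode(transport: dict, mode: int) -> str:
    """Latch the requested small mode at [this+0x18b8] first, then
    dispatch on it (returning the summary string in this sketch)."""
    transport["mode_0x18b8"] = mode
    return ROUTE_MODES[mode]
```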
That helper either marks route progress and re-enters multiplayer_transport_set_route_mode, or forwards the event id plus payload into the owner callback at [transport+0x17f0] with context [transport+0x17f8]. The surrounding status-route callback vector is tighter now too: 0x005970e0 publishes either the active selector text or the averaged probe sample at [entry+0x80] and otherwise falls back to owner callback [transport+0x17e0]; 0x00597180 is a straight owner-forwarding lane through [transport+0x17e4]; 0x005971b0 seeds the local status-control id list and can then notify owner callback [transport+0x17e8]; and 0x00597270 returns the third selector-slot generation counter [transport+0xac0] on its bounded local branch before falling back to owner callback [transport+0x17ec]. Subtype 6 still validates the same cookie, dedupes one 32-bit cookie or packet id, and then dispatches the trailing payload through the natneg-or-raw callback layer rooted at [route+0xa0] and [route+0xa4]. This separates the shell-frame preview refresh at 0x006cd8d8 from the actual transport cadence at 0x006cd970, and also separates that transport cadence from the lower GameSpy route-service and packet-parser layer beneath it.
  • Evidence: function-map.csv, pending-template-store-management.md, pending-template-store-functions.csv, plus objdump caller traces showing multiplayer_window_service_loop reaching multiplayer_flush_session_event_transport and the transport pump chain multiplayer_flush_session_event_transport -> multiplayer_transport_flush_and_maybe_shutdown -> multiplayer_transport_service_frame -> multiplayer_transport_service_worker_once -> multiplayer_transport_drain_request_text_queue.
  • Open Questions: unresolved request-id semantics for session-event actions 1, 2, 4, and 7; whether any non-mode-path helper outside the currently grounded release-and-rebuild flow ever patches [route+0xa4] and [route+0xd4] for the transport-owned status/live routes, or whether those packet families are simply unused in this stack; and how far the Multiplayer preview-dataset machinery is reused outside Multiplayer.win beyond the currently grounded .gmt save-mode hook.