Add iteration trace diagnostics for seed43

Jan Petykiewicz 2026-04-02 16:15:02 -07:00
commit 3b7568d6c9
14 changed files with 2700 additions and 13 deletions

DOCS.md (35 changed lines)

@@ -130,6 +130,7 @@ Use `RoutingProblem.initial_paths` to provide semantic per-net seeds. Seeds are
| `capture_expanded` | `False` | Record expanded nodes for diagnostics and visualization. |
| `capture_conflict_trace` | `False` | Capture authoritative post-reverify conflict trace entries for debugging negotiated-congestion failures. |
| `capture_frontier_trace` | `False` | Run an analysis-only reroute for reached-but-colliding nets and capture prune causes near their final conflict hotspots. |
| `capture_iteration_trace` | `False` | Capture per-iteration and per-net route-attempt attribution for negotiated-congestion diagnosis. |
## 7. Conflict Trace
@@ -186,7 +187,37 @@ Use `scripts/record_frontier_trace.py` to capture JSON and Markdown frontier-pru
Separately from the observational trace tooling, the router may run a bounded post-loop pair-local scratch reroute before refinement when the restored best snapshot ends with final two-net reached-target dynamic conflicts. That repair phase is part of normal routing behavior and is reported through the `pair_local_search_*` counters below.
## 9. RouteMetrics
## 9. Iteration Trace
`RoutingRunResult.iteration_trace` is an immutable tuple of negotiated-congestion iteration summaries. It is empty unless `RoutingOptions.diagnostics.capture_iteration_trace=True`.
Trace types:
- `IterationTraceEntry`
- `iteration`
- `congestion_penalty`: Penalty in effect for that iteration
- `routed_net_ids`: Nets rerouted during that iteration, in routing order
- `completed_nets`
- `conflict_edges`
- `total_dynamic_collisions`
- `nodes_expanded`
- `congestion_check_calls`
- `congestion_candidate_ids`
- `congestion_exact_pair_checks`
- `net_attempts`: Per-net attribution for that iteration
- `IterationNetAttemptTrace`
- `net_id`
- `reached_target`
- `nodes_expanded`
- `congestion_check_calls`
- `pruned_closed_set`
- `pruned_cost`
- `pruned_hard_collision`
- `guidance_seed_present`
Use `scripts/record_iteration_trace.py` to capture JSON and Markdown iteration-attribution artifacts. Its default comparison target is the solved seed-42 no-warm canary versus the pathological seed-43 no-warm canary.
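As a sketch of how captured entries can be consumed downstream (plain dicts shaped like the serialized trace in `docs/iteration_trace.json`; all values here are invented for illustration, not taken from a real run):

```python
from collections import Counter

# Hypothetical trace entries shaped like serialized IterationTraceEntry /
# IterationNetAttemptTrace records (field names real, values invented).
trace = [
    {"iteration": 0, "net_attempts": [
        {"net_id": "net_a", "nodes_expanded": 5, "congestion_check_calls": 11},
        {"net_id": "net_b", "nodes_expanded": 3, "congestion_check_calls": 6},
    ]},
    {"iteration": 1, "net_attempts": [
        {"net_id": "net_a", "nodes_expanded": 7, "congestion_check_calls": 20},
    ]},
]

# Sum iteration-attributed nodes expanded per net, then rank, mirroring the
# "Top nets by iteration-attributed nodes expanded" lists the script emits.
nodes_by_net: Counter[str] = Counter()
for entry in trace:
    for attempt in entry["net_attempts"]:
        nodes_by_net[attempt["net_id"]] += attempt["nodes_expanded"]

for net_id, total in nodes_by_net.most_common():
    print(f"- `{net_id}`: {total}")
```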
## 10. RouteMetrics
`RoutingRunResult.metrics` is an immutable per-run snapshot.
@@ -272,7 +303,7 @@ Separately from the observational trace tooling, the router may run a bounded po
Lower-level search and collision modules are semi-private implementation details. They remain accessible through deep imports for advanced use, but they are unstable and may change without notice. The stable supported entrypoint is `route(problem, options=...)`.
The current implementation structure is summarized in **[docs/architecture.md](docs/architecture.md)**. The committed example-corpus counter baseline is tracked in **[docs/performance.md](docs/performance.md)**.
Use `scripts/diff_performance_baseline.py` to compare a fresh local run against that baseline. Use `scripts/record_conflict_trace.py` for opt-in conflict-hotspot traces, `scripts/record_frontier_trace.py` for hotspot-adjacent prune traces, and `scripts/characterize_pair_local_search.py` to sweep example_07-style no-warm runs for pair-local repair behavior. The counter baseline is currently observational and is not enforced as a CI gate.
Use `scripts/diff_performance_baseline.py` to compare a fresh local run against that baseline. Use `scripts/record_conflict_trace.py` for opt-in conflict-hotspot traces, `scripts/record_frontier_trace.py` for hotspot-adjacent prune traces, `scripts/record_iteration_trace.py` for per-iteration negotiated-congestion attribution, and `scripts/characterize_pair_local_search.py` to sweep example_07-style no-warm runs for pair-local repair behavior. The counter baseline is currently observational and is not enforced as a CI gate.
## 11. Tuning Notes

docs/iteration_trace.json (new file, 2111 lines)

File diff suppressed because it is too large

docs/iteration_trace.md (new file, 85 lines)

@@ -0,0 +1,85 @@
# Iteration Trace
Generated at 2026-04-02T16:11:39-07:00 by `scripts/record_iteration_trace.py`.
## example_07_large_scale_routing_no_warm_start
Results: 10 valid / 10 reached / 10 total.
| Iteration | Penalty | Routed Nets | Completed | Conflict Edges | Dynamic Collisions | Nodes | Congestion Checks | Candidate Ids | Exact Pairs |
| --: | --: | --: | --: | --: | --: | --: | --: | --: | --: |
| 0 | 100.0 | 10 | 1 | 16 | 50 | 571 | 0 | 0 | 0 |
| 1 | 140.0 | 10 | 2 | 12 | 54 | 253 | 974 | 2378 | 1998 |
| 2 | 196.0 | 10 | 4 | 5 | 22 | 253 | 993 | 1928 | 1571 |
| 3 | 274.4 | 10 | 6 | 2 | 10 | 100 | 437 | 852 | 698 |
| 4 | 384.2 | 10 | 6 | 2 | 10 | 126 | 517 | 961 | 812 |
| 5 | 537.8 | 10 | 6 | 2 | 10 | 461 | 1704 | 3805 | 3043 |
Top nets by iteration-attributed nodes expanded:
- `net_03`: 383
- `net_06`: 292
- `net_09`: 260
- `net_00`: 210
- `net_02`: 190
- `net_08`: 168
- `net_01`: 162
- `net_07`: 61
- `net_04`: 19
- `net_05`: 19
Top nets by iteration-attributed congestion checks:
- `net_03`: 1242
- `net_06`: 1080
- `net_02`: 674
- `net_01`: 534
- `net_08`: 262
- `net_00`: 229
- `net_07`: 228
- `net_09`: 176
- `net_04`: 100
- `net_05`: 100
## example_07_large_scale_routing_no_warm_start_seed43
Results: 10 valid / 10 reached / 10 total.
| Iteration | Penalty | Routed Nets | Completed | Conflict Edges | Dynamic Collisions | Nodes | Congestion Checks | Candidate Ids | Exact Pairs |
| --: | --: | --: | --: | --: | --: | --: | --: | --: | --: |
| 0 | 100.0 | 10 | 1 | 16 | 50 | 571 | 0 | 0 | 0 |
| 1 | 140.0 | 10 | 1 | 13 | 53 | 269 | 961 | 2562 | 2032 |
| 2 | 196.0 | 10 | 4 | 3 | 15 | 140 | 643 | 1610 | 1224 |
| 3 | 274.4 | 10 | 4 | 3 | 15 | 84 | 382 | 801 | 651 |
| 4 | 384.2 | 10 | 6 | 2 | 10 | 170 | 673 | 1334 | 1072 |
| 5 | 537.8 | 10 | 6 | 2 | 10 | 457 | 1671 | 3718 | 2992 |
| 6 | 753.0 | 10 | 4 | 4 | 8 | 22288 | 89671 | 218513 | 171925 |
| 7 | 1054.1 | 10 | 4 | 4 | 8 | 15737 | 29419 | 34309 | 28603 |
| 8 | 1475.8 | 10 | 4 | 4 | 8 | 21543 | 41803 | 49314 | 41198 |
Top nets by iteration-attributed nodes expanded:
- `net_06`: 31604
- `net_03`: 27532
- `net_02`: 763
- `net_09`: 286
- `net_07`: 239
- `net_00`: 233
- `net_08`: 218
- `net_05`: 134
- `net_01`: 132
- `net_04`: 118
Top nets by iteration-attributed congestion checks:
- `net_06`: 83752
- `net_03`: 75019
- `net_02`: 3270
- `net_07`: 844
- `net_08`: 540
- `net_01`: 441
- `net_05`: 425
- `net_04`: 398
- `net_09`: 288
- `net_00`: 246


@@ -3629,3 +3629,14 @@ Findings:
| example_07_large_scale_routing_no_warm_start | pair_local_search_attempts | - | 2.0000 | - |
| example_07_large_scale_routing_no_warm_start | pair_local_search_accepts | - | 2.0000 | - |
| example_07_large_scale_routing_no_warm_start | pair_local_search_nodes_expanded | - | 68.0000 | - |
## Step 64 seed-43 iteration-trace diagnosis
Measured on 2026-04-02T16:11:39-07:00.
Findings:
- Added `capture_iteration_trace` plus `scripts/record_iteration_trace.py` and tracked the first `seed 42` vs `seed 43` no-warm comparison in `docs/iteration_trace.json` and `docs/iteration_trace.md`.
- The pathological `seed 43` basin is not front-loaded. It matches the solved `seed 42` path through iteration `5`, then falls into three extra iterations with only `4` completed nets and `4` conflict edges.
- The late blowup is concentrated in two nets, not the whole routing set: `net_06` contributes `31604` attributed nodes and `83752` congestion checks, while `net_03` contributes `27532` nodes and `75019` congestion checks.
- This points the next optimization work at late-iteration reroute behavior for a small subset of nets rather than another global congestion or pair-local-search change.


@@ -9,6 +9,7 @@ The default baseline table below covers the standard example corpus only. The he
Use `scripts/characterize_pair_local_search.py` when you want a small parameter sweep over example_07-style no-warm runs instead of a single canary reading.
The current tracked sweep output lives in `docs/pair_local_characterization.json` and `docs/pair_local_characterization.md`.
Use `scripts/record_iteration_trace.py` when you want a seed-42 vs seed-43 negotiated-congestion attribution run; the current tracked output lives in `docs/iteration_trace.json` and `docs/iteration_trace.md`.
| Scenario | Duration (s) | Total | Valid | Reached | Iter | Nets Routed | Nodes | Ray Casts | Moves Gen | Moves Added | Dyn Tree | Visibility Builds | Congestion Checks | Verify Calls |
| :-- | --: | --: | --: | --: | --: | --: | --: | --: | --: | --: | --: | --: | --: | --: |
@@ -36,6 +37,8 @@ For the current accepted branch, the most important performance-only canary is `
The latest tracked characterization sweep confirms there is no smaller stable pair-local smoke case under the `<=1.0s` rule, so the 10-net no-warm-start canary remains the primary regression target for this behavior.
The tracked iteration trace adds one more diagnosis target: `example_07_large_scale_routing_no_warm_start_seed43`. That seed remains performance-only, and its blowup is concentrated in late iterations rather than the initial basin. In the current trace, seed `43` tracks seed `42` through iteration `5`, then spends three extra iterations stuck at `4` completed nets while `net_03` and `net_06` dominate both `nodes_expanded` and `congestion_check_calls`.
Tracked metric keys:
nodes_expanded, moves_generated, moves_added, pruned_closed_set, pruned_hard_collision, pruned_cost, route_iterations, nets_routed, nets_reached_target, warm_start_paths_built, warm_start_paths_used, refine_path_calls, timeout_events, iteration_reverify_calls, iteration_reverified_nets, iteration_conflicting_nets, iteration_conflict_edges, nets_carried_forward, score_component_calls, score_component_total_ns, path_cost_calls, danger_map_lookup_calls, danger_map_cache_hits, danger_map_cache_misses, danger_map_query_calls, danger_map_total_ns, move_cache_abs_hits, move_cache_abs_misses, move_cache_rel_hits, move_cache_rel_misses, guidance_match_moves, guidance_match_moves_straight, guidance_match_moves_bend90, guidance_match_moves_sbend, guidance_bonus_applied, guidance_bonus_applied_straight, guidance_bonus_applied_bend90, guidance_bonus_applied_sbend, static_safe_cache_hits, hard_collision_cache_hits, congestion_cache_hits, congestion_cache_misses, congestion_presence_cache_hits, congestion_presence_cache_misses, congestion_presence_skips, congestion_candidate_precheck_hits, congestion_candidate_precheck_misses, congestion_candidate_precheck_skips, congestion_grid_net_cache_hits, congestion_grid_net_cache_misses, congestion_grid_span_cache_hits, congestion_grid_span_cache_misses, congestion_candidate_nets, congestion_net_envelope_cache_hits, congestion_net_envelope_cache_misses, dynamic_path_objects_added, dynamic_path_objects_removed, dynamic_tree_rebuilds, dynamic_grid_rebuilds, static_tree_rebuilds, static_raw_tree_rebuilds, static_net_tree_rebuilds, visibility_corner_index_builds, visibility_builds, visibility_corner_pairs_checked, visibility_corner_queries_exact, visibility_corner_hits_exact, visibility_point_queries, visibility_point_cache_hits, visibility_point_cache_misses, visibility_tangent_candidate_scans, visibility_tangent_candidate_corner_checks, visibility_tangent_candidate_ray_tests, ray_cast_calls, ray_cast_calls_straight_static, 
ray_cast_calls_expand_snap, ray_cast_calls_expand_forward, ray_cast_calls_visibility_build, ray_cast_calls_visibility_query, ray_cast_calls_visibility_tangent, ray_cast_calls_other, ray_cast_candidate_bounds, ray_cast_exact_geometry_checks, congestion_check_calls, congestion_lazy_resolutions, congestion_lazy_requeues, congestion_candidate_ids, congestion_exact_pair_checks, verify_path_report_calls, verify_static_buffer_ops, verify_dynamic_candidate_nets, verify_dynamic_exact_pair_checks, refinement_windows_considered, refinement_static_bounds_checked, refinement_dynamic_bounds_checked, refinement_candidate_side_extents, refinement_candidates_built, refinement_candidates_verified, refinement_candidates_accepted, pair_local_search_pairs_considered, pair_local_search_attempts, pair_local_search_accepts, pair_local_search_nodes_expanded


@@ -18,6 +18,8 @@ from .results import ( # noqa: PLC0414
ComponentConflictTrace as ComponentConflictTrace,
ConflictTraceEntry as ConflictTraceEntry,
FrontierPruneSample as FrontierPruneSample,
IterationNetAttemptTrace as IterationNetAttemptTrace,
IterationTraceEntry as IterationTraceEntry,
NetConflictTrace as NetConflictTrace,
NetFrontierTrace as NetFrontierTrace,
RoutingResult as RoutingResult,
@@ -47,6 +49,7 @@ def route(
expanded_nodes=tuple(finder.accumulated_expanded_nodes),
conflict_trace=tuple(finder.conflict_trace),
frontier_trace=tuple(finder.frontier_trace),
iteration_trace=tuple(finder.iteration_trace),
)
__all__ = [
@@ -62,6 +65,8 @@ __all__ = [
"PathSeed",
"Port",
"FrontierPruneSample",
"IterationNetAttemptTrace",
"IterationTraceEntry",
"RefinementOptions",
"RoutingOptions",
"RoutingProblem",


@@ -107,6 +107,7 @@ class DiagnosticsOptions:
capture_expanded: bool = False
capture_conflict_trace: bool = False
capture_frontier_trace: bool = False
capture_iteration_trace: bool = False
@dataclass(frozen=True, slots=True)


@@ -78,6 +78,33 @@ class NetFrontierTrace:
samples: tuple[FrontierPruneSample, ...] = ()
@dataclass(frozen=True, slots=True)
class IterationNetAttemptTrace:
net_id: str
reached_target: bool
nodes_expanded: int
congestion_check_calls: int
pruned_closed_set: int
pruned_cost: int
pruned_hard_collision: int
guidance_seed_present: bool
@dataclass(frozen=True, slots=True)
class IterationTraceEntry:
iteration: int
congestion_penalty: float
routed_net_ids: tuple[str, ...]
completed_nets: int
conflict_edges: int
total_dynamic_collisions: int
nodes_expanded: int
congestion_check_calls: int
congestion_candidate_ids: int
congestion_exact_pair_checks: int
net_attempts: tuple[IterationNetAttemptTrace, ...] = ()
@dataclass(frozen=True, slots=True)
class RouteMetrics:
nodes_expanded: int
@@ -231,3 +258,4 @@ class RoutingRunResult:
expanded_nodes: tuple[tuple[int, int, int], ...] = ()
conflict_trace: tuple[ConflictTraceEntry, ...] = ()
frontier_trace: tuple[NetFrontierTrace, ...] = ()
iteration_trace: tuple[IterationTraceEntry, ...] = ()


@@ -11,6 +11,8 @@ from inire.results import (
ComponentConflictTrace,
ConflictTraceEntry,
FrontierPruneSample,
IterationNetAttemptTrace,
IterationTraceEntry,
NetConflictTrace,
NetFrontierTrace,
RoutingOutcome,
@@ -65,6 +67,22 @@ class _PairLocalTarget:
net_ids: tuple[str, str]
_ITERATION_TRACE_TOTALS = (
"nodes_expanded",
"congestion_check_calls",
"congestion_candidate_ids",
"congestion_exact_pair_checks",
)
_ATTEMPT_TRACE_TOTALS = (
"nodes_expanded",
"congestion_check_calls",
"pruned_closed_set",
"pruned_cost",
"pruned_hard_collision",
)
class PathFinder:
__slots__ = (
"context",
@@ -73,6 +91,7 @@ class PathFinder:
"accumulated_expanded_nodes",
"conflict_trace",
"frontier_trace",
"iteration_trace",
)
def __init__(
@@ -90,6 +109,16 @@
self.accumulated_expanded_nodes: list[tuple[int, int, int]] = []
self.conflict_trace: list[ConflictTraceEntry] = []
self.frontier_trace: list[NetFrontierTrace] = []
self.iteration_trace: list[IterationTraceEntry] = []
def _metric_total(self, metric_name: str) -> int:
return int(getattr(self.metrics, f"total_{metric_name}"))
def _capture_metric_totals(self, metric_names: tuple[str, ...]) -> dict[str, int]:
return {metric_name: self._metric_total(metric_name) for metric_name in metric_names}
def _metric_deltas(self, before: dict[str, int], after: dict[str, int]) -> dict[str, int]:
return {metric_name: after[metric_name] - before[metric_name] for metric_name in before}
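The attribution mechanism above is a plain before/after snapshot of monotonic `total_*` counters. A self-contained sketch of the same pattern, using a toy metrics object rather than the real metrics class, looks like this:

```python
from dataclasses import dataclass

@dataclass
class ToyMetrics:
    # Monotonically increasing run totals, standing in for the real
    # metrics object's total_* counters (toy stand-in, not the real class).
    total_nodes_expanded: int = 0
    total_congestion_check_calls: int = 0

TRACKED = ("nodes_expanded", "congestion_check_calls")

def capture_totals(metrics: ToyMetrics, names: tuple[str, ...]) -> dict[str, int]:
    # Snapshot the named counters, mirroring _capture_metric_totals.
    return {name: getattr(metrics, f"total_{name}") for name in names}

def metric_deltas(before: dict[str, int], after: dict[str, int]) -> dict[str, int]:
    # Per-attempt attribution is the difference of the two snapshots.
    return {name: after[name] - before[name] for name in before}

metrics = ToyMetrics()
before = capture_totals(metrics, TRACKED)
# ... routing one net would bump the shared counters; simulate that here ...
metrics.total_nodes_expanded += 57
metrics.total_congestion_check_calls += 140
attributed = metric_deltas(before, capture_totals(metrics, TRACKED))
print(attributed)  # work attributable to just this attempt
```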
def _install_path(self, net_id: str, path: Sequence[ComponentResult]) -> None:
all_geoms: list[Polygon] = []
@@ -749,13 +778,14 @@
state: _RoutingState,
iteration: int,
net_id: str,
) -> RoutingResult:
) -> tuple[RoutingResult, bool]:
search = self.context.options.search
congestion = self.context.options.congestion
diagnostics = self.context.options.diagnostics
net = state.net_specs[net_id]
self.metrics.total_nets_routed += 1
self.context.cost_evaluator.collision_engine.remove_path(net_id)
guidance_seed_present = False
if iteration == 0 and state.initial_paths and net_id in state.initial_paths:
self.metrics.total_warm_start_paths_used += 1
@@ -774,6 +804,7 @@
if guidance_result and guidance_result.reached_target and guidance_result.path:
guidance_seed = guidance_result.as_seed().segments
guidance_bonus = max(10.0, self.context.options.objective.bend_penalty * 0.25)
guidance_seed_present = True
run_config = SearchRunConfig.from_options(
self.context.options,
@@ -800,7 +831,7 @@
state.accumulated_expanded_nodes.extend(self.metrics.last_expanded_nodes)
if not path:
return RoutingResult(net_id=net_id, path=(), reached_target=False)
return RoutingResult(net_id=net_id, path=(), reached_target=False), guidance_seed_present
reached_target = path[-1].end_port == net.target
if reached_target:
@@ -812,11 +843,14 @@
if report.self_collision_count > 0:
state.needs_self_collision_check.add(net_id)
return RoutingResult(
net_id=net_id,
path=tuple(path),
reached_target=reached_target,
report=RoutingReport() if report is None else report,
return (
RoutingResult(
net_id=net_id,
path=tuple(path),
reached_target=reached_target,
report=RoutingReport() if report is None else report,
),
guidance_seed_present,
)
def _run_iteration(
@@ -827,24 +861,66 @@
iteration_callback: Callable[[int, dict[str, RoutingResult]], None] | None,
) -> _IterationReview | None:
congestion = self.context.options.congestion
diagnostics = self.context.options.diagnostics
self.metrics.total_route_iterations += 1
self.metrics.reset_per_route()
if congestion.shuffle_nets and (iteration > 0 or state.initial_paths is None):
iteration_seed = (congestion.seed + iteration) if congestion.seed is not None else None
random.Random(iteration_seed).shuffle(state.ordered_net_ids)
iteration_penalty = self.context.congestion_penalty
routed_net_ids = [net_id for net_id in state.ordered_net_ids if net_id in reroute_net_ids]
self.metrics.total_nets_carried_forward += len(state.ordered_net_ids) - len(routed_net_ids)
iteration_before = {}
attempt_traces: list[IterationNetAttemptTrace] = []
if diagnostics.capture_iteration_trace:
iteration_before = self._capture_metric_totals(_ITERATION_TRACE_TOTALS)
for net_id in routed_net_ids:
if time.monotonic() - state.start_time > state.timeout_s:
self.metrics.total_timeout_events += 1
return None
result = self._route_net_once(state, iteration, net_id)
attempt_before = {}
if diagnostics.capture_iteration_trace:
attempt_before = self._capture_metric_totals(_ATTEMPT_TRACE_TOTALS)
result, guidance_seed_present = self._route_net_once(state, iteration, net_id)
state.results[net_id] = result
if diagnostics.capture_iteration_trace:
attempt_after = self._capture_metric_totals(_ATTEMPT_TRACE_TOTALS)
deltas = self._metric_deltas(attempt_before, attempt_after)
attempt_traces.append(
IterationNetAttemptTrace(
net_id=net_id,
reached_target=result.reached_target,
nodes_expanded=deltas["nodes_expanded"],
congestion_check_calls=deltas["congestion_check_calls"],
pruned_closed_set=deltas["pruned_closed_set"],
pruned_cost=deltas["pruned_cost"],
pruned_hard_collision=deltas["pruned_hard_collision"],
guidance_seed_present=guidance_seed_present,
)
)
review = self._reverify_iteration_results(state)
if diagnostics.capture_iteration_trace:
iteration_after = self._capture_metric_totals(_ITERATION_TRACE_TOTALS)
deltas = self._metric_deltas(iteration_before, iteration_after)
self.iteration_trace.append(
IterationTraceEntry(
iteration=iteration,
congestion_penalty=iteration_penalty,
routed_net_ids=tuple(routed_net_ids),
completed_nets=len(review.completed_net_ids),
conflict_edges=len(review.conflict_edges),
total_dynamic_collisions=review.total_dynamic_collisions,
nodes_expanded=deltas["nodes_expanded"],
congestion_check_calls=deltas["congestion_check_calls"],
congestion_candidate_ids=deltas["congestion_candidate_ids"],
congestion_exact_pair_checks=deltas["congestion_exact_pair_checks"],
net_attempts=tuple(attempt_traces),
)
)
if iteration_callback:
iteration_callback(iteration, state.results)
@@ -973,6 +1049,7 @@ class PathFinder:
self.accumulated_expanded_nodes = []
self.conflict_trace = []
self.frontier_trace = []
self.iteration_trace = []
self.metrics.reset_totals()
self.metrics.reset_per_route()


@@ -91,6 +91,7 @@ def _make_run_result(
expanded_nodes=tuple(pathfinder.accumulated_expanded_nodes),
conflict_trace=tuple(pathfinder.conflict_trace),
frontier_trace=tuple(pathfinder.frontier_trace),
iteration_trace=tuple(pathfinder.iteration_trace),
)
@@ -424,6 +425,14 @@ def snapshot_example_07_no_warm_start() -> ScenarioSnapshot:
)
def snapshot_example_07_no_warm_start_seed43() -> ScenarioSnapshot:
return _snapshot_example_07_variant(
"example_07_large_scale_routing_no_warm_start_seed43",
warm_start_enabled=False,
seed=43,
)
def trace_example_07() -> RoutingRunResult:
return _trace_example_07_variant(warm_start_enabled=True)
@@ -432,6 +441,10 @@ def trace_example_07_no_warm_start() -> RoutingRunResult:
return _trace_example_07_variant(warm_start_enabled=False)
def trace_example_07_no_warm_start_seed43() -> RoutingRunResult:
return _trace_example_07_variant(warm_start_enabled=False, seed=43)
def _build_example_07_variant_stack(
*,
num_nets: int,
@@ -439,6 +452,7 @@ def _build_example_07_variant_stack(
warm_start_enabled: bool,
capture_conflict_trace: bool = False,
capture_frontier_trace: bool = False,
capture_iteration_trace: bool = False,
) -> tuple[CostEvaluator, AStarMetrics, PathFinder]:
bounds = (0, 0, 1000, 1000)
obstacles = [
@@ -481,6 +495,7 @@
"capture_expanded": True,
"capture_conflict_trace": capture_conflict_trace,
"capture_frontier_trace": capture_frontier_trace,
"capture_iteration_trace": capture_iteration_trace,
"shuffle_nets": True,
"seed": seed,
"warm_start_enabled": warm_start_enabled,
@@ -496,6 +511,7 @@ def _run_example_07_variant(
warm_start_enabled: bool,
capture_conflict_trace: bool = False,
capture_frontier_trace: bool = False,
capture_iteration_trace: bool = False,
) -> RoutingRunResult:
evaluator, metrics, pathfinder = _build_example_07_variant_stack(
num_nets=num_nets,
@@ -503,6 +519,7 @@
warm_start_enabled=warm_start_enabled,
capture_conflict_trace=capture_conflict_trace,
capture_frontier_trace=capture_frontier_trace,
capture_iteration_trace=capture_iteration_trace,
)
def iteration_callback(idx: int, current_results: dict[str, RoutingResult]) -> None:
@@ -519,11 +536,12 @@ def _snapshot_example_07_variant(
name: str,
*,
warm_start_enabled: bool,
seed: int = 42,
) -> ScenarioSnapshot:
t0 = perf_counter()
run = _run_example_07_variant(
num_nets=10,
seed=42,
seed=seed,
warm_start_enabled=warm_start_enabled,
)
t1 = perf_counter()
@@ -533,13 +551,15 @@
def _trace_example_07_variant(
*,
warm_start_enabled: bool,
seed: int = 42,
) -> RoutingRunResult:
return _run_example_07_variant(
num_nets=10,
seed=42,
seed=seed,
warm_start_enabled=warm_start_enabled,
capture_conflict_trace=True,
capture_frontier_trace=True,
capture_iteration_trace=True,
)
@@ -644,6 +664,7 @@ SCENARIO_SNAPSHOTS: tuple[tuple[str, ScenarioSnapshotRun], ...] = (
PERFORMANCE_SCENARIO_SNAPSHOTS: tuple[tuple[str, ScenarioSnapshotRun], ...] = (
("example_07_large_scale_routing_no_warm_start", snapshot_example_07_no_warm_start),
("example_07_large_scale_routing_no_warm_start_seed43", snapshot_example_07_no_warm_start_seed43),
)
TRACE_SCENARIO_RUNS: tuple[tuple[str, TraceScenarioRun], ...] = (
@@ -653,6 +674,7 @@ TRACE_SCENARIO_RUNS: tuple[tuple[str, TraceScenarioRun], ...] = (
TRACE_PERFORMANCE_SCENARIO_RUNS: tuple[tuple[str, TraceScenarioRun], ...] = (
("example_07_large_scale_routing_no_warm_start", trace_example_07_no_warm_start),
("example_07_large_scale_routing_no_warm_start_seed43", trace_example_07_no_warm_start_seed43),
)


@@ -54,6 +54,7 @@ def test_route_problem_smoke() -> None:
assert run.results_by_net["net1"].is_valid
assert run.conflict_trace == ()
assert run.frontier_trace == ()
assert run.iteration_trace == ()
def test_route_problem_supports_configs_and_debug_data() -> None:
@@ -182,6 +183,72 @@ def test_capture_conflict_trace_preserves_route_outputs() -> None:
assert [entry.stage for entry in run_with_trace.conflict_trace] == ["iteration", "restored_best", "final"]
def test_capture_iteration_trace_preserves_route_outputs() -> None:
problem = RoutingProblem(
bounds=(0, 0, 100, 100),
nets=(
NetSpec("horizontal", Port(10, 50, 0), Port(90, 50, 0), width=2.0),
NetSpec("vertical", Port(50, 10, 90), Port(50, 90, 90), width=2.0),
),
)
base_options = RoutingOptions(
congestion=CongestionOptions(max_iterations=1, warm_start_enabled=False),
refinement=RefinementOptions(enabled=False),
)
run_without_trace = route(problem, options=base_options)
run_with_trace = route(
problem,
options=RoutingOptions(
congestion=base_options.congestion,
refinement=base_options.refinement,
diagnostics=DiagnosticsOptions(capture_iteration_trace=True),
),
)
assert {net_id: result.outcome for net_id, result in run_without_trace.results_by_net.items()} == {
net_id: result.outcome for net_id, result in run_with_trace.results_by_net.items()
}
assert len(run_with_trace.iteration_trace) == 1
def test_capture_iteration_trace_records_iteration_and_attempt_deltas() -> None:
problem = RoutingProblem(
bounds=(0, 0, 100, 100),
nets=(
NetSpec("horizontal", Port(10, 50, 0), Port(90, 50, 0), width=2.0),
NetSpec("vertical", Port(50, 10, 90), Port(50, 90, 90), width=2.0),
),
)
run = route(
problem,
options=RoutingOptions(
congestion=CongestionOptions(max_iterations=1, warm_start_enabled=False),
refinement=RefinementOptions(enabled=False),
diagnostics=DiagnosticsOptions(capture_iteration_trace=True),
),
)
entry = run.iteration_trace[0]
assert entry.iteration == 0
assert entry.congestion_penalty == 100.0
assert entry.routed_net_ids == ("horizontal", "vertical")
assert entry.completed_nets == 0
assert entry.conflict_edges == 1
assert entry.total_dynamic_collisions >= 2
assert entry.nodes_expanded >= 0
assert entry.congestion_check_calls >= 0
assert entry.congestion_candidate_ids >= 0
assert entry.congestion_exact_pair_checks >= 0
assert len(entry.net_attempts) == 2
assert [attempt.net_id for attempt in entry.net_attempts] == ["horizontal", "vertical"]
assert all(attempt.nodes_expanded >= 0 for attempt in entry.net_attempts)
assert all(attempt.congestion_check_calls >= 0 for attempt in entry.net_attempts)
assert all(not attempt.guidance_seed_present for attempt in entry.net_attempts)
assert sum(attempt.nodes_expanded for attempt in entry.net_attempts) == entry.nodes_expanded
assert sum(attempt.congestion_check_calls for attempt in entry.net_attempts) == entry.congestion_check_calls
def test_capture_conflict_trace_records_component_pairs() -> None:
problem = RoutingProblem(
bounds=(0, 0, 100, 100),


@@ -6,7 +6,13 @@ from typing import TYPE_CHECKING
import pytest
from inire.tests.example_scenarios import SCENARIOS, ScenarioOutcome, snapshot_example_07_no_warm_start
from inire.tests.example_scenarios import (
SCENARIOS,
ScenarioOutcome,
snapshot_example_07_no_warm_start,
snapshot_example_07_no_warm_start_seed43,
trace_example_07_no_warm_start_seed43,
)
if TYPE_CHECKING:
from collections.abc import Callable
@@ -16,6 +22,7 @@ RUN_PERFORMANCE = os.environ.get("INIRE_RUN_PERFORMANCE") == "1"
PERFORMANCE_REPEATS = 3
REGRESSION_FACTOR = 1.5
NO_WARM_START_REGRESSION_SECONDS = 15.0
NO_WARM_START_SEED43_REGRESSION_SECONDS = 120.0
# Baselines are measured from clean 6a28dcf-style runs without plotting.
BASELINE_SECONDS = {
@@ -85,3 +92,32 @@ def test_example_07_no_warm_start_runtime_regression() -> None:
f"{snapshot.duration_s:.4f}s exceeded guardrail "
f"{NO_WARM_START_REGRESSION_SECONDS:.1f}s"
)
@pytest.mark.performance
@pytest.mark.skipif(not RUN_PERFORMANCE, reason="set INIRE_RUN_PERFORMANCE=1 to run runtime regression checks")
def test_example_07_no_warm_start_seed43_runtime_regression() -> None:
snapshot = snapshot_example_07_no_warm_start_seed43()
run = trace_example_07_no_warm_start_seed43()
assert snapshot.total_results == 10
assert snapshot.valid_results == 10
assert snapshot.reached_targets == 10
assert snapshot.metrics.warm_start_paths_built == 0
assert snapshot.metrics.warm_start_paths_used == 0
assert snapshot.metrics.pair_local_search_pairs_considered >= 1
assert snapshot.metrics.pair_local_search_accepts >= 1
assert snapshot.duration_s <= NO_WARM_START_SEED43_REGRESSION_SECONDS, (
"example_07_large_scale_routing_no_warm_start_seed43 runtime "
f"{snapshot.duration_s:.4f}s exceeded guardrail "
f"{NO_WARM_START_SEED43_REGRESSION_SECONDS:.1f}s"
)
assert run.iteration_trace
assert len(run.results_by_net) == 10
assert sum(result.is_valid for result in run.results_by_net.values()) == 10
assert sum(result.reached_target for result in run.results_by_net.values()) == 10
assert run.metrics.warm_start_paths_built == 0
assert run.metrics.warm_start_paths_used == 0
assert run.metrics.pair_local_search_pairs_considered >= 1
assert run.metrics.pair_local_search_accepts >= 1


@@ -276,6 +276,30 @@ def test_record_frontier_trace_script_writes_selected_scenario(tmp_path: Path) -
assert (tmp_path / "frontier_trace.md").exists()
def test_record_iteration_trace_script_writes_selected_scenario(tmp_path: Path) -> None:
repo_root = Path(__file__).resolve().parents[2]
script_path = repo_root / "scripts" / "record_iteration_trace.py"
subprocess.run(
[
sys.executable,
str(script_path),
"--include-performance-only",
"--scenario",
"example_07_large_scale_routing_no_warm_start",
"--output-dir",
str(tmp_path),
],
check=True,
)
payload = json.loads((tmp_path / "iteration_trace.json").read_text())
assert payload["generated_at"]
assert payload["generator"] == "scripts/record_iteration_trace.py"
assert [entry["name"] for entry in payload["scenarios"]] == ["example_07_large_scale_routing_no_warm_start"]
assert (tmp_path / "iteration_trace.md").exists()
def test_characterize_pair_local_search_script_writes_outputs(tmp_path: Path) -> None:
repo_root = Path(__file__).resolve().parents[2]
script_path = repo_root / "scripts" / "characterize_pair_local_search.py"


@@ -0,0 +1,186 @@
#!/usr/bin/env python3
from __future__ import annotations
import argparse
import json
from collections import Counter
from dataclasses import asdict
from datetime import datetime
from pathlib import Path
from inire.tests.example_scenarios import TRACE_PERFORMANCE_SCENARIO_RUNS, TRACE_SCENARIO_RUNS
def _trace_registry(include_performance_only: bool) -> tuple[tuple[str, object], ...]:
if include_performance_only:
return TRACE_SCENARIO_RUNS + TRACE_PERFORMANCE_SCENARIO_RUNS
return TRACE_SCENARIO_RUNS
def _selected_runs(
selected_scenarios: tuple[str, ...] | None,
*,
include_performance_only: bool,
) -> tuple[tuple[str, object], ...]:
if selected_scenarios is None:
perf_registry = dict(TRACE_PERFORMANCE_SCENARIO_RUNS)
return (
(
"example_07_large_scale_routing_no_warm_start",
perf_registry["example_07_large_scale_routing_no_warm_start"],
),
(
"example_07_large_scale_routing_no_warm_start_seed43",
perf_registry["example_07_large_scale_routing_no_warm_start_seed43"],
),
)
registry = dict(TRACE_SCENARIO_RUNS + TRACE_PERFORMANCE_SCENARIO_RUNS)
allowed_standard = dict(_trace_registry(include_performance_only))
runs = []
for name in selected_scenarios:
if name in allowed_standard:
runs.append((name, allowed_standard[name]))
continue
if name in registry:
runs.append((name, registry[name]))
continue
valid = ", ".join(sorted(registry))
raise SystemExit(f"Unknown iteration-trace scenario: {name}. Valid scenarios: {valid}")
return tuple(runs)
def _build_payload(
selected_scenarios: tuple[str, ...] | None,
*,
include_performance_only: bool,
) -> dict[str, object]:
scenarios = []
for name, run in _selected_runs(selected_scenarios, include_performance_only=include_performance_only):
result = run()
scenarios.append(
{
"name": name,
"summary": {
"total_results": len(result.results_by_net),
"valid_results": sum(1 for entry in result.results_by_net.values() if entry.is_valid),
"reached_targets": sum(1 for entry in result.results_by_net.values() if entry.reached_target),
},
"metrics": asdict(result.metrics),
"iteration_trace": [asdict(entry) for entry in result.iteration_trace],
}
)
return {
"generated_at": datetime.now().astimezone().isoformat(timespec="seconds"),
"generator": "scripts/record_iteration_trace.py",
"scenarios": scenarios,
}
def _render_markdown(payload: dict[str, object]) -> str:
lines = [
"# Iteration Trace",
"",
f"Generated at {payload['generated_at']} by `{payload['generator']}`.",
"",
]
for scenario in payload["scenarios"]:
lines.extend(
[
f"## {scenario['name']}",
"",
f"Results: {scenario['summary']['valid_results']} valid / "
f"{scenario['summary']['reached_targets']} reached / "
f"{scenario['summary']['total_results']} total.",
"",
"| Iteration | Penalty | Routed Nets | Completed | Conflict Edges | Dynamic Collisions | Nodes | Congestion Checks | Candidate Ids | Exact Pairs |",
"| --: | --: | --: | --: | --: | --: | --: | --: | --: | --: |",
]
)
net_node_counts: Counter[str] = Counter()
net_check_counts: Counter[str] = Counter()
for entry in scenario["iteration_trace"]:
lines.append(
"| "
f"{entry['iteration']} | "
f"{entry['congestion_penalty']:.1f} | "
f"{len(entry['routed_net_ids'])} | "
f"{entry['completed_nets']} | "
f"{entry['conflict_edges']} | "
f"{entry['total_dynamic_collisions']} | "
f"{entry['nodes_expanded']} | "
f"{entry['congestion_check_calls']} | "
f"{entry['congestion_candidate_ids']} | "
f"{entry['congestion_exact_pair_checks']} |"
)
for attempt in entry["net_attempts"]:
net_node_counts[attempt["net_id"]] += attempt["nodes_expanded"]
net_check_counts[attempt["net_id"]] += attempt["congestion_check_calls"]
lines.extend(["", "Top nets by iteration-attributed nodes expanded:", ""])
if net_node_counts:
for net_id, count in net_node_counts.most_common(10):
lines.append(f"- `{net_id}`: {count}")
else:
lines.append("- None")
lines.extend(["", "Top nets by iteration-attributed congestion checks:", ""])
if net_check_counts:
for net_id, count in net_check_counts.most_common(10):
lines.append(f"- `{net_id}`: {count}")
else:
lines.append("- None")
lines.append("")
return "\n".join(lines)
def main() -> None:
parser = argparse.ArgumentParser(description="Record iteration-trace artifacts for selected trace scenarios.")
parser.add_argument(
"--scenario",
action="append",
dest="scenarios",
default=[],
help="Optional trace scenario name to include. May be passed more than once.",
)
parser.add_argument(
"--include-performance-only",
action="store_true",
help="Include performance-only trace scenarios when selecting from the standard registry.",
)
parser.add_argument(
"--output-dir",
type=Path,
default=None,
help="Directory to write iteration_trace.json and iteration_trace.md into. Defaults to <repo>/docs.",
)
args = parser.parse_args()
repo_root = Path(__file__).resolve().parents[1]
output_dir = repo_root / "docs" if args.output_dir is None else args.output_dir.resolve()
output_dir.mkdir(parents=True, exist_ok=True)
selected = tuple(args.scenarios) if args.scenarios else None
payload = _build_payload(selected, include_performance_only=args.include_performance_only)
json_path = output_dir / "iteration_trace.json"
markdown_path = output_dir / "iteration_trace.md"
json_path.write_text(json.dumps(payload, indent=2, sort_keys=True) + "\n")
markdown_path.write_text(_render_markdown(payload) + "\n")
for written_path in (json_path, markdown_path):
if written_path.is_relative_to(repo_root):
print(f"Wrote {written_path.relative_to(repo_root)}")
else:
print(f"Wrote {written_path}")
if __name__ == "__main__":
main()
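As a hedged sketch of consuming the `iteration_trace.json` artifact this script writes (the synthetic payload below is illustrative, not real routing output; field names follow the payload schema built in `_build_payload` and read in `_render_markdown`), the per-net "top nets" aggregation in the Markdown report can be reproduced offline:

```python
from collections import Counter

# Synthetic payload mirroring the schema written by record_iteration_trace.py.
# The net ids and counts here are made up for illustration.
payload = {
    "generator": "scripts/record_iteration_trace.py",
    "scenarios": [
        {
            "name": "example_07_large_scale_routing_no_warm_start_seed43",
            "iteration_trace": [
                {
                    "iteration": 1,
                    "net_attempts": [
                        {"net_id": "net_a", "nodes_expanded": 120, "congestion_check_calls": 40},
                        {"net_id": "net_b", "nodes_expanded": 80, "congestion_check_calls": 15},
                    ],
                },
                {
                    "iteration": 2,
                    "net_attempts": [
                        {"net_id": "net_a", "nodes_expanded": 60, "congestion_check_calls": 25},
                    ],
                },
            ],
        }
    ],
}

# Sum iteration-attributed nodes expanded per net across all iterations,
# as the "Top nets by iteration-attributed nodes expanded" section does.
net_node_counts: Counter[str] = Counter()
for scenario in payload["scenarios"]:
    for entry in scenario["iteration_trace"]:
        for attempt in entry["net_attempts"]:
            net_node_counts[attempt["net_id"]] += attempt["nodes_expanded"]

print(net_node_counts.most_common(2))  # [('net_a', 180), ('net_b', 80)]
```

In a real workflow the payload would come from `json.loads(Path("docs/iteration_trace.json").read_text())` instead of the inline dict.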