Weine's investigation into benchmarking the suspend/resume process
pointed out that a lot of the time in suspend/resume is being spent
reprobing. While the reprobing process is lengthy for good reason, we
don't need to hold up the entire suspend/resume process while we wait
for it to finish. Luckily, as it turns out, we already trigger a full
connector reprobe in i915_hpd_poll_init_work(), so we can just ditch
the reprobe in i915_drm_resume() entirely.
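
For reference, a minimal sketch of what that full reprobe amounts to,
modeled loosely on drm_helper_hpd_irq_event() (simplified, not the
verbatim helper; the function name here is illustrative):

  #include <drm/drmP.h>
  #include <drm/drm_crtc_helper.h>

  /*
   * Loosely modeled on drm_helper_hpd_irq_event(): re-run ->detect()
   * on every HPD-capable connector and fire a hotplug event if
   * anything changed. The per-connector ->detect() calls are what
   * make the reprobe slow.
   */
  static bool hpd_reprobe_sketch(struct drm_device *dev)
  {
      struct drm_connector *connector;
      enum drm_connector_status old_status;
      bool changed = false;

      mutex_lock(&dev->mode_config.mutex);
      list_for_each_entry(connector,
                          &dev->mode_config.connector_list, head) {
          if (!(connector->polled & DRM_CONNECTOR_POLL_HPD))
              continue;

          old_status = connector->status;
          connector->status =
              connector->funcs->detect(connector, false);
          changed |= old_status != connector->status;
      }
      mutex_unlock(&dev->mode_config.mutex);

      if (changed)
          drm_kms_helper_hotplug_event(dev);
      return changed;
  }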
This won't lead to less time spent resuming just yet: the bottleneck
simply moves to waiting for the mode_config lock in
drm_kms_helper_poll_enable(), which is held for as long as
i915_hpd_poll_init_work() is reprobing all of the connectors. We'll
address that in the next patch.
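
To make the contention concrete, a rough sketch of the two sides (the
function names here are illustrative, not the real callers):

  /*
   * Async worker side: drm_helper_hpd_irq_event() grabs and holds
   * dev->mode_config.mutex for the whole (slow) connector reprobe.
   */
  static void poll_init_work_sketch(struct drm_device *dev)
  {
      drm_helper_hpd_irq_event(dev);
  }

  /*
   * Resume side: drm_kms_helper_poll_enable() blocks on the same
   * mutex until the worker above is done reprobing.
   */
  static void resume_path_sketch(struct drm_device *dev)
  {
      mutex_lock(&dev->mode_config.mutex);
      drm_kms_helper_poll_enable_locked(dev);
      mutex_unlock(&dev->mode_config.mutex);
  }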
Signed-off-by: Lyude <lyude@redhat.com>
Tested-by: David Weinehall <david.weinehall@linux.intel.com>
Reviewed-by: David Weinehall <david.weinehall@linux.intel.com>
Testcase: analyze_suspend.py -config config/suspend-callgraph.cfg -filter i915
---
 	 * notifications.
 	 * */
 	intel_hpd_init(dev_priv);
-	/* Config may have changed between suspend and resume */
-	drm_helper_hpd_irq_event(dev);
 
 	intel_opregion_register(dev_priv);