apply_to_page_range will acquire the PTE lock while priv->lock is held,
and mn_invl_range_start tries to acquire priv->lock with the PTE lock
already held. Fix by not holding priv->lock during the entire map
operation. This is safe because map->vma is set non-NULL while the lock
is held, which will cause subsequent maps to fail and will cause the
unmap ioctl (and other users of gntdev_del_map) to return -EBUSY until
the area is unmapped. It is similarly impossible for gntdev_vma_close
to be called while the vma is still being created.
Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
	if (!(vma->vm_flags & VM_WRITE))
		map->flags |= GNTMAP_readonly;

+	spin_unlock(&priv->lock);
+
	err = apply_to_page_range(vma->vm_mm, vma->vm_start,
			vma->vm_end - vma->vm_start,
			find_grant_ptes, map);
	if (err) {
		printk(KERN_WARNING "find_grant_ptes() failure.\n");
-		goto unlock_out;
+		return err;
	}

	err = map_grant_pages(map);
	if (err) {
		printk(KERN_WARNING "map_grant_pages() failure.\n");
-		goto unlock_out;
+		return err;
	}

	map->is_mapped = 1;
+	return 0;
+
unlock_out:
	spin_unlock(&priv->lock);
	return err;