Arriving at read_kmem() with an offset representing a bogus kernel
address (e.g. 0 from a simple "cat /dev/kmem") leads to copy_to_user
faulting on the kernel-side read.
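As a concrete illustration of the trigger, here is a minimal userspace sketch of what "cat /dev/kmem" boils down to (needs root and CONFIG_DEVKMEM; the buffer size is arbitrary). Without this patch, architectures other than x86_64 can oops on the kernel-side read; with it, the read() fails cleanly:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	int fd = open("/dev/kmem", O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* file position 0 == kernel virtual address 0: bogus on most configs */
	if (read(fd, buf, sizeof(buf)) < 0)
		perror("read");	/* with this patch: EIO */
	close(fd);
	return 0;
}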
x86_64 happens to get away with this since the optimised implementation
uses "rep movs*": the user write (which is allowed to fault) and the
kernel read are the same instruction, so the kernel-side fault lands in
the user-side fixup handler and the ensuing chain of events returns an
error as one might expect, even if it's an inappropriate -EFAULT. On
other architectures, though, the read is not covered by the fixup entry
for the write, and we get a big scary "Unable to handle kernel paging
request..." dump.
The more typical use-case of mmap_kmem() has always (within living
memory at least) returned -EIO for addresses which don't satisfy
pfn_valid(), so let's make that consistent across {read,write}_kmem()
too.
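For comparison, the existing mmap_kmem() check being used as the precedent looks roughly like this (paraphrased from drivers/char/mem.c):

static int mmap_kmem(struct file *file, struct vm_area_struct *vma)
{
	unsigned long pfn;

	/* Turn a kernel-virtual address into a physical page frame */
	pfn = __pa((u64)vma->vm_pgoff << PAGE_SHIFT) >> PAGE_SHIFT;

	/* Reject frames with no struct page backing them */
	if (!pfn_valid(pfn))
		return -EIO;

	vma->vm_pgoff = pfn;
	return mmap_mem(file, vma);
}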
Reported-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 	char *kbuf; /* k-addr because vread() takes vmlist_lock rwlock */
 	int err = 0;
 
+	if (!pfn_valid(PFN_DOWN(p)))
+		return -EIO;
+
 	read = 0;
 	if (p < (unsigned long) high_memory) {
 		low_count = count;
 	char *kbuf; /* k-addr because vwrite() takes vmlist_lock rwlock */
 	int err = 0;
 
+	if (!pfn_valid(PFN_DOWN(p)))
+		return -EIO;
+
 	if (p < (unsigned long) high_memory) {
 		unsigned long to_write = min_t(unsigned long, count,
 					       (unsigned long)high_memory - p);
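For reference, the PFN_DOWN() used by the new checks is the plain address-to-page-frame shift from include/linux/pfn.h, so pfn_valid(PFN_DOWN(p)) simply asks whether the page containing p has a struct page behind it:

#define PFN_DOWN(x)	((x) >> PAGE_SHIFT)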