GitHub/exynos8895/android_kernel_samsung_universal8895.git
11 years ago: Revert "KVM: MMU: lazily drop large spte"
Marcelo Tosatti [Wed, 20 Feb 2013 21:52:02 +0000 (18:52 -0300)]
Revert "KVM: MMU: lazily drop large spte"

This reverts commit caf6900f2d8aaebe404c976753f6813ccd31d95e.

It is causing migration failures; see
https://bugzilla.kernel.org/show_bug.cgi?id=54061.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: x86: pvclock kvm: align allocation size to page size
Marcelo Tosatti [Tue, 19 Feb 2013 01:58:14 +0000 (22:58 -0300)]
x86: pvclock kvm: align allocation size to page size

To match what is mapped to userspace via vsyscalls.

Reported-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: Merge commit 'origin/next' into kvm-ppc-next
Alexander Graf [Fri, 15 Feb 2013 00:12:59 +0000 (01:12 +0100)]
Merge commit 'origin/next' into kvm-ppc-next

11 years ago: KVM: nVMX: Remove redundant get_vmcs12 from nested_vmx_exit_handled_msr
Jan Kiszka [Mon, 11 Feb 2013 11:19:28 +0000 (12:19 +0100)]
KVM: nVMX: Remove redundant get_vmcs12 from nested_vmx_exit_handled_msr

We already pass vmcs12 as an argument.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
11 years ago: x86 emulator: fix parity calculation for AAD instruction
Gleb Natapov [Wed, 13 Feb 2013 15:50:39 +0000 (17:50 +0200)]
x86 emulator: fix parity calculation for AAD instruction

Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
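
The fix above concerns the x86 parity flag (PF), which is defined over only the low byte of a result; AAD computes AL = AL + AH*imm8 and must set PF from that 8-bit result. A background sketch (the helper name is illustrative, not the emulator's):

```c
#include <stdbool.h>
#include <stdint.h>

/* x86 PF is set when the low 8 bits of a result contain an even
 * number of 1 bits; wider bits of the result never affect PF. */
static bool parity_flag(uint8_t result)
{
    unsigned int ones = 0;
    for (int i = 0; i < 8; i++)
        ones += (result >> i) & 1;   /* count set bits in the low byte */
    return (ones % 2) == 0;          /* even parity -> PF = 1 */
}
```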
11 years ago: KVM: PPC: BookE: Handle alignment interrupts
Alexander Graf [Thu, 31 Jan 2013 13:17:38 +0000 (14:17 +0100)]
KVM: PPC: BookE: Handle alignment interrupts

When the guest triggers an alignment interrupt, we don't handle it properly
today; instead, we hit a BUG_ON(). This really shouldn't happen.

Instead, we should just pass the interrupt back into the guest so it can deal
with it.

Reported-by: Gao Guanhua-B22826 <B22826@freescale.com>
Tested-by: Gao Guanhua-B22826 <B22826@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
11 years ago: booke: Added DBCR4 SPR number
Bharat Bhushan [Tue, 15 Jan 2013 22:24:43 +0000 (22:24 +0000)]
booke: Added DBCR4 SPR number

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
11 years ago: KVM: PPC: booke: Allow multiple exception types
Bharat Bhushan [Tue, 15 Jan 2013 22:24:39 +0000 (22:24 +0000)]
KVM: PPC: booke: Allow multiple exception types

Currently kvmppc_booke_handlers uses the same macro (KVM_HANDLER) and
all handlers are considered to be the same size. This will not be
the case if we want to use different macros for different handlers.

This patch improves kvmppc_booke_handlers so that it can
support different macros for different handlers.

Signed-off-by: Liu Yu <yu.liu@freescale.com>
[bharat.bhushan@freescale.com: Substantial changes]
Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
11 years ago: KVM: PPC: booke: use vcpu reference from thread_struct
Bharat Bhushan [Tue, 15 Jan 2013 22:20:42 +0000 (22:20 +0000)]
KVM: PPC: booke: use vcpu reference from thread_struct

Like other places, use thread_struct to get the vcpu reference.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
11 years ago: Merge commit 'origin/next' into kvm-ppc-next
Alexander Graf [Wed, 13 Feb 2013 11:56:14 +0000 (12:56 +0100)]
Merge commit 'origin/next' into kvm-ppc-next

11 years ago: KVM: Remove user_alloc from struct kvm_memory_slot
Takuya Yoshikawa [Thu, 7 Feb 2013 09:55:57 +0000 (18:55 +0900)]
KVM: Remove user_alloc from struct kvm_memory_slot

This field was needed to differentiate memory slots created by the new
API, KVM_SET_USER_MEMORY_REGION, from those created by the old
equivalent, KVM_SET_MEMORY_REGION, whose support was dropped long ago:

  commit b74a07beed0e64bfba413dcb70dd6749c57f43dc
  KVM: Remove kernel-allocated memory regions

Although we also have private memory slots to which KVM allocates
memory with vm_mmap() (!user_alloc slots, in other words), the slot id
should be enough to differentiate them.

Note: corresponding function parameters will be removed later.

Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
11 years ago: KVM: VMX: disable apicv by default
Yang Zhang [Sun, 10 Feb 2013 14:57:18 +0000 (22:57 +0800)]
KVM: VMX: disable apicv by default

Without Posted Interrupt, the current code is broken. Just disable
apicv by default until Posted Interrupt support is ready.

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
11 years ago: KVM: s390: Fix handling of iscs.
Cornelia Huck [Thu, 7 Feb 2013 12:20:52 +0000 (13:20 +0100)]
KVM: s390: Fix handling of iscs.

There are two ways to express an interruption subclass:
- As a bitmask, as used in cr6.
- As a number, as used in the I/O interruption word.

Unfortunately, we have treated the I/O interruption word as if it
contained the bitmask as well, which went unnoticed so far as
- (not-yet-released) qemu made the same mistake, and
- Linux guest kernels don't check the isc value in the I/O interruption
  word for subchannel interrupts.

Make sure that we treat the I/O interruption word correctly.

Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
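
A sketch of the two ISC representations described above (the helper names and the 8-bit mask layout are illustrative; ISC 0 maps to the leftmost bit of the mask):

```c
#include <stdint.h>

/* An interruption subclass as a number (0-7, as found in the I/O
 * interruption word) versus as a bitmask (as programmed into cr6). */
static uint8_t isc_to_bitmask(unsigned int isc)
{
    return 0x80 >> isc;            /* isc 0 -> leftmost bit */
}

static unsigned int bitmask_to_isc(uint8_t mask)
{
    unsigned int isc = 0;
    while (!(mask & 0x80)) {       /* walk to the leftmost set bit */
        mask <<= 1;
        isc++;
    }
    return isc;
}
```

The bug was treating the number form as if it were already the bitmask form; the two only coincide for ISC 0.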
11 years ago: KVM: MMU: cleanup __direct_map
Xiao Guangrong [Tue, 5 Feb 2013 07:28:02 +0000 (15:28 +0800)]
KVM: MMU: cleanup __direct_map

Use link_shadow_page to link the sp to the spte in __direct_map

Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: MMU: remove pt_access in mmu_set_spte
Xiao Guangrong [Tue, 5 Feb 2013 07:27:27 +0000 (15:27 +0800)]
KVM: MMU: remove pt_access in mmu_set_spte

It is only used in debug code, so drop it.

Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: MMU: cleanup mapping-level
Xiao Guangrong [Tue, 5 Feb 2013 07:26:54 +0000 (15:26 +0800)]
KVM: MMU: cleanup mapping-level

Use min() to cleanup mapping_level

Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: MMU: lazily drop large spte
Xiao Guangrong [Tue, 5 Feb 2013 07:11:09 +0000 (15:11 +0800)]
KVM: MMU: lazily drop large spte

Currently, KVM zaps the large spte if write protection is needed, so a
later read can fault on that spte. Instead, we can make the large spte
read-only rather than not present, so the page fault caused by read
access can be avoided.

The idea is from Avi:
| As I mentioned before, write-protecting a large spte is a good idea,
| since it moves some work from protect-time to fault-time, so it reduces
| jitter.  This removes the need for the return value.

Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: VMX: cleanup vmx_set_cr0().
Gleb Natapov [Mon, 4 Feb 2013 14:00:28 +0000 (16:00 +0200)]
KVM: VMX: cleanup vmx_set_cr0().

When calculating hw_cr0, the current code masks bits that should always
be on and re-adds them back immediately after. Clean up the code by
masking only those bits that should be dropped from hw_cr0. This allows
us to get rid of some defines.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
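
The two forms compute the same value; a minimal sketch with made-up bit constants (not the real VMX defines) shows why masking only the dropped bits suffices:

```c
#include <stdint.h>

/* Illustrative constants, not the kernel's. */
#define HW_CR0_ALWAYS_ON 0x20u   /* bits forced on in hw_cr0 */
#define HW_CR0_DROP      0x0Cu   /* bits that must be dropped */

/* Old style: mask off the always-on bits too, then OR them back. */
static uint32_t hw_cr0_old(uint32_t cr0)
{
    return (cr0 & ~(HW_CR0_ALWAYS_ON | HW_CR0_DROP)) | HW_CR0_ALWAYS_ON;
}

/* Cleaned-up style: mask only the bits that should be dropped. */
static uint32_t hw_cr0_new(uint32_t cr0)
{
    return (cr0 & ~HW_CR0_DROP) | HW_CR0_ALWAYS_ON;
}
```

Since the final OR re-asserts the always-on bits either way, clearing them first was dead work, and the combined mask define becomes unnecessary.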
11 years ago: KVM: VMX: add missing exit names to VMX_EXIT_REASONS array
Gleb Natapov [Sun, 3 Feb 2013 16:17:17 +0000 (18:17 +0200)]
KVM: VMX: add missing exit names to VMX_EXIT_REASONS array

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: VMX: disable SMEP feature when guest is in non-paging mode
Dongxiao Xu [Mon, 4 Feb 2013 03:50:43 +0000 (11:50 +0800)]
KVM: VMX: disable SMEP feature when guest is in non-paging mode

SMEP is disabled in hardware if the CPU is in non-paging mode.
However, KVM always uses paging mode to emulate guest non-paging
mode with TDP. To emulate this behavior, SMEP needs to be manually
disabled when the guest switches to non-paging mode.

We hit an issue where an SMP Linux guest with a recent kernel (with
SMEP support, for example 3.5.3) would crash with a triple fault if
unrestricted_guest=0. This is because KVM uses an identity-mapped page
table to emulate non-paging mode, where the page table is set with the
USER flag. If SMEP is still enabled in this case, the guest hits an
unhandleable page fault and then crashes.

Reviewed-by: Gleb Natapov <gleb@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: Remove duplicate text in api.txt
Geoff Levand [Thu, 31 Jan 2013 20:06:08 +0000 (12:06 -0800)]
KVM: Remove duplicate text in api.txt

Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: Revert "KVM: MMU: split kvm_mmu_free_page"
Gleb Natapov [Wed, 30 Jan 2013 14:45:05 +0000 (16:45 +0200)]
Revert "KVM: MMU: split kvm_mmu_free_page"

This reverts commit bd4c86eaa6ff10abc4e00d0f45d2a28b10b09df4.

There is no user of kvm_mmu_isolate_page() any more.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: MMU: drop superfluous is_present_gpte() check.
Gleb Natapov [Wed, 30 Jan 2013 14:45:04 +0000 (16:45 +0200)]
KVM: MMU: drop superfluous is_present_gpte() check.

The guest page walker puts only present ptes into the ptes[] array.
No need to check them again.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: MMU: drop superfluous min() call.
Gleb Natapov [Wed, 30 Jan 2013 14:45:03 +0000 (16:45 +0200)]
KVM: MMU: drop superfluous min() call.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: MMU: set base_role.nxe during mmu initialization.
Gleb Natapov [Wed, 30 Jan 2013 14:45:02 +0000 (16:45 +0200)]
KVM: MMU: set base_role.nxe during mmu initialization.

Move base_role.nxe initialisation to where all other roles are initialized.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: MMU: drop unneeded checks.
Gleb Natapov [Wed, 30 Jan 2013 14:45:01 +0000 (16:45 +0200)]
KVM: MMU: drop unneeded checks.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: MMU: make spte_is_locklessly_modifiable() more clear
Gleb Natapov [Wed, 30 Jan 2013 14:45:00 +0000 (16:45 +0200)]
KVM: MMU: make spte_is_locklessly_modifiable() more clear

spte_is_locklessly_modifiable() checks that both SPTE_HOST_WRITEABLE and
SPTE_MMU_WRITEABLE are present on the spte. Make it more explicit.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
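
The "more explicit" form spells out that both bits must be set, rather than letting a combined-mask test read as an accident. A sketch (the bit positions here are illustrative; the real flags live in arch/x86/kvm/mmu.c):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative bit positions, not necessarily the kernel's. */
#define SPTE_HOST_WRITEABLE (1ull << 9)
#define SPTE_MMU_WRITEABLE  (1ull << 10)

/* Lockless write-enabling is safe only when BOTH the host and the MMU
 * marked the spte writeable, so require both bits explicitly. */
static bool spte_is_locklessly_modifiable(uint64_t spte)
{
    return (spte & (SPTE_HOST_WRITEABLE | SPTE_MMU_WRITEABLE)) ==
           (SPTE_HOST_WRITEABLE | SPTE_MMU_WRITEABLE);
}
```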
11 years ago: KVM: set_memory_region: Disallow changing read-only attribute later
Takuya Yoshikawa [Wed, 30 Jan 2013 10:40:41 +0000 (19:40 +0900)]
KVM: set_memory_region: Disallow changing read-only attribute later

As Xiao pointed out, there are a few problems with it:
 - kvm_arch_commit_memory_region() write protects the memory slot only
   for GET_DIRTY_LOG when modifying the flags.
 - FNAME(sync_page) uses the old spte value to set a new one without
   checking KVM_MEM_READONLY flag.

Since we flush all shadow pages when creating a new slot, the simplest
fix is to disallow such problematic flag changes: this is safe because
no one is doing such things.

Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Cc: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: set_memory_region: Identify the requested change explicitly
Takuya Yoshikawa [Tue, 29 Jan 2013 02:00:07 +0000 (11:00 +0900)]
KVM: set_memory_region: Identify the requested change explicitly

KVM_SET_USER_MEMORY_REGION forces __kvm_set_memory_region() to identify
what kind of change is being requested by checking the arguments.  The
current code does this checking at various points in code and each
condition being used there is not easy to understand at first glance.

This patch consolidates these checks and introduces an enum to name the
possible changes to clean up the code.

Although this does not introduce any functional changes, there is one
change which optimizes the code a bit: if we have nothing to change, the
new code returns 0 immediately.

Note that the return value for this case cannot be changed since QEMU
relies on it: we noticed this when we changed it to -EINVAL and got a
section mismatch error at the final stage of live migration.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
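
The consolidation described above amounts to classifying a slot update once, up front, into a named change. A sketch in that spirit (the enum members, parameter list, and classification order are illustrative, not the kernel's):

```c
#include <stdint.h>

/* One name per kind of memslot change, instead of scattered
 * argument checks. */
enum mr_change { MR_NONE, MR_CREATE, MR_DELETE, MR_MOVE, MR_FLAGS_ONLY };

static enum mr_change classify_change(uint64_t old_npages, uint64_t new_npages,
                                      uint64_t old_gfn, uint64_t new_gfn,
                                      uint32_t old_flags, uint32_t new_flags)
{
    if (!old_npages && new_npages)
        return MR_CREATE;          /* slot did not exist before */
    if (old_npages && !new_npages)
        return MR_DELETE;          /* zero size deletes the slot */
    if (old_gfn != new_gfn)
        return MR_MOVE;            /* base address changed */
    if (old_flags != new_flags)
        return MR_FLAGS_ONLY;      /* e.g. dirty-log or read-only flag */
    return MR_NONE;                /* nothing to change: return 0 early */
}
```

The MR_NONE case is the optimization the message mentions: nothing to change means returning 0 immediately (and that 0, not -EINVAL, is what QEMU relies on).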
11 years ago: s390/kvm: Fix instruction decoding
Christian Borntraeger [Fri, 25 Jan 2013 14:34:17 +0000 (15:34 +0100)]
s390/kvm: Fix instruction decoding

Instructions with long displacement have a signed displacement.
Currently the sign bit is interpreted as 2^20. Fix it by doing the
sign extension from 20 bits to 32 bits and then using the result as a
signed variable in the addition (see kvm_s390_get_base_disp_rsy).

Furthermore, there are lots of "int"s in that code. This is
problematic, because shifting a signed integer is
undefined/implementation-defined if the value happens to be negative.
Fortunately the promotion rules make the right-hand side unsigned
anyway, so there is no real problem right now.
Let's convert them to unsigned where appropriate anyway, to avoid
problems if the code is changed or copy/pasted later on.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
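
The 20-to-32-bit sign extension can be sketched as follows (the helper name is illustrative; the kernel does this inside kvm_s390_get_base_disp_rsy):

```c
#include <stdint.h>

/* Sign-extend a 20-bit long displacement to 32 bits: if bit 19 (the
 * sign bit) is set, fill bits 20-31 with ones instead of treating the
 * bit as +2^20. */
static int32_t sign_extend_disp20(uint32_t disp20)
{
    if (disp20 & 0x80000)
        disp20 |= 0xFFF00000;
    return (int32_t)disp20;
}
```

With the bug, 0x80000 read as +524288; correctly sign-extended it is -524288, which then participates as a signed value in the base+displacement addition.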
11 years ago: s390/virtio-ccw: Fix setup_vq error handling.
Cornelia Huck [Fri, 25 Jan 2013 14:34:16 +0000 (15:34 +0100)]
s390/virtio-ccw: Fix setup_vq error handling.

virtio_ccw_setup_vq() failed to unwind correctly on errors. In
particular, it failed to delete the virtqueue on errors, leading to
list corruption when virtio_ccw_del_vqs() iterated over a virtqueue
that had not been added to the vcdev's list.

Fix this by redoing the error unwinding in virtio_ccw_setup_vq(),
using a single path for all errors.

Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
11 years ago: s390/kvm: Fix store status for ACRS/FPRS
Christian Borntraeger [Fri, 25 Jan 2013 14:34:15 +0000 (15:34 +0100)]
s390/kvm: Fix store status for ACRS/FPRS

On store status we need to copy the current state of registers
into a save area. Currently we might save stale versions: the SIE
state descriptor doesn't have fields for the guest ACRS/FPRS,
so those registers are simply kept in the host registers. The host
program must copy them away if needed; we do that in vcpu_put/load.

If we now do a store status in KVM code between vcpu_put/load, the
saved values are not up to date. Let's collect the ACRS/FPRS before
saving them.

This also fixes some strange problems with hotplug and virtio-ccw,
since the low level machine check handler (on hotplug a machine check
will happen) will revalidate all registers with the content of the
save area.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
CC: stable@vger.kernel.org
Signed-off-by: Gleb Natapov <gleb@redhat.com>
11 years ago: kvm: Handle yield_to failure return code for potential undercommit case
Raghavendra K T [Tue, 22 Jan 2013 07:39:24 +0000 (13:09 +0530)]
kvm: Handle yield_to failure return code for potential undercommit case

yield_to returns -ESRCH when the run queue length of both the source
and the target is one. When we see three successive failures of
yield_to, we assume we are in a potential undercommit case and abort
from the PLE handler.
The assumption is backed by the low probability of a wrong decision
even for worst-case scenarios such as an average runqueue length
between 1 and 2.

More detail on the rationale behind using three tries:
if p is the probability of finding a runqueue of length one on a
particular cpu, and we do n tries, then the probability of exiting the
PLE handler is:

 p^(n+1) [ because we would have come across one source with rq length
1 and n target cpu rqs with length 1 ]

so
num tries:         probability of aborting ple handler (1.5x overcommit)
 1                 1/4
 2                 1/8
 3                 1/16

We can increase this probability with more tries, but the problem is
the overhead.
Also, if we have tried three times, that means we have iterated over
3 good eligible vcpus along with many non-eligible candidates. In the
worst case, if we iterate over all the vcpus, we lose 1x performance
and overcommit performance gets hit.

Note that we do not update the last boosted vcpu in failure cases.
Thanks to Avi for raising the question of aborting after the first
failed yield_to.

Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
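
The table in the message is just p^(n+1) evaluated at p = 1/2 (the message's 1.5x overcommit case). A small check of that arithmetic:

```c
/* Probability of aborting the PLE handler after n failed yield_to
 * tries, per the commit message: one source rq of length 1 plus n
 * target rqs of length 1, each hit with probability p. */
static double abort_probability(double p, unsigned int n)
{
    double prob = p;              /* the source runqueue */
    for (unsigned int i = 0; i < n; i++)
        prob *= p;                /* each of the n target runqueues */
    return prob;
}
```

For p = 1/2 this reproduces the table: 1 try gives 1/4, 2 give 1/8, 3 give 1/16.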
11 years ago: sched: Bail out of yield_to when source and target runqueue has one task
Peter Zijlstra [Tue, 22 Jan 2013 07:39:13 +0000 (13:09 +0530)]
sched: Bail out of yield_to when source and target runqueue has one task

In undercommitted scenarios, especially in large guests, yield_to
overhead is significantly high. When the run queue length of both the
source and the target is one, take the opportunity to bail out and
return -ESRCH. This return condition can be further exploited to
quickly come out of the PLE handler.

(History: Raghavendra initially worked on breaking out of the kvm PLE
 handler upon seeing source runqueue length = 1, but that required
 exporting the rq length. Peter came up with the elegant idea of
 returning -ESRCH in the scheduler core.)

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
[Raghavendra: added the check on the target vcpu's rq length (thanks Avi)]
Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Acked-by: Andrew Jones <drjones@redhat.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
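
The bail-out condition itself is tiny; a minimal sketch (names and the scalar parameters are illustrative, not the scheduler's):

```c
#include <errno.h>

/* When both the source and target runqueues hold a single task,
 * yielding cannot improve anything, so yield_to can return -ESRCH and
 * let callers such as the KVM PLE handler give up quickly. */
static int yield_to_should_bail(unsigned int src_nr_running,
                                unsigned int dst_nr_running)
{
    if (src_nr_running == 1 && dst_nr_running == 1)
        return -ESRCH;
    return 0;   /* proceed with the normal yield_to path */
}
```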
11 years ago: x86, apicv: add virtual interrupt delivery support
Yang Zhang [Fri, 25 Jan 2013 02:18:51 +0000 (10:18 +0800)]
x86, apicv: add virtual interrupt delivery support

Virtual interrupt delivery frees KVM from injecting vAPIC interrupts
manually; this is fully taken care of by the hardware. It requires some
special awareness in the existing interrupt injection path:

- for a pending interrupt, instead of direct injection, we may need to
  update architecture-specific indicators before resuming the guest.

- a pending interrupt that is masked by the ISR should also be
  considered in the above update action, since the hardware will decide
  when to inject it at the right time. The current has_interrupt and
  get_interrupt only return a valid vector from the injection point of
  view.

Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
11 years ago: x86, apicv: add virtual x2apic support
Yang Zhang [Fri, 25 Jan 2013 02:18:50 +0000 (10:18 +0800)]
x86, apicv: add virtual x2apic support

Basically, to benefit from apicv we need to enable virtualized x2apic
mode. Currently, we only enable it when the guest is really using x2apic.

Also, clear the MSR bitmap for the corresponding x2apic MSRs when the
guest has enabled x2apic:
0x800 - 0x8ff: no read intercept for apicv register virtualization,
               except APIC ID and TMCCT, which need software's
               assistance to get the right value.

Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
11 years ago: x86, apicv: add APICv register virtualization support
Yang Zhang [Fri, 25 Jan 2013 02:18:49 +0000 (10:18 +0800)]
x86, apicv: add APICv register virtualization support

- APIC read doesn't cause VM-Exit
- APIC write becomes trap-like

Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
11 years ago: kvm: Obey read-only mappings in iommu
Alex Williamson [Thu, 24 Jan 2013 22:04:09 +0000 (15:04 -0700)]
kvm: Obey read-only mappings in iommu

We've been ignoring read-only mappings and programming everything
into the iommu as read-write.  Fix this to only include the write
access flag when read-only is not set.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
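
The fix boils down to deriving the IOMMU protection flags from the slot's read-only flag instead of hardcoding read-write. A sketch (the constants are illustrative stand-ins for the kernel's IOMMU_READ/IOMMU_WRITE and KVM_MEM_READONLY):

```c
/* Illustrative flag values, not the kernel's. */
#define SLOT_READONLY 0x2u
#define PROT_READ     0x1u
#define PROT_WRITE    0x2u

/* Include the write permission only when the slot is not read-only;
 * previously everything was programmed as read-write. */
static unsigned int iommu_prot_for_slot(unsigned int slot_flags)
{
    unsigned int prot = PROT_READ;
    if (!(slot_flags & SLOT_READONLY))
        prot |= PROT_WRITE;
    return prot;
}
```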
11 years ago: kvm: Force IOMMU remapping on memory slot read-only flag changes
Alex Williamson [Thu, 24 Jan 2013 22:04:03 +0000 (15:04 -0700)]
kvm: Force IOMMU remapping on memory slot read-only flag changes

Memory slot flags can be altered without changing other parameters of
the slot.  The read-only attribute is the only one the IOMMU cares
about, so generate an un-map, re-map when this occurs.  This also
avoids unnecessarily re-mapping the slot when no IOMMU-visible changes
are made.

Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
11 years ago: KVM: x86 emulator: fix test_cc() build failure on i386
Avi Kivity [Sat, 26 Jan 2013 21:56:04 +0000 (23:56 +0200)]
KVM: x86 emulator: fix test_cc() build failure on i386

'pushq' doesn't exist on i386.  Replace with 'push', which should work
since the operand is a register.

Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
11 years ago: KVM: PPC: E500: Remove kvmppc_e500_tlbil_all usage from guest TLB code
Alexander Graf [Fri, 18 Jan 2013 14:22:08 +0000 (15:22 +0100)]
KVM: PPC: E500: Remove kvmppc_e500_tlbil_all usage from guest TLB code

The guest TLB handling code should not have any insight into how the host
TLB shadow code works.

kvmppc_e500_tlbil_all() is used to distinguish how e500v2 and e500mc
(E.HV) flush shadow entries. This function really is private between
the e500.c/e500mc.c files and e500_mmu_host.c.

Instead of this one, use the public kvmppc_core_flush_tlb() function to flush
all shadow TLB entries. As a nice side effect, with this we also end up
flushing TLB1 entries which we forgot to do before.

Signed-off-by: Alexander Graf <agraf@suse.de>
11 years ago: KVM: PPC: E500: Make clear_tlb_refs and clear_tlb1_bitmap static
Alexander Graf [Fri, 18 Jan 2013 14:13:19 +0000 (15:13 +0100)]
KVM: PPC: E500: Make clear_tlb_refs and clear_tlb1_bitmap static

Host shadow TLB flushing is logic that the guest TLB code should have
no insight about. Declare the internal clear_tlb_refs and clear_tlb1_bitmap
functions static to the host TLB handling file.

Instead of these, we can use the already exported kvmppc_core_flush_tlb().
This gives us a common API across the board to say "please flush any
pending host shadow translation".

Signed-off-by: Alexander Graf <agraf@suse.de>
11 years ago: KVM: PPC: e500: Implement TLB1-in-TLB0 mapping
Alexander Graf [Thu, 17 Jan 2013 16:54:36 +0000 (17:54 +0100)]
KVM: PPC: e500: Implement TLB1-in-TLB0 mapping

When a host mapping fault happens in a guest TLB1 entry today, we
map the translated guest entry into the host's TLB1.

This isn't particularly clever when the guest is mapped by normal 4k
pages, since these would be a lot better to put into TLB0 instead.

This patch adds the required logic to map 4k TLB1 shadow maps into
the host's TLB0.

Signed-off-by: Alexander Graf <agraf@suse.de>
11 years ago: KVM: PPC: E500: Split host and guest MMU parts
Alexander Graf [Fri, 11 Jan 2013 14:22:45 +0000 (15:22 +0100)]
KVM: PPC: E500: Split host and guest MMU parts

This patch splits the file e500_tlb.c into e500_mmu.c (guest TLB handling)
and e500_mmu_host.c (host TLB handling).

The main benefit of this split is readability and maintainability. It's
just a lot harder to write dirty code :).

Signed-off-by: Alexander Graf <agraf@suse.de>
11 years ago: KVM: PPC: e500: Call kvmppc_mmu_map for initial mapping
Alexander Graf [Thu, 17 Jan 2013 18:23:28 +0000 (19:23 +0100)]
KVM: PPC: e500: Call kvmppc_mmu_map for initial mapping

When emulating tlbwe, we want to automatically map the entry that just got
written in our shadow TLB map, because chances are quite high that it's
going to be used very soon.

Today this happens explicitly, duplicating all the logic that is in
kvmppc_mmu_map() already. Just call that one instead.

Signed-off-by: Alexander Graf <agraf@suse.de>
11 years ago: KVM: PPC: E500: Propagate errors when shadow mapping
Alexander Graf [Fri, 18 Jan 2013 01:31:01 +0000 (02:31 +0100)]
KVM: PPC: E500: Propagate errors when shadow mapping

When shadow mapping a page, mapping this page can fail. In that case we
don't have a shadow map.

Take this case into account, otherwise we might end up writing bogus TLB
entries into the host TLB.

While at it, also move the write_stlbe() calls into the respective TLBn
handlers.

Signed-off-by: Alexander Graf <agraf@suse.de>
11 years ago: KVM: PPC: E500: Explicitly mark shadow maps invalid
Alexander Graf [Fri, 18 Jan 2013 01:27:14 +0000 (02:27 +0100)]
KVM: PPC: E500: Explicitly mark shadow maps invalid

When we invalidate shadow TLB maps on the host, we don't mark them
as not valid. But we should.

Fix this by removing the E500_TLB_VALID from their flags when
invalidating.

Signed-off-by: Alexander Graf <agraf@suse.de>
11 years ago: KVM: PPC: E500: Move write_stlbe higher
Alexander Graf [Fri, 18 Jan 2013 01:25:23 +0000 (02:25 +0100)]
KVM: PPC: E500: Move write_stlbe higher

Later patches want to call the function and it doesn't have
dependencies on anything below write_host_tlbe.

Move it higher up in the file.

Signed-off-by: Alexander Graf <agraf@suse.de>
11 years ago: KVM: VMX: set vmx->emulation_required only when needed.
Gleb Natapov [Mon, 21 Jan 2013 13:36:49 +0000 (15:36 +0200)]
KVM: VMX: set vmx->emulation_required only when needed.

If emulate_invalid_guest_state=false, vmx->emulation_required is never
actually used, but it always ends up set to true, since
handle_invalid_guest_state(), the only place it is reset back to
false, is never called. Besides being not very clean, this makes the
vmexit and vmentry paths check emulate_invalid_guest_state needlessly.

The patch fixes that by keeping emulation_required coherent with the
emulate_invalid_guest_state setting.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: x86: fix use of uninitialized memory as segment descriptor in emulator.
Gleb Natapov [Mon, 21 Jan 2013 13:36:48 +0000 (15:36 +0200)]
KVM: x86: fix use of uninitialized memory as segment descriptor in emulator.

If VMX reports a segment as unusable, zero the descriptor passed by
the emulator before returning. Such a descriptor will be considered
not present by the emulator.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: VMX: rename fix_pmode_dataseg to fix_pmode_seg.
Gleb Natapov [Mon, 21 Jan 2013 13:36:47 +0000 (15:36 +0200)]
KVM: VMX: rename fix_pmode_dataseg to fix_pmode_seg.

The function deals with the code segment too.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: VMX: don't clobber segment AR of unusable segments.
Gleb Natapov [Mon, 21 Jan 2013 13:36:46 +0000 (15:36 +0200)]
KVM: VMX: don't clobber segment AR of unusable segments.

Usability is returned in the unusable field, so there is no need to
clobber the entire AR. Callers already have to know how to deal with
unusable segments, since if emulate_invalid_guest_state=true AR is not
zeroed.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: VMX: skip vmx->rmode.vm86_active check on cr0 write if unrestricted guest is...
Gleb Natapov [Mon, 21 Jan 2013 13:36:45 +0000 (15:36 +0200)]
KVM: VMX: skip vmx->rmode.vm86_active check on cr0 write if unrestricted guest is enabled

vmx->rmode.vm86_active is never true if unrestricted guest is enabled.
Make it more explicit that neither enter_pmode() nor enter_rmode() is
called in this case.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: VMX: remove hack that disables emulation on vcpu reset/init
Gleb Natapov [Mon, 21 Jan 2013 13:36:44 +0000 (15:36 +0200)]
KVM: VMX: remove hack that disables emulation on vcpu reset/init

There is no reason for it. If the state is suitable for vmentry, that
will be detected during guest entry and no emulation will happen.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: VMX: if unrestricted guest is enabled vcpu state is always valid.
Gleb Natapov [Mon, 21 Jan 2013 13:36:43 +0000 (15:36 +0200)]
KVM: VMX: if unrestricted guest is enabled vcpu state is always valid.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: VMX: reset CPL only on CS register write.
Gleb Natapov [Mon, 21 Jan 2013 13:36:42 +0000 (15:36 +0200)]
KVM: VMX: reset CPL only on CS register write.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: VMX: remove special CPL cache access during transition to real mode.
Gleb Natapov [Mon, 21 Jan 2013 13:36:41 +0000 (15:36 +0200)]
KVM: VMX: remove special CPL cache access during transition to real mode.

Since vmx_get_cpl() always returns 0 when the VCPU is in real mode, it
is no longer needed. Also reset the CPL cache to zero during the
transition to protected mode, since the transition may happen while
CS.selector & 3 != 0, but in reality CPL is 0.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: x86 emulator: convert a few freestanding emulations to fastop
Avi Kivity [Sat, 19 Jan 2013 17:51:57 +0000 (19:51 +0200)]
KVM: x86 emulator: convert a few freestanding emulations to fastop

Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: x86 emulator: rearrange fastop definitions
Avi Kivity [Sat, 19 Jan 2013 17:51:56 +0000 (19:51 +0200)]
KVM: x86 emulator: rearrange fastop definitions

Make fastop opcodes usable in other emulations.

Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years ago: KVM: x86 emulator: convert 2-operand IMUL to fastop
Avi Kivity [Sat, 19 Jan 2013 17:51:55 +0000 (19:51 +0200)]
KVM: x86 emulator: convert 2-operand IMUL to fastop

Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years agoKVM: x86 emulator: convert BT/BTS/BTR/BTC/BSF/BSR to fastop
Avi Kivity [Sat, 19 Jan 2013 17:51:54 +0000 (19:51 +0200)]
KVM: x86 emulator: convert BT/BTS/BTR/BTC/BSF/BSR to fastop

Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years agoKVM: x86 emulator: convert INC/DEC to fastop
Avi Kivity [Sat, 19 Jan 2013 17:51:53 +0000 (19:51 +0200)]
KVM: x86 emulator: convert INC/DEC to fastop

Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years agoKVM: x86 emulator: convert SETCC to fastop
Avi Kivity [Sat, 19 Jan 2013 17:51:52 +0000 (19:51 +0200)]
KVM: x86 emulator: convert SETCC to fastop

This is a bit of a special case since we don't have the usual
byte/word/long/quad switch; instead we switch on the condition code embedded
in the instruction.

Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
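
The condition-code nibble that SETcc switches on can be decoded in plain C.
The following is an illustrative user-space sketch (names and layout are
ours, not the emulator's): the low bit of the condition code inverts the
sense of the test, so only eight distinct predicates are needed.

```c
#include <stdbool.h>
#include <stdint.h>

/* x86 RFLAGS bits (architectural positions) */
#define X86_CF (1u << 0)
#define X86_PF (1u << 2)
#define X86_ZF (1u << 6)
#define X86_SF (1u << 7)
#define X86_OF (1u << 11)

/* Evaluate the SETcc/Jcc condition 'cc' (0..15) against 'flags'.
 * Even codes test a predicate; odd codes test its negation. */
static bool test_cc(unsigned cc, uint32_t flags)
{
    bool r;

    switch (cc >> 1) {
    case 0: r = flags & X86_OF; break;                    /* O / NO   */
    case 1: r = flags & X86_CF; break;                    /* B / NB   */
    case 2: r = flags & X86_ZF; break;                    /* Z / NZ   */
    case 3: r = flags & (X86_CF | X86_ZF); break;         /* BE / NBE */
    case 4: r = flags & X86_SF; break;                    /* S / NS   */
    case 5: r = flags & X86_PF; break;                    /* P / NP   */
    case 6: r = !!(flags & X86_SF) != !!(flags & X86_OF); /* L / NL   */
            break;
    default: r = (flags & X86_ZF) ||                      /* LE / NLE */
                 (!!(flags & X86_SF) != !!(flags & X86_OF));
             break;
    }
    return (cc & 1) ? !r : r;
}
```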
11 years agoKVM: x86 emulator: convert shift/rotate instructions to fastop
Avi Kivity [Sat, 19 Jan 2013 17:51:51 +0000 (19:51 +0200)]
KVM: x86 emulator: convert shift/rotate instructions to fastop

SHL, SHR, ROL, ROR, RCL, RCR, SAR, SAL

Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years agoKVM: x86 emulator: Convert SHLD, SHRD to fastop
Avi Kivity [Sat, 19 Jan 2013 17:51:50 +0000 (19:51 +0200)]
KVM: x86 emulator: Convert SHLD, SHRD to fastop

Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years agoKVM: x86: improve reexecute_instruction
Xiao Guangrong [Sun, 13 Jan 2013 15:49:07 +0000 (23:49 +0800)]
KVM: x86: improve reexecute_instruction

The current reexecute_instruction cannot reliably detect failed instruction
emulation: it allows the guest to retry all instructions except those that
access an error pfn.

For example, some cases involve nested write protection: the page we want to
write is used as a PDE but chains to itself. In such a case, we should stop
the emulation and report it to userspace.

Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years agoKVM: x86: let reexecute_instruction work for tdp
Xiao Guangrong [Sun, 13 Jan 2013 15:46:52 +0000 (23:46 +0800)]
KVM: x86: let reexecute_instruction work for tdp

Currently, reexecute_instruction refuses to retry any instruction if tdp is
enabled. But if nested npt is used, the emulation may be caused by a shadow
page, which can be fixed by dropping that shadow page. The only condition
under which tdp cannot retry the instruction is an access fault on an error
pfn.

Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years agoKVM: x86: clean up reexecute_instruction
Xiao Guangrong [Sun, 13 Jan 2013 15:44:12 +0000 (23:44 +0800)]
KVM: x86: clean up reexecute_instruction

A small cleanup of reexecute_instruction; also use gpa_to_gfn in
retry_instruction.

Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years agoKVM: set_memory_region: Remove unnecessary variable memslot
Takuya Yoshikawa [Fri, 11 Jan 2013 09:27:43 +0000 (18:27 +0900)]
KVM: set_memory_region: Remove unnecessary variable memslot

One such variable, slot, is enough for holding a pointer temporarily.
We also remove another local variable named slot, limited to an inner
block, since having the same name twice in this function is confusing.

Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
11 years agoKVM: set_memory_region: Don't check for overlaps unless we create or move a slot
Takuya Yoshikawa [Fri, 11 Jan 2013 09:26:55 +0000 (18:26 +0900)]
KVM: set_memory_region: Don't check for overlaps unless we create or move a slot

The check is not needed when deleting an existing slot or just modifying
its flags.

Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
11 years agoKVM: set_memory_region: Don't jump to out_free unnecessarily
Takuya Yoshikawa [Fri, 11 Jan 2013 09:26:10 +0000 (18:26 +0900)]
KVM: set_memory_region: Don't jump to out_free unnecessarily

This makes the separation between the sanity checks and the rest of the
code a bit clearer.

Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
11 years agoKVM: s390: kvm/sigp.c: fix memory leakage
Cong Ding [Tue, 15 Jan 2013 10:17:29 +0000 (11:17 +0100)]
KVM: s390: kvm/sigp.c: fix memory leakage

The variable inti should be freed in the CPUSTAT_STOPPED branch.

Signed-off-by: Cong Ding <dinggnu@gmail.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
11 years agoKVM: MMU: Conditionally reschedule when kvm_mmu_slot_remove_write_access() takes...
Takuya Yoshikawa [Tue, 8 Jan 2013 10:47:33 +0000 (19:47 +0900)]
KVM: MMU: Conditionally reschedule when kvm_mmu_slot_remove_write_access() takes a long time

If the userspace starts dirty logging for a large slot, say 64GB of
memory, kvm_mmu_slot_remove_write_access() needs to hold mmu_lock for
a long time such as tens of milliseconds.  This patch controls the lock
hold time by asking the scheduler if we need to reschedule for others.

One penalty for this is that we need to flush TLBs before releasing
mmu_lock.  But since holding mmu_lock for a long time affects not only the
guest (that is, its vCPU threads) but also the host as a whole, the cost is
worth paying.

In practice, the cost will not be so high because we can protect a fair
amount of memory before being rescheduled: on my test environment,
cond_resched_lock() was called only once for protecting 12GB of memory
even without THP.  We can also revisit Avi's "unlocked TLB flush" work
later for completely suppressing extra TLB flushes if needed.

Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
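
The lock-break pattern described above can be sketched in user space with
pthreads. This is a hypothetical analogue only: the kernel uses
cond_resched_lock() on mmu_lock and yields to the scheduler, whereas the
sketch simply drops and re-takes a mutex every batch; all names here are
ours.

```c
#include <pthread.h>
#include <stddef.h>

/* Illustrative user-space analogue of the lock-break pattern:
 * write-protect (here: clear a "writable" bit) a large array of
 * fake sptes, releasing the lock every BATCH entries so that
 * other threads waiting on it are not starved for tens of ms. */
#define SPTE_WRITABLE 0x2UL
#define BATCH 1024

size_t protect_range(pthread_mutex_t *lock, unsigned long *sptes, size_t n)
{
    size_t cleared = 0;

    pthread_mutex_lock(lock);
    for (size_t i = 0; i < n; i++) {
        if (sptes[i] & SPTE_WRITABLE) {
            sptes[i] &= ~SPTE_WRITABLE;
            cleared++;
        }
        /* Lock-break point: in the kernel, TLBs must be flushed
         * here before cond_resched_lock() may drop mmu_lock. */
        if ((i + 1) % BATCH == 0) {
            pthread_mutex_unlock(lock);
            pthread_mutex_lock(lock);
        }
    }
    pthread_mutex_unlock(lock);
    return cleared;
}
```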
11 years agoKVM: Make kvm_mmu_slot_remove_write_access() take mmu_lock by itself
Takuya Yoshikawa [Tue, 8 Jan 2013 10:46:48 +0000 (19:46 +0900)]
KVM: Make kvm_mmu_slot_remove_write_access() take mmu_lock by itself

Better to place mmu_lock handling and TLB flushing code together since
this is a self-contained function.

Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
11 years agoKVM: Make kvm_mmu_change_mmu_pages() take mmu_lock by itself
Takuya Yoshikawa [Tue, 8 Jan 2013 10:46:07 +0000 (19:46 +0900)]
KVM: Make kvm_mmu_change_mmu_pages() take mmu_lock by itself

No reason to make callers take mmu_lock since we do not need to protect
kvm_mmu_change_mmu_pages() and kvm_mmu_slot_remove_write_access()
together by mmu_lock in kvm_arch_commit_memory_region(): the former
calls kvm_mmu_commit_zap_page() and flushes TLBs by itself.

Note: we do not need to protect kvm->arch.n_requested_mmu_pages by
mmu_lock as can be seen from the fact that it is read locklessly.

Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
11 years agoKVM: Remove unused slot_bitmap from kvm_mmu_page
Takuya Yoshikawa [Tue, 8 Jan 2013 10:45:28 +0000 (19:45 +0900)]
KVM: Remove unused slot_bitmap from kvm_mmu_page

Not needed any more.

Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
11 years agoKVM: MMU: Make kvm_mmu_slot_remove_write_access() rmap based
Takuya Yoshikawa [Tue, 8 Jan 2013 10:44:48 +0000 (19:44 +0900)]
KVM: MMU: Make kvm_mmu_slot_remove_write_access() rmap based

This makes it possible to release mmu_lock and reschedule conditionally in a
later patch.  Although this may increase the time needed to protect the
whole slot when we start dirty logging, the kernel should not allow
userspace to trigger something that holds a spinlock for as long as tens of
milliseconds; in fact there is no upper limit, since the hold time is
roughly proportional to the number of guest pages.

Another point to note is that this patch removes the only user of
slot_bitmap, which would otherwise cause problems when we increase the
number of slots further.

Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
11 years agoKVM: MMU: Remove unused parameter level from __rmap_write_protect()
Takuya Yoshikawa [Tue, 8 Jan 2013 10:44:09 +0000 (19:44 +0900)]
KVM: MMU: Remove unused parameter level from __rmap_write_protect()

No longer need to care about the mapping level in this function.

Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
11 years agoKVM: Write protect the updated slot only when dirty logging is enabled
Takuya Yoshikawa [Tue, 8 Jan 2013 10:43:28 +0000 (19:43 +0900)]
KVM: Write protect the updated slot only when dirty logging is enabled

Calling kvm_mmu_slot_remove_write_access() for a deleted slot does
nothing but search for non-existent mmu pages which have mappings to
that deleted memory; this is safe but a waste of time.

Since we want to make the function rmap based in a later patch, in a manner
which makes it unsafe to call for a deleted slot, we make the caller check
that the slot is non-NULL and being dirty logged.

Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
11 years agoMerge branch 'kvm-ppc-next' of https://github.com/agraf/linux-2.6 into queue
Gleb Natapov [Mon, 14 Jan 2013 09:01:26 +0000 (11:01 +0200)]
Merge branch 'kvm-ppc-next' of https://github.com/agraf/linux-2.6 into queue

11 years agoKVM: trace: Fix exit decoding.
Cornelia Huck [Tue, 8 Jan 2013 12:00:01 +0000 (13:00 +0100)]
KVM: trace: Fix exit decoding.

trace_kvm_userspace_exit has been missing the KVM_EXIT_WATCHDOG exit.

CC: Bharat Bhushan <r65777@freescale.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years agoKVM: MMU: fix infinite fault access retry
Xiao Guangrong [Tue, 8 Jan 2013 06:36:51 +0000 (14:36 +0800)]
KVM: MMU: fix infinite fault access retry

We have two issues in the current code:
- if the target gfn is used as its own page table, the guest will refault
  and kvm will then map it with a small page size. We need two #PFs to fix
  its shadow page table

- sometimes, say when an exception is triggered during a vm-exit caused by a
  #PF (see handle_exception() in vmx.c), we remove all the shadow pages
  shadowed by the target gfn before going into the page fault path, which
  causes an infinite loop:
  delete shadow pages shadowed by the gfn -> try to use a large page size to
  map the gfn -> retry the access -> ...

To fix these, we can adjust the page size early if the target gfn is used as
a page table

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years agoKVM: MMU: fix Dirty bit missed if CR0.WP = 0
Xiao Guangrong [Tue, 8 Jan 2013 06:36:04 +0000 (14:36 +0800)]
KVM: MMU: fix Dirty bit missed if CR0.WP = 0

If the write-fault access is from the supervisor and CR0.WP is not set on
the vcpu, kvm fixes it by adjusting the pte access: it sets the W bit on the
pte and clears the U bit. This is the one chance kvm has to change pte
access from read-only to writable.

Unfortunately, that pte access is the access of the 'direct' shadow page
table, meaning direct sp.role.access = pte_access, so we would create a
writable spte entry on a read-only shadow page table. As a result, the Dirty
bit is not tracked when two guest ptes point to the same large page. Note
that it has no impact other than the Dirty bit, since cr0.wp is encoded into
sp.role.

This can be fixed by adjusting the pte access before establishing the shadow
page table. After that, no mmu-specific code remains in the common function,
and two parameters can be dropped from set_spte.

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
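
The access adjustment described above amounts to pure bit manipulation. A
minimal sketch with illustrative constants and a hypothetical helper (not
KVM's actual set_spte() path):

```c
#include <stdint.h>

/* Simplified pte access bits, x86-style */
#define ACC_WRITE 0x2u   /* W: writable */
#define ACC_USER  0x4u   /* U: user-accessible */

/* If a supervisor write faults while CR0.WP=0, make the mapping
 * writable but kernel-only -- and do it *before* the access is
 * propagated into sp.role.access, so the shadow page and the sptes
 * created under it stay consistent. */
static uint32_t adjust_pte_access(uint32_t access, int write_fault,
                                  int user_fault, int cr0_wp)
{
    if (write_fault && !user_fault && !cr0_wp)
        access = (access | ACC_WRITE) & ~ACC_USER;
    return access;
}
```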
11 years agoKVM: PPC: BookE: Add EPR ONE_REG sync
Alexander Graf [Fri, 4 Jan 2013 17:28:51 +0000 (18:28 +0100)]
KVM: PPC: BookE: Add EPR ONE_REG sync

We need to be able to read and write the contents of the EPR register
from user space.

This patch implements that logic through the ONE_REG API and declares
its (never implemented) SREGS counterpart as deprecated.

Signed-off-by: Alexander Graf <agraf@suse.de>
11 years agoKVM: PPC: BookE: Implement EPR exit
Alexander Graf [Fri, 4 Jan 2013 17:12:48 +0000 (18:12 +0100)]
KVM: PPC: BookE: Implement EPR exit

The External Proxy Facility in FSL BookE chips allows the interrupt
controller to automatically acknowledge an interrupt as soon as a
core gets its pending external interrupt delivered.

Today, user space implements the interrupt controller, so we need to
check on it during such a cycle.

This patch implements logic for user space to enable EPR exiting,
disable EPR exiting and EPR exiting itself, so that user space can
acknowledge an interrupt when an external interrupt has successfully
been delivered into the guest vcpu.

Signed-off-by: Alexander Graf <agraf@suse.de>
11 years agoKVM: PPC: BookE: Emulate mfspr on EPR
Alexander Graf [Fri, 4 Jan 2013 17:02:14 +0000 (18:02 +0100)]
KVM: PPC: BookE: Emulate mfspr on EPR

The EPR register is potentially valid for PR KVM as well, so we need
to emulate accesses to it. It's only defined for reading, so only
handle the mfspr case.

Signed-off-by: Alexander Graf <agraf@suse.de>
11 years agoKVM: PPC: BookE: Allow irq deliveries to inject requests
Alexander Graf [Thu, 20 Dec 2012 04:52:39 +0000 (04:52 +0000)]
KVM: PPC: BookE: Allow irq deliveries to inject requests

When injecting an interrupt into guest context, we usually don't need
to check for requests anymore. At least not until today.

With the introduction of EPR, we will have to create a request when the
guest has successfully accepted an external interrupt though.

So we need to prepare the interrupt delivery to abort guest entry
gracefully. Otherwise we'd delay the EPR request.

Signed-off-by: Alexander Graf <agraf@suse.de>
11 years agoKVM: PPC: Fix mfspr/mtspr MMUCFG emulation
Mihai Caraman [Thu, 20 Dec 2012 04:52:39 +0000 (04:52 +0000)]
KVM: PPC: Fix mfspr/mtspr MMUCFG emulation

On the mfspr/mtspr emulation path, Book3E's MMUCFG SPR with value 1015
clashes with G4's MSSSR0 SPR. Move MSSSR0 emulation from the generic part to
Book3S. MSSSR0 also clashes with Book3S's DABRX SPR. DABRX was not
explicitly handled, so the Book3S execution flow will behave as before.

Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
11 years agoKVM: PPC: Book3S: PR: Enable alternative instruction for SC 1
Alexander Graf [Fri, 14 Dec 2012 22:42:05 +0000 (23:42 +0100)]
KVM: PPC: Book3S: PR: Enable alternative instruction for SC 1

When running on top of pHyp, the hypercall instruction "sc 1" goes
straight into pHyp without trapping in supervisor mode.

So if we want to support PAPR guest in this configuration we need to
add a second way of accessing PAPR hypercalls, preferably with the
exact same semantics except for the instruction.

So let's overlay an officially reserved instruction and emulate PAPR
hypercalls whenever we hit that one.

Signed-off-by: Alexander Graf <agraf@suse.de>
11 years agoKVM: PPC: Only WARN on invalid emulation
Alexander Graf [Fri, 14 Dec 2012 22:46:03 +0000 (23:46 +0100)]
KVM: PPC: Only WARN on invalid emulation

When we hit an emulation result that we didn't expect, that is an error,
but it's nothing that warrants a BUG(), because it can be guest triggered.

So instead, let's only WARN() the user that this happened.

Signed-off-by: Alexander Graf <agraf@suse.de>
11 years agoKVM: PPC: Fix SREGS documentation reference
Mihai Caraman [Tue, 11 Dec 2012 03:38:23 +0000 (03:38 +0000)]
KVM: PPC: Fix SREGS documentation reference

Reflect the uapi folder change in SREGS API documentation.

Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com>
Reviewed-by: Amos Kong <kongjianjun@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
11 years agoKVM: s390: Gracefully handle busy conditions on ccw_device_start
Christian Borntraeger [Mon, 7 Jan 2013 14:51:52 +0000 (15:51 +0100)]
KVM: s390: Gracefully handle busy conditions on ccw_device_start

In rare cases a virtio command might try to issue a ccw before a former
ccw was answered with a tsch. This will cause CC=2 (busy). Let's just
retry in that case.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years agoKVM: s390: Dynamic allocation of virtio-ccw I/O data.
Cornelia Huck [Mon, 7 Jan 2013 14:51:51 +0000 (15:51 +0100)]
KVM: s390: Dynamic allocation of virtio-ccw I/O data.

Dynamically allocate any data structures like ccw used when
doing channel I/O. Otherwise, we'd need to add extra serialization
for the different callbacks using the same data structures.

Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years agoKVM: x86 emulator: convert basic ALU ops to fastop
Avi Kivity [Fri, 4 Jan 2013 14:18:54 +0000 (16:18 +0200)]
KVM: x86 emulator: convert basic ALU ops to fastop

Opcodes:
TEST
CMP
ADD
ADC
SUB
SBB
XOR
OR
AND

Acked-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years agoKVM: x86 emulator: add macros for defining 2-operand fastop emulation
Avi Kivity [Fri, 4 Jan 2013 14:18:53 +0000 (16:18 +0200)]
KVM: x86 emulator: add macros for defining 2-operand fastop emulation

Acked-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years agoKVM: x86 emulator: convert NOT, NEG to fastop
Avi Kivity [Fri, 4 Jan 2013 14:18:52 +0000 (16:18 +0200)]
KVM: x86 emulator: convert NOT, NEG to fastop

Acked-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years agoKVM: x86 emulator: mark CMP, CMPS, SCAS, TEST as NoWrite
Avi Kivity [Fri, 4 Jan 2013 14:18:51 +0000 (16:18 +0200)]
KVM: x86 emulator: mark CMP, CMPS, SCAS, TEST as NoWrite

Acked-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years agoKVM: x86 emulator: introduce NoWrite flag
Avi Kivity [Fri, 4 Jan 2013 14:18:50 +0000 (16:18 +0200)]
KVM: x86 emulator: introduce NoWrite flag

Instead of disabling writeback via OP_NONE, just specify NoWrite.

Acked-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
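
The idea can be sketched as a flag test in the shared writeback path. All
structures and the flag bit below are hypothetical stand-ins, not the
emulator's actual decode tables:

```c
#include <stdbool.h>
#include <stdint.h>

#define NOWRITE (1u << 0)  /* hypothetical decode-flag bit */

struct fake_op { uint64_t val; };
struct fake_ctxt {
    uint32_t d;            /* decode flags for the opcode */
    struct fake_op dst;    /* computed destination value */
    uint64_t *dst_mem;     /* where writeback would land */
};

/* Shared writeback: skipped entirely when the opcode is marked
 * NoWrite (CMP, TEST, ...), instead of forcing each such opcode
 * to set its destination to OP_NONE. */
static bool writeback(struct fake_ctxt *ctxt)
{
    if (ctxt->d & NOWRITE)
        return false;          /* flags-only instruction */
    *ctxt->dst_mem = ctxt->dst.val;
    return true;
}
```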
11 years agoKVM: x86 emulator: Support for declaring single operand fastops
Avi Kivity [Fri, 4 Jan 2013 14:18:49 +0000 (16:18 +0200)]
KVM: x86 emulator: Support for declaring single operand fastops

Acked-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
11 years agoKVM: x86 emulator: framework for streamlining arithmetic opcodes
Avi Kivity [Fri, 4 Jan 2013 14:18:48 +0000 (16:18 +0200)]
KVM: x86 emulator: framework for streamlining arithmetic opcodes

We emulate arithmetic opcodes by executing a "similar" (same operation,
different operands) on the cpu.  This ensures accurate emulation, esp. wrt.
eflags.  However, the prologue and epilogue around the opcode is fairly long,
consisting of a switch (for the operand size) and code to load and save the
operands.  This is repeated for every opcode.

This patch introduces an alternative way to emulate arithmetic opcodes.
Instead of the above, we have four (three on i386) functions consisting
of just the opcode and a ret; one for each operand size.  For example:

   .align 8
   em_notb:
not %al
ret

   .align 8
   em_notw:
not %ax
ret

   .align 8
   em_notl:
not %eax
ret

   .align 8
   em_notq:
not %rax
ret

The prologue and epilogue are shared across all opcodes.  Note the functions
use a special calling convention; notably eflags is an input/output parameter
and is not clobbered.  Rather than dispatching the four functions through a
jump table, the functions are declared as a constant size (8) so their address
can be calculated.

Acked-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
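
The fixed-size address trick above cannot be expressed in portable C, but
the shared prologue/epilogue it enables is equivalent to indexing size
variants by operand size. A hedged sketch, with a function table standing in
for the base + log2(size) * 8 address computation (names are ours):

```c
#include <stdint.h>

/* Size-specific bodies (C stand-ins for em_notb..em_notq).
 * As on x86, a sub-width NOT leaves the upper bits untouched. */
static uint64_t not_b(uint64_t v) { return v ^ 0xffull; }
static uint64_t not_w(uint64_t v) { return v ^ 0xffffull; }
static uint64_t not_l(uint64_t v) { return v ^ 0xffffffffull; }
static uint64_t not_q(uint64_t v) { return ~v; }

/* Shared prologue/epilogue: pick the variant by operand size.
 * In the real fastop scheme each variant is padded to exactly 8
 * bytes, so its address is computed rather than looked up; the
 * table below is the portable stand-in for that computation. */
static uint64_t em_not(uint64_t val, unsigned op_bytes)
{
    static uint64_t (*const variants[4])(uint64_t) = {
        not_b, not_w, not_l, not_q
    };
    unsigned idx = (op_bytes == 1) ? 0 :
                   (op_bytes == 2) ? 1 :
                   (op_bytes == 4) ? 2 : 3;

    return variants[idx](val);
}
```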