Paul Mackerras [Fri, 20 Sep 2013 04:52:49 +0000 (14:52 +1000)]
KVM: PPC: Book3S PR: Allocate kvm_vcpu structs from kvm_vcpu_cache
This makes PR KVM allocate its kvm_vcpu structs from the kvm_vcpu_cache
rather than having them embedded in the kvmppc_vcpu_book3s struct,
which is allocated with vzalloc. The reason is to reduce the
differences between PR and HV KVM in order to make it easier to have
them coexist in one kernel binary.
With this, the kvm_vcpu struct has a pointer to the kvmppc_vcpu_book3s
struct. The pointer to the kvmppc_book3s_shadow_vcpu struct has moved
from the kvmppc_vcpu_book3s struct to the kvm_vcpu struct, and is only
present for 32-bit, since it is only used for 32-bit.
Signed-off-by: Paul Mackerras <paulus@samba.org>
[agraf: squash in compile fix from Aneesh]
Signed-off-by: Alexander Graf <agraf@suse.de>
Paul Mackerras [Fri, 20 Sep 2013 04:52:48 +0000 (14:52 +1000)]
KVM: PPC: Book3S PR: Make HPT accesses and updates SMP-safe
This adds a per-VM mutex to provide mutual exclusion between vcpus
for accesses to and updates of the guest hashed page table (HPT).
This also makes the code use single-byte writes to the HPT entry
when updating the reference (R) and change (C) bits. The reason
for doing this, rather than writing back the whole HPTE, is that on
non-PAPR virtual machines, the guest OS might be writing to the HPTE
concurrently, and writing back the whole HPTE might conflict with
that. Also, real hardware does single-byte writes to update R and C.
The new mutex is taken in kvmppc_mmu_book3s_64_xlate() when reading
the HPT and updating R and/or C, and in the PAPR HPT update hcalls
(H_ENTER, H_REMOVE, etc.). Having the mutex means that we don't need
to use a hypervisor lock bit in the HPT update hcalls, and we don't
need to be careful about the order in which the bytes of the HPTE are
updated by those hcalls.
The other change here is to make emulated TLB invalidations (tlbie)
effective across all vcpus. To do this we call kvmppc_mmu_pte_vflush
for all vcpus in kvmppc_mmu_book3s_64_tlbie().
For 32-bit, this makes the setting of the accessed and dirty bits use
single-byte writes, and makes tlbie invalidate shadow HPTEs for all
vcpus.
With this, PR KVM can successfully run SMP guests.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
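As a rough illustration of the single-byte R/C update described above (a sketch only, not the actual patch; the helper name and exact offsets are illustrative, the real logic lives in kvmppc_mmu_book3s_64_xlate()):
static void sketch_set_r_c(void __user *pteg, int slot, u64 gr, bool dirty)
{
    /* Each HPTE is 16 bytes, big-endian; R (0x100) sits in byte 14 of the
     * entry and C (0x80) in byte 15, so a one-byte store updates each bit
     * without clobbering a concurrent guest update of the same HPTE. */
    u8 __user *hpte = (u8 __user *)pteg + slot * 16;
    int err = 0;

    if (!(gr & HPTE_R_R))
        err |= put_user((u8)((gr | HPTE_R_R) >> 8), hpte + 14);
    if (dirty && !(gr & HPTE_R_C))
        err |= put_user((u8)(gr | HPTE_R_C), hpte + 15);
    (void)err;    /* the real code handles access faults properly */
}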
Paul Mackerras [Fri, 20 Sep 2013 04:52:47 +0000 (14:52 +1000)]
KVM: PPC: Book3S PR: Correct errors in H_ENTER implementation
The implementation of H_ENTER in PR KVM has some errors:
* With H_EXACT not set, if the HPTEG is full, we return H_PTEG_FULL
as the return value of kvmppc_h_pr_enter, but the caller is expecting
one of the EMULATE_* values. The H_PTEG_FULL needs to go in the
guest's R3 instead.
* With H_EXACT set, if the selected HPTE is already valid, the H_ENTER
call should return a H_PTEG_FULL error.
This fixes these errors and also makes it write only the selected HPTE,
not the whole group, since only the selected HPTE has been modified.
This also micro-optimizes the calculations involving pte_index and i.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
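The return-value convention being fixed looks roughly like this (a sketch, not the actual diff; the helper name is made up):
static int sketch_h_enter_result(struct kvm_vcpu *vcpu, long hcall_status)
{
    /* The hcall status (H_SUCCESS, H_PTEG_FULL, ...) belongs in the
     * guest's R3; the C return value is for the emulation core. */
    kvmppc_set_gpr(vcpu, 3, hcall_status);
    return EMULATE_DONE;
}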
Paul Mackerras [Fri, 20 Sep 2013 04:52:46 +0000 (14:52 +1000)]
KVM: PPC: Book3S PR: Handle PP0 page-protection bit in guest HPTEs
64-bit POWER processors have a three-bit field for page protection in
the hashed page table entry (HPTE). Currently we only interpret the two
bits that were present in older versions of the architecture. The only
defined combination that has the new bit set is 110, meaning read-only
for supervisor and no access for user mode.
This adds code to kvmppc_mmu_book3s_64_xlate() to interpret the extra
bit appropriately.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
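A minimal sketch of the three-bit decode (illustrative only; HPTE_R_PP covers the two classic bits and HPTE_R_PP0 is assumed here to be the define for the new high-order bit):
static void sketch_decode_pp(u64 hpte_r, bool usermode,
                             bool *may_read, bool *may_write)
{
    unsigned int pp = (hpte_r & HPTE_R_PP) |
                      ((hpte_r & HPTE_R_PP0) ? 4 : 0);

    if (pp == 6) {
        /* 0b110: read-only for supervisor, no access for user */
        *may_read = !usermode;
        *may_write = false;
    } else {
        /* fall back to the classic two-bit interpretation (not shown) */
    }
}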
Paul Mackerras [Fri, 20 Sep 2013 04:52:45 +0000 (14:52 +1000)]
KVM: PPC: Book3S PR: Use 64k host pages where possible
Currently, PR KVM uses 4k pages for the host-side mappings of guest
memory, regardless of the host page size. When the host page size is
64kB, we might as well use 64k host page mappings for guest mappings
of 64kB and larger pages and for guest real-mode mappings. However,
the magic page has to remain a 4k page.
To implement this, we first add another flag bit to the guest VSID
values we use, to indicate that this segment is one where host pages
should be mapped using 64k pages. For segments with this bit set
we set the bits in the shadow SLB entry to indicate a 64k base page
size. When faulting in host HPTEs for this segment, we make them
64k HPTEs instead of 4k. We record the pagesize in struct hpte_cache
for use when invalidating the HPTE.
For now we restrict the segment containing the magic page (if any) to
4k pages. It should be possible to lift this restriction in future
by ensuring that the magic 4k page is appropriately positioned within
a host 64k page.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Paul Mackerras [Fri, 20 Sep 2013 04:52:44 +0000 (14:52 +1000)]
KVM: PPC: Book3S PR: Allow guest to use 64k pages
This adds the code to interpret 64k HPTEs in the guest hashed page
table (HPT), 64k SLB entries, and to tell the guest about 64k pages
in kvm_vm_ioctl_get_smmu_info(). Guest 64k pages are still shadowed
by 4k pages.
This also adds another hash table to the four we have already in
book3s_mmu_hpte.c to allow us to find all the PTEs that we have
instantiated that match a given 64k guest page.
The tlbie instruction changed starting with POWER6 to use a bit in
the RB operand to indicate large page invalidations, and to use other
RB bits to indicate the base and actual page sizes and the segment
size. 64k pages came in slightly earlier, with POWER5++.
We use one bit in vcpu->arch.hflags to indicate that the emulated
cpu supports 64k pages, and another to indicate that it has the new
tlbie definition.
The KVM_PPC_GET_SMMU_INFO ioctl presents a bit of a problem, because
the MMU capabilities depend on which CPU model we're emulating, but it
is a VM ioctl not a VCPU ioctl and therefore doesn't get passed a VCPU
fd. In addition, commonly-used userspace (QEMU) calls it before
setting the PVR for any VCPU. Therefore, as a best effort we look at
the first vcpu in the VM and return 64k pages or not depending on its
capabilities. We also make the PVR default to the host PVR on recent
CPUs that support 1TB segments (and therefore multiple page sizes as
well) so that KVM_PPC_GET_SMMU_INFO will include 64k page and 1TB
segment support on those CPUs.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
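From the userspace side, checking whether the VM will expose 64k pages looks roughly like this (a hedged sketch; the struct layout follows my recollection of the kvm_ppc_smmu_info uapi and should be checked against linux/kvm.h):
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static void sketch_show_page_sizes(int vm_fd)
{
    struct kvm_ppc_smmu_info info;
    int i;

    if (ioctl(vm_fd, KVM_PPC_GET_SMMU_INFO, &info) < 0) {
        perror("KVM_PPC_GET_SMMU_INFO");
        return;
    }
    if (info.flags & KVM_PPC_1T_SEGMENTS)
        printf("1TB segments supported\n");
    for (i = 0; i < KVM_PPC_PAGE_SIZES_MAX_SZ; i++) {
        if (!info.sps[i].page_shift)
            break;            /* end of the list */
        if (info.sps[i].page_shift == 16)
            printf("64k base pages supported\n");
    }
}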
Paul Mackerras [Fri, 20 Sep 2013 04:52:43 +0000 (14:52 +1000)]
KVM: PPC: Book3S PR: Keep volatile reg values in vcpu rather than shadow_vcpu
Currently PR-style KVM keeps the volatile guest register values
(R0 - R13, CR, LR, CTR, XER, PC) in a shadow_vcpu struct rather than
the main kvm_vcpu struct. For 64-bit, the shadow_vcpu exists in two
places, a kmalloc'd struct and in the PACA, and it gets copied back
and forth in kvmppc_core_vcpu_load/put(), because the real-mode code
can't rely on being able to access the kmalloc'd struct.
This changes the code to copy the volatile values into the shadow_vcpu
as one of the last things done before entering the guest. Similarly
the values are copied back out of the shadow_vcpu to the kvm_vcpu
immediately after exiting the guest. We arrange for interrupts to be
still disabled at this point so that we can't get preempted on 64-bit
and end up copying values from the wrong PACA.
This means that the accessor functions in kvm_book3s.h for these
registers are greatly simplified, and are the same between PR and HV KVM.
In places where accesses to shadow_vcpu fields are now replaced by
accesses to the kvm_vcpu, we can also remove the svcpu_get/put pairs.
Finally, on 64-bit, we don't need the kmalloc'd struct at all any more.
With this, the time to read the PVR one million times in a loop went
from 567.7ms to 575.5ms (averages of 6 values), an increase of about
1.4% for this worst-case test for guest entries and exits. The
standard deviation of the measurements is about 11ms, so the
difference is only marginally significant statistically.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Paul Mackerras [Fri, 20 Sep 2013 04:52:42 +0000 (14:52 +1000)]
KVM: PPC: Book3S PR: Fix compilation without CONFIG_ALTIVEC
Commit 9d1ffdd8f3 ("KVM: PPC: Book3S PR: Don't corrupt guest state
when kernel uses VMX") added a call to kvmppc_load_up_altivec() that
isn't guarded by CONFIG_ALTIVEC, causing a link failure when building
a kernel without CONFIG_ALTIVEC set. This adds an #ifdef to fix this.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Paul Mackerras [Fri, 20 Sep 2013 04:52:41 +0000 (14:52 +1000)]
KVM: PPC: Book3S HV: Don't crash host on unknown guest interrupt
If we come out of a guest with an interrupt that we don't know about,
instead of crashing the host with a BUG(), we now return to userspace
with the exit reason set to KVM_EXIT_UNKNOWN and the trap vector in
the hw.hardware_exit_reason field of the kvm_run structure, as is done
on x86. Note that run->exit_reason is already set to KVM_EXIT_UNKNOWN
at the beginning of kvmppc_handle_exit().
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Paul Mackerras [Sat, 21 Sep 2013 04:35:02 +0000 (14:35 +1000)]
KVM: PPC: Book3S HV: Support POWER6 compatibility mode on POWER7
This enables us to use the Processor Compatibility Register (PCR) on
POWER7 to put the processor into architecture 2.05 compatibility mode
when running a guest. In this mode the new instructions and registers
that were introduced on POWER7 are disabled in user mode. This
includes all the VSX facilities plus several other instructions such
as ldbrx, stdbrx, popcntw, popcntd, etc.
To select this mode, we have a new register accessible through the
set/get_one_reg interface, called KVM_REG_PPC_ARCH_COMPAT. Setting
this to zero gives the full set of capabilities of the processor.
Setting it to one of the "logical" PVR values defined in PAPR puts
the vcpu into the compatibility mode for the corresponding
architecture level. The supported values are:
0x0f000002 Architecture 2.05 (POWER6)
0x0f000003 Architecture 2.06 (POWER7)
0x0f100003 Architecture 2.06+ (POWER7+)
Since the PCR is per-core, the architecture compatibility level and
the corresponding PCR value are stored in the struct kvmppc_vcore, and
are therefore shared between all vcpus in a virtual core.
Signed-off-by: Paul Mackerras <paulus@samba.org>
[agraf: squash in fix to add missing break statements and documentation]
Signed-off-by: Alexander Graf <agraf@suse.de>
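Userspace would select the compatibility mode roughly as follows (a sketch, not QEMU's actual code; the register is assumed here to be 32 bits wide, as the one_reg documentation added by this patch should confirm):
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int sketch_set_power6_compat(int vcpu_fd)
{
    /* logical PVR: architecture 2.05 (POWER6); 0 = full capabilities */
    __u32 compat_pvr = 0x0f000002;
    struct kvm_one_reg reg = {
        .id   = KVM_REG_PPC_ARCH_COMPAT,
        .addr = (__u64)(unsigned long)&compat_pvr,
    };

    if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg) < 0) {
        perror("KVM_SET_ONE_REG(KVM_REG_PPC_ARCH_COMPAT)");
        return -1;
    }
    return 0;
}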
Paul Mackerras [Fri, 20 Sep 2013 04:52:39 +0000 (14:52 +1000)]
KVM: PPC: Book3S HV: Add support for guest Program Priority Register
POWER7 and later IBM server processors have a register called the
Program Priority Register (PPR), which controls the priority of
each hardware CPU SMT thread, and affects how fast it runs compared
to other SMT threads. This priority can be controlled by writing to
the PPR or by use of a set of instructions of the form or rN,rN,rN
which are otherwise no-ops but have been defined to set the priority
to particular levels.
This adds code to context switch the PPR when entering and exiting
guests and to make the PPR value accessible through the SET/GET_ONE_REG
interface. When entering the guest, we set the PPR as late as
possible, because if we are setting a low thread priority it will
make the code run slowly from that point on. Similarly, the
first-level interrupt handlers save the PPR value in the PACA very
early on, and set the thread priority to the medium level, so that
the interrupt handling code runs at a reasonable speed.
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Paul Mackerras [Fri, 20 Sep 2013 04:52:38 +0000 (14:52 +1000)]
KVM: PPC: Book3S HV: Store LPCR value for each virtual core
This adds the ability to have a separate LPCR (Logical Partitioning
Control Register) value relating to a guest for each virtual core,
rather than only having a single value for the whole VM. This
corresponds to what real POWER hardware does, where there is a LPCR
per CPU thread but most of the fields are required to have the same
value on all active threads in a core.
The per-virtual-core LPCR can be read and written using the
GET/SET_ONE_REG interface. Userspace can only modify the
following fields of the LPCR value:
DPFD Default prefetch depth
ILE Interrupt little-endian
TC Translation control (secondary HPT hash group search disable)
We still maintain a per-VM default LPCR value in kvm->arch.lpcr, which
contains bits relating to memory management, i.e. the Virtualized
Partition Memory (VPM) bits and the bits relating to guest real mode.
When this default value is updated, the update needs to be propagated
to the per-vcore values, so we add a kvmppc_update_lpcr() helper to do
that.
Signed-off-by: Paul Mackerras <paulus@samba.org>
[agraf: fix whitespace]
Signed-off-by: Alexander Graf <agraf@suse.de>
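The propagation helper works roughly like this (a simplified sketch of kvmppc_update_lpcr(); locking is omitted and the vcore array/field names are from memory):
static void sketch_update_lpcr(struct kvm *kvm, unsigned long lpcr,
                               unsigned long mask)
{
    long i;

    /* Only the bits in 'mask' change, so per-vcore settings made by
     * userspace (DPFD, ILE, TC) are preserved. */
    kvm->arch.lpcr = (kvm->arch.lpcr & ~mask) | (lpcr & mask);
    for (i = 0; i < KVM_MAX_VCORES; ++i) {
        struct kvmppc_vcore *vc = kvm->arch.vcores[i];

        if (vc)
            vc->lpcr = (vc->lpcr & ~mask) | (lpcr & mask);
    }
}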
Paul Mackerras [Fri, 20 Sep 2013 04:52:37 +0000 (14:52 +1000)]
KVM: PPC: BookE: Add GET/SET_ONE_REG interface for VRSAVE
This makes the VRSAVE register value for a vcpu accessible through
the GET/SET_ONE_REG interface on Book E systems (in addition to the
existing GET/SET_SREGS interface), for consistency with Book 3S.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Paul Mackerras [Fri, 6 Sep 2013 03:24:35 +0000 (13:24 +1000)]
KVM: PPC: Book3S HV: Avoid unbalanced increments of VPA yield count
The yield count in the VPA is supposed to be incremented every time
we enter the guest, and every time we exit the guest, so that its
value is even when the vcpu is running in the guest and odd when it
isn't. However, it's currently possible that we increment the yield
count on the way into the guest but then find that other CPU threads
are already exiting the guest, so we go back to nap mode via the
secondary_too_late label. In this situation we don't increment the
yield count again, breaking the relationship between the LSB of the
count and whether the vcpu is in the guest.
To fix this, we move the increment of the yield count to a point
after we have checked whether other CPU threads are exiting.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Paul Mackerras [Fri, 6 Sep 2013 03:24:13 +0000 (13:24 +1000)]
KVM: PPC: Book3S HV: Pull out interrupt-reading code into a subroutine
This moves the code in book3s_hv_rmhandlers.S that reads any pending
interrupt from the XICS interrupt controller, and works out whether
it is an IPI for the guest, an IPI for the host, or a device interrupt,
into a new function called kvmppc_read_intr. Later patches will
need this.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Paul Mackerras [Fri, 6 Sep 2013 03:23:44 +0000 (13:23 +1000)]
KVM: PPC: Book3S HV: Restructure kvmppc_hv_entry to be a subroutine
We have two paths into and out of the low-level guest entry and exit
code: from a vcpu task via kvmppc_hv_entry_trampoline, and from the
system reset vector for an offline secondary thread on POWER7 via
kvm_start_guest. Currently both just branch to kvmppc_hv_entry to
enter the guest, and on guest exit, we test the vcpu physical thread
ID to detect which way we came in and thus whether we should return
to the vcpu task or go back to nap mode.
In order to make the code flow clearer, and to keep the code relating
to each flow together, this turns kvmppc_hv_entry into a subroutine
that follows the normal conventions for call and return. This means
that kvmppc_hv_entry_trampoline() and kvmppc_hv_entry() now establish
normal stack frames, and we use the normal stack slots for saving
return addresses rather than local_paca->kvm_hstate.vmhandler. Apart
from that this is mostly moving code around unchanged.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Paul Mackerras [Fri, 6 Sep 2013 03:23:21 +0000 (13:23 +1000)]
KVM: PPC: Book3S HV: Implement H_CONFER
The H_CONFER hypercall is used when a guest vcpu is spinning on a lock
held by another vcpu which has been preempted, and the spinning vcpu
wishes to give its timeslice to the lock holder. We implement this
in the straightforward way using kvm_vcpu_yield_to().
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
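The handling is essentially the following (a sketch; the vcpu lookup helper named here is illustrative):
static long sketch_h_confer(struct kvm_vcpu *vcpu)
{
    unsigned long target = kvmppc_get_gpr(vcpu, 4);  /* target vcpu number */
    struct kvm_vcpu *tvcpu = kvmppc_find_vcpu(vcpu->kvm, target);

    if (!tvcpu)
        return H_PARAMETER;
    kvm_vcpu_yield_to(tvcpu);  /* donate our timeslice to the lock holder */
    return H_SUCCESS;
}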
Paul Mackerras [Fri, 6 Sep 2013 03:18:32 +0000 (13:18 +1000)]
KVM: PPC: Book3S: Add GET/SET_ONE_REG interface for VRSAVE
The VRSAVE register value for a vcpu is accessible through the
GET/SET_SREGS interface for Book E processors, but not for Book 3S
processors. In order to make this accessible for Book 3S processors,
this adds a new register identifier for GET/SET_ONE_REG, and adds
the code to implement it.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Paul Mackerras [Fri, 6 Sep 2013 03:17:46 +0000 (13:17 +1000)]
KVM: PPC: Book3S HV: Implement timebase offset for guests
This allows guests to have a different timebase origin from the host.
This is needed for migration, where a guest can migrate from one host
to another and the two hosts might have a different timebase origin.
However, the timebase seen by the guest must not go backwards, and
should go forwards only by a small amount corresponding to the time
taken for the migration.
Therefore this provides a new per-vcpu value accessed via the one_reg
interface using the new KVM_REG_PPC_TB_OFFSET identifier. This value
defaults to 0 and is not modified by KVM. On entering the guest, this
value is added onto the timebase, and on exiting the guest, it is
subtracted from the timebase.
This is only supported for recent POWER hardware which has the TBU40
(timebase upper 40 bits) register. Writing to the TBU40 register only
alters the upper 40 bits of the timebase, leaving the lower 24 bits
unchanged. This provides a way to modify the timebase for guest
migration without disturbing the synchronization of the timebase
registers across CPU cores. The kernel rounds up the value given
to a multiple of 2^24.
Timebase values stored in KVM structures (struct kvm_vcpu, struct
kvmppc_vcore, etc.) are stored as host timebase values. The timebase
values in the dispatch trace log need to be guest timebase values,
however, since that is read directly by the guest. This moves the
setting of vcpu->arch.dec_expires on guest exit to a point after we
have restored the host timebase so that vcpu->arch.dec_expires is a
host timebase value.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
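Setting the offset from userspace looks roughly like this (a sketch; KVM_REG_PPC_TB_OFFSET is a 64-bit one_reg identifier and the kernel rounds the value to a multiple of 2^24, as described above):
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int sketch_set_tb_offset(int vcpu_fd, __u64 offset)
{
    struct kvm_one_reg reg = {
        .id   = KVM_REG_PPC_TB_OFFSET,
        .addr = (__u64)(unsigned long)&offset,
    };

    /* Returns 0 on success; only the upper 40 bits (TBU40 granularity)
     * of the offset take effect. */
    return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}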
Paul Mackerras [Fri, 6 Sep 2013 03:11:18 +0000 (13:11 +1000)]
KVM: PPC: Book3S HV: Save/restore SIAR and SDAR along with other PMU registers
Currently we are not saving and restoring the SIAR and SDAR registers in
the PMU (performance monitor unit) on guest entry and exit. The result
is that performance monitoring tools in the guest could get false
information about where a program was executing and what data it was
accessing at the time of a performance monitor interrupt. This fixes
it by saving and restoring these registers along with the other PMU
registers on guest entry/exit.
This also provides a way for userspace to access these values for a
vcpu via the one_reg interface.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Michael Neuling [Tue, 3 Sep 2013 01:13:12 +0000 (11:13 +1000)]
KVM: PPC: Book3S HV: Reserve POWER8 space in get/set_one_reg
This reserves space in get/set_one_reg ioctl for the extra guest state
needed for POWER8. It doesn't implement these at all, it just reserves
them so that the ABI is defined now.
A few things to note here:
- This adds *a lot* of state for transactional memory. Because of TM
suspend mode, this is unavoidable: you can't simply roll back all
transactions and store only the checkpointed state. I've added this all
to get/set_one_reg (including GPRs) rather than creating a new ioctl
which returns a struct kvm_regs like KVM_GET_REGS does. This means that
if we need to extract the TM state, we are going to need a bucket load
of ioctls. Hopefully most of the time this will not be needed, as we
can look at the MSR to see if TM is active and only grab them when
needed. If this becomes a bottleneck in future we can add another
ioctl to grab all this state in one go.
- The TM state is offset by 0x80000000.
- For TM, I've done away with VMX and FP and created a single 64x128 bit
VSX register space.
- I've left a space of 1 (at 0x9c) since Paulus needs to add a value
which applies to POWER7 as well.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Gleb Natapov [Wed, 16 Oct 2013 12:30:32 +0000 (15:30 +0300)]
Merge tag 'kvm-arm-for-3.13-1' of git://git.linaro.org/people/cdall/linux-kvm-arm into next
Updates for KVM/ARM including cpu=host and Cortex-A7 support
Christoffer Dall [Wed, 16 Oct 2013 00:43:00 +0000 (17:43 -0700)]
KVM: ARM: Remove non-ASCII space characters
Some strange character leaped into the documentation, which makes
git-send-email behave quite strangely. Get rid of this before it bites
anyone else.
Cc: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
chai wen [Mon, 14 Oct 2013 14:22:33 +0000 (22:22 +0800)]
KVM: Drop FOLL_GET in GUP when doing async page fault
Page pinning is not mandatory in kvm async page fault processing, since
after the async page fault event is delivered to a guest it accesses the
page once again and does its own GUP. Drop the FOLL_GET flag in GUP in
the async_pf code, and simplify the check/clear processing.
Suggested-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Gu zheng <guz.fnst@cn.fujitsu.com>
Signed-off-by: chai wen <chaiw.fnst@cn.fujitsu.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Christoffer Dall [Sun, 13 Oct 2013 19:35:58 +0000 (20:35 +0100)]
KVM: s390: Get rid of KVM_HPAGE defines
Now that the main kvm code relying on these defines has been moved to
the x86 specific part of the world, we can get rid of these.
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Christoffer Dall [Wed, 2 Oct 2013 21:22:33 +0000 (14:22 -0700)]
KVM: PPC: Get rid of KVM_HPAGE defines
Now that the main kvm code relying on these defines has been moved to
the x86 specific part of the world, we can get rid of these.
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Christoffer Dall [Wed, 2 Oct 2013 21:22:32 +0000 (14:22 -0700)]
KVM: ia64: Get rid of KVM_HPAGE defines
Now that the main kvm code relying on these defines has been moved to
the x86 specific part of the world, we can get rid of these.
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Christoffer Dall [Wed, 2 Oct 2013 21:22:31 +0000 (14:22 -0700)]
KVM: mips: Get rid of KVM_HPAGE defines
Now that the main kvm code relying on these defines has been moved to
the x86 specific part of the world, we can get rid of these.
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Christoffer Dall [Wed, 2 Oct 2013 21:22:30 +0000 (14:22 -0700)]
KVM: arm64: Get rid of KVM_HPAGE defines
Now that the main kvm code relying on these defines has been moved to
the x86 specific part of the world, we can get rid of these.
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Christoffer Dall [Wed, 2 Oct 2013 21:22:29 +0000 (14:22 -0700)]
KVM: ARM: Get rid of KVM_HPAGE defines
The KVM_HPAGE defines are a little artificial on ARM, since the huge
page size is statically defined at compile time and there is only a
single huge page size.
Now that the main kvm code relying on these defines has been moved to
the x86 specific part of the world, we can get rid of these.
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Christoffer Dall [Wed, 2 Oct 2013 21:22:28 +0000 (14:22 -0700)]
KVM: Move gfn_to_index to x86 specific code
The gfn_to_index function relies on huge page defines which either may
not make sense on systems that don't support huge pages or are defined
in an inconvenient way for other architectures. Since this is
x86-specific, move the function to arch/x86/include/asm/kvm_host.h.
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
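For reference, the function being moved is roughly the following (reproduced from memory, so treat it as a sketch):
static inline int gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
{
    /* KVM_HPAGE_GFN_SHIFT(PT_PAGE_TABLE_LEVEL) must be 0 */
    return (gfn >> KVM_HPAGE_GFN_SHIFT(level)) -
           (base_gfn >> KVM_HPAGE_GFN_SHIFT(level));
}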
Jonathan Austin [Thu, 26 Sep 2013 15:49:28 +0000 (16:49 +0100)]
KVM: ARM: Add support for Cortex-A7
This patch adds support for running Cortex-A7 guests on Cortex-A7 hosts.
As Cortex-A7 is architecturally compatible with A15, this patch is largely just
generalising existing code. Areas where 'implementation defined' behaviour
is identical for A7 and A15 are moved to allow them to be used by both cores.
The check to ensure that coprocessor register tables are sorted correctly is
also moved into 'common' code to avoid each new cpu doing its own check
(and possibly forgetting to do so!)
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Jonathan Austin [Thu, 26 Sep 2013 15:49:26 +0000 (16:49 +0100)]
KVM: ARM: fix the size of TTBCR_{T0SZ,T1SZ} masks
The T{0,1}SZ fields of TTBCR are 3 bits wide when using the long descriptor
format. Likewise, the T0SZ field of the HTCR is 3-bits. KVM currently
defines TTBCR_T{0,1}SZ as 3, not 7.
The T0SZ mask is used to calculate the value for the HTCR, both to pick out
TTBCR.T0SZ and mask off the equivalent field in the HTCR during
read-modify-write. The incorrect mask size causes the (UNKNOWN) reset value
of HTCR.T0SZ to leak into the calculated HTCR value. Linux will hang when
initializing KVM if HTCR's reset value has bit 2 set (sometimes the case on
A7/TC2).
Fixing T0SZ allows A7 cores to boot and T1SZ is also fixed for completeness.
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Jonathan Austin [Thu, 26 Sep 2013 15:49:27 +0000 (16:49 +0100)]
KVM: ARM: Fix calculation of virtual CPU ID
KVM does not have a notion of multiple clusters for CPUs, just a linear
array of CPUs. When using a system with cores in more than one cluster, the
current method for calculating the virtual MPIDR will leak the (physical)
cluster information into the virtual MPIDR. One effect of this is that
Linux under KVM fails to boot multiple CPUs that aren't in the 0th cluster.
This patch does away with exposing the real MPIDR fields in favour of simply
using the virtual CPU number (but preserving the U bit, as before).
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Arthur Chunqi Li [Mon, 16 Sep 2013 08:11:44 +0000 (16:11 +0800)]
KVM: nVMX: Fully support nested VMX preemption timer
This patch contains the following two changes:
1. Fix the bug in nested preemption timer support. If a vmexit from L2
to L0 occurs for a reason not emulated by L1, the preemption timer value
should be saved on such exits.
2. Add support for the "Save VMX-preemption timer value" VM-Exit control
to nVMX.
With this patch, nested VMX preemption timer features are fully
supported.
Signed-off-by: Arthur Chunqi Li <yzt356@gmail.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 2 Oct 2013 14:56:13 +0000 (16:56 +0200)]
KVM: mmu: change useless int return types to void
kvm_mmu initialization is mostly filling in function pointers, there is
no way for it to fail. Clean up unused return values.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Paolo Bonzini [Wed, 2 Oct 2013 14:56:12 +0000 (16:56 +0200)]
KVM: mmu: unify destroy_kvm_mmu with kvm_mmu_unload
They do the same thing, and destroy_kvm_mmu can be confused with
kvm_mmu_destroy.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Paolo Bonzini [Wed, 2 Oct 2013 14:56:11 +0000 (16:56 +0200)]
KVM: mmu: remove uninteresting MMU "new_cr3" callbacks
The new_cr3 MMU callback has been a wrapper for mmu_free_roots since commit
e676505 (KVM: MMU: Force cr3 reload with two dimensional paging on mov
cr3 emulation, 2012-07-08).
The commit message mentioned that "mmu_free_roots() is somewhat of an overkill,
but fixing that is more complicated and will be done after this minimal fix".
One year has passed, and no one really felt the need to do a different fix.
Wrap the call with a kvm_mmu_new_cr3 function for clarity, but remove the
callback.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Paolo Bonzini [Wed, 2 Oct 2013 14:56:10 +0000 (16:56 +0200)]
KVM: mmu: remove uninteresting MMU "free" callbacks
The free MMU callback has been a wrapper for mmu_free_roots since mmu_free_roots
itself was introduced (commit
17ac10a, [PATCH] KVM: MU: Special treatment
for shadow pae root pages, 2007-01-05), and has always been the same for all
MMU cases. Remove the indirection as it is useless.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Paolo Bonzini [Wed, 2 Oct 2013 14:06:16 +0000 (16:06 +0200)]
KVM: x86: only copy XSAVE state for the supported features
This makes the interface more deterministic for userspace, which can expect
(after configuring only the features it supports) to get exactly the same
state from the kernel, independent of the host CPU and kernel version.
Suggested-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Paolo Bonzini [Wed, 2 Oct 2013 14:06:15 +0000 (16:06 +0200)]
KVM: x86: prevent setting unsupported XSAVE states
A guest can still attempt to save and restore XSAVE states even if they
have been masked in CPUID leaf 0Dh. This usually is not visible to
the guest, but is still wrong: "Any attempt to set a reserved bit (as
determined by the contents of EAX and EDX after executing CPUID with
EAX=0DH, ECX= 0H) in XCR0 for a given processor will result in a #GP
exception".
The patch also performs the same checks as __kvm_set_xcr in KVM_SET_XSAVE.
This catches migration from newer to older kernel/processor before the
guest starts running.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
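The check being applied is conceptually the following (a sketch mirroring __kvm_set_xcr, using the XSTATE_* constant names of that era; not the literal patch):
static bool sketch_xcr0_valid(u64 xcr0, u64 supported_xcr0)
{
    if (!(xcr0 & XSTATE_FP))
        return false;        /* x87 state must always be enabled */
    if ((xcr0 & XSTATE_YMM) && !(xcr0 & XSTATE_SSE))
        return false;        /* AVX requires SSE */
    return !(xcr0 & ~supported_xcr0);  /* no reserved/unsupported bits */
}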
Paolo Bonzini [Wed, 2 Oct 2013 14:06:14 +0000 (16:06 +0200)]
KVM: x86: mask unsupported XSAVE entries from leaf 0Dh index 0
XSAVE entries that KVM does not support are reported by
KVM_GET_SUPPORTED_CPUID for leaf 0Dh index 0 if the host supports them;
they should be left out unless there is also hypervisor support for them.
Sub-leafs are correctly handled in supported_xcr0_bit, fix index 0
to match.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Andre Richter [Wed, 2 Oct 2013 10:23:26 +0000 (12:23 +0200)]
virt/kvm/iommu.c: Add leading zeros to device's BDF notation in debug messages
When KVM (de)assigns PCI(e) devices to VMs, a debug message is printed
including the BDF notation of the respective device. Currently, the BDF
notation does not have the commonly used leading zeros. This produces
messages like "assign device 0:1:8.0", which look strange at first sight.
The patch fixes this by exchanging the printk(KERN_DEBUG ...) with dev_info()
and also inserts "kvm" into the debug message, so that it is obvious where
the message comes from. Also reduces LoC.
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Andre Richter <andre.o.richter@gmail.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
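Illustratively (not the exact diff; the printk shown as "before" is a reconstruction):
#include <linux/pci.h>

static void sketch_log_assign(struct pci_dev *pdev)
{
    /* before: hand-rolled BDF, no leading zeros ("assign device 0:1:8.0") */
    printk(KERN_DEBUG "assign device %x:%x:%x.%x\n",
           pci_domain_nr(pdev->bus), pdev->bus->number,
           PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));

    /* after: the device core prints the properly formatted name,
     * e.g. "pci 0000:01:08.0: kvm assign device" */
    dev_info(&pdev->dev, "kvm assign device\n");
}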
Anup Patel [Mon, 30 Sep 2013 08:50:08 +0000 (14:20 +0530)]
KVM: Add documentation for KVM_ARM_PREFERRED_TARGET ioctl
To implement CPU=host we have added the KVM_ARM_PREFERRED_TARGET
vm ioctl, which provides the information user space needs to create
a VCPU matching the underlying host.
This patch documents the new KVM_ARM_PREFERRED_TARGET vm ioctl in
the KVM API documentation.
Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
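The intended userspace flow is roughly the following (a sketch, not QEMU code):
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int sketch_init_vcpu_as_host(int vm_fd, int vcpu_fd)
{
    struct kvm_vcpu_init init;

    memset(&init, 0, sizeof(init));
    /* Ask the VM which target best matches the host... */
    if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init) < 0)
        return -1;    /* older kernel: fall back to a fixed target */
    /* ...and init the vcpu with exactly that target. */
    return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
}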
Anup Patel [Mon, 30 Sep 2013 08:50:07 +0000 (14:20 +0530)]
ARM/ARM64: KVM: Implement KVM_ARM_PREFERRED_TARGET ioctl
For implementing CPU=host, we need a mechanism for querying the
preferred VCPU target type on the underlying host.
This patch implements the KVM_ARM_PREFERRED_TARGET vm ioctl, which
returns a struct kvm_vcpu_init instance containing information about
the preferred VCPU target type and the target-specific features
available for it.
Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Anup Patel [Mon, 30 Sep 2013 08:50:06 +0000 (14:20 +0530)]
ARM64: KVM: Implement kvm_vcpu_preferred_target() function
This patch implements the kvm_vcpu_preferred_target() function for
KVM ARM64, which will help us implement the KVM_ARM_PREFERRED_TARGET
ioctl for user space.
Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Anup Patel [Mon, 30 Sep 2013 08:50:05 +0000 (14:20 +0530)]
ARM: KVM: Implement kvm_vcpu_preferred_target() function
This patch implements the kvm_vcpu_preferred_target() function for
KVM ARM, which will help us implement the KVM_ARM_PREFERRED_TARGET
ioctl for user space.
Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Anup Patel [Wed, 11 Sep 2013 13:04:22 +0000 (18:34 +0530)]
KVM: ARM: Fix typo in comments of inject_abt()
Fix a very minor typo in the comments of inject_abt(), where we update
the fault status register for injecting a prefetch abort.
Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Paolo Bonzini [Wed, 25 Sep 2013 11:53:07 +0000 (13:53 +0200)]
KVM: Convert kvm_lock back to non-raw spinlock
In commit e935b8372cf8 ("KVM: Convert kvm_lock to raw_spinlock"),
the kvm_lock was made a raw lock. However, the kvm mmu_shrink()
function tries to grab the (non-raw) mmu_lock within the scope of
the raw locked kvm_lock being held. This leads to the following:
BUG: sleeping function called from invalid context at kernel/rtmutex.c:659
in_atomic(): 1, irqs_disabled(): 0, pid: 55, name: kswapd0
Preemption disabled at:[<ffffffffa0376eac>] mmu_shrink+0x5c/0x1b0 [kvm]
Pid: 55, comm: kswapd0 Not tainted 3.4.34_preempt-rt
Call Trace:
[<ffffffff8106f2ad>] __might_sleep+0xfd/0x160
[<ffffffff817d8d64>] rt_spin_lock+0x24/0x50
[<ffffffffa0376f3c>] mmu_shrink+0xec/0x1b0 [kvm]
[<ffffffff8111455d>] shrink_slab+0x17d/0x3a0
[<ffffffff81151f00>] ? mem_cgroup_iter+0x130/0x260
[<ffffffff8111824a>] balance_pgdat+0x54a/0x730
[<ffffffff8111fe47>] ? set_pgdat_percpu_threshold+0xa7/0xd0
[<ffffffff811185bf>] kswapd+0x18f/0x490
[<ffffffff81070961>] ? get_parent_ip+0x11/0x50
[<ffffffff81061970>] ? __init_waitqueue_head+0x50/0x50
[<ffffffff81118430>] ? balance_pgdat+0x730/0x730
[<ffffffff81060d2b>] kthread+0xdb/0xe0
[<ffffffff8106e122>] ? finish_task_switch+0x52/0x100
[<ffffffff817e1e94>] kernel_thread_helper+0x4/0x10
[<ffffffff81060c50>] ? __init_kthread_worker+0x
After the previous patch, kvm_lock need not be a raw spinlock anymore,
so change it back.
Reported-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: kvm@vger.kernel.org
Cc: gleb@redhat.com
Cc: jan.kiszka@siemens.com
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Tue, 10 Sep 2013 10:58:35 +0000 (12:58 +0200)]
KVM: protect kvm_usage_count with its own spinlock
The VM list need not be protected by a raw spinlock. Separate the
two so that kvm_lock can be made non-raw.
Cc: kvm@vger.kernel.org
Cc: gleb@redhat.com
Cc: jan.kiszka@siemens.com
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Tue, 10 Sep 2013 10:57:17 +0000 (12:57 +0200)]
KVM: cleanup (physical) CPU hotplug
Remove the useless argument, and do not do anything if there are no
VMs running at the time of the hotplug.
Cc: kvm@vger.kernel.org
Cc: gleb@redhat.com
Cc: jan.kiszka@siemens.com
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Gleb Natapov [Wed, 25 Sep 2013 09:51:36 +0000 (12:51 +0300)]
KVM: nVMX: Do not generate #DF if #PF happens during exception delivery into L2
If #PF happens during delivery of an exception into L2 and L1 also do
not have the page mapped in its shadow page table then L0 needs to
generate vmexit to L2 with original event in IDT_VECTORING_INFO, but
current code combines both exception and generates #DF instead. Fix that
by providing nVMX specific function to handle page faults during page
table walk that handles this case correctly.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Gleb Natapov [Wed, 25 Sep 2013 09:51:35 +0000 (12:51 +0300)]
KVM: nVMX: Check all exceptions for intercept during delivery to L2
All exceptions should be checked for intercept during delivery to L2,
but currently we check only #PF. Drop nested_run_pending while we are
at it, since an exception cannot be injected during vmentry anyway.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
[Renamed the nested_vmx_check_exception function. - Paolo]
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Gleb Natapov [Wed, 25 Sep 2013 09:51:34 +0000 (12:51 +0300)]
KVM: nVMX: Do not put exception that caused vmexit to IDT_VECTORING_INFO
If an exception causes a vmexit directly, it should not be reported in
IDT_VECTORING_INFO during the exit. For that we need to be able to
distinguish between an exception that is injected into the nested VM and
one that is reinjected because its delivery failed. Fortunately we
already have a mechanism to do so for nested SVM, so here we just use the
correct function to requeue exceptions and make sure that a reinjected
exception is not moved to IDT_VECTORING_INFO during vmexit emulation and
not re-checked for interception during delivery.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Gleb Natapov [Wed, 25 Sep 2013 09:51:33 +0000 (12:51 +0300)]
KVM: nVMX: Amend nested_run_pending logic
An EXIT_REASON_VMLAUNCH/EXIT_REASON_VMRESUME exit does not mean that the
nested VM will actually run during the next entry. Move the setting of
nested_run_pending closer to the vmentry emulation code and move its
clearing closer to vmexit to minimize the amount of code that will
erroneously run with nested_run_pending set.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Thomas Huth [Thu, 12 Sep 2013 08:33:49 +0000 (10:33 +0200)]
KVM: s390: Intercept SCK instruction
Interception of the SET CLOCK instruction is mandatory, so this patch
provides a simple handler for this instruction (by setting up the
"epoch" field in the sie_block).
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Thomas Huth [Thu, 12 Sep 2013 08:33:48 +0000 (10:33 +0200)]
KVM: s390: Implement TEST BLOCK
This patch provides a simple version for the mandatory TEST BLOCK
instruction interception, so that guests that use this instruction
do not crash anymore.
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Thomas Huth [Thu, 12 Sep 2013 08:33:47 +0000 (10:33 +0200)]
KVM: s390: Helper for converting real addresses to absolute
Added a separate helper function that translates guest real addresses
to guest absolute addresses by applying the prefix of the guest CPU.
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
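The conversion is roughly the following (a sketch of the new helper; the prefix field access is from memory):
static inline unsigned long sketch_real_to_abs(struct kvm_vcpu *vcpu,
                                               unsigned long gra)
{
    unsigned long prefix = vcpu->arch.sie_block->prefix;

    /* The 8k block at real address 0 and the 8k block at the prefix
     * address swap places; everything else maps 1:1. */
    if (gra < 2 * PAGE_SIZE)
        gra += prefix;
    else if (gra >= prefix && gra < prefix + 2 * PAGE_SIZE)
        gra -= prefix;
    return gra;
}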
Thomas Huth [Thu, 12 Sep 2013 08:33:46 +0000 (10:33 +0200)]
KVM: s390: Allow NULL parameter for kvm_s390_get_regs_rre
We're not always interested in both registers that are specified
for an RRE instruction. So allow NULL as parameter, too, to indicate
that we do not need the corresponding value.
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Thomas Huth [Thu, 12 Sep 2013 08:33:45 +0000 (10:33 +0200)]
KVM: s390: Lock kvm->srcu at the appropriate places
The kvm->srcu lock has to be held while accessing the memory of
guests and during certain other actions. This patch now adds
the locks to the __vcpu_run function so that all affected code
is protected now (and additionally to the KVM_S390_STORE_STATUS
ioctl, which can be called out-of-band and needs a separate lock).
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Thomas Huth [Thu, 12 Sep 2013 08:33:44 +0000 (10:33 +0200)]
KVM: s390: Push run loop into __vcpu_run
Moved the do-while loop from kvm_arch_vcpu_ioctl_run into __vcpu_run
and the calling of kvm_handle_sie_intercept() into vcpu_post_run()
(so we can add the srcu locks in a proper way in the next patch).
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Thomas Huth [Thu, 12 Sep 2013 08:33:43 +0000 (10:33 +0200)]
KVM: s390: Split up __vcpu_run into three parts
In preparation for the following patch (which will change the indentation
of __vcpu_run quite a bit), this patch puts most of the code from __vcpu_run
into separate functions. The first function handles the code that runs
before the SIE instruction and the other one handles the code that runs
afterwards.
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Thomas Huth [Thu, 12 Sep 2013 08:33:42 +0000 (10:33 +0200)]
KVM: s390: Remove dead "rerun vcpu" code
The need for SIE_INTERCEPT_RERUNVCPU was removed long ago, by the
following commit:
f7850c92884b40915001e332a0a33ed4f10158e8
[S390] remove kvm mmu reload on s390
Since the remainders are dead code, they are now removed by this patch.
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Raghavendra K T [Thu, 12 Sep 2013 07:30:11 +0000 (13:00 +0530)]
Documentation/kvm: Update cpuid documentation for steal time and pv eoi
Thanks Michael S Tsirkin for rewriting the description and suggestions.
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Jan Kiszka [Thu, 8 Aug 2013 14:26:33 +0000 (16:26 +0200)]
KVM: nVMX: Enable unrestricted guest mode support
Now that we provide EPT support, there is no reason to torture our
guests by hiding the relieving unrestricted guest mode feature. We just
need to relax CR0 checks for always-on bits as PE and PG can now be
switched off.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Jan Kiszka [Thu, 8 Aug 2013 14:26:31 +0000 (16:26 +0200)]
KVM: nVMX: Implement support for EFER saving on VM-exit
Implement and advertise VM_EXIT_SAVE_IA32_EFER. L0 traps EFER writes
unconditionally, so we always find the current L2 value in the
architectural state.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Jan Kiszka [Thu, 8 Aug 2013 14:26:29 +0000 (16:26 +0200)]
KVM: nVMX: Do not set identity page map for L2
Fiddling with CR3 for L2 is L1's job. It may set its own, different
identity map or simply leave it alone if unrestricted guest mode is
enabled. This also fixes reading back the current CR3 on L2 exits for
reporting it to L1.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Jan Kiszka [Tue, 3 Sep 2013 19:11:45 +0000 (21:11 +0200)]
KVM: nVMX: Replace kvm_set_cr0 with vmx_set_cr0 in load_vmcs12_host_state
kvm_set_cr0 performs checks on the state transition that may prevent
loading L1's cr0. For now we rely on the hardware to catch invalid
states loaded by L1 into its VMCS. Still, consistency checks on the host
state part of the VMCS on guest entry will have to be improved later on.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Radim Krčmář [Wed, 4 Sep 2013 20:32:24 +0000 (22:32 +0200)]
kvm: remove .done from struct kvm_async_pf
'.done' is used to mark the completion of 'async_pf_execute()', but
'cancel_work_sync()' returns true when the work was canceled, so we
use it instead.
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Thomas Huth [Mon, 9 Sep 2013 15:32:56 +0000 (17:32 +0200)]
KVM: Add documentation for kvm->srcu lock
This patch documents the kvm->srcu lock (using the information from
a mail which has been posted by Marcelo Tosatti to the kvm mailing
list some months ago, see the following URL for details:
http://www.mail-archive.com/kvm@vger.kernel.org/msg90040.html )
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Linus Torvalds [Mon, 23 Sep 2013 22:41:09 +0000 (15:41 -0700)]
Linux 3.12-rc2
Linus Torvalds [Mon, 23 Sep 2013 19:53:07 +0000 (12:53 -0700)]
Merge tag 'staging-3.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging
Pull staging fixes from Greg KH:
"Here are a number of small staging tree and iio driver fixes. Nothing
major, just lots of little things"
* tag 'staging-3.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging: (34 commits)
iio:buffer_cb: Add missing iio_buffer_init()
iio: Prevent race between IIO chardev opening and IIO device free
iio: fix: Keep a reference to the IIO device for open file descriptors
iio: Stop sampling when the device is removed
iio: Fix crash when scan_bytes is computed with active_scan_mask == NULL
iio: Fix mcp4725 dev-to-indio_dev conversion in suspend/resume
iio: Fix bma180 dev-to-indio_dev conversion in suspend/resume
iio: Fix tmp006 dev-to-indio_dev conversion in suspend/resume
iio: iio_device_add_event_sysfs() bugfix
staging: iio: ade7854-spi: Fix return value
staging:iio:hmc5843: Fix measurement conversion
iio: isl29018: Fix uninitialized value
staging:iio:dummy fix kfifo_buf kconfig dependency issue if kfifo modular and buffer enabled for built in dummy driver.
iio: at91: fix adc_clk overflow
staging: line6: add bounds check in snd_toneport_source_put()
Staging: comedi: Fix dependencies for drivers misclassified as PCI
staging: r8188eu: Adjust RX gain
staging: r8188eu: Fix smatch warning in core/rtw_ieee80211.
staging: r8188eu: Fix smatch error in core/rtw_mlme_ext.c
staging: r8188eu: Fix Smatch off-by-one warning in hal/rtl8188e_hal_init.c
...
Linus Torvalds [Mon, 23 Sep 2013 19:52:35 +0000 (12:52 -0700)]
Merge tag 'usb-3.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb
Pull USB fixes from Greg KH:
"Here are a number of small USB fixes for 3.12-rc2.
One is a revert of a EHCI change that isn't quite ready for 3.12.
Others are minor things, gadget fixes, Kconfig fixes, and some quirks
and documentation updates.
All have been in linux-next for a bit"
* tag 'usb-3.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb:
USB: pl2303: distinguish between original and cloned HX chips
USB: Faraday fotg210: fix email addresses
USB: fix typo in usb serial simple driver Kconfig
Revert "USB: EHCI: support running URB giveback in tasklet context"
usb: s3c-hsotg: do not disconnect gadget when receiving ErlySusp intr
usb: s3c-hsotg: fix unregistration function
usb: gadget: f_mass_storage: reset endpoint driver data when disabled
usb: host: fsl-mph-dr-of: Staticize local symbols
usb: gadget: f_eem: Staticize eem_alloc
usb: gadget: f_ecm: Staticize ecm_alloc
usb: phy: omap-usb3: Fix return value
usb: dwc3: gadget: avoid memory leak when failing to allocate all eps
usb: dwc3: remove extcon dependency
usb: gadget: add '__ref' for rndis_config_register() and cdc_config_register()
usb: dwc3: pci: add support for BayTrail
usb: gadget: cdc2: fix conversion to new interface of f_ecm
usb: gadget: fix a bug and a WARN_ON in dummy-hcd
usb: gadget: mv_u3d_core: fix violation of locking discipline in mv_u3d_ep_disable()
Linus Torvalds [Mon, 23 Sep 2013 02:51:49 +0000 (19:51 -0700)]
Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux
Pull drm fixes from Dave Airlie:
- some small fixes for msm and exynos
- a regression revert affecting nouveau users with old userspace
- intel pageflip deadlock and gpu hang fixes, hsw modesetting hangs
* 'drm-fixes' of git://people.freedesktop.org/~airlied/linux: (22 commits)
Revert "drm: mark context support as a legacy subsystem"
drm/i915: Don't enable the cursor on a disable pipe
drm/i915: do not update cursor in crtc mode set
drm/exynos: fix return value check in lowlevel_buffer_allocate()
drm/exynos: Fix address space warnings in exynos_drm_fbdev.c
drm/exynos: Fix address space warning in exynos_drm_buf.c
drm/exynos: Remove redundant OF dependency
drm/msm: drop unnecessary set_need_resched()
drm/i915: kill set_need_resched
drm/msm: fix potential NULL pointer dereference
drm/i915/dvo: set crtc timings again for panel fixed modes
drm/i915/sdvo: Robustify the dtd<->drm_mode conversions
drm/msm: workaround for missing irq
drm/msm: return -EBUSY if bo still active
drm/msm: fix return value check in ERR_PTR()
drm/msm: fix cmdstream size check
drm/msm: hangcheck harder
drm/msm: handle read vs write fences
drm/i915/sdvo: Fully translate sync flags in the dtd->mode conversion
drm/i915: Use proper print format for debug prints
...
Linus Torvalds [Sun, 22 Sep 2013 22:00:11 +0000 (15:00 -0700)]
Merge branch 'for-3.12/core' of git://git.kernel.dk/linux-block
Pull block IO fixes from Jens Axboe:
"After merge window, no new stuff this time only a collection of neatly
confined and simple fixes"
* 'for-3.12/core' of git://git.kernel.dk/linux-block:
cfq: explicitly use 64bit divide operation for 64bit arguments
block: Add nr_bios to block_rq_remap tracepoint
If the queue is dying then we only call the rq->end_io callout. This leaves bios setup on the request, because the caller assumes when the blk_execute_rq_nowait/blk_execute_rq call has completed that the rq->bios have been cleaned up.
bio-integrity: Fix use of bs->bio_integrity_pool after free
blkcg: relocate root_blkg setting and clearing
block: Convert kmalloc_node(...GFP_ZERO...) to kzalloc_node(...)
block: trace all devices plug operation
Linus Torvalds [Sun, 22 Sep 2013 21:58:49 +0000 (14:58 -0700)]
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs
Pull btrfs fixes from Chris Mason:
"These are mostly bug fixes and a two small performance fixes. The
most important of the bunch are Josef's fix for a snapshotting
regression and Mark's update to fix compile problems on arm"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs: (25 commits)
Btrfs: create the uuid tree on remount rw
btrfs: change extent-same to copy entire argument struct
Btrfs: dir_inode_operations should use btrfs_update_time also
btrfs: Add btrfs: prefix to kernel log output
btrfs: refuse to remount read-write after abort
Btrfs: btrfs_ioctl_default_subvol: Revert back to toplevel subvolume when arg is 0
Btrfs: don't leak transaction in btrfs_sync_file()
Btrfs: add the missing mutex unlock in write_all_supers()
Btrfs: iput inode on allocation failure
Btrfs: remove space_info->reservation_progress
Btrfs: kill delay_iput arg to the wait_ordered functions
Btrfs: fix worst case calculator for space usage
Revert "Btrfs: rework the overcommit logic to be based on the total size"
Btrfs: improve replacing nocow extents
Btrfs: drop dir i_size when adding new names on replay
Btrfs: replay dir_index items before other items
Btrfs: check roots last log commit when checking if an inode has been logged
Btrfs: actually log directory we are fsync()'ing
Btrfs: actually limit the size of delalloc range
Btrfs: allocate the free space by the existed max extent size when ENOSPC
...
Anatol Pomozov [Sun, 22 Sep 2013 18:43:47 +0000 (12:43 -0600)]
cfq: explicitly use 64bit divide operation for 64bit arguments
'samples' is a 64-bit operand, but do_div()'s second parameter is
32 bits. do_div() silently truncates the high 32 bits and the
calculated result is invalid.
If the low 32 bits of 'samples' are zero, then do_div() produces a
kernel crash.
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
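The difference, sketched (do_div() takes a 32-bit divisor, div64_u64() keeps both operands at 64 bits):
#include <linux/math64.h>

static u64 sketch_average(u64 total, u64 samples)
{
    if (!samples)
        return 0;
    /* do_div(total, samples) would truncate 'samples' to 32 bits and
     * divide by zero if its low 32 bits happen to be zero. */
    return div64_u64(total, samples);
}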
Greg Kroah-Hartman [Sat, 21 Sep 2013 23:45:36 +0000 (16:45 -0700)]
Merge tag 'iio-fixes-for-3.12a' of git://git.kernel.org/pub/scm/linux/kernel/git/jic23/iio into staging-linus
Jonathan writes:
First round of IIO fixes for 3.12
A series of wrong 'struct dev' assumptions in suspend/resume callbacks
following on from this issue being identified in a new driver review.
One to watch out for in future.
A number of driver specific fixes
1) at91 - fix a overflow in clock rate computation
2) dummy - Kconfig dependency issue
3) isl29018 - uninitialized value
4) hmc5843 - measurement conversion bug introduced by recent cleanup.
5) ade7854-spi - wrong return value.
Some IIO core fixes
1) Wrong value picked up for event code creation for a modified channel
2) A null dereference on failure to initialize a buffer after no buffer has
been in use, when using the available_scan_masks approach.
3) Sampling not stopped when a device is removed. Affects forced removal
such as hot unplugging.
4) Prevent device going away if a chrdev is still open in userspace.
5) Prevent race on chardev opening and device being freed.
6) Add a missing iio_buffer_init in the call back buffer.
These last few are the first part of a set from Lars-Peter Clausen who
has been taking a closer look at our removal paths and buffer handling
than anyone has for quite some time.
Linus Torvalds [Sat, 21 Sep 2013 22:59:41 +0000 (15:59 -0700)]
Merge tag 'nfs-for-3.12-3' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
Pull NFS client bugfix from Trond Myklebust:
"Fix a regression due to incorrect sharing of gss auth caches"
* tag 'nfs-for-3.12-3' of git://git.linux-nfs.org/projects/trondmy/linux-nfs:
RPCSEC_GSS: fix crash on destroying gss auth
Jun'ichi Nomura [Sat, 21 Sep 2013 19:57:47 +0000 (13:57 -0600)]
block: Add nr_bios to block_rq_remap tracepoint
Adding the number of bios in a remapped request to 'block_rq_remap'
tracepoint.
Request remapper clones bios in a request to track the completion
status of each bio. So the number of bios can be useful information
for investigation.
Related discussions:
http://www.redhat.com/archives/dm-devel/2013-August/msg00084.html
http://www.redhat.com/archives/dm-devel/2013-September/msg00024.html
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Acked-by: Mike Snitzer <snitzer@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Josef Bacik [Sat, 21 Sep 2013 02:33:20 +0000 (22:33 -0400)]
Btrfs: create the uuid tree on remount rw
Users have been complaining of the uuid tree stuff warning that there is no uuid
root when trying to do snapshot operations. This is because if you mount -o ro
we will not create the uuid tree. But then if you mount -o rw,remount we will
still not create it and then any subsequent snapshot/subvol operations you try
to do will fail gloriously. Fix this by creating the uuid_root on remount rw if
it was not already there. Thanks,
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
Mark Fasheh [Tue, 17 Sep 2013 22:43:54 +0000 (15:43 -0700)]
btrfs: change extent-same to copy entire argument struct
btrfs_ioctl_file_extent_same() uses __put_user_unaligned() to copy some data
back to its argument struct. Unfortunately, not all architectures provide
__put_user_unaligned(), so builds break on them if btrfs is selected.
Instead, just copy the whole struct in / out at the start and end of
operations, respectively.
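A hedged sketch of the copy-in/copy-out pattern described above; the struct
layout and names below are invented for illustration and are not the btrfs
definitions:

#include <linux/types.h>
#include <linux/uaccess.h>
#include <linux/errno.h>

/* Hypothetical, simplified argument layout -- for illustration only. */
struct example_same_args {
	__u64 logical_offset;
	__u64 length;
	__u64 bytes_deduped;	/* result field, written back to user space */
	__s32 status;		/* result field, written back to user space */
	__s32 reserved;
};

static long example_extent_same(void __user *argp)
{
	struct example_same_args args;

	/* Copy the whole struct in once at the start... */
	if (copy_from_user(&args, argp, sizeof(args)))
		return -EFAULT;

	/* ... do the dedup work and fill in the result fields ... */
	args.bytes_deduped = args.length;
	args.status = 0;

	/* ... and copy the whole struct back out once at the end, avoiding
	 * per-field __put_user_unaligned() calls entirely. */
	if (copy_to_user(argp, &args, sizeof(args)))
		return -EFAULT;

	return 0;
}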
Signed-off-by: Mark Fasheh <mfasheh@suse.de>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
Guangyu Sun [Mon, 16 Sep 2013 17:42:03 +0000 (10:42 -0700)]
Btrfs: dir_inode_operations should use btrfs_update_time also
Commit 2bc5565286121d2a77ccd728eb3484dff2035b58 (Btrfs: don't update atime on
RO subvolumes) ensures that the access time of an inode is not updated when
the inode lives in a read-only subvolume.
However, if a directory on a read-only subvolume is accessed, the atime is
updated. This results in a write operation to a read-only subvolume. I
believe that access times should never be updated on read-only subvolumes.
To reproduce:
# mkfs.btrfs -f /dev/dm-3
(...)
# mount /dev/dm-3 /mnt
# btrfs subvol create /mnt/sub
Create subvolume '/mnt/sub'
# mkdir /mnt/sub/dir
# echo "abc" > /mnt/sub/dir/file
# btrfs subvol snapshot -r /mnt/sub /mnt/rosnap
Create a readonly snapshot of '/mnt/sub' in '/mnt/rosnap'
# stat /mnt/rosnap/dir
File: `/mnt/rosnap/dir'
Size: 8 Blocks: 0 IO Block: 4096 directory
Device: 16h/22d Inode: 257 Links: 1
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2013-09-11 07:21:49.389157126 -0400
Modify: 2013-09-11 07:22:02.330156079 -0400
Change: 2013-09-11 07:22:02.330156079 -0400
# ls /mnt/rosnap/dir
file
# stat /mnt/rosnap/dir
File: `/mnt/rosnap/dir'
Size: 8 Blocks: 0 IO Block: 4096 directory
Device: 16h/22d Inode: 257 Links: 1
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2013-09-11 07:22:56.797151670 -0400
Modify: 2013-09-11 07:22:02.330156079 -0400
Change: 2013-09-11 07:22:02.330156079 -0400
Reported-by: Koen De Wit <koen.de.wit@oracle.com>
Signed-off-by: Guangyu Sun <guangyu.sun@oracle.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
Frank Holton [Fri, 13 Sep 2013 15:46:50 +0000 (11:46 -0400)]
btrfs: Add btrfs: prefix to kernel log output
The kernel log entries for device label %s and device fsid %pU
are missing the btrfs: prefix. Add those here.
Signed-off-by: Frank Holton <fholton@gmail.com>
Reviewed-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
David Sterba [Fri, 13 Sep 2013 15:41:20 +0000 (17:41 +0200)]
btrfs: refuse to remount read-write after abort
It's still possible to flip the filesystem into RW mode after it's
remounted RO due to an abort. There are lots of places that check for
the superblock error bit and will not write data, but we should not let
the filesystem appear read-write.
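A rough sketch of such a guard in a remount path; the error-state flag and
names are assumptions, not the btrfs implementation:

#include <linux/fs.h>
#include <linux/errno.h>
#include <linux/printk.h>

/*
 * Illustrative remount-time guard: once the filesystem has recorded an
 * abort/error, a ro -> rw transition is refused even though the
 * superblock flags would otherwise allow it.
 */
static int example_remount_guard(struct super_block *sb, int *flags,
				 bool fs_aborted)
{
	bool going_rw = (sb->s_flags & MS_RDONLY) && !(*flags & MS_RDONLY);

	if (going_rw && fs_aborted) {
		pr_err("example: cannot remount read-write after an abort\n");
		return -EINVAL;
	}
	return 0;
}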
Signed-off-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
chandan [Fri, 13 Sep 2013 14:04:10 +0000 (19:34 +0530)]
Btrfs: btrfs_ioctl_default_subvol: Revert back to toplevel subvolume when arg is 0
This patch makes it possible to set BTRFS_FS_TREE_OBJECTID as the default
subvolume by passing a subvolume id of 0.
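Roughly, the handler maps a passed-in id of 0 to the top-level tree before
doing the lookup; a minimal sketch with illustrative names:

#include <linux/types.h>
#include <linux/uaccess.h>
#include <linux/errno.h>

/* Stand-in for BTRFS_FS_TREE_OBJECTID, the id of the top-level tree. */
#define EXAMPLE_FS_TREE_OBJECTID 5ULL

/*
 * Sketch of the special case only (not the full ioctl handler): a
 * user-supplied subvolume id of 0 now means "the top-level subvolume".
 */
static int example_resolve_default_subvol(u64 __user *argp, u64 *objectid)
{
	if (copy_from_user(objectid, argp, sizeof(*objectid)))
		return -EFAULT;

	if (*objectid == 0)
		*objectid = EXAMPLE_FS_TREE_OBJECTID;

	return 0;
}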
Signed-off-by: chandan <chandan@linux.vnet.ibm.com>
Reviewed-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
Filipe David Borba Manana [Wed, 11 Sep 2013 19:36:44 +0000 (20:36 +0100)]
Btrfs: don't leak transaction in btrfs_sync_file()
In btrfs_sync_file(), if the call to btrfs_log_dentry_safe() returns
a negative error (e.g. -ENOMEM via btrfs_log_inode()), we would
return without ending/freeing the transaction.
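The rule the fix enforces, sketched with stand-in names rather than the
actual btrfs calls:

#include <linux/errno.h>

/* Stand-ins for the real transaction handle and helpers. */
struct example_txn;
static void example_end_txn(struct example_txn *t) { }
static int example_log_dentry(struct example_txn *t) { return -ENOMEM; }

/*
 * Every error exit taken after the transaction has been started must
 * still end/free that transaction.
 */
static int example_sync_path(struct example_txn *trans)
{
	int ret = example_log_dentry(trans);

	if (ret < 0) {
		example_end_txn(trans);	/* this was missing on the error path */
		return ret;
	}

	/* ... sync the log ... */
	example_end_txn(trans);
	return 0;
}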
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
Stefan Behrens [Wed, 11 Sep 2013 07:59:22 +0000 (09:59 +0200)]
Btrfs: add the missing mutex unlock in write_all_supers()
The BUG() was replaced by btrfs_error() and return -EIO with the
patch "get rid of one BUG() in write_all_supers()", but the missing
mutex_unlock() was overlooked.
The 0-DAY kernel build service from Intel reported the missing
unlock which was found by the coccinelle tool:
fs/btrfs/disk-io.c:3422:2-8: preceding lock on line 3374
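The pattern being fixed, sketched generically (the real lock and function
are btrfs-internal):

#include <linux/mutex.h>
#include <linux/types.h>
#include <linux/errno.h>

static DEFINE_MUTEX(example_lock);

/*
 * Sketch: when a BUG() is converted into an error return, every exit
 * taken while the lock is held must drop the lock first.
 */
static int example_write_supers(bool io_failed)
{
	mutex_lock(&example_lock);

	if (io_failed) {
		mutex_unlock(&example_lock);	/* the unlock that was missing */
		return -EIO;
	}

	/* ... write the superblocks ... */
	mutex_unlock(&example_lock);
	return 0;
}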
Signed-off-by: Stefan Behrens <sbehrens@giantdisaster.de>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
Josef Bacik [Tue, 17 Sep 2013 15:25:44 +0000 (11:25 -0400)]
Btrfs: iput inode on allocation failure
We don't do the iput when we fail to allocate our delayed delalloc work in
__start_delalloc_inodes, fix this.
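Sketched generically, the point is that the reference taken on the inode
must be dropped on the allocation-failure path:

#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/errno.h>

struct example_delalloc_work { struct inode *inode; };	/* illustrative */

/*
 * Sketch of the failure path: the reference taken with igrab() must be
 * dropped with iput() if the work item cannot be allocated.
 */
static int example_queue_delalloc(struct inode *inode)
{
	struct example_delalloc_work *work;

	inode = igrab(inode);
	if (!inode)
		return 0;	/* inode is being torn down, nothing to do */

	work = kmalloc(sizeof(*work), GFP_NOFS);
	if (!work) {
		iput(inode);	/* this iput was the missing piece */
		return -ENOMEM;
	}

	work->inode = inode;
	/* ... queue the work item ... */
	return 0;
}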
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
Josef Bacik [Tue, 17 Sep 2013 15:02:25 +0000 (11:02 -0400)]
Btrfs: remove space_info->reservation_progress
This isn't used for anything anymore, just remove it.
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
Josef Bacik [Tue, 17 Sep 2013 14:55:51 +0000 (10:55 -0400)]
Btrfs: kill delay_iput arg to the wait_ordered functions
This is a leftover from how we used to wait for ordered extents, which was to
grab the inode and then run filemap flush on it. However if we have an ordered
extent then we already are holding a ref on the inode, and we just use
btrfs_start_ordered_extent anyway, so there is no reason to have an extra ref on
the inode to start work on the ordered extent. Thanks,
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
Josef Bacik [Tue, 17 Sep 2013 14:50:06 +0000 (10:50 -0400)]
Btrfs: fix worst case calculator for space usage
Forever ago I made the worst case calculator say that we could potentially split
into 3 blocks for every level on the way down, which isn't right. If we split,
we're only going to get two new blocks: the one we originally cow'ed and the
new one we split into. Thanks,
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
Josef Bacik [Tue, 17 Sep 2013 14:48:00 +0000 (10:48 -0400)]
Revert "Btrfs: rework the overcommit logic to be based on the total size"
This reverts commit 70afa3998c9baed4186df38988246de1abdab56d. It is causing
performance issues and wasn't actually correct. There were problems with the
way we flushed delalloc and that was the real cause of the early enospc.
Thanks,
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
Josef Bacik [Thu, 12 Sep 2013 20:58:28 +0000 (16:58 -0400)]
Btrfs: improve replacing nocow extents
Various people have hit a deadlock when running btrfs/011. This is because when
replacing nocow extents we will take the i_mutex to make sure nobody messes with
the file while we are replacing the extent. The problem is we are already
holding a transaction open, which is a locking inversion, so instead we need to
save these inodes we find and then process them outside of the transaction.
Further we can't just lock the inode and assume we are good to go. We need to
lock the extent range and then read back the extent cache for the inode to make
sure the extent really still points at the physical block we want. If it
doesn't we don't have to copy it. Thanks,
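One way to picture the restructuring, with made-up helper names (this is not
the device-replace code itself): record the inodes while the transaction is
open, then revisit them once it has ended:

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/fs.h>

struct example_nocow_ref {		/* illustrative bookkeeping record */
	struct list_head list;
	struct inode *inode;
	u64 offset;
};

/* Phase 1 (transaction held): just remember the inode, don't lock it. */
static int example_record_inode(struct list_head *found, struct inode *inode,
				u64 offset)
{
	struct example_nocow_ref *ref = kmalloc(sizeof(*ref), GFP_NOFS);

	if (!ref)
		return -ENOMEM;
	ref->inode = igrab(inode);
	if (!ref->inode) {
		kfree(ref);
		return 0;	/* inode already going away */
	}
	ref->offset = offset;
	list_add_tail(&ref->list, found);
	return 0;
}

/* Phase 2 (transaction ended): now it is safe to take i_mutex, lock the
 * extent range, re-check that it still maps to the block being replaced,
 * and only then copy it. */
static void example_process_found(struct list_head *found)
{
	struct example_nocow_ref *ref, *tmp;

	list_for_each_entry_safe(ref, tmp, found, list) {
		/* ... lock, re-check the mapping, copy if still needed ... */
		iput(ref->inode);
		list_del(&ref->list);
		kfree(ref);
	}
}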
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
Josef Bacik [Wed, 11 Sep 2013 18:17:00 +0000 (14:17 -0400)]
Btrfs: drop dir i_size when adding new names on replay
So if we have dir_index items in the log that means we also have the inode item
as well, which means that the inode's i_size is correct. However when we
process dir_index items we call btrfs_add_link() which will increase the
directory's i_size for the new entry. To fix this we need to just set the
directory's i_size to 0, and then as we find dir_index items we adjust the i_size.
btrfs_add_link() will do it for new entries, and if the entry already exists we
can just add the name_len to the i_size ourselves. Thanks,
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
Josef Bacik [Wed, 11 Sep 2013 15:57:23 +0000 (11:57 -0400)]
Btrfs: replay dir_index items before other items
A user reported a bug where his log would not replay because he was getting
-EEXIST back. This was because he had a file moved into a directory that was
logged. What happens is the file had a lower inode number, and so it is
processed first when replaying the log, and so we add the inode ref in for the
directory it was moved to. But then we process the directory's DIR_INDEX item
and try to add the inode ref for that inode and it fails because we already
added it when we replayed the inode. To solve this problem we need to just
process any DIR_INDEX items we have in the log first so this all is taken care
of, and then we can replay the rest of the items. With this patch my reproducer
can remount the file system properly instead of erroring out. Thanks,
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
Josef Bacik [Wed, 11 Sep 2013 13:55:42 +0000 (09:55 -0400)]
Btrfs: check roots last log commit when checking if an inode has been logged
Liu introduced a local copy of the last log commit for an inode to make sure we
actually log an inode even if a log commit has already taken place. In order to
make sure we didn't relog the same inode multiple times he set this local copy
to the current trans when we log the inode, because usually we log the inode and
then sync the log. The exception to this is during rename, we will relog an
inode if the name changed and it is already in the log. The problem with this
is then we go to sync the inode, and our check to see if the inode has already
been logged is tripped and we don't sync the log. To fix this we need to _also_
check against the root's last log commit, because it could be less than what is
in our local copy of the log commit. This fixes a bug where we rename a file
into a directory and then fsync the directory and then on remount the directory
is no longer there. Thanks,
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
Josef Bacik [Wed, 11 Sep 2013 13:36:30 +0000 (09:36 -0400)]
Btrfs: actually log directory we are fsync()'ing
If you just create a directory and then fsync that directory and then pull the
power plug you will come back up and the directory will not be there. That is
because we won't actually create directories if we've logged files inside of
them since they will be created on replay, but in this check we will set our
logged_trans of our current directory if it happens to be a directory, making us
think it doesn't need to be logged. Fix the logic to only do this to parent
directories. Thanks,
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
Josef Bacik [Fri, 30 Aug 2013 18:38:49 +0000 (14:38 -0400)]
Btrfs: actually limit the size of delalloc range
So forever we have had this thing to limit the number of delalloc pages we'll
set up to be written out to 128mb. This is because we have to lock all the pages
in this range, so anything above this gets a bit unwieldy, and also without a
limit we'll happily allocate gigantic chunks of disk space. Turns out our check
for this wasn't quite right, we wouldn't actually limit the chunk we wanted to
write out, we'd just stop looking for more space after we went over the limit.
So if you do a giant 20gb dd on my box with lots of ram I could get 2gig
extents. This is fine normally, except when you go to relocate these extents
and we can't find enough space to relocate these monster extents, since we have
to be able to allocate exactly the same sized extent to move it around. So fix
this by actually enforcing the limit. With this patch I'm no longer seeing
giant 1.5gb extents. Thanks,
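In effect the fix clamps the range that is handed back, rather than merely
stopping the search once the limit has been crossed; roughly (illustrative,
not the btrfs function):

#include <linux/kernel.h>
#include <linux/types.h>

#define EXAMPLE_MAX_DELALLOC	(128ULL * 1024 * 1024)	/* 128MB per chunk */

/*
 * Sketch of the clamping: the end of the delalloc range handed back to
 * the writer never exceeds the per-chunk limit.
 */
static u64 example_clamp_delalloc_end(u64 start, u64 found_end)
{
	return min_t(u64, found_end, start + EXAMPLE_MAX_DELALLOC - 1);
}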
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
Miao Xie [Mon, 9 Sep 2013 05:19:42 +0000 (13:19 +0800)]
Btrfs: allocate the free space by the existed max extent size when ENOSPC
With the current code, if the requested size is very large and all the extents
in the free space cache are small, we waste lots of CPU time cutting the
requested size in half and searching the cache again and again until it gets
down to a size the allocator can return. In fact, we know the max extent
size in the cache after the first search, so we needn't cut the size in half
repeatedly; we can just use the max extent size directly. This saves
lots of CPU time and improves performance when there are only fragments
in the free space cache.
According to my test, if there are only 4KB free space extents in the fs,
and the total size of those extents is 256MB, we can reduce the execution
time of the following test from 5.4s to 1.4s.
dd if=/dev/zero of=<testfile> bs=1MB count=1 oflag=sync
Changelog v2 -> v3:
- fix the problem where we skip block groups whose free space is smaller
than what we need.
Changelog v1 -> v2:
- address the problem where we return a wrong start position when searching
for free space in a bitmap.
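The idea, sketched outside of btrfs with made-up helpers: remember the
largest free extent seen by a failed search and retry with that size
directly instead of halving repeatedly:

#include <linux/types.h>
#include <linux/errno.h>

/*
 * Stub standing in for the cache search: it "fails" and reports the
 * largest free extent it saw. Names and behaviour are illustrative.
 */
static int example_find_free_extent(u64 want, u64 *max_seen)
{
	*max_seen = 4096;	/* pretend only 4KB extents exist */
	return -ENOSPC;
}

static int example_allocate(u64 want, u64 min_alloc)
{
	u64 max_seen = 0;
	int ret;

	while (want >= min_alloc) {
		ret = example_find_free_extent(want, &max_seen);
		if (ret != -ENOSPC)
			return ret;
		/*
		 * Old behaviour: halve 'want' and retry, possibly many times.
		 * New behaviour: jump straight to the largest extent the
		 * previous pass actually found in the cache.
		 */
		if (!max_seen || max_seen >= want)
			break;
		want = max_seen;
	}
	return -ENOSPC;
}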
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>