path: root/drivers/kvm/mmu.c
author    Avi Kivity <avi@qumranet.com>    2007-05-01 13:44:05 (GMT)
committer Avi Kivity <avi@qumranet.com>    2007-07-16 09:05:39 (GMT)
commit    fce0657ff9f14f6b1f147b5fcd6db2f54c06424e (patch)
tree      e2a1a101f5f77894674738476cb5808327c03f0c    /drivers/kvm/mmu.c
parent    09072daf37abbfe8e2d5018dd913f229c76190f7 (diff)
download  linux-fsl-qoriq-fce0657ff9f14f6b1f147b5fcd6db2f54c06424e.tar.xz
KVM: MMU: Respect nonpae pagetable quadrant when zapping ptes
When a guest writes to a page that has an mmu shadow, we have to clear the shadow pte corresponding to the memory location touched by the guest.

Now, in nonpae mode, a single guest page may have two or four shadow pages (because a nonpae page maps 4MB or 4GB, whereas the pae shadow maps 2MB or 1GB), so when we look up the page we find up to three additional aliases for it. Since we _clear_ the shadow pte, it doesn't matter except for a slight performance penalty, but if we want to _update_ the shadow pte instead of clearing it, it is vital that we don't modify the aliases.

Fortunately, exactly which page is needed (the "quadrant") is easily computed, and is accessible in the shadow page header. All we need is to ignore shadow pages from the wrong quadrants.

Signed-off-by: Avi Kivity <avi@qumranet.com>
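For illustration only (this sketch is not part of the patch, and the helper name and standalone defines are made up): each 4-byte nonpae guest pte becomes an 8-byte pae shadow pte, so the byte offset of the guest write doubles when translated to the shadow page, and whatever spills past PAGE_SIZE selects the quadrant. A minimal userspace sketch of that arithmetic:

/*
 * Illustrative sketch, not kernel code: which shadow-page quadrant does a
 * nonpae guest pte write land in?  Assumes 4KB pages; the helper name
 * nonpae_pte_quadrant() is hypothetical.
 */
#include <stdio.h>

#define PAGE_SHIFT 12

static unsigned nonpae_pte_quadrant(unsigned gpte_offset)
{
	unsigned shadow_offset = gpte_offset << 1;	/* 4-byte guest pte -> 8-byte shadow pte */

	return shadow_offset >> PAGE_SHIFT;		/* 0 or 1 for a guest page table */
}

int main(void)
{
	/* Guest pte index 511 (offset 0x7fc) still falls in quadrant 0 ... */
	printf("pte 511 -> quadrant %u\n", nonpae_pte_quadrant(511 * 4));
	/* ... while guest pte index 512 (offset 0x800) lands in quadrant 1. */
	printf("pte 512 -> quadrant %u\n", nonpae_pte_quadrant(512 * 4));
	return 0;
}

For a guest page directory the offset is doubled a second time in kvm_mmu_pte_write() (a nonpae pde covers 4MB versus 2MB for a pae shadow pde), which is how the quadrant can range up to 3.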
Diffstat (limited to 'drivers/kvm/mmu.c')
-rw-r--r--    drivers/kvm/mmu.c    4
1 files changed, 4 insertions, 0 deletions
diff --git a/drivers/kvm/mmu.c b/drivers/kvm/mmu.c
index b3a83ef..23dc461 100644
--- a/drivers/kvm/mmu.c
+++ b/drivers/kvm/mmu.c
@@ -1150,6 +1150,7 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 	unsigned pte_size;
 	unsigned page_offset;
 	unsigned misaligned;
+	unsigned quadrant;
 	int level;
 	int flooded = 0;
 	int npte;
@@ -1202,7 +1203,10 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 				page_offset <<= 1;
 				npte = 2;
 			}
+			quadrant = page_offset >> PAGE_SHIFT;
 			page_offset &= ~PAGE_MASK;
+			if (quadrant != page->role.quadrant)
+				continue;
 		}
 		spte = __va(page->page_hpa);
 		spte += page_offset / sizeof(*spte);