KVM: s390: Enable KVM_GENERIC_MMU_NOTIFIER

Enable KVM_GENERIC_MMU_NOTIFIER, for now with empty placeholder callbacks.

Also enable KVM_MMU_LOCKLESS_AGING and define KVM_HAVE_MMU_RWLOCK.

Acked-by: Christian Borntraeger <borntraeger@linux.ibm.com>
Reviewed-by: Steffen Eiden <seiden@linux.ibm.com>
Reviewed-by: Christoph Schlameuss <schlameuss@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Author: Claudio Imbrenda
Date:   2026-02-04 16:02:39 +01:00
Commit: a2f2798fa6 (parent f0e1ca6cc3)

3 changed files with 47 additions and 1 deletion

@@ -27,6 +27,7 @@
 #include <asm/isc.h>
 #include <asm/guarded_storage.h>
+#define KVM_HAVE_MMU_RWLOCK
 #define KVM_MAX_VCPUS 255
 #define KVM_INTERNAL_MEM_SLOTS 1

@@ -30,6 +30,8 @@ config KVM
 	select KVM_VFIO
 	select MMU_NOTIFIER
 	select VIRT_XFER_TO_GUEST_WORK
+	select KVM_GENERIC_MMU_NOTIFIER
+	select KVM_MMU_LOCKLESS_AGING
 	help
 	  Support hosting paravirtualized guest machines using the SIE
 	  virtualization capability on the mainframe. This should work

@@ -4805,7 +4805,7 @@ try_again:
 	rc = fixup_user_fault(vcpu->arch.gmap->mm, vmaddr, fault_flags, &unlocked);
 	if (!rc)
 		rc = __gmap_link(vcpu->arch.gmap, gaddr, vmaddr);
-	scoped_guard(spinlock, &vcpu->kvm->mmu_lock) {
+	scoped_guard(read_lock, &vcpu->kvm->mmu_lock) {
 		kvm_release_faultin_page(vcpu->kvm, page, false, writable);
 	}
 	mmap_read_unlock(vcpu->arch.gmap->mm);
@@ -6021,6 +6021,49 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 	return;
 }
+
+/**
+ * kvm_test_age_gfn() - test young
+ * @kvm: the kvm instance
+ * @range: the range of guest addresses whose young status needs to be tested
+ *
+ * Context: called by KVM common code without holding the kvm mmu lock
+ * Return: true if any page in the given range is young, otherwise false.
+ */
+bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+{
+	return false;
+}
+
+/**
+ * kvm_age_gfn() - clear young
+ * @kvm: the kvm instance
+ * @range: the range of guest addresses whose young status needs to be cleared
+ *
+ * Context: called by KVM common code without holding the kvm mmu lock
+ * Return: true if any page in the given range was young, otherwise false.
+ */
+bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+{
+	return false;
+}
+
+/**
+ * kvm_unmap_gfn_range() - Unmap a range of guest addresses
+ * @kvm: the kvm instance
+ * @range: the range of guest page frames to invalidate
+ *
+ * This function always returns false because every DAT table modification
+ * has to use the appropriate DAT table manipulation instructions, which will
+ * keep the TLB coherent, hence no additional TLB flush is ever required.
+ *
+ * Context: called by KVM common code with the kvm mmu write lock held
+ * Return: false
+ */
+bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
+{
+	return false;
+}
+
 static inline unsigned long nonhyp_mask(int i)
 {
 	unsigned int nonhyp_fai = (sclp.hmfai << i * 2) >> 30;