Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull KVM updates from Paolo Bonzini:
"ARM:
- Move a lot of state that was previously stored on a per vcpu basis
into a per-CPU area, because it is only pertinent to the host while
the vcpu is loaded. This results in better state tracking, and a
smaller vcpu structure.
- Add full handling of the ERET/ERETAA/ERETAB instructions in nested
virtualisation. The last two instructions also require emulating
part of the pointer authentication extension. As a result, the trap
handling of pointer authentication has been greatly simplified.
- Turn the global (and not very scalable) LPI translation cache into
a per-ITS, scalable cache, making non-directly-injected LPIs much
cheaper to make visible to the vcpu.
- A batch of pKVM patches, mostly fixes and cleanups, as the
upstreaming process seems to be resuming. Fingers crossed!
- Allocate PPIs and SGIs outside of the vcpu structure, allowing for
a smaller EL2 mapping and some flexibility in implementing more or
less than 32 private IRQs.
- Purge stale mpidr_data if a vcpu is created after the MPIDR map has
been created.
- Preserve vcpu-specific ID registers across a vcpu reset.
- Various minor cleanups and improvements.
LoongArch:
- Add ParaVirt IPI support
- Add software breakpoint support
- Add mmio trace events support
RISC-V:
- Support guest breakpoints using ebreak
- Introduce per-VCPU mp_state_lock and reset_cntx_lock
- Virtualize SBI PMU snapshot and counter overflow interrupts
- New selftests for SBI PMU and Guest ebreak
- Some preparatory work for both TDX and SNP page fault handling.
This also cleans up the page fault path, so that the priorities of
various kinds of faults (private page, no memory, write to read-only
slot, etc.) are easier to follow.
x86:
- Minimize the amount of time that shadow PTEs remain in the special
REMOVED_SPTE state.
This is a state where the mmu_lock is held for reading but
concurrent accesses to the PTE have to spin; shortening its use
allows other vCPUs to repopulate the zapped region while the zapper
finishes tearing down the old, defunct page tables.
- Advertise the max mappable GPA in the "guest MAXPHYADDR" CPUID
field, which is defined by hardware but left for software use.
This lets KVM communicate its inability to map GPAs that set bits
51:48 on hosts without 5-level nested page tables. Guest firmware
is expected to use the information when mapping BARs; this prevents
them from ending up at a legal, but unmappable, GPA (see the CPUID
sketch after the quoted log).
- Fixed a bug where KVM would not reject accesses to MSRs that aren't
supposed to exist given the vCPU model and/or KVM configuration.
- As usual, a bunch of code cleanups.
x86 (AMD):
- Implement a new and improved API to initialize SEV and SEV-ES VMs,
which will also be extendable to SEV-SNP.
The new API specifies the desired encryption in KVM_CREATE_VM and
then separately initializes the VM. The new API also allows
customizing the desired set of VMSA features; the features affect
the measurement of the VM's initial state, and therefore enabling
them cannot be done unilaterally by the hypervisor (a usage
sketch follows the commit list below).
While at it, the new API includes two bugfixes that couldn't be
applied to the old one without a flag day in userspace or without
affecting the initial measurement. When a SEV-ES VM is created with
the new VM type, KVM_GET_REGS/KVM_SET_REGS and friends are rejected
once the VMSA has been encrypted. Also, the FPU and AVX state will
be synchronized and encrypted too.
- Support for GHCB version 2 as applicable to SEV-ES guests.
This, once more, is only accessible when using the new
KVM_SEV_INIT2 flow for initialization of SEV-ES VMs.
x86 (Intel):
- An initial bunch of prerequisite patches for Intel TDX were merged.
They generally don't do anything interesting. The only somewhat
user-visible change is a new debugging mode that checks that KVM's
MMU never triggers a #VE virtualization exception in the guest.
- Clear vmcs.EXIT_QUALIFICATION when synthesizing an EPT Misconfig
VM-Exit to L1, as per the SDM.
Generic:
- Use vfree() instead of kvfree() for allocations that always use
vcalloc() or __vcalloc().
- Remove .change_pte() MMU notifier - the changes to non-KVM code are
small and Andrew Morton asked that I also take those through the
KVM tree.
The callback was only ever implemented by KVM (which was also the
original user of MMU notifiers) but it had been nonfunctional ever
since calls to set_pte_at_notify were wrapped with
invalidate_range_start and invalidate_range_end... in 2012.
Selftests:
- Enhance the demand paging test to allow for better reporting and
stressing of UFFD performance.
- Convert the steal time test to generate TAP-friendly output.
- Fix a flaky false positive in the xen_shinfo_test due to comparing
elapsed time across two different clock domains.
- Skip the MONITOR/MWAIT test if the host doesn't actually support
MWAIT.
- Avoid unnecessary use of "sudo" in the NX hugepage test wrapper
shell script, to play nice with running in a minimal userspace
environment.
- Allow skipping the RSEQ test's sanity check that the vCPU was able
to complete a reasonable number of KVM_RUNs, as the assert can fail
on a completely valid setup.
If the test is run on a large-ish system that is otherwise idle,
and the test isn't affined to a low-ish number of CPUs, the vCPU
task can be repeatedly migrated to CPUs that are in deep sleep
states, which results in the vCPU having very little net runtime
before the next migration due to high wakeup latencies.
- Define _GNU_SOURCE for all selftests to fix a warning that was
introduced by a change to kselftest_harness.h late in the 6.9
cycle, and because forcing every test to #define _GNU_SOURCE is
painful.
- Provide a global pseudo-RNG instance for all tests, so that library
code can generate random but deterministic numbers.
- Use the global pRNG to randomly force emulation of select writes
from guest code on x86, e.g. to help validate KVM's emulation of
locked accesses.
- Allocate and initialize x86's GDT, IDT, TSS, segments, and default
exception handlers at VM creation, instead of forcing tests to
manually trigger the related setup.
Documentation:
- Fix a goof in the KVM_CREATE_GUEST_MEMFD documentation"
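
As a companion to the "guest MAXPHYADDR" item in the quoted log: inside a
guest, the field lives in CPUID leaf 0x80000008, where EAX[7:0] reports the
physical address width and EAX[23:16] the guest physical address width (0
meaning "same as EAX[7:0]", per the AMD APM). A minimal, hedged sketch of
reading it from guest userspace:

	#include <stdio.h>
	#include <cpuid.h>

	int main(void)
	{
		unsigned int eax, ebx, ecx, edx;

		/* Leaf 0x80000008: processor address-size information. */
		if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
			return 1;

		/*
		 * EAX[7:0] is MAXPHYADDR; EAX[23:16] is the guest
		 * MAXPHYADDR mentioned above, 0 meaning "same as EAX[7:0]".
		 */
		printf("MAXPHYADDR:       %u bits\n", eax & 0xff);
		printf("guest MAXPHYADDR: %u bits\n", (eax >> 16) & 0xff);
		return 0;
	}
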
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (225 commits)
selftests/kvm: remove dead file
KVM: selftests: arm64: Test vCPU-scoped feature ID registers
KVM: selftests: arm64: Test that feature ID regs survive a reset
KVM: selftests: arm64: Store expected register value in set_id_regs
KVM: selftests: arm64: Rename helper in set_id_regs to imply VM scope
KVM: arm64: Only reset vCPU-scoped feature ID regs once
KVM: arm64: Reset VM feature ID regs from kvm_reset_sys_regs()
KVM: arm64: Rename is_id_reg() to imply VM scope
KVM: arm64: Destroy mpidr_data for 'late' vCPU creation
KVM: arm64: Use hVHE in pKVM by default on CPUs with VHE support
KVM: arm64: Fix hvhe/nvhe early alias parsing
KVM: SEV: Allow per-guest configuration of GHCB protocol version
KVM: SEV: Add GHCB handling for termination requests
KVM: SEV: Add GHCB handling for Hypervisor Feature Support requests
KVM: SEV: Add support to handle AP reset MSR protocol
KVM: x86: Explicitly zero kvm_caps during vendor module load
KVM: x86: Fully re-initialize supported_mce_cap on vendor module load
KVM: x86: Fully re-initialize supported_vm_types on vendor module load
KVM: x86/mmu: Sanity check that __kvm_faultin_pfn() doesn't create noslot pfns
KVM: x86/mmu: Initialize kvm_page_fault's pfn and hva to error values
...
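
To make the new SEV initialization flow above concrete, here is a hedged
userspace sketch: the VM type is now chosen at KVM_CREATE_VM time, and
KVM_SEV_INIT2 is issued afterwards through KVM_MEMORY_ENCRYPT_OP. This
assumes 6.10-era uapi headers (for KVM_X86_SEV_ES_VM, struct kvm_sev_init
and KVM_SEV_INIT2) and an SEV-capable host; error handling is mostly elided:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	int main(void)
	{
		int kvm = open("/dev/kvm", O_RDWR);
		/* The encryption scheme is fixed at VM creation time... */
		int vm = ioctl(kvm, KVM_CREATE_VM, KVM_X86_SEV_ES_VM);
		int sev = open("/dev/sev", O_RDWR);

		/*
		 * ...and KVM_SEV_INIT2 completes the initialization. The
		 * VMSA feature set is part of the launch measurement, so
		 * it is an explicit userspace choice here rather than
		 * something the hypervisor may flip on its own.
		 */
		struct kvm_sev_init init = { .vmsa_features = 0 };
		struct kvm_sev_cmd cmd = {
			.id = KVM_SEV_INIT2,
			.data = (unsigned long)&init,
			.sev_fd = sev,
		};

		if (ioctl(vm, KVM_MEMORY_ENCRYPT_OP, &cmd) < 0)
			perror("KVM_SEV_INIT2");
		return 0;
	}
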
// SPDX-License-Identifier: GPL-2.0
/*
 * Early cpufeature override framework
 *
 * Copyright (C) 2020 Google LLC
 * Author: Marc Zyngier <maz@kernel.org>
 */

#include <linux/ctype.h>
#include <linux/kernel.h>
#include <linux/libfdt.h>

#include <asm/cacheflush.h>
#include <asm/cpufeature.h>
#include <asm/setup.h>

#include "pi.h"

#define FTR_DESC_NAME_LEN	20
#define FTR_DESC_FIELD_LEN	10
#define FTR_ALIAS_NAME_LEN	30
#define FTR_ALIAS_OPTION_LEN	116

static u64 __boot_status __initdata;

typedef bool filter_t(u64 val);

struct ftr_set_desc {
	char				name[FTR_DESC_NAME_LEN];
	PREL64(struct arm64_ftr_override, override);
	struct {
		char			name[FTR_DESC_FIELD_LEN];
		u8			shift;
		u8			width;
		PREL64(filter_t, filter);
	}				fields[];
};

#define FIELD(n, s, f)	{ .name = n, .shift = s, .width = 4, .filter = f }
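
/*
 * Note: .width defaults to 4 because ID register fields are 4 bits
 * wide by convention; one-bit fields (such as FA64 below) spell out
 * their width explicitly rather than going through FIELD().
 */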

static bool __init mmfr1_vh_filter(u64 val)
{
	/*
	 * If we ever reach this point while running VHE, we're
	 * guaranteed to be on one of these funky, VHE-stuck CPUs. If
	 * the user was trying to force nVHE on us, proceed with
	 * attitude adjustment.
	 */
	return !(__boot_status == (BOOT_CPU_FLAG_E2H | BOOT_CPU_MODE_EL2) &&
		 val == 0);
}

static const struct ftr_set_desc mmfr1 __prel64_initconst = {
	.name		= "id_aa64mmfr1",
	.override	= &id_aa64mmfr1_override,
	.fields		= {
		FIELD("vh", ID_AA64MMFR1_EL1_VH_SHIFT, mmfr1_vh_filter),
		{}
	},
};

static bool __init mmfr2_varange_filter(u64 val)
{
	int __maybe_unused feat;

	if (val)
		return false;

#ifdef CONFIG_ARM64_LPA2
	feat = cpuid_feature_extract_signed_field(read_sysreg(id_aa64mmfr0_el1),
						  ID_AA64MMFR0_EL1_TGRAN_SHIFT);
	if (feat >= ID_AA64MMFR0_EL1_TGRAN_LPA2) {
		id_aa64mmfr0_override.val |=
			(ID_AA64MMFR0_EL1_TGRAN_LPA2 - 1) << ID_AA64MMFR0_EL1_TGRAN_SHIFT;
		id_aa64mmfr0_override.mask |= 0xfU << ID_AA64MMFR0_EL1_TGRAN_SHIFT;
	}
#endif
	return true;
}

static const struct ftr_set_desc mmfr2 __prel64_initconst = {
	.name		= "id_aa64mmfr2",
	.override	= &id_aa64mmfr2_override,
	.fields		= {
		FIELD("varange", ID_AA64MMFR2_EL1_VARange_SHIFT, mmfr2_varange_filter),
		{}
	},
};

static bool __init pfr0_sve_filter(u64 val)
{
	/*
	 * Disabling SVE also means disabling all the features that
	 * are associated with it. The easiest way to do it is just to
	 * override id_aa64zfr0_el1 to be 0.
	 */
	if (!val) {
		id_aa64zfr0_override.val = 0;
		id_aa64zfr0_override.mask = GENMASK(63, 0);
	}

	return true;
}

static const struct ftr_set_desc pfr0 __prel64_initconst = {
	.name		= "id_aa64pfr0",
	.override	= &id_aa64pfr0_override,
	.fields		= {
		FIELD("sve", ID_AA64PFR0_EL1_SVE_SHIFT, pfr0_sve_filter),
		FIELD("el0", ID_AA64PFR0_EL1_EL0_SHIFT, NULL),
		{}
	},
};

static bool __init pfr1_sme_filter(u64 val)
{
	/*
	 * Similarly to SVE, disabling SME also means disabling all
	 * the features that are associated with it. Just set
	 * id_aa64smfr0_el1 to 0 and don't look back.
	 */
	if (!val) {
		id_aa64smfr0_override.val = 0;
		id_aa64smfr0_override.mask = GENMASK(63, 0);
	}

	return true;
}

static const struct ftr_set_desc pfr1 __prel64_initconst = {
	.name		= "id_aa64pfr1",
	.override	= &id_aa64pfr1_override,
	.fields		= {
		FIELD("bt", ID_AA64PFR1_EL1_BT_SHIFT, NULL),
		FIELD("mte", ID_AA64PFR1_EL1_MTE_SHIFT, NULL),
		FIELD("sme", ID_AA64PFR1_EL1_SME_SHIFT, pfr1_sme_filter),
		{}
	},
};

static const struct ftr_set_desc isar1 __prel64_initconst = {
	.name		= "id_aa64isar1",
	.override	= &id_aa64isar1_override,
	.fields		= {
		FIELD("gpi", ID_AA64ISAR1_EL1_GPI_SHIFT, NULL),
		FIELD("gpa", ID_AA64ISAR1_EL1_GPA_SHIFT, NULL),
		FIELD("api", ID_AA64ISAR1_EL1_API_SHIFT, NULL),
		FIELD("apa", ID_AA64ISAR1_EL1_APA_SHIFT, NULL),
		{}
	},
};

static const struct ftr_set_desc isar2 __prel64_initconst = {
	.name		= "id_aa64isar2",
	.override	= &id_aa64isar2_override,
	.fields		= {
		FIELD("gpa3", ID_AA64ISAR2_EL1_GPA3_SHIFT, NULL),
		FIELD("apa3", ID_AA64ISAR2_EL1_APA3_SHIFT, NULL),
		FIELD("mops", ID_AA64ISAR2_EL1_MOPS_SHIFT, NULL),
		{}
	},
};

static const struct ftr_set_desc smfr0 __prel64_initconst = {
	.name		= "id_aa64smfr0",
	.override	= &id_aa64smfr0_override,
	.fields		= {
		FIELD("smever", ID_AA64SMFR0_EL1_SMEver_SHIFT, NULL),
		/* FA64 is a one bit field... :-/ */
		{ "fa64", ID_AA64SMFR0_EL1_FA64_SHIFT, 1, },
		{}
	},
};

static bool __init hvhe_filter(u64 val)
{
	u64 mmfr1 = read_sysreg(id_aa64mmfr1_el1);

	return (val == 1 &&
		lower_32_bits(__boot_status) == BOOT_CPU_MODE_EL2 &&
		cpuid_feature_extract_unsigned_field(mmfr1,
						     ID_AA64MMFR1_EL1_VH_SHIFT));
}
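
/*
 * "hvhe" (hypervisor VHE) is only honoured when the kernel was entered
 * at EL2 on a CPU that actually implements VHE (see "Use hVHE in pKVM
 * by default on CPUs with VHE support" in the commit list above).
 */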

static const struct ftr_set_desc sw_features __prel64_initconst = {
	.name		= "arm64_sw",
	.override	= &arm64_sw_feature_override,
	.fields		= {
		FIELD("nokaslr", ARM64_SW_FEATURE_OVERRIDE_NOKASLR, NULL),
		FIELD("hvhe", ARM64_SW_FEATURE_OVERRIDE_HVHE, hvhe_filter),
		FIELD("rodataoff", ARM64_SW_FEATURE_OVERRIDE_RODATA_OFF, NULL),
		{}
	},
};

static const
PREL64(const struct ftr_set_desc, reg) regs[] __prel64_initconst = {
	{ &mmfr1	},
	{ &mmfr2	},
	{ &pfr0		},
	{ &pfr1		},
	{ &isar1	},
	{ &isar2	},
	{ &smfr0	},
	{ &sw_features	},
};

static const struct {
	char	alias[FTR_ALIAS_NAME_LEN];
	char	feature[FTR_ALIAS_OPTION_LEN];
} aliases[] __initconst = {
	{ "kvm_arm.mode=nvhe",		"arm64_sw.hvhe=0 id_aa64mmfr1.vh=0" },
	{ "kvm_arm.mode=protected",	"arm64_sw.hvhe=1" },
	{ "arm64.nosve",		"id_aa64pfr0.sve=0" },
	{ "arm64.nosme",		"id_aa64pfr1.sme=0" },
	{ "arm64.nobti",		"id_aa64pfr1.bt=0" },
	{ "arm64.nopauth",
	  "id_aa64isar1.gpi=0 id_aa64isar1.gpa=0 "
	  "id_aa64isar1.api=0 id_aa64isar1.apa=0 "
	  "id_aa64isar2.gpa3=0 id_aa64isar2.apa3=0" },
	{ "arm64.nomops",		"id_aa64isar2.mops=0" },
	{ "arm64.nomte",		"id_aa64pfr1.mte=0" },
	{ "nokaslr",			"arm64_sw.nokaslr=1" },
	{ "rodata=off",			"arm64_sw.rodataoff=1" },
	{ "arm64.nolva",		"id_aa64mmfr2.varange=0" },
	{ "arm64.no32bit_el0",		"id_aa64pfr0.el0=1" },
};
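
/*
 * Example: a command line containing "arm64.nosve kvm_arm.mode=nvhe"
 * is rewritten via the table above and parsed as if it had been
 * "id_aa64pfr0.sve=0 arm64_sw.hvhe=0 id_aa64mmfr1.vh=0".
 */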

static int __init parse_hexdigit(const char *p, u64 *v)
{
	// skip "0x" if it comes next
	if (p[0] == '0' && tolower(p[1]) == 'x')
		p += 2;

	// check whether the RHS is a single hex digit
	if (!isxdigit(p[0]) || (p[1] && !isspace(p[1])))
		return -EINVAL;

	*v = tolower(*p) - (isdigit(*p) ? '0' : 'a' - 10);
	return 0;
}

static int __init find_field(const char *cmdline, char *opt, int len,
			     const struct ftr_set_desc *reg, int f, u64 *v)
{
	int flen = strlen(reg->fields[f].name);

	// append '<fieldname>=' to obtain '<name>.<fieldname>='
	memcpy(opt + len, reg->fields[f].name, flen);
	len += flen;
	opt[len++] = '=';

	if (memcmp(cmdline, opt, len))
		return -1;

	return parse_hexdigit(cmdline + len, v);
}
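
/*
 * An override is a (val, mask) pair: bits set in ->mask are forced to
 * the corresponding bits of ->val when the ID registers are later
 * sanitised, while bits clear in ->mask keep the CPU's own value.
 */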

static void __init match_options(const char *cmdline)
{
	char opt[FTR_DESC_NAME_LEN + FTR_DESC_FIELD_LEN + 2];
	int i;

	for (i = 0; i < ARRAY_SIZE(regs); i++) {
		const struct ftr_set_desc *reg = prel64_pointer(regs[i].reg);
		struct arm64_ftr_override *override;
		int len = strlen(reg->name);
		int f;

		override = prel64_pointer(reg->override);

		// set opt[] to '<name>.'
		memcpy(opt, reg->name, len);
		opt[len++] = '.';

		for (f = 0; reg->fields[f].name[0] != '\0'; f++) {
			u64 shift = reg->fields[f].shift;
			u64 width = reg->fields[f].width ?: 4;
			u64 mask = GENMASK_ULL(shift + width - 1, shift);
			bool (*filter)(u64 val);
			u64 v;

			if (find_field(cmdline, opt, len, reg, f, &v))
				continue;

			/*
			 * If an override gets filtered out, advertise
			 * it by setting the value to the all-ones while
			 * clearing the mask... Yes, this is fragile.
			 */
			filter = prel64_pointer(reg->fields[f].filter);
			if (filter && !filter(v)) {
				override->val  |= mask;
				override->mask &= ~mask;
				continue;
			}

			override->val  &= ~mask;
			override->val  |= (v << shift) & mask;
			override->mask |= mask;

			return;
		}
	}
}

static __init void __parse_cmdline(const char *cmdline, bool parse_aliases)
{
	do {
		char buf[256];
		size_t len;
		int i;

		cmdline = skip_spaces(cmdline);

		/* terminate on "--" appearing on the command line by itself */
		if (cmdline[0] == '-' && cmdline[1] == '-' && isspace(cmdline[2]))
			return;

		for (len = 0; cmdline[len] && !isspace(cmdline[len]); len++) {
			if (len >= sizeof(buf) - 1)
				break;
			if (cmdline[len] == '-')
				buf[len] = '_';
			else
				buf[len] = cmdline[len];
		}
		if (!len)
			return;

		buf[len] = 0;

		cmdline += len;

		match_options(buf);

		for (i = 0; parse_aliases && i < ARRAY_SIZE(aliases); i++)
			if (!memcmp(buf, aliases[i].alias, len + 1))
				__parse_cmdline(aliases[i].feature, false);
	} while (1);
}

static __init const u8 *get_bootargs_cmdline(const void *fdt, int node)
{
	static char const bootargs[] __initconst = "bootargs";
	const u8 *prop;

	if (node < 0)
		return NULL;

	prop = fdt_getprop(fdt, node, bootargs, NULL);
	if (!prop)
		return NULL;

	return strlen(prop) ? prop : NULL;
}

static __init void parse_cmdline(const void *fdt, int chosen)
{
	static char const cmdline[] __initconst = CONFIG_CMDLINE;
	const u8 *prop = get_bootargs_cmdline(fdt, chosen);

	if (IS_ENABLED(CONFIG_CMDLINE_FORCE) || !prop)
		__parse_cmdline(cmdline, true);

	if (!IS_ENABLED(CONFIG_CMDLINE_FORCE) && prop)
		__parse_cmdline(prop, true);
}

void __init init_feature_override(u64 boot_status, const void *fdt,
				  int chosen)
{
	struct arm64_ftr_override *override;
	const struct ftr_set_desc *reg;
	int i;

	for (i = 0; i < ARRAY_SIZE(regs); i++) {
		reg = prel64_pointer(regs[i].reg);
		override = prel64_pointer(reg->override);

		override->val  = 0;
		override->mask = 0;
	}

	__boot_status = boot_status;

	parse_cmdline(fdt, chosen);

	for (i = 0; i < ARRAY_SIZE(regs); i++) {
		reg = prel64_pointer(regs[i].reg);
		override = prel64_pointer(reg->override);
		dcache_clean_inval_poc((unsigned long)override,
				       (unsigned long)(override + 1));
	}
}
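
/*
 * This early, position-independent code provides its own minimal
 * skip_spaces() rather than relying on the implementation in
 * lib/string.c.
 */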

char * __init skip_spaces(const char *str)
{
	while (isspace(*str))
		++str;
	return (char *)str;
}