mirror of
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
synced 2026-03-22 07:27:12 +08:00
Merge tag 'locking-core-2026-02-08' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking updates from Ingo Molnar:
"Lock debugging:
- Implement compiler-driven static analysis locking context checking,
using the upcoming Clang 22 compiler's context analysis features
(Marco Elver)
We removed Sparse context analysis support, because prior to
removal even a defconfig kernel produced 1,700+ context tracking
Sparse warnings, the overwhelming majority of which are false
positives. On an allmodconfig kernel the number of false positive
context tracking Sparse warnings grows to over 5,200... On the plus
side of the balance, actual locking bugs found by Sparse context
analysis are also rather ... sparse: I found only 3 such commits in
the last 3 years. So the rate of false positives and the
maintenance overhead are rather high, and there appears to be no
active policy in place to achieve a zero-warnings baseline and move
the annotations & fixes to developers who introduce new code.
Clang context analysis is more complete and more aggressive in
trying to find bugs, at least in principle. Plus it has a different
model for enabling it: it's enabled subsystem by subsystem, which
results in zero warnings on all relevant kernel builds (as far as
our testing managed to cover it). Which allowed us to enable it by
default, similar to other compiler warnings, with the expectation
that there are no warnings going forward. This enforces a
zero-warnings baseline on clang-22+ builds (which are still limited
in distribution, admittedly).
Hopefully the Clang approach can lead to a more maintainable
zero-warnings status quo and policy, with more and more subsystems
and drivers enabling the feature. Context tracking can be enabled
for all kernel code via WARN_CONTEXT_ANALYSIS_ALL=y (default
disabled), but this will generate a lot of false positives.
( Having said that, Sparse support could still be added back,
if anyone is interested - the removal patch is still
relatively straightforward to revert at this stage. )
Rust integration updates: (Alice Ryhl, Fujita Tomonori, Boqun Feng)
- Add support for Atomic<i8/i16/bool> and replace most Rust native
AtomicBool usages with Atomic<bool>
- Clean up LockClassKey and improve its documentation
- Add missing Send and Sync trait implementation for SetOnce
- Make ARef Unpin as it is supposed to be
- Add __rust_helper to a few Rust helpers as a preparation for
helper LTO
- Inline various lock related functions to avoid additional function
calls
WW mutexes:
- Extend ww_mutex tests and other test-ww_mutex updates (John
Stultz)
Misc fixes and cleanups:
- rcu: Mark lockdep_assert_rcu_helper() __always_inline (Arnd
Bergmann)
- locking/local_lock: Include more missing headers (Peter Zijlstra)
- seqlock: fix scoped_seqlock_read kernel-doc (Randy Dunlap)
- rust: sync: Replace `kernel::c_str!` with C-Strings (Tamir
Duberstein)"
* tag 'locking-core-2026-02-08' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (90 commits)
locking/rwlock: Fix write_trylock_irqsave() with CONFIG_INLINE_WRITE_TRYLOCK
rcu: Mark lockdep_assert_rcu_helper() __always_inline
compiler-context-analysis: Remove __assume_ctx_lock from initializers
tomoyo: Use scoped init guard
crypto: Use scoped init guard
kcov: Use scoped init guard
compiler-context-analysis: Introduce scoped init guards
cleanup: Make __DEFINE_LOCK_GUARD handle commas in initializers
seqlock: fix scoped_seqlock_read kernel-doc
tools: Update context analysis macros in compiler_types.h
rust: sync: Replace `kernel::c_str!` with C-Strings
rust: sync: Inline various lock related methods
rust: helpers: Move #define __rust_helper out of atomic.c
rust: wait: Add __rust_helper to helpers
rust: time: Add __rust_helper to helpers
rust: task: Add __rust_helper to helpers
rust: sync: Add __rust_helper to helpers
rust: refcount: Add __rust_helper to helpers
rust: rcu: Add __rust_helper to helpers
rust: processor: Add __rust_helper to helpers
...
Documentation/dev-tools/context-analysis.rst | 169 (new file)
@@ -0,0 +1,169 @@
.. SPDX-License-Identifier: GPL-2.0
.. Copyright (C) 2025, Google LLC.

.. _context-analysis:

Compiler-Based Context Analysis
===============================
Context Analysis is a language extension, which enables statically checking
that required contexts are active (or inactive) by acquiring and releasing
user-definable "context locks". An obvious application is lock-safety checking
for the kernel's various synchronization primitives (each of which represents a
"context lock"), and checking that locking rules are not violated.
The Clang compiler currently supports the full set of context analysis
features. To enable it for Clang, configure the kernel with::

    CONFIG_WARN_CONTEXT_ANALYSIS=y

The feature requires Clang 22 or later.

The analysis is *opt-in by default*, and requires declaring which modules and
subsystems should be analyzed in the respective `Makefile`::

    CONTEXT_ANALYSIS_mymodule.o := y

Or for all translation units in the directory::

    CONTEXT_ANALYSIS := y

It is possible to enable the analysis tree-wide; however, this currently
results in numerous false positive warnings and is *not* generally
recommended::

    CONFIG_WARN_CONTEXT_ANALYSIS_ALL=y
Programming Model
-----------------

The below describes the programming model around using context lock types.

.. note::
   Enabling context analysis can be seen as enabling a dialect of Linux C with
   a Context System. Some valid patterns involving complex control-flow are
   constrained (such as conditional acquisition and later conditional release
   in the same function).
Context analysis is a way to specify permissibility of operations to depend on
context locks being held (or not held). Typically we are interested in
protecting data and code in a critical section by requiring a specific context
to be active, for example by holding a specific lock. The analysis ensures that
callers cannot perform an operation without the required context being active.

Context locks are associated with named structs, along with functions that
operate on struct instances to acquire and release the associated context lock.

Context locks can be held either exclusively or shared. This mechanism allows
assigning more precise privileges when a context is active, typically to
distinguish where a thread may only read (shared) or also write (exclusive) to
data guarded within a context.

The set of contexts that are actually active in a given thread at a given point
in program execution is a run-time concept. The static analysis works by
calculating an approximation of that set, called the context environment. The
context environment is calculated for every program point, and describes the
set of contexts that are statically known to be active, or inactive, at that
particular point. This environment is a conservative approximation of the full
set of contexts that will actually be active in a thread at run-time.
More details are also documented `here
<https://clang.llvm.org/docs/ThreadSafetyAnalysis.html>`_.

.. note::
   Clang's analysis explicitly does not infer context locks acquired or
   released by inline functions. It requires explicit annotations to (a) assert
   that it's not a bug if a context lock is released or acquired, and (b) to
   retain consistency between inline and non-inline function declarations.
Supported Kernel Primitives
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Currently the following synchronization primitives are supported:
`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`, `local_lock_t`,
`ww_mutex`.

To initialize variables guarded by a context lock with an initialization
function (``type_init(&lock)``), prefer using ``guard(type_init)(&lock)`` or
``scoped_guard(type_init, &lock) { ... }`` to initialize such guarded members
or globals in the enclosing scope. This initializes the context lock and treats
the context as active within the initialization scope (initialization implies
exclusive access to the underlying object).

For example::

    struct my_data {
        spinlock_t lock;
        int counter __guarded_by(&lock);
    };

    void init_my_data(struct my_data *d)
    {
        ...
        guard(spinlock_init)(&d->lock);
        d->counter = 0;
        ...
    }

Alternatively, initializing guarded variables can be done with context analysis
disabled, preferably in the smallest possible scope (due to lack of any other
checking): either with a ``context_unsafe(var = init)`` expression, or by
marking small initialization functions with the ``__context_unsafe(init)``
attribute.
Lockdep assertions, such as `lockdep_assert_held()`, inform the compiler's
context analysis that the associated synchronization primitive is held after
the assertion. This avoids false positives in complex control-flow scenarios
and encourages the use of Lockdep where static analysis is limited. For
example, this is useful when a function doesn't *always* require a lock, making
`__must_hold()` inappropriate.
Keywords
~~~~~~~~

.. kernel-doc:: include/linux/compiler-context-analysis.h
   :identifiers: context_lock_struct
                 token_context_lock token_context_lock_instance
                 __guarded_by __pt_guarded_by
                 __must_hold
                 __must_not_hold
                 __acquires
                 __cond_acquires
                 __releases
                 __must_hold_shared
                 __acquires_shared
                 __cond_acquires_shared
                 __releases_shared
                 __acquire
                 __release
                 __acquire_shared
                 __release_shared
                 __acquire_ret
                 __acquire_shared_ret
                 context_unsafe
                 __context_unsafe
                 disable_context_analysis enable_context_analysis

.. note::
   The function attribute `__no_context_analysis` is reserved for internal
   implementation of context lock types, and should be avoided in normal code.
Background
----------

Clang originally called the feature `Thread Safety Analysis
<https://clang.llvm.org/docs/ThreadSafetyAnalysis.html>`_, with some keywords
and documentation still using the thread-safety-analysis-only terminology. This
was later changed and the feature became more flexible, gaining the ability to
define custom "capabilities". Its foundations can be found in `Capability
Systems <https://www.cs.cornell.edu/talc/papers/capabilities.pdf>`_, used to
specify the permissibility of operations to depend on some "capability" being
held (or not held).

Because the feature is not just able to express capabilities related to
synchronization primitives, and "capability" is already overloaded in the
kernel, the naming chosen for the kernel departs from Clang's initial "Thread
Safety" and "capability" nomenclature; we refer to the feature as "Context
Analysis" to avoid confusion. The internal implementation still makes
references to Clang's terminology in a few places, such as `-Wthread-safety`
being the warning option that also still appears in diagnostic messages.
@@ -21,6 +21,7 @@ Documentation/process/debugging/index.rst
 checkpatch
 clang-format
 coccinelle
+context-analysis
 sparse
 kcov
 gcov
@@ -53,25 +53,6 @@ sure that bitwise types don't get mixed up (little-endian vs big-endian
 vs cpu-endian vs whatever), and there the constant "0" really _is_
 special.
 
-Using sparse for lock checking
-------------------------------
-
-The following macros are undefined for gcc and defined during a sparse
-run to use the "context" tracking feature of sparse, applied to
-locking. These annotations tell sparse when a lock is held, with
-regard to the annotated function's entry and exit.
-
-__must_hold - The specified lock is held on function entry and exit.
-
-__acquires - The specified lock is held on function exit, but not entry.
-
-__releases - The specified lock is held on function entry, but not exit.
-
-If the function enters and exits without the lock held, acquiring and
-releasing the lock inside the function in a balanced way, no
-annotation is needed. The three annotations above are for cases where
-sparse would otherwise report a context imbalance.
-
 Getting sparse
 --------------
@@ -583,7 +583,7 @@ To access PTE-level page tables, a helper like :c:func:`!pte_offset_map_lock` or
 :c:func:`!pte_offset_map` can be used depending on stability requirements.
 These map the page table into kernel memory if required, take the RCU lock, and
 depending on variant, may also look up or acquire the PTE lock.
-See the comment on :c:func:`!__pte_offset_map_lock`.
+See the comment on :c:func:`!pte_offset_map_lock`.
 
 Atomicity
 ^^^^^^^^^
@@ -667,7 +667,7 @@ must be released via :c:func:`!pte_unmap_unlock`.
 .. note:: There are some variants on this, such as
    :c:func:`!pte_offset_map_rw_nolock` when we know we hold the PTE stable but
    for brevity we do not explore this. See the comment for
-   :c:func:`!__pte_offset_map_lock` for more details.
+   :c:func:`!pte_offset_map_lock` for more details.
 
 When modifying data in ranges we typically only wish to allocate higher page
 tables as necessary, using these locks to avoid races or overwriting anything,
@@ -686,7 +686,7 @@ At the leaf page table, that is the PTE, we can't entirely rely on this pattern
 as we have separate PMD and PTE locks and a THP collapse for instance might have
 eliminated the PMD entry as well as the PTE from under us.
 
-This is why :c:func:`!__pte_offset_map_lock` locklessly retrieves the PMD entry
+This is why :c:func:`!pte_offset_map_lock` locklessly retrieves the PMD entry
 for the PTE, carefully checking it is as expected, before acquiring the
 PTE-specific lock, and then *again* checking that the PMD entry is as expected.
MAINTAINERS | 11
@@ -6153,6 +6153,17 @@ M: Nelson Escobar <neescoba@cisco.com>
 S:	Supported
 F:	drivers/infiniband/hw/usnic/
 
+CLANG CONTEXT ANALYSIS
+M:	Marco Elver <elver@google.com>
+R:	Bart Van Assche <bvanassche@acm.org>
+L:	llvm@lists.linux.dev
+S:	Maintained
+F:	Documentation/dev-tools/context-analysis.rst
+F:	include/linux/compiler-context-analysis.h
+F:	lib/test_context-analysis.c
+F:	scripts/Makefile.context-analysis
+F:	scripts/context-analysis-suppression.txt
+
 CLANG CONTROL FLOW INTEGRITY SUPPORT
 M:	Sami Tolvanen <samitolvanen@google.com>
 M:	Kees Cook <kees@kernel.org>
Makefile | 1
@@ -1125,6 +1125,7 @@ include-$(CONFIG_RANDSTRUCT) += scripts/Makefile.randstruct
 include-$(CONFIG_KSTACK_ERASE) += scripts/Makefile.kstack_erase
 include-$(CONFIG_AUTOFDO_CLANG) += scripts/Makefile.autofdo
 include-$(CONFIG_PROPELLER_CLANG) += scripts/Makefile.propeller
+include-$(CONFIG_WARN_CONTEXT_ANALYSIS) += scripts/Makefile.context-analysis
 include-$(CONFIG_GCC_PLUGINS) += scripts/Makefile.gcc-plugins
 
 include $(addprefix $(srctree)/, $(include-y))
@@ -21,8 +21,9 @@ struct mm_id {
 	int syscall_fd_map[STUB_MAX_FDS];
 };
 
-void enter_turnstile(struct mm_id *mm_id) __acquires(turnstile);
-void exit_turnstile(struct mm_id *mm_id) __releases(turnstile);
+struct mutex *__get_turnstile(struct mm_id *mm_id);
+void enter_turnstile(struct mm_id *mm_id) __acquires(__get_turnstile(mm_id));
+void exit_turnstile(struct mm_id *mm_id) __releases(__get_turnstile(mm_id));
 
 void notify_mm_kill(int pid);
@@ -23,18 +23,21 @@ static_assert(sizeof(struct stub_data) == STUB_DATA_PAGES * UM_KERN_PAGE_SIZE);
 static spinlock_t mm_list_lock;
 static struct list_head mm_list;
 
-void enter_turnstile(struct mm_id *mm_id) __acquires(turnstile)
+struct mutex *__get_turnstile(struct mm_id *mm_id)
 {
 	struct mm_context *ctx = container_of(mm_id, struct mm_context, id);
 
-	mutex_lock(&ctx->turnstile);
+	return &ctx->turnstile;
 }
 
-void exit_turnstile(struct mm_id *mm_id) __releases(turnstile)
+void enter_turnstile(struct mm_id *mm_id)
 {
-	struct mm_context *ctx = container_of(mm_id, struct mm_context, id);
+	mutex_lock(__get_turnstile(mm_id));
+}
 
-	mutex_unlock(&ctx->turnstile);
+void exit_turnstile(struct mm_id *mm_id)
+{
+	mutex_unlock(__get_turnstile(mm_id));
 }
 
 int init_new_context(struct task_struct *task, struct mm_struct *mm)
@@ -9,6 +9,7 @@ endmenu
 config UML_X86
 	def_bool y
 	select ARCH_USE_QUEUED_RWLOCKS
+	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_USE_QUEUED_SPINLOCKS
 	select DCACHE_WORD_ACCESS
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
@@ -3,6 +3,8 @@
 # Cryptographic API
 #
 
+CONTEXT_ANALYSIS := y
+
 obj-$(CONFIG_CRYPTO) += crypto.o
 crypto-y := api.o cipher.o
@@ -443,8 +443,8 @@ int crypto_acomp_alloc_streams(struct crypto_acomp_streams *s)
 }
 EXPORT_SYMBOL_GPL(crypto_acomp_alloc_streams);
 
-struct crypto_acomp_stream *crypto_acomp_lock_stream_bh(
-	struct crypto_acomp_streams *s) __acquires(stream)
+struct crypto_acomp_stream *_crypto_acomp_lock_stream_bh(
+	struct crypto_acomp_streams *s)
 {
 	struct crypto_acomp_stream __percpu *streams = s->streams;
 	int cpu = raw_smp_processor_id();
@@ -463,7 +463,7 @@ struct crypto_acomp_stream *crypto_acomp_lock_stream_bh(
 	spin_lock(&ps->lock);
 	return ps;
 }
-EXPORT_SYMBOL_GPL(crypto_acomp_lock_stream_bh);
+EXPORT_SYMBOL_GPL(_crypto_acomp_lock_stream_bh);
 
 void acomp_walk_done_src(struct acomp_walk *walk, int used)
 {
@@ -244,6 +244,7 @@ EXPORT_SYMBOL_GPL(crypto_remove_spawns);
 
 static void crypto_alg_finish_registration(struct crypto_alg *alg,
 					   struct list_head *algs_to_put)
+	__must_hold(&crypto_alg_sem)
 {
 	struct crypto_alg *q;
 
@@ -299,6 +300,7 @@ static struct crypto_larval *crypto_alloc_test_larval(struct crypto_alg *alg)
 
 static struct crypto_larval *
 __crypto_register_alg(struct crypto_alg *alg, struct list_head *algs_to_put)
+	__must_hold(&crypto_alg_sem)
 {
 	struct crypto_alg *q;
 	struct crypto_larval *larval;
@@ -57,6 +57,7 @@ EXPORT_SYMBOL_GPL(crypto_mod_put);
 
 static struct crypto_alg *__crypto_alg_lookup(const char *name, u32 type,
 					      u32 mask)
+	__must_hold_shared(&crypto_alg_sem)
 {
 	struct crypto_alg *q, *alg = NULL;
 	int best = -2;
@@ -453,8 +453,8 @@ struct crypto_engine *crypto_engine_alloc_init_and_set(struct device *dev,
 	snprintf(engine->name, sizeof(engine->name),
 		 "%s-engine", dev_name(dev));
 
+	guard(spinlock_init)(&engine->queue_lock);
 	crypto_init_queue(&engine->queue, qlen);
-	spin_lock_init(&engine->queue_lock);
 
 	engine->kworker = kthread_run_worker(0, "%s", engine->name);
 	if (IS_ERR(engine->kworker)) {
@@ -231,6 +231,7 @@ static inline unsigned short drbg_sec_strength(drbg_flag_t flags)
  */
 static bool drbg_fips_continuous_test(struct drbg_state *drbg,
 				      const unsigned char *entropy)
+	__must_hold(&drbg->drbg_mutex)
 {
 	unsigned short entropylen = drbg_sec_strength(drbg->core->flags);
 
@@ -845,6 +846,7 @@ static inline int __drbg_seed(struct drbg_state *drbg, struct list_head *seed,
 static inline void drbg_get_random_bytes(struct drbg_state *drbg,
 					 unsigned char *entropy,
 					 unsigned int entropylen)
+	__must_hold(&drbg->drbg_mutex)
 {
 	do
 		get_random_bytes(entropy, entropylen);
@@ -852,6 +854,7 @@ static inline void drbg_get_random_bytes(struct drbg_state *drbg,
 }
 
 static int drbg_seed_from_random(struct drbg_state *drbg)
+	__must_hold(&drbg->drbg_mutex)
 {
 	struct drbg_string data;
 	LIST_HEAD(seedlist);
@@ -906,6 +909,7 @@ static bool drbg_nopr_reseed_interval_elapsed(struct drbg_state *drbg)
  */
 static int drbg_seed(struct drbg_state *drbg, struct drbg_string *pers,
 		     bool reseed)
+	__must_hold(&drbg->drbg_mutex)
 {
 	int ret;
 	unsigned char entropy[((32 + 16) * 2)];
@@ -1138,6 +1142,7 @@ err:
 static int drbg_generate(struct drbg_state *drbg,
 			 unsigned char *buf, unsigned int buflen,
 			 struct drbg_string *addtl)
+	__must_hold(&drbg->drbg_mutex)
 {
 	int len = 0;
 	LIST_HEAD(addtllist);
@@ -1760,7 +1765,7 @@ static inline int __init drbg_healthcheck_sanity(void)
 	if (!drbg)
 		return -ENOMEM;
 
-	mutex_init(&drbg->drbg_mutex);
+	guard(mutex_init)(&drbg->drbg_mutex);
 	drbg->core = &drbg_cores[coreref];
 	drbg->reseed_threshold = drbg_max_requests(drbg);
 
@@ -61,8 +61,8 @@ enum {
 /* Maximum number of (rtattr) parameters for each template. */
 #define CRYPTO_MAX_ATTRS 32
 
-extern struct list_head crypto_alg_list;
 extern struct rw_semaphore crypto_alg_sem;
+extern struct list_head crypto_alg_list __guarded_by(&crypto_alg_sem);
 extern struct blocking_notifier_head crypto_chain;
 
 int alg_test(const char *driver, const char *alg, u32 type, u32 mask);
@@ -19,17 +19,20 @@
 #include "internal.h"
 
 static void *c_start(struct seq_file *m, loff_t *pos)
+	__acquires_shared(&crypto_alg_sem)
 {
 	down_read(&crypto_alg_sem);
 	return seq_list_start(&crypto_alg_list, *pos);
 }
 
 static void *c_next(struct seq_file *m, void *p, loff_t *pos)
+	__must_hold_shared(&crypto_alg_sem)
 {
 	return seq_list_next(p, &crypto_alg_list, pos);
 }
 
 static void c_stop(struct seq_file *m, void *p)
+	__releases_shared(&crypto_alg_sem)
 {
 	up_read(&crypto_alg_sem);
 }
@@ -28,8 +28,8 @@
 struct scomp_scratch {
 	spinlock_t lock;
 	union {
-		void *src;
-		unsigned long saddr;
+		void *src __guarded_by(&lock);
+		unsigned long saddr __guarded_by(&lock);
 	};
 };
 
@@ -38,8 +38,8 @@ static DEFINE_PER_CPU(struct scomp_scratch, scomp_scratch) = {
 };
 
 static const struct crypto_type crypto_scomp_type;
-static int scomp_scratch_users;
 static DEFINE_MUTEX(scomp_lock);
+static int scomp_scratch_users __guarded_by(&scomp_lock);
 
 static cpumask_t scomp_scratch_want;
 static void scomp_scratch_workfn(struct work_struct *work);
@@ -65,6 +65,7 @@ static void __maybe_unused crypto_scomp_show(struct seq_file *m,
 }
 
 static void crypto_scomp_free_scratches(void)
+	__context_unsafe(/* frees @scratch */)
 {
 	struct scomp_scratch *scratch;
 	int i;
@@ -99,7 +100,7 @@ static void scomp_scratch_workfn(struct work_struct *work)
 		struct scomp_scratch *scratch;
 
 		scratch = per_cpu_ptr(&scomp_scratch, cpu);
-		if (scratch->src)
+		if (context_unsafe(scratch->src))
 			continue;
 		if (scomp_alloc_scratch(scratch, cpu))
 			break;
@@ -109,6 +110,7 @@ static void scomp_scratch_workfn(struct work_struct *work)
 }
 
 static int crypto_scomp_alloc_scratches(void)
+	__context_unsafe(/* allocates @scratch */)
 {
 	unsigned int i = cpumask_first(cpu_possible_mask);
 	struct scomp_scratch *scratch;
@@ -137,7 +139,8 @@ unlock:
 	return ret;
 }
 
-static struct scomp_scratch *scomp_lock_scratch(void) __acquires(scratch)
+#define scomp_lock_scratch(...) __acquire_ret(_scomp_lock_scratch(__VA_ARGS__), &__ret->lock)
+static struct scomp_scratch *_scomp_lock_scratch(void) __acquires_ret
 {
 	int cpu = raw_smp_processor_id();
 	struct scomp_scratch *scratch;
@@ -157,7 +160,7 @@ static struct scomp_scratch *scomp_lock_scratch(void) __acquires(scratch)
 }
 
 static inline void scomp_unlock_scratch(struct scomp_scratch *scratch)
-	__releases(scratch)
+	__releases(&scratch->lock)
{
 	spin_unlock(&scratch->lock);
 }
@@ -169,8 +172,6 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
 	bool src_isvirt = acomp_request_src_isvirt(req);
 	bool dst_isvirt = acomp_request_dst_isvirt(req);
 	struct crypto_scomp *scomp = *tfm_ctx;
-	struct crypto_acomp_stream *stream;
-	struct scomp_scratch *scratch;
 	unsigned int slen = req->slen;
 	unsigned int dlen = req->dlen;
 	struct page *spage, *dpage;
@@ -230,13 +231,12 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
 		} while (0);
 	}
 
-	stream = crypto_acomp_lock_stream_bh(&crypto_scomp_alg(scomp)->streams);
+	struct crypto_acomp_stream *stream = crypto_acomp_lock_stream_bh(&crypto_scomp_alg(scomp)->streams);
 
 	if (!src_isvirt && !src) {
-		const u8 *src;
+		struct scomp_scratch *scratch = scomp_lock_scratch();
|
const u8 *src = scratch->src;
|
||||||
|
|
||||||
scratch = scomp_lock_scratch();
|
|
||||||
src = scratch->src;
|
|
||||||
memcpy_from_sglist(scratch->src, req->src, 0, slen);
|
memcpy_from_sglist(scratch->src, req->src, 0, slen);
|
||||||
|
|
||||||
if (dir)
|
if (dir)
|
||||||
|
|||||||
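The scomp hunk above converts `__acquires(scratch)` to the `__acquire_ret()` wrapper: a lock function that *returns* the locked object cannot name the lock in a plain annotation, so a macro performs the call and tells the context analysis that the returned object's lock is now held. A minimal userspace sketch of that calling pattern, using pthreads and hypothetical names (not the kernel's implementation):

```c
#include <pthread.h>

/*
 * Hypothetical sketch of the __acquire_ret calling pattern: the lock
 * function hands back the object it locked, and the caller releases
 * through a matching unlock helper. In the kernel the macro wrapper
 * additionally informs Clang's context analysis that __ret->lock is held.
 */
struct scratch {
	pthread_mutex_t lock;
	char buf[64];
};

static struct scratch scratch_area = {
	.lock = PTHREAD_MUTEX_INITIALIZER,
};

/* Returns the scratch area with scratch->lock held. */
static struct scratch *lock_scratch(void)
{
	struct scratch *s = &scratch_area;

	pthread_mutex_lock(&s->lock);
	return s;
}

/* Releases the lock taken by lock_scratch(). */
static void unlock_scratch(struct scratch *s)
{
	pthread_mutex_unlock(&s->lock);
}
```

Callers then bracket their scratch use between the two helpers, never touching `scratch_area` directly.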
@@ -18,6 +18,7 @@ use kernel::{
     prelude::*,
     seq_file::SeqFile,
     seq_print,
+    sync::atomic::{ordering::Relaxed, Atomic},
     sync::poll::PollTable,
     sync::Arc,
     task::Pid,
@@ -28,10 +29,7 @@ use kernel::{
 
 use crate::{context::Context, page_range::Shrinker, process::Process, thread::Thread};
 
-use core::{
-    ptr::NonNull,
-    sync::atomic::{AtomicBool, AtomicUsize, Ordering},
-};
+use core::ptr::NonNull;
 
 mod allocation;
 mod context;
@@ -90,9 +88,9 @@ module! {
 }
 
 fn next_debug_id() -> usize {
-    static NEXT_DEBUG_ID: AtomicUsize = AtomicUsize::new(0);
+    static NEXT_DEBUG_ID: Atomic<usize> = Atomic::new(0);
 
-    NEXT_DEBUG_ID.fetch_add(1, Ordering::Relaxed)
+    NEXT_DEBUG_ID.fetch_add(1, Relaxed)
 }
 
 /// Provides a single place to write Binder return values via the
@@ -215,7 +213,7 @@ impl<T: ListArcSafe> DTRWrap<T> {
 
 struct DeliverCode {
     code: u32,
-    skip: AtomicBool,
+    skip: Atomic<bool>,
 }
 
 kernel::list::impl_list_arc_safe! {
@@ -226,7 +224,7 @@ impl DeliverCode {
     fn new(code: u32) -> Self {
         Self {
             code,
-            skip: AtomicBool::new(false),
+            skip: Atomic::new(false),
         }
     }
 
@@ -235,7 +233,7 @@ impl DeliverCode {
     /// This is used instead of removing it from the work list, since `LinkedList::remove` is
     /// unsafe, whereas this method is not.
     fn skip(&self) {
-        self.skip.store(true, Ordering::Relaxed);
+        self.skip.store(true, Relaxed);
     }
 }
 
@@ -245,7 +243,7 @@ impl DeliverToRead for DeliverCode {
         _thread: &Thread,
         writer: &mut BinderReturnWriter<'_>,
     ) -> Result<bool> {
-        if !self.skip.load(Ordering::Relaxed) {
+        if !self.skip.load(Relaxed) {
             writer.write_code(self.code)?;
         }
         Ok(true)
@@ -259,7 +257,7 @@ impl DeliverToRead for DeliverCode {
 
     fn debug_print(&self, m: &SeqFile, prefix: &str, _tprefix: &str) -> Result<()> {
         seq_print!(m, "{}", prefix);
-        if self.skip.load(Ordering::Relaxed) {
+        if self.skip.load(Relaxed) {
             seq_print!(m, "(skipped) ");
         }
         if self.code == defs::BR_TRANSACTION_COMPLETE {
@@ -5,7 +5,7 @@
 //! Keep track of statistics for binder_logs.
 
 use crate::defs::*;
-use core::sync::atomic::{AtomicU32, Ordering::Relaxed};
+use kernel::sync::atomic::{ordering::Relaxed, Atomic};
 use kernel::{ioctl::_IOC_NR, seq_file::SeqFile, seq_print};
 
 const BC_COUNT: usize = _IOC_NR(BC_REPLY_SG) as usize + 1;
@@ -14,14 +14,14 @@ const BR_COUNT: usize = _IOC_NR(BR_TRANSACTION_PENDING_FROZEN) as usize + 1;
 pub(crate) static GLOBAL_STATS: BinderStats = BinderStats::new();
 
 pub(crate) struct BinderStats {
-    bc: [AtomicU32; BC_COUNT],
-    br: [AtomicU32; BR_COUNT],
+    bc: [Atomic<u32>; BC_COUNT],
+    br: [Atomic<u32>; BR_COUNT],
 }
 
 impl BinderStats {
     pub(crate) const fn new() -> Self {
         #[expect(clippy::declare_interior_mutable_const)]
-        const ZERO: AtomicU32 = AtomicU32::new(0);
+        const ZERO: Atomic<u32> = Atomic::new(0);
 
         Self {
             bc: [ZERO; BC_COUNT],
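The binder stats counters converted above are per-event tallies, so every access uses relaxed ordering: no synchronization with other memory is required, only an eventually-correct count. The same idea in standalone C11 (hypothetical names, not the kernel's `Atomic<u32>` type):

```c
#include <stdatomic.h>

/*
 * Sketch: event counters that tolerate relaxed ordering, like the
 * binder BC/BR stats arrays. Each increment is atomic, but no
 * ordering with surrounding loads/stores is implied or needed.
 */
static atomic_uint bc_count;
static atomic_uint br_count;

static void count_bc(void)
{
	atomic_fetch_add_explicit(&bc_count, 1, memory_order_relaxed);
}

static void count_br(void)
{
	atomic_fetch_add_explicit(&br_count, 1, memory_order_relaxed);
}

static unsigned int read_bc(void)
{
	return atomic_load_explicit(&bc_count, memory_order_relaxed);
}
```

Relaxed is the cheapest ordering on weakly ordered architectures; it is correct here precisely because nothing else depends on when a counter update becomes visible.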
@@ -15,6 +15,7 @@ use kernel::{
     security,
     seq_file::SeqFile,
     seq_print,
+    sync::atomic::{ordering::Relaxed, Atomic},
     sync::poll::{PollCondVar, PollTable},
     sync::{Arc, SpinLock},
     task::Task,
@@ -34,10 +35,7 @@ use crate::{
     BinderReturnWriter, DArc, DLArc, DTRWrap, DeliverCode, DeliverToRead,
 };
 
-use core::{
-    mem::size_of,
-    sync::atomic::{AtomicU32, Ordering},
-};
+use core::mem::size_of;
 
 fn is_aligned(value: usize, to: usize) -> bool {
     value % to == 0
@@ -284,8 +282,8 @@ const LOOPER_POLL: u32 = 0x40;
 impl InnerThread {
     fn new() -> Result<Self> {
         fn next_err_id() -> u32 {
-            static EE_ID: AtomicU32 = AtomicU32::new(0);
-            EE_ID.fetch_add(1, Ordering::Relaxed)
+            static EE_ID: Atomic<u32> = Atomic::new(0);
+            EE_ID.fetch_add(1, Relaxed)
         }
 
         Ok(Self {
@@ -1568,7 +1566,7 @@ impl Thread {
 
 #[pin_data]
 struct ThreadError {
-    error_code: AtomicU32,
+    error_code: Atomic<u32>,
     #[pin]
     links_track: AtomicTracker,
 }
@@ -1576,18 +1574,18 @@ struct ThreadError {
 impl ThreadError {
     fn try_new() -> Result<DArc<Self>> {
         DTRWrap::arc_pin_init(pin_init!(Self {
-            error_code: AtomicU32::new(BR_OK),
+            error_code: Atomic::new(BR_OK),
             links_track <- AtomicTracker::new(),
         }))
         .map(ListArc::into_arc)
     }
 
     fn set_error_code(&self, code: u32) {
-        self.error_code.store(code, Ordering::Relaxed);
+        self.error_code.store(code, Relaxed);
     }
 
     fn is_unused(&self) -> bool {
-        self.error_code.load(Ordering::Relaxed) == BR_OK
+        self.error_code.load(Relaxed) == BR_OK
     }
 }
 
@@ -1597,8 +1595,8 @@ impl DeliverToRead for ThreadError {
         _thread: &Thread,
         writer: &mut BinderReturnWriter<'_>,
     ) -> Result<bool> {
-        let code = self.error_code.load(Ordering::Relaxed);
-        self.error_code.store(BR_OK, Ordering::Relaxed);
+        let code = self.error_code.load(Relaxed);
+        self.error_code.store(BR_OK, Relaxed);
         writer.write_code(code)?;
         Ok(true)
     }
@@ -1614,7 +1612,7 @@ impl DeliverToRead for ThreadError {
             m,
             "{}transaction error: {}\n",
             prefix,
-            self.error_code.load(Ordering::Relaxed)
+            self.error_code.load(Relaxed)
         );
         Ok(())
     }
@@ -2,11 +2,11 @@
 
 // Copyright (C) 2025 Google LLC.
 
-use core::sync::atomic::{AtomicBool, Ordering};
 use kernel::{
     prelude::*,
     seq_file::SeqFile,
     seq_print,
+    sync::atomic::{ordering::Relaxed, Atomic},
     sync::{Arc, SpinLock},
     task::Kuid,
     time::{Instant, Monotonic},
@@ -33,7 +33,7 @@ pub(crate) struct Transaction {
     pub(crate) to: Arc<Process>,
     #[pin]
     allocation: SpinLock<Option<Allocation>>,
-    is_outstanding: AtomicBool,
+    is_outstanding: Atomic<bool>,
     code: u32,
     pub(crate) flags: u32,
     data_size: usize,
@@ -105,7 +105,7 @@ impl Transaction {
             offsets_size: trd.offsets_size as _,
             data_address,
             allocation <- kernel::new_spinlock!(Some(alloc.success()), "Transaction::new"),
-            is_outstanding: AtomicBool::new(false),
+            is_outstanding: Atomic::new(false),
             txn_security_ctx_off,
             oneway_spam_detected,
             start_time: Instant::now(),
@@ -145,7 +145,7 @@ impl Transaction {
             offsets_size: trd.offsets_size as _,
             data_address: alloc.ptr,
             allocation <- kernel::new_spinlock!(Some(alloc.success()), "Transaction::new"),
-            is_outstanding: AtomicBool::new(false),
+            is_outstanding: Atomic::new(false),
             txn_security_ctx_off: None,
             oneway_spam_detected,
             start_time: Instant::now(),
@@ -215,8 +215,8 @@ impl Transaction {
 
     pub(crate) fn set_outstanding(&self, to_process: &mut ProcessInner) {
         // No race because this method is only called once.
-        if !self.is_outstanding.load(Ordering::Relaxed) {
-            self.is_outstanding.store(true, Ordering::Relaxed);
+        if !self.is_outstanding.load(Relaxed) {
+            self.is_outstanding.store(true, Relaxed);
             to_process.add_outstanding_txn();
         }
     }
@@ -227,8 +227,8 @@ impl Transaction {
         // destructor, which is guaranteed to not race with any other operations on the
         // transaction. It also cannot race with `set_outstanding`, since submission happens
         // before delivery.
-        if self.is_outstanding.load(Ordering::Relaxed) {
-            self.is_outstanding.store(false, Ordering::Relaxed);
+        if self.is_outstanding.load(Relaxed) {
+            self.is_outstanding.store(false, Relaxed);
             self.to.drop_outstanding_txn();
         }
     }
@@ -548,11 +548,11 @@ int iwl_trans_read_config32(struct iwl_trans *trans, u32 ofs,
 	return iwl_trans_pcie_read_config32(trans, ofs, val);
 }
 
-bool _iwl_trans_grab_nic_access(struct iwl_trans *trans)
+bool iwl_trans_grab_nic_access(struct iwl_trans *trans)
 {
 	return iwl_trans_pcie_grab_nic_access(trans);
 }
-IWL_EXPORT_SYMBOL(_iwl_trans_grab_nic_access);
+IWL_EXPORT_SYMBOL(iwl_trans_grab_nic_access);
 
 void __releases(nic_access)
 iwl_trans_release_nic_access(struct iwl_trans *trans)
@@ -1063,11 +1063,7 @@ int iwl_trans_sw_reset(struct iwl_trans *trans);
 void iwl_trans_set_bits_mask(struct iwl_trans *trans, u32 reg,
 			     u32 mask, u32 value);
 
-bool _iwl_trans_grab_nic_access(struct iwl_trans *trans);
-
-#define iwl_trans_grab_nic_access(trans) \
-	__cond_lock(nic_access, \
-		    likely(_iwl_trans_grab_nic_access(trans)))
+bool iwl_trans_grab_nic_access(struct iwl_trans *trans);
 
 void __releases(nic_access)
 iwl_trans_release_nic_access(struct iwl_trans *trans);
@@ -553,10 +553,7 @@ void iwl_trans_pcie_free(struct iwl_trans *trans);
 void iwl_trans_pcie_free_pnvm_dram_regions(struct iwl_dram_regions *dram_regions,
 					   struct device *dev);
 
-bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans, bool silent);
-#define _iwl_trans_pcie_grab_nic_access(trans, silent) \
-	__cond_lock(nic_access_nobh, \
-		    likely(__iwl_trans_pcie_grab_nic_access(trans, silent)))
+bool _iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans, bool silent);
 
 void iwl_trans_pcie_check_product_reset_status(struct pci_dev *pdev);
 void iwl_trans_pcie_check_product_reset_mode(struct pci_dev *pdev);
|
|||||||
* This version doesn't disable BHs but rather assumes they're
|
* This version doesn't disable BHs but rather assumes they're
|
||||||
* already disabled.
|
* already disabled.
|
||||||
*/
|
*/
|
||||||
bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans, bool silent)
|
bool _iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans, bool silent)
|
||||||
{
|
{
|
||||||
int ret;
|
int ret;
|
||||||
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
|
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
|
||||||
@@ -2415,7 +2415,7 @@ bool iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans)
|
|||||||
bool ret;
|
bool ret;
|
||||||
|
|
||||||
local_bh_disable();
|
local_bh_disable();
|
||||||
ret = __iwl_trans_pcie_grab_nic_access(trans, false);
|
ret = _iwl_trans_pcie_grab_nic_access(trans, false);
|
||||||
if (ret) {
|
if (ret) {
|
||||||
/* keep BHs disabled until iwl_trans_pcie_release_nic_access */
|
/* keep BHs disabled until iwl_trans_pcie_release_nic_access */
|
||||||
return ret;
|
return ret;
|
||||||
|
|||||||
@@ -343,7 +343,7 @@ void dlm_hold_rsb(struct dlm_rsb *r)
 /* TODO move this to lib/refcount.c */
 static __must_check bool
 dlm_refcount_dec_and_write_lock_bh(refcount_t *r, rwlock_t *lock)
-	__cond_acquires(lock)
+	__cond_acquires(true, lock)
 {
 	if (refcount_dec_not_one(r))
 		return false;
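`__cond_acquires(true, lock)` documents a conditional acquire: the caller holds the lock only when the function returns `true`. A pthread sketch of the same shape (a hypothetical helper with simplified refcount logic, not dlm's code):

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static int refs = 1;

/*
 * Conditional acquire, mirroring the shape of
 * dlm_refcount_dec_and_write_lock_bh(): drop a reference and, if it
 * was the last one, return true with list_lock held so the caller can
 * safely unlink the object. On false, the lock is NOT held.
 */
static bool dec_and_lock(void)
{
	pthread_mutex_lock(&list_lock);
	if (--refs == 0)
		return true;	/* last ref: lock stays held */
	pthread_mutex_unlock(&list_lock);
	return false;
}
```

Callers must pair a `true` return with an explicit unlock after teardown; the annotation lets the analysis check exactly that pairing.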
@@ -191,11 +191,12 @@ static inline bool crypto_acomp_req_virt(struct crypto_acomp *tfm)
 void crypto_acomp_free_streams(struct crypto_acomp_streams *s);
 int crypto_acomp_alloc_streams(struct crypto_acomp_streams *s);
 
-struct crypto_acomp_stream *crypto_acomp_lock_stream_bh(
-	struct crypto_acomp_streams *s) __acquires(stream);
+#define crypto_acomp_lock_stream_bh(...) __acquire_ret(_crypto_acomp_lock_stream_bh(__VA_ARGS__), &__ret->lock);
+struct crypto_acomp_stream *_crypto_acomp_lock_stream_bh(
+	struct crypto_acomp_streams *s) __acquires_ret;
 
 static inline void crypto_acomp_unlock_stream_bh(
-	struct crypto_acomp_stream *stream) __releases(stream)
+	struct crypto_acomp_stream *stream) __releases(&stream->lock)
 {
 	spin_unlock_bh(&stream->lock);
 }
@@ -45,7 +45,7 @@ struct crypto_engine {
 
 	struct list_head	list;
 	spinlock_t		queue_lock;
-	struct crypto_queue	queue;
+	struct crypto_queue	queue __guarded_by(&queue_lock);
 	struct device		*dev;
 
 	struct kthread_worker	*kworker;
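`__guarded_by(&queue_lock)` ties the queue to its lock so the compiler can flag any access made without the lock held. A minimal userspace analogue (hypothetical names; the kernel's macros map to Clang attributes, which are compiled out here on non-Clang toolchains and so never change behavior):

```c
#include <pthread.h>

/*
 * Sketch of lock-guarded data. With clang -Wthread-safety and a
 * capability-annotated lock type, touching `pending` without holding
 * queue_lock is a compile-time warning; the attribute below is only
 * illustrative of the concept.
 */
#if defined(__clang__)
#define guarded_by(l)	__attribute__((guarded_by(l)))
#else
#define guarded_by(l)
#endif

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static int pending guarded_by(queue_lock);

static void queue_add(void)
{
	pthread_mutex_lock(&queue_lock);
	pending++;			/* OK: queue_lock held */
	pthread_mutex_unlock(&queue_lock);
}

static int queue_len(void)
{
	int n;

	pthread_mutex_lock(&queue_lock);
	n = pending;
	pthread_mutex_unlock(&queue_lock);
	return n;
}
```

The value of the annotation is that the locking rule lives next to the data it protects, instead of only in a comment.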
@@ -2121,7 +2121,7 @@ raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
  *
  * Safe to use in noinstr code; prefer atomic_try_cmpxchg() elsewhere.
  *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
  */
 static __always_inline bool
 raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
@@ -2155,7 +2155,7 @@ raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
  *
  * Safe to use in noinstr code; prefer atomic_try_cmpxchg_acquire() elsewhere.
  *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
  */
 static __always_inline bool
 raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
@@ -2189,7 +2189,7 @@ raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
  *
  * Safe to use in noinstr code; prefer atomic_try_cmpxchg_release() elsewhere.
  *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
  */
 static __always_inline bool
 raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
@@ -2222,7 +2222,7 @@ raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
  *
  * Safe to use in noinstr code; prefer atomic_try_cmpxchg_relaxed() elsewhere.
  *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
  */
 static __always_inline bool
 raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
@@ -4247,7 +4247,7 @@ raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
  *
  * Safe to use in noinstr code; prefer atomic64_try_cmpxchg() elsewhere.
  *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
  */
 static __always_inline bool
 raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
@@ -4281,7 +4281,7 @@ raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
  *
  * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_acquire() elsewhere.
  *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
  */
 static __always_inline bool
 raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
@@ -4315,7 +4315,7 @@ raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
  *
  * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_release() elsewhere.
  *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
  */
 static __always_inline bool
 raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
@@ -4348,7 +4348,7 @@ raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
  *
  * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_relaxed() elsewhere.
  *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
  */
 static __always_inline bool
 raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
@@ -4690,4 +4690,4 @@ raw_atomic64_dec_if_positive(atomic64_t *v)
 }
 
 #endif /* _LINUX_ATOMIC_FALLBACK_H */
-// b565db590afeeff0d7c9485ccbca5bb6e155749f
+// 206314f82b8b73a5c3aa69cf7f35ac9e7b5d6b58
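The kernel-doc corrected above states the `try_cmpxchg` contract: return `@true` if the exchange occurred, and on failure update `*old` to the value actually observed so the caller can retry without reloading. C11's compare-exchange has the same contract, sketched here (the wrapper name is illustrative, not the kernel's implementation):

```c
#include <stdatomic.h>
#include <stdbool.h>

/*
 * Same contract as atomic_try_cmpxchg(): if *v == *old, store new_val
 * and return true; otherwise write the observed value of *v back into
 * *old and return false.
 */
static bool try_cmpxchg(atomic_int *v, int *old, int new_val)
{
	return atomic_compare_exchange_strong(v, old, new_val);
}
```

The failure-path update of `*old` is what makes CAS loops cheap: the next iteration compares against the freshly observed value instead of re-reading the atomic separately.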
@@ -1269,7 +1269,7 @@ atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
 *
 * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg() there.
 *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
 */
 static __always_inline bool
 atomic_try_cmpxchg(atomic_t *v, int *old, int new)
@@ -1292,7 +1292,7 @@ atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 *
 * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_acquire() there.
 *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
 */
 static __always_inline bool
 atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
@@ -1314,7 +1314,7 @@ atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
 *
 * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_release() there.
 *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
 */
 static __always_inline bool
 atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
@@ -1337,7 +1337,7 @@ atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
 *
 * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_relaxed() there.
 *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
 */
 static __always_inline bool
 atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
@@ -2847,7 +2847,7 @@ atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
 *
 * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg() there.
 *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
 */
 static __always_inline bool
 atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
@@ -2870,7 +2870,7 @@ atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
 *
 * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_acquire() there.
 *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
 */
 static __always_inline bool
 atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
@@ -2892,7 +2892,7 @@ atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
 *
 * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_release() there.
 *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
 */
 static __always_inline bool
 atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
@@ -2915,7 +2915,7 @@ atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
 *
 * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_relaxed() there.
 *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
 */
 static __always_inline bool
 atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
@@ -4425,7 +4425,7 @@ atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
 *
 * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg() there.
 *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
 */
 static __always_inline bool
 atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
@@ -4448,7 +4448,7 @@ atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
 *
 * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_acquire() there.
 *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
 */
 static __always_inline bool
 atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
@@ -4470,7 +4470,7 @@ atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
 *
 * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_release() there.
 *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
 */
 static __always_inline bool
 atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
@@ -4493,7 +4493,7 @@ atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
 *
 * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_relaxed() there.
 *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
 */
 static __always_inline bool
 atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
@@ -5050,4 +5050,4 @@ atomic_long_dec_if_positive(atomic_long_t *v)
 
 
 #endif /* _LINUX_ATOMIC_INSTRUMENTED_H */
-// f618ac667f868941a84ce0ab2242f1786e049ed4
+// 9dd948d3012b22c4e75933a5172983f912e46439
include/linux/atomic/atomic-long.h

@@ -1449,7 +1449,7 @@ raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
  *
  * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg() elsewhere.
  *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
  */
 static __always_inline bool
 raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
@@ -1473,7 +1473,7 @@ raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
  *
  * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_acquire() elsewhere.
  *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
  */
 static __always_inline bool
 raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
@@ -1497,7 +1497,7 @@ raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
  *
  * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_release() elsewhere.
  *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
  */
 static __always_inline bool
 raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
@@ -1521,7 +1521,7 @@ raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
  *
  * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_relaxed() elsewhere.
  *
- * Return: @true if the exchange occured, @false otherwise.
+ * Return: @true if the exchange occurred, @false otherwise.
  */
 static __always_inline bool
 raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
@@ -1809,4 +1809,4 @@ raw_atomic_long_dec_if_positive(atomic_long_t *v)
 }
 
 #endif /* _LINUX_ATOMIC_LONG_H */
-// eadf183c3600b8b92b91839dd3be6bcc560c752d
+// 4b882bf19018602c10816c52f8b4ae280adc887b
include/linux/bit_spinlock.h

@@ -7,6 +7,18 @@
 #include <linux/atomic.h>
 #include <linux/bug.h>
 
+#include <asm/processor.h> /* for cpu_relax() */
+
+/*
+ * For static context analysis, we need a unique token for each possible bit
+ * that can be used as a bit_spinlock. The easiest way to do that is to create a
+ * fake context that we can cast to with the __bitlock(bitnum, addr) macro
+ * below, which will give us unique instances for each (bit, addr) pair that the
+ * static analysis can use.
+ */
+context_lock_struct(__context_bitlock) { };
+#define __bitlock(bitnum, addr) (struct __context_bitlock *)(bitnum + (addr))
+
 /*
  * bit-based spin_lock()
  *
@@ -14,6 +26,7 @@
  * are significantly faster.
  */
 static __always_inline void bit_spin_lock(int bitnum, unsigned long *addr)
+	__acquires(__bitlock(bitnum, addr))
 {
 	/*
 	 * Assuming the lock is uncontended, this never enters
@@ -32,13 +45,14 @@ static __always_inline void bit_spin_lock(int bitnum, unsigned long *addr)
 		preempt_disable();
 	}
 #endif
-	__acquire(bitlock);
+	__acquire(__bitlock(bitnum, addr));
 }
 
 /*
  * Return true if it was acquired
  */
 static __always_inline int bit_spin_trylock(int bitnum, unsigned long *addr)
+	__cond_acquires(true, __bitlock(bitnum, addr))
 {
 	preempt_disable();
 #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
@@ -47,7 +61,7 @@ static __always_inline int bit_spin_trylock(int bitnum, unsigned long *addr)
 		return 0;
 	}
 #endif
-	__acquire(bitlock);
+	__acquire(__bitlock(bitnum, addr));
 	return 1;
 }
 
@@ -55,6 +69,7 @@ static __always_inline int bit_spin_trylock(int bitnum, unsigned long *addr)
  * bit-based spin_unlock()
  */
 static __always_inline void bit_spin_unlock(int bitnum, unsigned long *addr)
+	__releases(__bitlock(bitnum, addr))
 {
 #ifdef CONFIG_DEBUG_SPINLOCK
 	BUG_ON(!test_bit(bitnum, addr));
@@ -63,7 +78,7 @@ static __always_inline void bit_spin_unlock(int bitnum, unsigned long *addr)
 	clear_bit_unlock(bitnum, addr);
 #endif
 	preempt_enable();
-	__release(bitlock);
+	__release(__bitlock(bitnum, addr));
 }
 
 /*
@@ -72,6 +87,7 @@ static __always_inline void bit_spin_unlock(int bitnum, unsigned long *addr)
  * protecting the rest of the flags in the word.
 */
 static __always_inline void __bit_spin_unlock(int bitnum, unsigned long *addr)
+	__releases(__bitlock(bitnum, addr))
 {
 #ifdef CONFIG_DEBUG_SPINLOCK
 	BUG_ON(!test_bit(bitnum, addr));
@@ -80,7 +96,7 @@ static __always_inline void __bit_spin_unlock(int bitnum, unsigned long *addr)
 	__clear_bit_unlock(bitnum, addr);
 #endif
 	preempt_enable();
-	__release(bitlock);
+	__release(__bitlock(bitnum, addr));
 }
 
 /*
include/linux/cleanup.h

@@ -278,16 +278,21 @@ const volatile void * __must_check_fn(const volatile void *val)
 
 #define DEFINE_CLASS(_name, _type, _exit, _init, _init_args...) \
 	typedef _type class_##_name##_t; \
+	typedef _type lock_##_name##_t; \
 	static __always_inline void class_##_name##_destructor(_type *p) \
+		__no_context_analysis \
 	{ _type _T = *p; _exit; } \
 	static __always_inline _type class_##_name##_constructor(_init_args) \
+		__no_context_analysis \
 	{ _type t = _init; return t; }
 
 #define EXTEND_CLASS(_name, ext, _init, _init_args...) \
+	typedef lock_##_name##_t lock_##_name##ext##_t; \
 	typedef class_##_name##_t class_##_name##ext##_t; \
 	static __always_inline void class_##_name##ext##_destructor(class_##_name##_t *p) \
 	{ class_##_name##_destructor(p); } \
 	static __always_inline class_##_name##_t class_##_name##ext##_constructor(_init_args) \
+		__no_context_analysis \
 	{ class_##_name##_t t = _init; return t; }
 
 #define CLASS(_name, var) \
@@ -474,35 +479,80 @@ _label: \
 */
 
 #define __DEFINE_UNLOCK_GUARD(_name, _type, _unlock, ...) \
+	typedef _type lock_##_name##_t; \
 	typedef struct { \
 		_type *lock; \
 		__VA_ARGS__; \
 	} class_##_name##_t; \
 	\
 	static __always_inline void class_##_name##_destructor(class_##_name##_t *_T) \
+		__no_context_analysis \
 	{ \
 		if (!__GUARD_IS_ERR(_T->lock)) { _unlock; } \
 	} \
 	\
 	__DEFINE_GUARD_LOCK_PTR(_name, &_T->lock)
 
-#define __DEFINE_LOCK_GUARD_1(_name, _type, _lock) \
+#define __DEFINE_LOCK_GUARD_1(_name, _type, ...) \
 	static __always_inline class_##_name##_t class_##_name##_constructor(_type *l) \
+		__no_context_analysis \
 	{ \
 		class_##_name##_t _t = { .lock = l }, *_T = &_t; \
-		_lock; \
+		__VA_ARGS__; \
 		return _t; \
 	}
 
-#define __DEFINE_LOCK_GUARD_0(_name, _lock) \
+#define __DEFINE_LOCK_GUARD_0(_name, ...) \
 	static __always_inline class_##_name##_t class_##_name##_constructor(void) \
+		__no_context_analysis \
 	{ \
 		class_##_name##_t _t = { .lock = (void*)1 }, \
			*_T __maybe_unused = &_t; \
-		_lock; \
+		__VA_ARGS__; \
 		return _t; \
 	}
 
+#define DECLARE_LOCK_GUARD_0_ATTRS(_name, _lock, _unlock) \
+	static inline class_##_name##_t class_##_name##_constructor(void) _lock;\
+	static inline void class_##_name##_destructor(class_##_name##_t *_T) _unlock;
+
+/*
+ * To support Context Analysis, we need to allow the compiler to see the
+ * acquisition and release of the context lock. However, the "cleanup" helpers
+ * wrap the lock in a struct passed through separate helper functions, which
+ * hides the lock alias from the compiler (no inter-procedural analysis).
+ *
+ * To make it work, we introduce an explicit alias to the context lock instance
+ * that is "cleaned" up with a separate cleanup helper. This helper is a dummy
+ * function that does nothing at runtime, but has the "_unlock" attribute to
+ * tell the compiler what happens at the end of the scope.
+ *
+ * To generalize the pattern, the WITH_LOCK_GUARD_1_ATTRS() macro should be used
+ * to redefine the constructor, which then also creates the alias variable with
+ * the right "cleanup" attribute, *after* DECLARE_LOCK_GUARD_1_ATTRS() has been
+ * used.
+ *
+ * Example usage:
+ *
+ *	DECLARE_LOCK_GUARD_1_ATTRS(mutex, __acquires(_T), __releases(*(struct mutex **)_T))
+ *	#define class_mutex_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex, _T)
+ *
+ * Note: To support the for-loop based scoped helpers, the auxiliary variable
+ * must be a pointer to the "class" type because it is defined in the same
+ * statement as the guard variable. However, we initialize it with the lock
+ * pointer (despite the type mismatch, the compiler's alias analysis still works
+ * as expected). The "_unlock" attribute receives a pointer to the auxiliary
+ * variable (a double pointer to the class type), and must be cast and
+ * dereferenced appropriately.
+ */
+#define DECLARE_LOCK_GUARD_1_ATTRS(_name, _lock, _unlock) \
+	static inline class_##_name##_t class_##_name##_constructor(lock_##_name##_t *_T) _lock;\
+	static __always_inline void __class_##_name##_cleanup_ctx(class_##_name##_t **_T) \
+		__no_context_analysis _unlock { }
+#define WITH_LOCK_GUARD_1_ATTRS(_name, _T) \
+	class_##_name##_constructor(_T), \
+	*__UNIQUE_ID(unlock) __cleanup(__class_##_name##_cleanup_ctx) = (void *)(unsigned long)(_T)
+
 #define DEFINE_LOCK_GUARD_1(_name, _type, _lock, _unlock, ...) \
 __DEFINE_CLASS_IS_CONDITIONAL(_name, false); \
 __DEFINE_UNLOCK_GUARD(_name, _type, _unlock, __VA_ARGS__) \
include/linux/compiler-context-analysis.h (new file, 436 lines)
@@ -0,0 +1,436 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Macros and attributes for compiler-based static context analysis.
+ */
+
+#ifndef _LINUX_COMPILER_CONTEXT_ANALYSIS_H
+#define _LINUX_COMPILER_CONTEXT_ANALYSIS_H
+
+#if defined(WARN_CONTEXT_ANALYSIS) && !defined(__CHECKER__) && !defined(__GENKSYMS__)
+
+/*
+ * These attributes define new context lock (Clang: capability) types.
+ * Internal only.
+ */
+# define __ctx_lock_type(name)			__attribute__((capability(#name)))
+# define __reentrant_ctx_lock			__attribute__((reentrant_capability))
+# define __acquires_ctx_lock(...)		__attribute__((acquire_capability(__VA_ARGS__)))
+# define __acquires_shared_ctx_lock(...)	__attribute__((acquire_shared_capability(__VA_ARGS__)))
+# define __try_acquires_ctx_lock(ret, var)	__attribute__((try_acquire_capability(ret, var)))
+# define __try_acquires_shared_ctx_lock(ret, var) __attribute__((try_acquire_shared_capability(ret, var)))
+# define __releases_ctx_lock(...)		__attribute__((release_capability(__VA_ARGS__)))
+# define __releases_shared_ctx_lock(...)	__attribute__((release_shared_capability(__VA_ARGS__)))
+# define __returns_ctx_lock(var)		__attribute__((lock_returned(var)))
+
+/*
+ * The below are used to annotate code being checked. Internal only.
+ */
+# define __excludes_ctx_lock(...)		__attribute__((locks_excluded(__VA_ARGS__)))
+# define __requires_ctx_lock(...)		__attribute__((requires_capability(__VA_ARGS__)))
+# define __requires_shared_ctx_lock(...)	__attribute__((requires_shared_capability(__VA_ARGS__)))
+
+/*
+ * The "assert_capability" attribute is a bit confusingly named. It does not
+ * generate a check. Instead, it tells the analysis to *assume* the capability
+ * is held. This is used for augmenting runtime assertions, that can then help
+ * with patterns beyond the compiler's static reasoning abilities.
+ */
+# define __assumes_ctx_lock(...)		__attribute__((assert_capability(__VA_ARGS__)))
+# define __assumes_shared_ctx_lock(...)		__attribute__((assert_shared_capability(__VA_ARGS__)))
+
+/**
+ * __guarded_by - struct member and globals attribute, declares variable
+ *                only accessible within active context
+ *
+ * Declares that the struct member or global variable is only accessible within
+ * the context entered by the given context lock. Read operations on the data
+ * require shared access, while write operations require exclusive access.
+ *
+ * .. code-block:: c
+ *
+ *	struct some_state {
+ *		spinlock_t lock;
+ *		long counter __guarded_by(&lock);
+ *	};
+ */
+# define __guarded_by(...)			__attribute__((guarded_by(__VA_ARGS__)))
+
/**
|
||||||
|
* __pt_guarded_by - struct member and globals attribute, declares pointed-to
|
||||||
|
* data only accessible within active context
|
||||||
|
*
|
||||||
|
* Declares that the data pointed to by the struct member pointer or global
|
||||||
|
* pointer is only accessible within the context entered by the given context
|
||||||
|
* lock. Read operations on the data require shared access, while write
|
||||||
|
* operations require exclusive access.
|
||||||
|
*
|
||||||
|
* .. code-block:: c
|
||||||
|
*
|
||||||
|
* struct some_state {
|
||||||
|
* spinlock_t lock;
|
||||||
|
* long *counter __pt_guarded_by(&lock);
|
||||||
|
* };
|
||||||
|
*/
|
||||||
|
# define __pt_guarded_by(...) __attribute__((pt_guarded_by(__VA_ARGS__)))
|
||||||
|
|
||||||
|
/**
|
||||||
|
* context_lock_struct() - declare or define a context lock struct
|
||||||
|
* @name: struct name
|
||||||
|
*
|
||||||
|
* Helper to declare or define a struct type that is also a context lock.
|
||||||
|
*
|
||||||
|
* .. code-block:: c
|
||||||
|
*
|
||||||
|
* context_lock_struct(my_handle) {
|
||||||
|
* int foo;
|
||||||
|
* long bar;
|
||||||
|
* };
|
||||||
|
*
|
||||||
|
* struct some_state {
|
||||||
|
* ...
|
||||||
|
* };
|
||||||
|
* // ... declared elsewhere ...
|
||||||
|
* context_lock_struct(some_state);
|
||||||
|
*
|
||||||
|
* Note: The implementation defines several helper functions that can acquire
|
||||||
|
* and release the context lock.
|
||||||
|
*/
|
||||||
|
# define context_lock_struct(name, ...) \
|
||||||
|
struct __ctx_lock_type(name) __VA_ARGS__ name; \
|
||||||
|
static __always_inline void __acquire_ctx_lock(const struct name *var) \
|
||||||
|
__attribute__((overloadable)) __no_context_analysis __acquires_ctx_lock(var) { } \
|
||||||
|
static __always_inline void __acquire_shared_ctx_lock(const struct name *var) \
|
||||||
|
__attribute__((overloadable)) __no_context_analysis __acquires_shared_ctx_lock(var) { } \
|
||||||
|
static __always_inline bool __try_acquire_ctx_lock(const struct name *var, bool ret) \
|
||||||
|
__attribute__((overloadable)) __no_context_analysis __try_acquires_ctx_lock(1, var) \
|
||||||
|
{ return ret; } \
|
||||||
|
static __always_inline bool __try_acquire_shared_ctx_lock(const struct name *var, bool ret) \
|
||||||
|
__attribute__((overloadable)) __no_context_analysis __try_acquires_shared_ctx_lock(1, var) \
|
||||||
|
{ return ret; } \
|
||||||
|
static __always_inline void __release_ctx_lock(const struct name *var) \
|
||||||
|
__attribute__((overloadable)) __no_context_analysis __releases_ctx_lock(var) { } \
|
||||||
|
static __always_inline void __release_shared_ctx_lock(const struct name *var) \
|
||||||
|
__attribute__((overloadable)) __no_context_analysis __releases_shared_ctx_lock(var) { } \
|
||||||
|
static __always_inline void __assume_ctx_lock(const struct name *var) \
|
||||||
|
__attribute__((overloadable)) __assumes_ctx_lock(var) { } \
|
||||||
|
static __always_inline void __assume_shared_ctx_lock(const struct name *var) \
|
||||||
|
__attribute__((overloadable)) __assumes_shared_ctx_lock(var) { } \
|
||||||
|
struct name
|
||||||
|
|
||||||
|
/**
|
||||||
|
* disable_context_analysis() - disables context analysis
|
||||||
|
*
|
||||||
|
* Disables context analysis. Must be paired with a later
|
||||||
|
* enable_context_analysis().
|
||||||
|
*/
|
||||||
|
# define disable_context_analysis() \
|
||||||
|
__diag_push(); \
|
||||||
|
__diag_ignore_all("-Wunknown-warning-option", "") \
|
||||||
|
__diag_ignore_all("-Wthread-safety", "") \
|
||||||
|
__diag_ignore_all("-Wthread-safety-pointer", "")
|
||||||
|
|
||||||
|
/**
|
||||||
|
* enable_context_analysis() - re-enables context analysis
|
||||||
|
*
|
||||||
|
* Re-enables context analysis. Must be paired with a prior
|
||||||
|
* disable_context_analysis().
|
||||||
|
*/
|
||||||
|
# define enable_context_analysis() __diag_pop()
|
||||||
|
|
||||||
|
/**
|
||||||
|
* __no_context_analysis - function attribute, disables context analysis
|
||||||
|
*
|
||||||
|
* Function attribute denoting that context analysis is disabled for the
|
||||||
|
* whole function. Prefer use of `context_unsafe()` where possible.
|
||||||
|
*/
|
||||||
|
# define __no_context_analysis __attribute__((no_thread_safety_analysis))
|
||||||
|
|
||||||
|
#else /* !WARN_CONTEXT_ANALYSIS */
|
||||||
|
|
||||||
|
# define __ctx_lock_type(name)
|
||||||
|
# define __reentrant_ctx_lock
|
||||||
|
# define __acquires_ctx_lock(...)
|
||||||
|
# define __acquires_shared_ctx_lock(...)
|
||||||
|
# define __try_acquires_ctx_lock(ret, var)
|
||||||
|
# define __try_acquires_shared_ctx_lock(ret, var)
|
||||||
|
# define __releases_ctx_lock(...)
|
||||||
|
# define __releases_shared_ctx_lock(...)
|
||||||
|
# define __assumes_ctx_lock(...)
|
||||||
|
# define __assumes_shared_ctx_lock(...)
|
||||||
|
# define __returns_ctx_lock(var)
|
||||||
|
# define __guarded_by(...)
|
||||||
|
# define __pt_guarded_by(...)
|
||||||
|
# define __excludes_ctx_lock(...)
|
||||||
|
# define __requires_ctx_lock(...)
|
||||||
|
# define __requires_shared_ctx_lock(...)
|
||||||
|
# define __acquire_ctx_lock(var) do { } while (0)
|
||||||
|
# define __acquire_shared_ctx_lock(var) do { } while (0)
|
||||||
|
# define __try_acquire_ctx_lock(var, ret) (ret)
|
||||||
|
# define __try_acquire_shared_ctx_lock(var, ret) (ret)
|
||||||
|
# define __release_ctx_lock(var) do { } while (0)
|
||||||
|
# define __release_shared_ctx_lock(var) do { } while (0)
|
||||||
|
# define __assume_ctx_lock(var) do { (void)(var); } while (0)
|
||||||
|
# define __assume_shared_ctx_lock(var) do { (void)(var); } while (0)
|
||||||
|
# define context_lock_struct(name, ...) struct __VA_ARGS__ name
|
||||||
|
# define disable_context_analysis()
|
||||||
|
# define enable_context_analysis()
|
||||||
|
# define __no_context_analysis
|
||||||
|
|
||||||
|
#endif /* WARN_CONTEXT_ANALYSIS */
|
||||||
|
|
||||||
|
/**
|
||||||
|
* context_unsafe() - disable context checking for contained code
|
||||||
|
*
|
||||||
|
* Disables context checking for contained statements or expression.
|
||||||
|
*
|
||||||
|
* .. code-block:: c
|
||||||
|
*
|
||||||
|
* struct some_data {
|
||||||
|
* spinlock_t lock;
|
||||||
|
* int counter __guarded_by(&lock);
|
||||||
|
* };
|
||||||
|
*
|
||||||
|
* int foo(struct some_data *d)
|
||||||
|
* {
|
||||||
|
* // ...
|
||||||
|
* // other code that is still checked ...
|
||||||
|
* // ...
|
||||||
|
* return context_unsafe(d->counter);
|
||||||
|
* }
|
||||||
|
*/
|
||||||
|
#define context_unsafe(...) \
|
||||||
|
({ \
|
||||||
|
disable_context_analysis(); \
|
||||||
|
__VA_ARGS__; \
|
||||||
|
enable_context_analysis() \
|
||||||
|
})
|
||||||
|
|
||||||
|
/**
|
||||||
|
* __context_unsafe() - function attribute, disable context checking
|
||||||
|
* @comment: comment explaining why opt-out is safe
|
||||||
|
*
|
||||||
|
* Function attribute denoting that context analysis is disabled for the
|
||||||
|
* whole function. Forces adding an inline comment as argument.
|
||||||
|
*/
|
||||||
|
#define __context_unsafe(comment) __no_context_analysis
|
||||||
|
|
||||||
|
/**
|
||||||
|
* context_unsafe_alias() - helper to insert a context lock "alias barrier"
|
||||||
|
* @p: pointer aliasing a context lock or object containing context locks
|
||||||
|
*
|
||||||
|
* No-op function that acts as a "context lock alias barrier", where the
|
||||||
|
* analysis rightfully detects that we're switching aliases, but the switch is
|
||||||
|
* considered safe but beyond the analysis reasoning abilities.
|
||||||
|
*
|
||||||
|
* This should be inserted before the first use of such an alias.
|
||||||
|
*
|
||||||
|
* Implementation Note: The compiler ignores aliases that may be reassigned but
|
||||||
|
* their value cannot be determined (e.g. when passing a non-const pointer to an
|
||||||
|
* alias as a function argument).
|
||||||
|
*/
|
||||||
|
#define context_unsafe_alias(p) _context_unsafe_alias((void **)&(p))
|
||||||
|
static inline void _context_unsafe_alias(void **p) { }
|
||||||
|
|
||||||
|
/**
|
||||||
|
* token_context_lock() - declare an abstract global context lock instance
|
||||||
|
* @name: token context lock name
|
||||||
|
*
|
||||||
|
* Helper that declares an abstract global context lock instance @name, but not
|
||||||
|
* backed by a real data structure (linker error if accidentally referenced).
|
||||||
|
* The type name is `__ctx_lock_@name`.
|
||||||
|
*/
|
||||||
|
#define token_context_lock(name, ...) \
|
||||||
|
context_lock_struct(__ctx_lock_##name, ##__VA_ARGS__) {}; \
|
||||||
|
extern const struct __ctx_lock_##name *name
|
||||||
|
|
||||||
|
/**
|
||||||
|
* token_context_lock_instance() - declare another instance of a global context lock
|
||||||
|
* @ctx: token context lock previously declared with token_context_lock()
|
||||||
|
* @name: name of additional global context lock instance
|
||||||
|
*
|
||||||
|
* Helper that declares an additional instance @name of the same token context
|
||||||
|
* lock class @ctx. This is helpful where multiple related token contexts are
|
||||||
|
* declared, to allow using the same underlying type (`__ctx_lock_@ctx`) as
|
||||||
|
* function arguments.
|
||||||
|
*/
|
||||||
|
#define token_context_lock_instance(ctx, name) \
|
||||||
|
extern const struct __ctx_lock_##ctx *name
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Common keywords for static context analysis.
|
||||||
|
*/
|
||||||
|
|
||||||
|
/**
|
||||||
|
* __must_hold() - function attribute, caller must hold exclusive context lock
|
||||||
|
*
|
||||||
|
* Function attribute declaring that the caller must hold the given context
|
||||||
|
* lock instance(s) exclusively.
|
||||||
|
*/
|
||||||
|
#define __must_hold(...) __requires_ctx_lock(__VA_ARGS__)
|
||||||
|
|
||||||
|
/**
|
||||||
|
* __must_not_hold() - function attribute, caller must not hold context lock
|
||||||
|
*
|
||||||
|
* Function attribute declaring that the caller must not hold the given context
|
||||||
|
* lock instance(s).
|
||||||
|
*/
|
||||||
|
#define __must_not_hold(...) __excludes_ctx_lock(__VA_ARGS__)
|
||||||
|
|
||||||
|
/**
|
||||||
|
* __acquires() - function attribute, function acquires context lock exclusively
|
||||||
|
*
|
||||||
|
* Function attribute declaring that the function acquires the given context
|
||||||
|
* lock instance(s) exclusively, but does not release them.
|
||||||
|
*/
|
||||||
|
#define __acquires(...) __acquires_ctx_lock(__VA_ARGS__)
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Clang's analysis does not care precisely about the value, only that it is
|
||||||
|
* either zero or non-zero. So the __cond_acquires() interface might be
|
||||||
|
* misleading if we say that @ret is the value returned if acquired. Instead,
|
||||||
|
* provide symbolic variants which we translate.
|
||||||
|
*/
|
||||||
|
#define __cond_acquires_impl_true(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(1, x)
|
||||||
|
#define __cond_acquires_impl_false(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(0, x)
|
||||||
|
#define __cond_acquires_impl_nonzero(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(1, x)
|
||||||
|
#define __cond_acquires_impl_0(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(0, x)
|
||||||
|
#define __cond_acquires_impl_nonnull(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(1, x)
|
||||||
|
#define __cond_acquires_impl_NULL(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(0, x)
|
||||||
|
|
||||||
|
/**
|
||||||
|
* __cond_acquires() - function attribute, function conditionally
|
||||||
|
* acquires a context lock exclusively
|
||||||
|
* @ret: abstract value returned by function if context lock acquired
|
||||||
|
* @x: context lock instance pointer
|
||||||
|
*
|
||||||
|
* Function attribute declaring that the function conditionally acquires the
|
||||||
|
* given context lock instance @x exclusively, but does not release it. The
|
||||||
|
* function return value @ret denotes when the context lock is acquired.
|
||||||
|
*
|
||||||
|
* @ret may be one of: true, false, nonzero, 0, nonnull, NULL.
|
||||||
|
*/
|
||||||
|
#define __cond_acquires(ret, x) __cond_acquires_impl_##ret(x)
|
||||||
|
|
||||||
|
/**
 * __releases() - function attribute, function releases a context lock exclusively
 *
 * Function attribute declaring that the function releases the given context
 * lock instance(s) exclusively. The associated context(s) must be active on
 * entry.
 */
#define __releases(...) __releases_ctx_lock(__VA_ARGS__)

/**
 * __acquire() - function to acquire context lock exclusively
 * @x: context lock instance pointer
 *
 * No-op function that acquires the given context lock instance @x exclusively.
 */
#define __acquire(x) __acquire_ctx_lock(x)

/**
 * __release() - function to release context lock exclusively
 * @x: context lock instance pointer
 *
 * No-op function that releases the given context lock instance @x.
 */
#define __release(x) __release_ctx_lock(x)

/**
 * __must_hold_shared() - function attribute, caller must hold shared context lock
 *
 * Function attribute declaring that the caller must hold the given context
 * lock instance(s) with shared access.
 */
#define __must_hold_shared(...) __requires_shared_ctx_lock(__VA_ARGS__)

/**
 * __acquires_shared() - function attribute, function acquires context lock shared
 *
 * Function attribute declaring that the function acquires the given
 * context lock instance(s) with shared access, but does not release them.
 */
#define __acquires_shared(...) __acquires_shared_ctx_lock(__VA_ARGS__)

/**
 * __cond_acquires_shared() - function attribute, function conditionally
 *                            acquires a context lock shared
 * @ret: abstract value returned by function if context lock acquired
 * @x: context lock instance pointer
 *
 * Function attribute declaring that the function conditionally acquires the
 * given context lock instance @x with shared access, but does not release it.
 * The function return value @ret denotes when the context lock is acquired.
 *
 * @ret may be one of: true, false, nonzero, 0, nonnull, NULL.
 */
#define __cond_acquires_shared(ret, x) __cond_acquires_impl_##ret(x, _shared)

/**
 * __releases_shared() - function attribute, function releases a
 *                       context lock shared
 *
 * Function attribute declaring that the function releases the given context
 * lock instance(s) with shared access. The associated context(s) must be
 * active on entry.
 */
#define __releases_shared(...) __releases_shared_ctx_lock(__VA_ARGS__)

/**
 * __acquire_shared() - function to acquire context lock shared
 * @x: context lock instance pointer
 *
 * No-op function that acquires the given context lock instance @x with shared
 * access.
 */
#define __acquire_shared(x) __acquire_shared_ctx_lock(x)

/**
 * __release_shared() - function to release context lock shared
 * @x: context lock instance pointer
 *
 * No-op function that releases the given context lock instance @x with shared
 * access.
 */
#define __release_shared(x) __release_shared_ctx_lock(x)

/**
 * __acquire_ret() - helper to acquire context lock of return value
 * @call: call expression
 * @ret_expr: acquire expression that uses __ret
 */
#define __acquire_ret(call, ret_expr)		\
	({					\
		__auto_type __ret = call;	\
		__acquire(ret_expr);		\
		__ret;				\
	})

/**
 * __acquire_shared_ret() - helper to acquire context lock shared of return value
 * @call: call expression
 * @ret_expr: acquire shared expression that uses __ret
 */
#define __acquire_shared_ret(call, ret_expr)	\
	({					\
		__auto_type __ret = call;	\
		__acquire_shared(ret_expr);	\
		__ret;				\
	})

/*
 * Attributes to mark functions returning acquired context locks.
 *
 * This is purely cosmetic to help readability, and should be used with the
 * above macros as follows:
 *
 *	struct foo { spinlock_t lock; ... };
 *	...
 *	#define myfunc(...) __acquire_ret(_myfunc(__VA_ARGS__), &__ret->lock)
 *	struct foo *_myfunc(int bar) __acquires_ret;
 *	...
 */
#define __acquires_ret __no_context_analysis
#define __acquires_shared_ret __no_context_analysis

#endif /* _LINUX_COMPILER_CONTEXT_ANALYSIS_H */
@@ -190,7 +190,9 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
 #define data_race(expr)							\
 ({									\
 	__kcsan_disable_current();					\
+	disable_context_analysis();					\
 	auto __v = (expr);						\
+	enable_context_analysis();					\
 	__kcsan_enable_current();					\
 	__v;								\
 })
@@ -41,6 +41,8 @@
 # define BTF_TYPE_TAG(value) /* nothing */
 #endif
 
+#include <linux/compiler-context-analysis.h>
+
 /* sparse defines __CHECKER__; see Documentation/dev-tools/sparse.rst */
 #ifdef __CHECKER__
 /* address spaces */
@@ -51,14 +53,6 @@
 # define __rcu		__attribute__((noderef, address_space(__rcu)))
 static inline void __chk_user_ptr(const volatile void __user *ptr) { }
 static inline void __chk_io_ptr(const volatile void __iomem *ptr) { }
-/* context/locking */
-# define __must_hold(x)	__attribute__((context(x,1,1)))
-# define __acquires(x)	__attribute__((context(x,0,1)))
-# define __cond_acquires(x)	__attribute__((context(x,0,-1)))
-# define __releases(x)	__attribute__((context(x,1,0)))
-# define __acquire(x)	__context__(x,1)
-# define __release(x)	__context__(x,-1)
-# define __cond_lock(x,c)	((c) ? ({ __acquire(x); 1; }) : 0)
 /* other */
 # define __force	__attribute__((force))
 # define __nocast	__attribute__((nocast))
@@ -79,14 +73,6 @@ static inline void __chk_io_ptr(const volatile void __iomem *ptr) { }
 
 # define __chk_user_ptr(x)	(void)0
 # define __chk_io_ptr(x)	(void)0
-/* context/locking */
-# define __must_hold(x)
-# define __acquires(x)
-# define __cond_acquires(x)
-# define __releases(x)
-# define __acquire(x)	(void)0
-# define __release(x)	(void)0
-# define __cond_lock(x,c)	(c)
 /* other */
 # define __force
 # define __nocast
@@ -492,8 +492,8 @@ static inline bool console_srcu_read_lock_is_held(void)
 extern int console_srcu_read_lock(void);
 extern void console_srcu_read_unlock(int cookie);
 
-extern void console_list_lock(void) __acquires(console_mutex);
-extern void console_list_unlock(void) __releases(console_mutex);
+extern void console_list_lock(void);
+extern void console_list_unlock(void);
 
 extern struct hlist_head console_list;
@@ -239,18 +239,16 @@ ssize_t debugfs_read_file_str(struct file *file, char __user *user_buf,
  * @cancel: callback to call
  * @cancel_data: extra data for the callback to call
  */
-struct debugfs_cancellation {
+context_lock_struct(debugfs_cancellation) {
 	struct list_head list;
 	void (*cancel)(struct dentry *, void *);
 	void *cancel_data;
 };
 
-void __acquires(cancellation)
-debugfs_enter_cancellation(struct file *file,
-			   struct debugfs_cancellation *cancellation);
-void __releases(cancellation)
-debugfs_leave_cancellation(struct file *file,
-			   struct debugfs_cancellation *cancellation);
+void debugfs_enter_cancellation(struct file *file,
+				struct debugfs_cancellation *cancellation) __acquires(cancellation);
+void debugfs_leave_cancellation(struct file *file,
+				struct debugfs_cancellation *cancellation) __releases(cancellation);
 
 #else
@@ -81,6 +81,7 @@ static inline int kref_put(struct kref *kref, void (*release)(struct kref *kref)
 static inline int kref_put_mutex(struct kref *kref,
 				 void (*release)(struct kref *kref),
 				 struct mutex *mutex)
+	__cond_acquires(true, mutex)
 {
 	if (refcount_dec_and_mutex_lock(&kref->refcount, mutex)) {
 		release(kref);
@@ -102,6 +103,7 @@ static inline int kref_put_mutex(struct kref *kref,
 static inline int kref_put_lock(struct kref *kref,
 				void (*release)(struct kref *kref),
 				spinlock_t *lock)
+	__cond_acquires(true, lock)
 {
 	if (refcount_dec_and_lock(&kref->refcount, lock)) {
 		release(kref);
@@ -144,11 +144,13 @@ static inline void hlist_bl_del_init(struct hlist_bl_node *n)
 }
 
 static inline void hlist_bl_lock(struct hlist_bl_head *b)
+	__acquires(__bitlock(0, b))
 {
 	bit_spin_lock(0, (unsigned long *)b);
 }
 
 static inline void hlist_bl_unlock(struct hlist_bl_head *b)
+	__releases(__bitlock(0, b))
 {
 	__bit_spin_unlock(0, (unsigned long *)b);
 }
@@ -14,13 +14,13 @@
  * local_lock - Acquire a per CPU local lock
  * @lock: The lock variable
  */
-#define local_lock(lock)	__local_lock(this_cpu_ptr(lock))
+#define local_lock(lock)	__local_lock(__this_cpu_local_lock(lock))
 
 /**
  * local_lock_irq - Acquire a per CPU local lock and disable interrupts
  * @lock: The lock variable
  */
-#define local_lock_irq(lock)	__local_lock_irq(this_cpu_ptr(lock))
+#define local_lock_irq(lock)	__local_lock_irq(__this_cpu_local_lock(lock))
 
 /**
  * local_lock_irqsave - Acquire a per CPU local lock, save and disable
@@ -29,19 +29,19 @@
  * @flags: Storage for interrupt flags
  */
 #define local_lock_irqsave(lock, flags)				\
-	__local_lock_irqsave(this_cpu_ptr(lock), flags)
+	__local_lock_irqsave(__this_cpu_local_lock(lock), flags)
 
 /**
  * local_unlock - Release a per CPU local lock
  * @lock: The lock variable
  */
-#define local_unlock(lock)	__local_unlock(this_cpu_ptr(lock))
+#define local_unlock(lock)	__local_unlock(__this_cpu_local_lock(lock))
 
 /**
  * local_unlock_irq - Release a per CPU local lock and enable interrupts
  * @lock: The lock variable
  */
-#define local_unlock_irq(lock)	__local_unlock_irq(this_cpu_ptr(lock))
+#define local_unlock_irq(lock)	__local_unlock_irq(__this_cpu_local_lock(lock))
 
 /**
  * local_unlock_irqrestore - Release a per CPU local lock and restore
@@ -50,7 +50,7 @@
  * @flags: Interrupt flags to restore
  */
 #define local_unlock_irqrestore(lock, flags)			\
-	__local_unlock_irqrestore(this_cpu_ptr(lock), flags)
+	__local_unlock_irqrestore(__this_cpu_local_lock(lock), flags)
 
 /**
  * local_trylock_init - Runtime initialize a lock instance
@@ -66,7 +66,7 @@
  * locking constrains it will _always_ fail to acquire the lock in NMI or
  * HARDIRQ context on PREEMPT_RT.
  */
-#define local_trylock(lock)	__local_trylock(this_cpu_ptr(lock))
+#define local_trylock(lock)	__local_trylock(__this_cpu_local_lock(lock))
 
 #define local_lock_is_locked(lock) __local_lock_is_locked(lock)
 
@@ -81,27 +81,44 @@
  * HARDIRQ context on PREEMPT_RT.
  */
 #define local_trylock_irqsave(lock, flags)			\
-	__local_trylock_irqsave(this_cpu_ptr(lock), flags)
+	__local_trylock_irqsave(__this_cpu_local_lock(lock), flags)
 
-DEFINE_GUARD(local_lock, local_lock_t __percpu*,
-	     local_lock(_T),
-	     local_unlock(_T))
-DEFINE_GUARD(local_lock_irq, local_lock_t __percpu*,
-	     local_lock_irq(_T),
-	     local_unlock_irq(_T))
+DEFINE_LOCK_GUARD_1(local_lock, local_lock_t __percpu,
+		    local_lock(_T->lock),
+		    local_unlock(_T->lock))
+DEFINE_LOCK_GUARD_1(local_lock_irq, local_lock_t __percpu,
+		    local_lock_irq(_T->lock),
+		    local_unlock_irq(_T->lock))
 DEFINE_LOCK_GUARD_1(local_lock_irqsave, local_lock_t __percpu,
 		    local_lock_irqsave(_T->lock, _T->flags),
 		    local_unlock_irqrestore(_T->lock, _T->flags),
 		    unsigned long flags)
 
 #define local_lock_nested_bh(_lock)				\
-	__local_lock_nested_bh(this_cpu_ptr(_lock))
+	__local_lock_nested_bh(__this_cpu_local_lock(_lock))
 
 #define local_unlock_nested_bh(_lock)				\
-	__local_unlock_nested_bh(this_cpu_ptr(_lock))
+	__local_unlock_nested_bh(__this_cpu_local_lock(_lock))
 
-DEFINE_GUARD(local_lock_nested_bh, local_lock_t __percpu*,
-	     local_lock_nested_bh(_T),
-	     local_unlock_nested_bh(_T))
+DEFINE_LOCK_GUARD_1(local_lock_nested_bh, local_lock_t __percpu,
+		    local_lock_nested_bh(_T->lock),
+		    local_unlock_nested_bh(_T->lock))
+
+DEFINE_LOCK_GUARD_1(local_lock_init, local_lock_t, local_lock_init(_T->lock), /* */)
+
+DECLARE_LOCK_GUARD_1_ATTRS(local_lock, __acquires(_T), __releases(*(local_lock_t __percpu **)_T))
+#define class_local_lock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(local_lock_irq, __acquires(_T), __releases(*(local_lock_t __percpu **)_T))
+#define class_local_lock_irq_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock_irq, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(local_lock_irqsave, __acquires(_T), __releases(*(local_lock_t __percpu **)_T))
+#define class_local_lock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock_irqsave, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(local_lock_nested_bh, __acquires(_T), __releases(*(local_lock_t __percpu **)_T))
+#define class_local_lock_nested_bh_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock_nested_bh, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(local_lock_init, __acquires(_T), __releases(*(local_lock_t **)_T))
+#define class_local_lock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock_init, _T)
+
+DEFINE_LOCK_GUARD_1(local_trylock_init, local_trylock_t, local_trylock_init(_T->lock), /* */)
+DECLARE_LOCK_GUARD_1_ATTRS(local_trylock_init, __acquires(_T), __releases(*(local_trylock_t **)_T))
+#define class_local_trylock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_trylock_init, _T)
 
 #endif
@@ -4,25 +4,30 @@
 #endif
 
 #include <linux/percpu-defs.h>
+#include <linux/irqflags.h>
 #include <linux/lockdep.h>
+#include <linux/debug_locks.h>
+#include <asm/current.h>
 
 #ifndef CONFIG_PREEMPT_RT
 
-typedef struct {
+context_lock_struct(local_lock) {
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 	struct lockdep_map	dep_map;
 	struct task_struct	*owner;
 #endif
-} local_lock_t;
+};
+typedef struct local_lock local_lock_t;
 
 /* local_trylock() and local_trylock_irqsave() only work with local_trylock_t */
-typedef struct {
+context_lock_struct(local_trylock) {
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 	struct lockdep_map	dep_map;
 	struct task_struct	*owner;
 #endif
 	u8 acquired;
-} local_trylock_t;
+};
+typedef struct local_trylock local_trylock_t;
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 # define LOCAL_LOCK_DEBUG_INIT(lockname)		\
@@ -84,7 +89,10 @@ do {								\
 	local_lock_debug_init(lock);				\
 } while (0)
 
-#define __local_trylock_init(lock) __local_lock_init((local_lock_t *)lock)
+#define __local_trylock_init(lock)				\
+do {								\
+	__local_lock_init((local_lock_t *)lock);		\
+} while (0)
 
 #define __spinlock_nested_bh_init(lock)				\
 do {								\
@@ -117,22 +125,25 @@ do {								\
 do {								\
 	preempt_disable();					\
 	__local_lock_acquire(lock);				\
+	__acquire(lock);					\
 } while (0)
 
 #define __local_lock_irq(lock)					\
 do {								\
 	local_irq_disable();					\
 	__local_lock_acquire(lock);				\
+	__acquire(lock);					\
 } while (0)
 
 #define __local_lock_irqsave(lock, flags)			\
 do {								\
 	local_irq_save(flags);					\
 	__local_lock_acquire(lock);				\
+	__acquire(lock);					\
 } while (0)
 
 #define __local_trylock(lock)					\
-	({							\
+	__try_acquire_ctx_lock(lock, ({				\
 		local_trylock_t *__tl;				\
 								\
 		preempt_disable();				\
@@ -146,10 +157,10 @@ do {								\
 				(local_lock_t *)__tl);		\
 		}						\
 		!!__tl;						\
-	})
+	}))
 
 #define __local_trylock_irqsave(lock, flags)			\
-	({							\
+	__try_acquire_ctx_lock(lock, ({				\
 		local_trylock_t *__tl;				\
 								\
 		local_irq_save(flags);				\
@@ -163,7 +174,7 @@ do {								\
 				(local_lock_t *)__tl);		\
 		}						\
 		!!__tl;						\
-	})
+	}))
 
 /* preemption or migration must be disabled before calling __local_lock_is_locked */
 #define __local_lock_is_locked(lock) READ_ONCE(this_cpu_ptr(lock)->acquired)
@@ -186,18 +197,21 @@ do {								\
 
 #define __local_unlock(lock)					\
 do {								\
+	__release(lock);					\
 	__local_lock_release(lock);				\
 	preempt_enable();					\
 } while (0)
 
 #define __local_unlock_irq(lock)				\
 do {								\
+	__release(lock);					\
 	__local_lock_release(lock);				\
 	local_irq_enable();					\
 } while (0)
 
 #define __local_unlock_irqrestore(lock, flags)			\
 do {								\
+	__release(lock);					\
 	__local_lock_release(lock);				\
 	local_irq_restore(flags);				\
 } while (0)
@@ -206,13 +220,20 @@ do {								\
 do {								\
 	lockdep_assert_in_softirq();				\
 	local_lock_acquire((lock));				\
+	__acquire(lock);					\
 } while (0)
 
 #define __local_unlock_nested_bh(lock)				\
-	local_lock_release((lock))
+do {								\
+	__release(lock);					\
+	local_lock_release((lock));				\
+} while (0)
 
 #else /* !CONFIG_PREEMPT_RT */
 
+#include <linux/sched.h>
+#include <linux/spinlock.h>
+
 /*
  * On PREEMPT_RT local_lock maps to a per CPU spinlock, which protects the
  * critical section while staying preemptible.
@@ -267,7 +288,7 @@ do {								\
 } while (0)
 
 #define __local_trylock(lock)					\
-	({							\
+	__try_acquire_ctx_lock(lock, context_unsafe(({		\
 		int __locked;					\
 								\
 		if (in_nmi() | in_hardirq()) {			\
@@ -279,17 +300,40 @@ do {								\
 			migrate_enable();			\
 		}						\
 		__locked;					\
-	})
+	})))
 
 #define __local_trylock_irqsave(lock, flags)			\
-	({							\
+	__try_acquire_ctx_lock(lock, ({				\
 		typecheck(unsigned long, flags);		\
 		flags = 0;					\
 		__local_trylock(lock);				\
-	})
+	}))
 
 /* migration must be disabled before calling __local_lock_is_locked */
 #define __local_lock_is_locked(__lock)				\
 	(rt_mutex_owner(&this_cpu_ptr(__lock)->lock) == current)
 
 #endif /* CONFIG_PREEMPT_RT */
 
+#if defined(WARN_CONTEXT_ANALYSIS)
+/*
+ * Because the compiler only knows about the base per-CPU variable, use this
+ * helper function to make the compiler think we lock/unlock the @base variable,
+ * and hide the fact we actually pass the per-CPU instance to lock/unlock
+ * functions.
+ */
+static __always_inline local_lock_t *__this_cpu_local_lock(local_lock_t __percpu *base)
+	__returns_ctx_lock(base) __attribute__((overloadable))
+{
+	return this_cpu_ptr(base);
+}
+#ifndef CONFIG_PREEMPT_RT
+static __always_inline local_trylock_t *__this_cpu_local_lock(local_trylock_t __percpu *base)
+	__returns_ctx_lock(base) __attribute__((overloadable))
+{
+	return this_cpu_ptr(base);
+}
+#endif /* CONFIG_PREEMPT_RT */
+#else /* WARN_CONTEXT_ANALYSIS */
+#define __this_cpu_local_lock(base) this_cpu_ptr(base)
+#endif /* WARN_CONTEXT_ANALYSIS */
@@ -282,16 +282,16 @@ extern void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie);
 	do { WARN_ON_ONCE(debug_locks && !(cond)); } while (0)
 
 #define lockdep_assert_held(l)		\
-	lockdep_assert(lockdep_is_held(l) != LOCK_STATE_NOT_HELD)
+	do { lockdep_assert(lockdep_is_held(l) != LOCK_STATE_NOT_HELD); __assume_ctx_lock(l); } while (0)
 
 #define lockdep_assert_not_held(l)	\
 	lockdep_assert(lockdep_is_held(l) != LOCK_STATE_HELD)
 
 #define lockdep_assert_held_write(l)	\
-	lockdep_assert(lockdep_is_held_type(l, 0))
+	do { lockdep_assert(lockdep_is_held_type(l, 0)); __assume_ctx_lock(l); } while (0)
 
 #define lockdep_assert_held_read(l)	\
-	lockdep_assert(lockdep_is_held_type(l, 1))
+	do { lockdep_assert(lockdep_is_held_type(l, 1)); __assume_shared_ctx_lock(l); } while (0)
 
 #define lockdep_assert_held_once(l)		\
 	lockdep_assert_once(lockdep_is_held(l) != LOCK_STATE_NOT_HELD)
@@ -389,10 +389,10 @@ extern int lockdep_is_held(const void *);
 #define lockdep_assert(c)			do { } while (0)
 #define lockdep_assert_once(c)			do { } while (0)
 
-#define lockdep_assert_held(l)			do { (void)(l); } while (0)
+#define lockdep_assert_held(l)			__assume_ctx_lock(l)
 #define lockdep_assert_not_held(l)		do { (void)(l); } while (0)
-#define lockdep_assert_held_write(l)		do { (void)(l); } while (0)
+#define lockdep_assert_held_write(l)		__assume_ctx_lock(l)
-#define lockdep_assert_held_read(l)		do { (void)(l); } while (0)
+#define lockdep_assert_held_read(l)		__assume_shared_ctx_lock(l)
 #define lockdep_assert_held_once(l)		do { (void)(l); } while (0)
 #define lockdep_assert_none_held_once()		do { } while (0)
@@ -49,9 +49,7 @@ static inline void lockref_init(struct lockref *lockref)
 void lockref_get(struct lockref *lockref);
 int lockref_put_return(struct lockref *lockref);
 bool lockref_get_not_zero(struct lockref *lockref);
-bool lockref_put_or_lock(struct lockref *lockref);
-#define lockref_put_or_lock(_lockref) \
-	(!__cond_lock((_lockref)->lock, !lockref_put_or_lock(_lockref)))
+bool lockref_put_or_lock(struct lockref *lockref) __cond_acquires(false, &lockref->lock);
 
 void lockref_mark_dead(struct lockref *lockref);
 bool lockref_get_not_dead(struct lockref *lockref);
@@ -2979,15 +2979,8 @@ static inline pud_t pud_mkspecial(pud_t pud)
 }
 #endif	/* CONFIG_ARCH_SUPPORTS_PUD_PFNMAP */
 
-extern pte_t *__get_locked_pte(struct mm_struct *mm, unsigned long addr,
+extern pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr,
 			     spinlock_t **ptl);
-static inline pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr,
-				    spinlock_t **ptl)
-{
-	pte_t *ptep;
-	__cond_lock(*ptl, ptep = __get_locked_pte(mm, addr, ptl));
-	return ptep;
-}
 
 #ifdef __PAGETABLE_P4D_FOLDED
 static inline int __p4d_alloc(struct mm_struct *mm, pgd_t *pgd,
@@ -3341,31 +3334,15 @@ static inline bool pagetable_pte_ctor(struct mm_struct *mm,
 	return true;
 }
 
-pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp);
-static inline pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr,
-				      pmd_t *pmdvalp)
-{
-	pte_t *pte;
-
-	__cond_lock(RCU, pte = ___pte_offset_map(pmd, addr, pmdvalp));
-	return pte;
-}
+pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp);
 static inline pte_t *pte_offset_map(pmd_t *pmd, unsigned long addr)
 {
 	return __pte_offset_map(pmd, addr, NULL);
 }
 
-pte_t *__pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
+pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
 			   unsigned long addr, spinlock_t **ptlp);
-static inline pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
-					 unsigned long addr, spinlock_t **ptlp)
-{
-	pte_t *pte;
-
-	__cond_lock(RCU, __cond_lock(*ptlp,
-			pte = __pte_offset_map_lock(mm, pmd, addr, ptlp)));
-	return pte;
-}
 
 pte_t *pte_offset_map_ro_nolock(struct mm_struct *mm, pmd_t *pmd,
 				unsigned long addr, spinlock_t **ptlp);
@@ -182,13 +182,13 @@ static inline int __must_check __devm_mutex_init(struct device *dev, struct mute
  * Also see Documentation/locking/mutex-design.rst.
  */
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
-extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass);
+extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass) __acquires(lock);
 extern void _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock);
 extern int __must_check mutex_lock_interruptible_nested(struct mutex *lock,
-					unsigned int subclass);
+					unsigned int subclass) __cond_acquires(0, lock);
 extern int __must_check _mutex_lock_killable(struct mutex *lock,
-					unsigned int subclass, struct lockdep_map *nest_lock);
+					unsigned int subclass, struct lockdep_map *nest_lock) __cond_acquires(0, lock);
-extern void mutex_lock_io_nested(struct mutex *lock, unsigned int subclass);
+extern void mutex_lock_io_nested(struct mutex *lock, unsigned int subclass) __acquires(lock);
 
 #define mutex_lock(lock) mutex_lock_nested(lock, 0)
 #define mutex_lock_interruptible(lock) mutex_lock_interruptible_nested(lock, 0)
@@ -211,10 +211,10 @@ do { \
 	_mutex_lock_killable(lock, subclass, NULL)
 
 #else
-extern void mutex_lock(struct mutex *lock);
-extern int __must_check mutex_lock_interruptible(struct mutex *lock);
-extern int __must_check mutex_lock_killable(struct mutex *lock);
-extern void mutex_lock_io(struct mutex *lock);
+extern void mutex_lock(struct mutex *lock) __acquires(lock);
+extern int __must_check mutex_lock_interruptible(struct mutex *lock) __cond_acquires(0, lock);
+extern int __must_check mutex_lock_killable(struct mutex *lock) __cond_acquires(0, lock);
+extern void mutex_lock_io(struct mutex *lock) __acquires(lock);
 
 # define mutex_lock_nested(lock, subclass) mutex_lock(lock)
 # define mutex_lock_interruptible_nested(lock, subclass) mutex_lock_interruptible(lock)
@@ -232,7 +232,7 @@ extern void mutex_lock_io(struct mutex *lock);
  */
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
-extern int _mutex_trylock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock);
+extern int _mutex_trylock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock) __cond_acquires(true, lock);
 
 #define mutex_trylock_nest_lock(lock, nest_lock) \
 ( \
@@ -242,17 +242,27 @@ extern int _mutex_trylock_nest_lock(struct mutex *lock, struct lockdep_map *nest
 
 #define mutex_trylock(lock) _mutex_trylock_nest_lock(lock, NULL)
 #else
-extern int mutex_trylock(struct mutex *lock);
+extern int mutex_trylock(struct mutex *lock) __cond_acquires(true, lock);
 #define mutex_trylock_nest_lock(lock, nest_lock) mutex_trylock(lock)
 #endif
 
-extern void mutex_unlock(struct mutex *lock);
+extern void mutex_unlock(struct mutex *lock) __releases(lock);
 
-extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock);
+extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock) __cond_acquires(true, lock);
 
-DEFINE_GUARD(mutex, struct mutex *, mutex_lock(_T), mutex_unlock(_T))
-DEFINE_GUARD_COND(mutex, _try, mutex_trylock(_T))
-DEFINE_GUARD_COND(mutex, _intr, mutex_lock_interruptible(_T), _RET == 0)
+DEFINE_LOCK_GUARD_1(mutex, struct mutex, mutex_lock(_T->lock), mutex_unlock(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(mutex, _try, mutex_trylock(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(mutex, _intr, mutex_lock_interruptible(_T->lock), _RET == 0)
+DEFINE_LOCK_GUARD_1(mutex_init, struct mutex, mutex_init(_T->lock), /* */)
+
+DECLARE_LOCK_GUARD_1_ATTRS(mutex, __acquires(_T), __releases(*(struct mutex **)_T))
+#define class_mutex_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(mutex_try, __acquires(_T), __releases(*(struct mutex **)_T))
+#define class_mutex_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex_try, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(mutex_intr, __acquires(_T), __releases(*(struct mutex **)_T))
+#define class_mutex_intr_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex_intr, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(mutex_init, __acquires(_T), __releases(*(struct mutex **)_T))
+#define class_mutex_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex_init, _T)
 
 extern unsigned long mutex_get_owner(struct mutex *lock);
 
@@ -38,7 +38,7 @@
  * - detects multi-task circular deadlocks and prints out all affected
  *   locks and tasks (and only those tasks)
  */
-struct mutex {
+context_lock_struct(mutex) {
 	atomic_long_t		owner;
 	raw_spinlock_t		wait_lock;
 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
@@ -59,7 +59,7 @@ struct mutex {
  */
 #include <linux/rtmutex.h>
 
-struct mutex {
+context_lock_struct(mutex) {
 	struct rt_mutex_base	rtmutex;
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 	struct lockdep_map	dep_map;
@@ -31,6 +31,16 @@
 #include <asm/processor.h>
 #include <linux/context_tracking_irq.h>
 
+token_context_lock(RCU, __reentrant_ctx_lock);
+token_context_lock_instance(RCU, RCU_SCHED);
+token_context_lock_instance(RCU, RCU_BH);
+
+/*
+ * A convenience macro that can be used for RCU-protected globals or struct
+ * members; adds type qualifier __rcu, and also enforces __guarded_by(RCU).
+ */
+#define __rcu_guarded __rcu __guarded_by(RCU)
+
 #define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))
 #define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))
 
@@ -396,7 +406,8 @@ static inline void rcu_preempt_sleep_check(void) { }
 
 // See RCU_LOCKDEP_WARN() for an explanation of the double call to
 // debug_lockdep_rcu_enabled().
-static inline bool lockdep_assert_rcu_helper(bool c)
+static __always_inline bool lockdep_assert_rcu_helper(bool c, const struct __ctx_lock_RCU *ctx)
+	__assumes_shared_ctx_lock(RCU) __assumes_shared_ctx_lock(ctx)
 {
 	return debug_lockdep_rcu_enabled() &&
 	       (c || !rcu_is_watching() || !rcu_lockdep_current_cpu_online()) &&
@@ -409,7 +420,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
  * Splats if lockdep is enabled and there is no rcu_read_lock() in effect.
  */
 #define lockdep_assert_in_rcu_read_lock() \
-	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map)))
+	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map), RCU))
 
 /**
  * lockdep_assert_in_rcu_read_lock_bh - WARN if not protected by rcu_read_lock_bh()
@@ -419,7 +430,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
  * actual rcu_read_lock_bh() is required.
  */
 #define lockdep_assert_in_rcu_read_lock_bh() \
-	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_bh_lock_map)))
+	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_bh_lock_map), RCU_BH))
 
 /**
  * lockdep_assert_in_rcu_read_lock_sched - WARN if not protected by rcu_read_lock_sched()
@@ -429,7 +440,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
  * instead an actual rcu_read_lock_sched() is required.
  */
 #define lockdep_assert_in_rcu_read_lock_sched() \
-	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_sched_lock_map)))
+	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_sched_lock_map), RCU_SCHED))
 
 /**
  * lockdep_assert_in_rcu_reader - WARN if not within some type of RCU reader
@@ -447,17 +458,17 @@ static inline bool lockdep_assert_rcu_helper(bool c)
 	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map) && \
 					       !lock_is_held(&rcu_bh_lock_map) && \
 					       !lock_is_held(&rcu_sched_lock_map) && \
-					       preemptible()))
+					       preemptible(), RCU))
 
 #else /* #ifdef CONFIG_PROVE_RCU */
 
 #define RCU_LOCKDEP_WARN(c, s) do { } while (0 && (c))
 #define rcu_sleep_check() do { } while (0)
 
-#define lockdep_assert_in_rcu_read_lock() do { } while (0)
-#define lockdep_assert_in_rcu_read_lock_bh() do { } while (0)
-#define lockdep_assert_in_rcu_read_lock_sched() do { } while (0)
-#define lockdep_assert_in_rcu_reader() do { } while (0)
+#define lockdep_assert_in_rcu_read_lock() __assume_shared_ctx_lock(RCU)
+#define lockdep_assert_in_rcu_read_lock_bh() __assume_shared_ctx_lock(RCU_BH)
+#define lockdep_assert_in_rcu_read_lock_sched() __assume_shared_ctx_lock(RCU_SCHED)
+#define lockdep_assert_in_rcu_reader() __assume_shared_ctx_lock(RCU)
 
 #endif /* #else #ifdef CONFIG_PROVE_RCU */
 
@@ -477,11 +488,11 @@ static inline bool lockdep_assert_rcu_helper(bool c)
 #endif /* #else #ifdef __CHECKER__ */
 
 #define __unrcu_pointer(p, local)					\
-({									\
+context_unsafe(								\
 	typeof(*p) *local = (typeof(*p) *__force)(p);			\
 	rcu_check_sparse(p, __rcu);					\
-	((typeof(*p) __force __kernel *)(local));			\
-})
+	((typeof(*p) __force __kernel *)(local))			\
+)
 /**
  * unrcu_pointer - mark a pointer as not being RCU protected
  * @p: pointer needing to lose its __rcu property
@@ -557,7 +568,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
  * other macros that it invokes.
  */
 #define rcu_assign_pointer(p, v)					\
-do {									\
+context_unsafe(								\
 	uintptr_t _r_a_p__v = (uintptr_t)(v);				\
 	rcu_check_sparse(p, __rcu);					\
 									\
@@ -565,7 +576,7 @@ do { \
 		WRITE_ONCE((p), (typeof(p))(_r_a_p__v));		\
 	else								\
 		smp_store_release(&p, RCU_INITIALIZER((typeof(p))_r_a_p__v)); \
-} while (0)
+)
 
 /**
  * rcu_replace_pointer() - replace an RCU pointer, returning its old value
@@ -832,9 +843,10 @@
  * only when acquiring spinlocks that are subject to priority inheritance.
  */
 static __always_inline void rcu_read_lock(void)
+	__acquires_shared(RCU)
 {
 	__rcu_read_lock();
-	__acquire(RCU);
+	__acquire_shared(RCU);
 	rcu_lock_acquire(&rcu_lock_map);
 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
 			 "rcu_read_lock() used illegally while idle");
@@ -862,11 +874,12 @@ static __always_inline void rcu_read_lock(void)
  * See rcu_read_lock() for more information.
  */
 static inline void rcu_read_unlock(void)
+	__releases_shared(RCU)
 {
 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
 			 "rcu_read_unlock() used illegally while idle");
 	rcu_lock_release(&rcu_lock_map); /* Keep acq info for rls diags. */
-	__release(RCU);
+	__release_shared(RCU);
 	__rcu_read_unlock();
 }
@@ -885,9 +898,11 @@ static inline void rcu_read_unlock(void)
  * was invoked from some other task.
  */
 static inline void rcu_read_lock_bh(void)
+	__acquires_shared(RCU) __acquires_shared(RCU_BH)
 {
 	local_bh_disable();
-	__acquire(RCU_BH);
+	__acquire_shared(RCU);
+	__acquire_shared(RCU_BH);
 	rcu_lock_acquire(&rcu_bh_lock_map);
 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
 			 "rcu_read_lock_bh() used illegally while idle");
@@ -899,11 +914,13 @@ static inline void rcu_read_lock_bh(void)
  * See rcu_read_lock_bh() for more information.
  */
 static inline void rcu_read_unlock_bh(void)
+	__releases_shared(RCU) __releases_shared(RCU_BH)
 {
 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
 			 "rcu_read_unlock_bh() used illegally while idle");
 	rcu_lock_release(&rcu_bh_lock_map);
-	__release(RCU_BH);
+	__release_shared(RCU_BH);
+	__release_shared(RCU);
 	local_bh_enable();
 }
@@ -923,9 +940,11 @@ static inline void rcu_read_unlock_bh(void)
  * rcu_read_lock_sched() was invoked from an NMI handler.
  */
 static inline void rcu_read_lock_sched(void)
+	__acquires_shared(RCU) __acquires_shared(RCU_SCHED)
 {
 	preempt_disable();
-	__acquire(RCU_SCHED);
+	__acquire_shared(RCU);
+	__acquire_shared(RCU_SCHED);
 	rcu_lock_acquire(&rcu_sched_lock_map);
 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
 			 "rcu_read_lock_sched() used illegally while idle");
@@ -933,9 +952,11 @@ static inline void rcu_read_lock_sched(void)
 
 /* Used by lockdep and tracing: cannot be traced, cannot call lockdep. */
 static inline notrace void rcu_read_lock_sched_notrace(void)
+	__acquires_shared(RCU) __acquires_shared(RCU_SCHED)
 {
 	preempt_disable_notrace();
-	__acquire(RCU_SCHED);
+	__acquire_shared(RCU);
+	__acquire_shared(RCU_SCHED);
 }
 
 /**
@@ -944,22 +965,27 @@ static inline notrace void rcu_read_lock_sched_notrace(void)
  * See rcu_read_lock_sched() for more information.
  */
 static inline void rcu_read_unlock_sched(void)
+	__releases_shared(RCU) __releases_shared(RCU_SCHED)
 {
 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
 			 "rcu_read_unlock_sched() used illegally while idle");
 	rcu_lock_release(&rcu_sched_lock_map);
-	__release(RCU_SCHED);
+	__release_shared(RCU_SCHED);
+	__release_shared(RCU);
 	preempt_enable();
 }
 
 /* Used by lockdep and tracing: cannot be traced, cannot call lockdep. */
 static inline notrace void rcu_read_unlock_sched_notrace(void)
+	__releases_shared(RCU) __releases_shared(RCU_SCHED)
 {
-	__release(RCU_SCHED);
+	__release_shared(RCU_SCHED);
+	__release_shared(RCU);
 	preempt_enable_notrace();
 }
 
 static __always_inline void rcu_read_lock_dont_migrate(void)
+	__acquires_shared(RCU)
 {
 	if (IS_ENABLED(CONFIG_PREEMPT_RCU))
 		migrate_disable();
@@ -967,6 +993,7 @@ static __always_inline void rcu_read_lock_dont_migrate(void)
 }
 
 static inline void rcu_read_unlock_migrate(void)
+	__releases_shared(RCU)
 {
 	rcu_read_unlock();
 	if (IS_ENABLED(CONFIG_PREEMPT_RCU))
@@ -1012,10 +1039,10 @@
  * ordering guarantees for either the CPU or the compiler.
  */
 #define RCU_INIT_POINTER(p, v)						\
-do {									\
+context_unsafe(								\
 	rcu_check_sparse(p, __rcu);					\
 	WRITE_ONCE(p, RCU_INITIALIZER(v));				\
-} while (0)
+)
 
 /**
  * RCU_POINTER_INITIALIZER() - statically initialize an RCU protected pointer
@@ -1163,18 +1190,7 @@ rcu_head_after_call_rcu(struct rcu_head *rhp, rcu_callback_t f)
 extern int rcu_expedited;
 extern int rcu_normal;
 
-DEFINE_LOCK_GUARD_0(rcu,
-	do {
-		rcu_read_lock();
-		/*
-		 * sparse doesn't call the cleanup function,
-		 * so just release immediately and don't track
-		 * the context. We don't need to anyway, since
-		 * the whole point of the guard is to not need
-		 * the explicit unlock.
-		 */
-		__release(RCU);
-	} while (0),
-	rcu_read_unlock())
+DEFINE_LOCK_GUARD_0(rcu, rcu_read_lock(), rcu_read_unlock())
+DECLARE_LOCK_GUARD_0_ATTRS(rcu, __acquires_shared(RCU), __releases_shared(RCU))
 
 #endif /* __LINUX_RCUPDATE_H */
@@ -478,9 +478,9 @@ static inline void refcount_dec(refcount_t *r)
 
 extern __must_check bool refcount_dec_if_one(refcount_t *r);
 extern __must_check bool refcount_dec_not_one(refcount_t *r);
-extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock) __cond_acquires(lock);
-extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock) __cond_acquires(lock);
+extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock) __cond_acquires(true, lock);
+extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock) __cond_acquires(true, lock);
 extern __must_check bool refcount_dec_and_lock_irqsave(refcount_t *r,
						       spinlock_t *lock,
-						       unsigned long *flags) __cond_acquires(lock);
+						       unsigned long *flags) __cond_acquires(true, lock);
 #endif /* _LINUX_REFCOUNT_H */
@@ -245,16 +245,17 @@ void *rhashtable_insert_slow(struct rhashtable *ht, const void *key,
 void rhashtable_walk_enter(struct rhashtable *ht,
			    struct rhashtable_iter *iter);
 void rhashtable_walk_exit(struct rhashtable_iter *iter);
-int rhashtable_walk_start_check(struct rhashtable_iter *iter) __acquires(RCU);
+int rhashtable_walk_start_check(struct rhashtable_iter *iter) __acquires_shared(RCU);
 
 static inline void rhashtable_walk_start(struct rhashtable_iter *iter)
+	__acquires_shared(RCU)
 {
 	(void)rhashtable_walk_start_check(iter);
 }
 
 void *rhashtable_walk_next(struct rhashtable_iter *iter);
 void *rhashtable_walk_peek(struct rhashtable_iter *iter);
-void rhashtable_walk_stop(struct rhashtable_iter *iter) __releases(RCU);
+void rhashtable_walk_stop(struct rhashtable_iter *iter) __releases_shared(RCU);
 
 void rhashtable_free_and_destroy(struct rhashtable *ht,
				 void (*free_fn)(void *ptr, void *arg),
@@ -325,6 +326,7 @@ static inline struct rhash_lock_head __rcu **rht_bucket_insert(
 
 static inline unsigned long rht_lock(struct bucket_table *tbl,
				     struct rhash_lock_head __rcu **bkt)
+	__acquires(__bitlock(0, bkt))
 {
 	unsigned long flags;
 
@@ -337,6 +339,7 @@ static inline unsigned long rht_lock(struct bucket_table *tbl,
 static inline unsigned long rht_lock_nested(struct bucket_table *tbl,
					struct rhash_lock_head __rcu **bucket,
					unsigned int subclass)
+	__acquires(__bitlock(0, bucket))
 {
 	unsigned long flags;
 
@@ -349,6 +352,7 @@ static inline unsigned long rht_lock_nested(struct bucket_table *tbl,
 static inline void rht_unlock(struct bucket_table *tbl,
			      struct rhash_lock_head __rcu **bkt,
			      unsigned long flags)
+	__releases(__bitlock(0, bkt))
 {
 	lock_map_release(&tbl->dep_map);
 	bit_spin_unlock(0, (unsigned long *)bkt);
@@ -424,13 +428,14 @@ static inline void rht_assign_unlock(struct bucket_table *tbl,
				     struct rhash_lock_head __rcu **bkt,
				     struct rhash_head *obj,
				     unsigned long flags)
+	__releases(__bitlock(0, bkt))
 {
 	if (rht_is_a_nulls(obj))
		obj = NULL;
 	lock_map_release(&tbl->dep_map);
 	rcu_assign_pointer(*bkt, (void *)obj);
 	preempt_enable();
-	__release(bitlock);
+	__release(__bitlock(0, bkt));
 	local_irq_restore(flags);
 }
 
@@ -612,6 +617,7 @@ static __always_inline struct rhash_head *__rhashtable_lookup(
	struct rhashtable *ht, const void *key,
	const struct rhashtable_params params,
	const enum rht_lookup_freq freq)
+	__must_hold_shared(RCU)
 {
 	struct rhashtable_compare_arg arg = {
		.ht = ht,
@@ -666,6 +672,7 @@ restart:
 static __always_inline void *rhashtable_lookup(
	struct rhashtable *ht, const void *key,
	const struct rhashtable_params params)
+	__must_hold_shared(RCU)
 {
 	struct rhash_head *he = __rhashtable_lookup(ht, key, params,
						    RHT_LOOKUP_NORMAL);
@@ -676,6 +683,7 @@ static __always_inline void *rhashtable_lookup(
 static __always_inline void *rhashtable_lookup_likely(
	struct rhashtable *ht, const void *key,
	const struct rhashtable_params params)
+	__must_hold_shared(RCU)
 {
 	struct rhash_head *he = __rhashtable_lookup(ht, key, params,
						    RHT_LOOKUP_LIKELY);
@@ -727,6 +735,7 @@ static __always_inline void *rhashtable_lookup_fast(
 static __always_inline struct rhlist_head *rhltable_lookup(
	struct rhltable *hlt, const void *key,
	const struct rhashtable_params params)
+	__must_hold_shared(RCU)
 {
 	struct rhash_head *he = __rhashtable_lookup(&hlt->ht, key, params,
						    RHT_LOOKUP_NORMAL);
@@ -737,6 +746,7 @@ static __always_inline struct rhlist_head *rhltable_lookup(
 static __always_inline struct rhlist_head *rhltable_lookup_likely(
	struct rhltable *hlt, const void *key,
	const struct rhashtable_params params)
+	__must_hold_shared(RCU)
 {
 	struct rhash_head *he = __rhashtable_lookup(&hlt->ht, key, params,
						    RHT_LOOKUP_LIKELY);
@@ -29,16 +29,16 @@ do { \
 #endif
 
 #ifdef CONFIG_DEBUG_SPINLOCK
-extern void do_raw_read_lock(rwlock_t *lock) __acquires(lock);
+extern void do_raw_read_lock(rwlock_t *lock) __acquires_shared(lock);
 extern int do_raw_read_trylock(rwlock_t *lock);
-extern void do_raw_read_unlock(rwlock_t *lock) __releases(lock);
+extern void do_raw_read_unlock(rwlock_t *lock) __releases_shared(lock);
 extern void do_raw_write_lock(rwlock_t *lock) __acquires(lock);
 extern int do_raw_write_trylock(rwlock_t *lock);
 extern void do_raw_write_unlock(rwlock_t *lock) __releases(lock);
 #else
-# define do_raw_read_lock(rwlock)	do {__acquire(lock); arch_read_lock(&(rwlock)->raw_lock); } while (0)
+# define do_raw_read_lock(rwlock)	do {__acquire_shared(lock); arch_read_lock(&(rwlock)->raw_lock); } while (0)
 # define do_raw_read_trylock(rwlock)	arch_read_trylock(&(rwlock)->raw_lock)
-# define do_raw_read_unlock(rwlock)	do {arch_read_unlock(&(rwlock)->raw_lock); __release(lock); } while (0)
+# define do_raw_read_unlock(rwlock)	do {arch_read_unlock(&(rwlock)->raw_lock); __release_shared(lock); } while (0)
 # define do_raw_write_lock(rwlock)	do {__acquire(lock); arch_write_lock(&(rwlock)->raw_lock); } while (0)
 # define do_raw_write_trylock(rwlock)	arch_write_trylock(&(rwlock)->raw_lock)
 # define do_raw_write_unlock(rwlock)	do {arch_write_unlock(&(rwlock)->raw_lock); __release(lock); } while (0)
@@ -49,8 +49,8 @@ do { \
  * regardless of whether CONFIG_SMP or CONFIG_PREEMPT are set. The various
  * methods are defined as nops in the case they are not required.
  */
-#define read_trylock(lock)	__cond_lock(lock, _raw_read_trylock(lock))
-#define write_trylock(lock)	__cond_lock(lock, _raw_write_trylock(lock))
+#define read_trylock(lock)	_raw_read_trylock(lock)
+#define write_trylock(lock)	_raw_write_trylock(lock)
 
 #define write_lock(lock)	_raw_write_lock(lock)
 #define read_lock(lock)		_raw_read_lock(lock)
@@ -112,12 +112,7 @@ do { \
 } while (0)
 #define write_unlock_bh(lock)	_raw_write_unlock_bh(lock)
 
-#define write_trylock_irqsave(lock, flags) \
-({ \
-	local_irq_save(flags); \
-	write_trylock(lock) ? \
-	1 : ({ local_irq_restore(flags); 0; }); \
-})
+#define write_trylock_irqsave(lock, flags) _raw_write_trylock_irqsave(lock, &(flags))
 
 #ifdef arch_rwlock_is_contended
 #define rwlock_is_contended(lock) \

@@ -15,24 +15,24 @@
  * Released under the General Public License (GPL).
  */
 
-void __lockfunc _raw_read_lock(rwlock_t *lock)		__acquires(lock);
+void __lockfunc _raw_read_lock(rwlock_t *lock)		__acquires_shared(lock);
 void __lockfunc _raw_write_lock(rwlock_t *lock)		__acquires(lock);
 void __lockfunc _raw_write_lock_nested(rwlock_t *lock, int subclass)	__acquires(lock);
-void __lockfunc _raw_read_lock_bh(rwlock_t *lock)	__acquires(lock);
+void __lockfunc _raw_read_lock_bh(rwlock_t *lock)	__acquires_shared(lock);
 void __lockfunc _raw_write_lock_bh(rwlock_t *lock)	__acquires(lock);
-void __lockfunc _raw_read_lock_irq(rwlock_t *lock)	__acquires(lock);
+void __lockfunc _raw_read_lock_irq(rwlock_t *lock)	__acquires_shared(lock);
 void __lockfunc _raw_write_lock_irq(rwlock_t *lock)	__acquires(lock);
 unsigned long __lockfunc _raw_read_lock_irqsave(rwlock_t *lock)
 							__acquires(lock);
 unsigned long __lockfunc _raw_write_lock_irqsave(rwlock_t *lock)
 							__acquires(lock);
-int __lockfunc _raw_read_trylock(rwlock_t *lock);
-int __lockfunc _raw_write_trylock(rwlock_t *lock);
-void __lockfunc _raw_read_unlock(rwlock_t *lock)	__releases(lock);
+int __lockfunc _raw_read_trylock(rwlock_t *lock)	__cond_acquires_shared(true, lock);
+int __lockfunc _raw_write_trylock(rwlock_t *lock)	__cond_acquires(true, lock);
+void __lockfunc _raw_read_unlock(rwlock_t *lock)	__releases_shared(lock);
 void __lockfunc _raw_write_unlock(rwlock_t *lock)	__releases(lock);
-void __lockfunc _raw_read_unlock_bh(rwlock_t *lock)	__releases(lock);
+void __lockfunc _raw_read_unlock_bh(rwlock_t *lock)	__releases_shared(lock);
 void __lockfunc _raw_write_unlock_bh(rwlock_t *lock)	__releases(lock);
-void __lockfunc _raw_read_unlock_irq(rwlock_t *lock)	__releases(lock);
+void __lockfunc _raw_read_unlock_irq(rwlock_t *lock)	__releases_shared(lock);
 void __lockfunc _raw_write_unlock_irq(rwlock_t *lock)	__releases(lock);
 void __lockfunc
 _raw_read_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
@@ -137,6 +137,16 @@ static inline int __raw_write_trylock(rwlock_t *lock)
 	return 0;
 }
 
+static inline bool _raw_write_trylock_irqsave(rwlock_t *lock, unsigned long *flags)
+	__cond_acquires(true, lock) __no_context_analysis
+{
+	local_irq_save(*flags);
+	if (_raw_write_trylock(lock))
+		return true;
+	local_irq_restore(*flags);
+	return false;
+}
+
 /*
  * If lockdep is enabled then we use the non-preemption spin-ops
  * even on CONFIG_PREEMPT, because lockdep assumes that interrupts are
@@ -145,6 +155,7 @@ static inline int __raw_write_trylock(rwlock_t *lock)
 #if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_DEBUG_LOCK_ALLOC)
 
 static inline void __raw_read_lock(rwlock_t *lock)
+	__acquires_shared(lock) __no_context_analysis
 {
 	preempt_disable();
 	rwlock_acquire_read(&lock->dep_map, 0, 0, _RET_IP_);
@@ -152,6 +163,7 @@ static inline void __raw_read_lock(rwlock_t *lock)
 }
 
 static inline unsigned long __raw_read_lock_irqsave(rwlock_t *lock)
+	__acquires_shared(lock) __no_context_analysis
 {
 	unsigned long flags;
 
@@ -163,6 +175,7 @@ static inline unsigned long __raw_read_lock_irqsave(rwlock_t *lock)
 }
 
 static inline void __raw_read_lock_irq(rwlock_t *lock)
+	__acquires_shared(lock) __no_context_analysis
 {
 	local_irq_disable();
 	preempt_disable();
@@ -171,6 +184,7 @@ static inline void __raw_read_lock_irq(rwlock_t *lock)
 }
 
 static inline void __raw_read_lock_bh(rwlock_t *lock)
+	__acquires_shared(lock) __no_context_analysis
 {
 	__local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
 	rwlock_acquire_read(&lock->dep_map, 0, 0, _RET_IP_);
@@ -178,6 +192,7 @@ static inline void __raw_read_lock_bh(rwlock_t *lock)
 }
 
 static inline unsigned long __raw_write_lock_irqsave(rwlock_t *lock)
+	__acquires(lock) __no_context_analysis
 {
 	unsigned long flags;
 
@@ -189,6 +204,7 @@ static inline unsigned long __raw_write_lock_irqsave(rwlock_t *lock)
 }
 
 static inline void __raw_write_lock_irq(rwlock_t *lock)
+	__acquires(lock) __no_context_analysis
 {
 	local_irq_disable();
 	preempt_disable();
@@ -197,6 +213,7 @@ static inline void __raw_write_lock_irq(rwlock_t *lock)
 }
 
 static inline void __raw_write_lock_bh(rwlock_t *lock)
+	__acquires(lock) __no_context_analysis
 {
 	__local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
 	rwlock_acquire(&lock->dep_map, 0, 0, _RET_IP_);
@@ -204,6 +221,7 @@ static inline void __raw_write_lock_bh(rwlock_t *lock)
 }
 
 static inline void __raw_write_lock(rwlock_t *lock)
+	__acquires(lock) __no_context_analysis
 {
 	preempt_disable();
 	rwlock_acquire(&lock->dep_map, 0, 0, _RET_IP_);
@@ -211,6 +229,7 @@ static inline void __raw_write_lock(rwlock_t *lock)
 }
 
 static inline void __raw_write_lock_nested(rwlock_t *lock, int subclass)
+	__acquires(lock) __no_context_analysis
 {
 	preempt_disable();
 	rwlock_acquire(&lock->dep_map, subclass, 0, _RET_IP_);
@@ -220,6 +239,7 @@ static inline void __raw_write_lock_nested(rwlock_t *lock, int subclass)
 #endif /* !CONFIG_GENERIC_LOCKBREAK || CONFIG_DEBUG_LOCK_ALLOC */
 
 static inline void __raw_write_unlock(rwlock_t *lock)
+	__releases(lock)
 {
 	rwlock_release(&lock->dep_map, _RET_IP_);
 	do_raw_write_unlock(lock);
@@ -227,6 +247,7 @@ static inline void __raw_write_unlock(rwlock_t *lock)
 }
 
 static inline void __raw_read_unlock(rwlock_t *lock)
+	__releases_shared(lock)
 {
 	rwlock_release(&lock->dep_map, _RET_IP_);
 	do_raw_read_unlock(lock);
@@ -235,6 +256,7 @@ static inline void __raw_read_unlock(rwlock_t *lock)
 
 static inline void
 __raw_read_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
+	__releases_shared(lock)
 {
 	rwlock_release(&lock->dep_map, _RET_IP_);
 	do_raw_read_unlock(lock);
@@ -243,6 +265,7 @@ __raw_read_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
 }
 
 static inline void __raw_read_unlock_irq(rwlock_t *lock)
+	__releases_shared(lock)
 {
 	rwlock_release(&lock->dep_map, _RET_IP_);
 	do_raw_read_unlock(lock);
@@ -251,6 +274,7 @@ static inline void __raw_read_unlock_irq(rwlock_t *lock)
 }
 
 static inline void __raw_read_unlock_bh(rwlock_t *lock)
+	__releases_shared(lock)
 {
 	rwlock_release(&lock->dep_map, _RET_IP_);
 	do_raw_read_unlock(lock);
@@ -259,6 +283,7 @@ static inline void __raw_read_unlock_bh(rwlock_t *lock)
 
 static inline void __raw_write_unlock_irqrestore(rwlock_t *lock,
 						 unsigned long flags)
+	__releases(lock)
 {
 	rwlock_release(&lock->dep_map, _RET_IP_);
 	do_raw_write_unlock(lock);
@@ -267,6 +292,7 @@ static inline void __raw_write_unlock_irqrestore(rwlock_t *lock,
 }
 
 static inline void __raw_write_unlock_irq(rwlock_t *lock)
+	__releases(lock)
 {
 	rwlock_release(&lock->dep_map, _RET_IP_);
 	do_raw_write_unlock(lock);
@@ -275,6 +301,7 @@ static inline void __raw_write_unlock_irq(rwlock_t *lock)
 }
 
 static inline void __raw_write_unlock_bh(rwlock_t *lock)
+	__releases(lock)
 {
 	rwlock_release(&lock->dep_map, _RET_IP_);
 	do_raw_write_unlock(lock);

@@ -24,26 +24,29 @@ do { \
 	__rt_rwlock_init(rwl, #rwl, &__key);	\
 } while (0)
 
-extern void rt_read_lock(rwlock_t *rwlock)	__acquires(rwlock);
-extern int rt_read_trylock(rwlock_t *rwlock);
-extern void rt_read_unlock(rwlock_t *rwlock)	__releases(rwlock);
+extern void rt_read_lock(rwlock_t *rwlock)	__acquires_shared(rwlock);
+extern int rt_read_trylock(rwlock_t *rwlock)	__cond_acquires_shared(true, rwlock);
+extern void rt_read_unlock(rwlock_t *rwlock)	__releases_shared(rwlock);
 extern void rt_write_lock(rwlock_t *rwlock)	__acquires(rwlock);
 extern void rt_write_lock_nested(rwlock_t *rwlock, int subclass)	__acquires(rwlock);
-extern int rt_write_trylock(rwlock_t *rwlock);
+extern int rt_write_trylock(rwlock_t *rwlock)	__cond_acquires(true, rwlock);
 extern void rt_write_unlock(rwlock_t *rwlock)	__releases(rwlock);
 
 static __always_inline void read_lock(rwlock_t *rwlock)
+	__acquires_shared(rwlock)
 {
 	rt_read_lock(rwlock);
 }
 
 static __always_inline void read_lock_bh(rwlock_t *rwlock)
+	__acquires_shared(rwlock)
 {
 	local_bh_disable();
 	rt_read_lock(rwlock);
 }
 
 static __always_inline void read_lock_irq(rwlock_t *rwlock)
+	__acquires_shared(rwlock)
 {
 	rt_read_lock(rwlock);
 }
@@ -55,37 +58,43 @@ static __always_inline void read_lock_irq(rwlock_t *rwlock)
 		flags = 0;			\
 	} while (0)
 
-#define read_trylock(lock)	__cond_lock(lock, rt_read_trylock(lock))
+#define read_trylock(lock)	rt_read_trylock(lock)
 
 static __always_inline void read_unlock(rwlock_t *rwlock)
+	__releases_shared(rwlock)
 {
 	rt_read_unlock(rwlock);
 }
 
 static __always_inline void read_unlock_bh(rwlock_t *rwlock)
+	__releases_shared(rwlock)
 {
 	rt_read_unlock(rwlock);
 	local_bh_enable();
 }
 
 static __always_inline void read_unlock_irq(rwlock_t *rwlock)
+	__releases_shared(rwlock)
 {
 	rt_read_unlock(rwlock);
 }
 
 static __always_inline void read_unlock_irqrestore(rwlock_t *rwlock,
 						   unsigned long flags)
+	__releases_shared(rwlock)
 {
 	rt_read_unlock(rwlock);
 }
 
 static __always_inline void write_lock(rwlock_t *rwlock)
+	__acquires(rwlock)
 {
 	rt_write_lock(rwlock);
 }
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 static __always_inline void write_lock_nested(rwlock_t *rwlock, int subclass)
+	__acquires(rwlock)
 {
 	rt_write_lock_nested(rwlock, subclass);
 }
@@ -94,12 +103,14 @@ static __always_inline void write_lock_nested(rwlock_t *rwlock, int subclass)
 #endif
 
 static __always_inline void write_lock_bh(rwlock_t *rwlock)
+	__acquires(rwlock)
 {
 	local_bh_disable();
 	rt_write_lock(rwlock);
 }
 
 static __always_inline void write_lock_irq(rwlock_t *rwlock)
+	__acquires(rwlock)
 {
 	rt_write_lock(rwlock);
 }
@@ -111,36 +122,38 @@ static __always_inline void write_lock_irq(rwlock_t *rwlock)
 		flags = 0;			\
 	} while (0)
 
-#define write_trylock(lock)	__cond_lock(lock, rt_write_trylock(lock))
+#define write_trylock(lock)	rt_write_trylock(lock)
 
-#define write_trylock_irqsave(lock, flags)	\
-({						\
-	int __locked;				\
-						\
-	typecheck(unsigned long, flags);	\
-	flags = 0;				\
-	__locked = write_trylock(lock);		\
-	__locked;				\
-})
+static __always_inline bool _write_trylock_irqsave(rwlock_t *rwlock, unsigned long *flags)
+	__cond_acquires(true, rwlock)
+{
+	*flags = 0;
+	return rt_write_trylock(rwlock);
+}
+#define write_trylock_irqsave(lock, flags)	_write_trylock_irqsave(lock, &(flags))
 
 static __always_inline void write_unlock(rwlock_t *rwlock)
+	__releases(rwlock)
 {
 	rt_write_unlock(rwlock);
 }
 
 static __always_inline void write_unlock_bh(rwlock_t *rwlock)
+	__releases(rwlock)
 {
 	rt_write_unlock(rwlock);
 	local_bh_enable();
 }
 
 static __always_inline void write_unlock_irq(rwlock_t *rwlock)
+	__releases(rwlock)
 {
 	rt_write_unlock(rwlock);
 }
 
 static __always_inline void write_unlock_irqrestore(rwlock_t *rwlock,
 						    unsigned long flags)
+	__releases(rwlock)
 {
 	rt_write_unlock(rwlock);
 }

@@ -22,7 +22,7 @@
  * portions Copyright 2005, Red Hat, Inc., Ingo Molnar
  * Released under the General Public License (GPL).
  */
-typedef struct {
+context_lock_struct(rwlock) {
 	arch_rwlock_t raw_lock;
 #ifdef CONFIG_DEBUG_SPINLOCK
 	unsigned int magic, owner_cpu;
@@ -31,7 +31,8 @@ typedef struct {
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 	struct lockdep_map dep_map;
 #endif
-} rwlock_t;
+};
+typedef struct rwlock rwlock_t;
 
 #define RWLOCK_MAGIC	0xdeaf1eed
 
@@ -54,13 +55,14 @@ typedef struct {
 
 #include <linux/rwbase_rt.h>
 
-typedef struct {
+context_lock_struct(rwlock) {
 	struct rwbase_rt	rwbase;
 	atomic_t		readers;
#ifdef CONFIG_DEBUG_LOCK_ALLOC
 	struct lockdep_map	dep_map;
 #endif
-} rwlock_t;
+};
+typedef struct rwlock rwlock_t;
 
 #define __RWLOCK_RT_INITIALIZER(name)	\
 {					\

@@ -45,7 +45,7 @@
  * reduce the chance that they will share the same cacheline causing
  * cacheline bouncing problem.
  */
-struct rw_semaphore {
+context_lock_struct(rw_semaphore) {
 	atomic_long_t count;
 	/*
 	 * Write owner or one of the read owners as well flags regarding
@@ -76,11 +76,13 @@ static inline int rwsem_is_locked(struct rw_semaphore *sem)
 }
 
 static inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
+	__assumes_ctx_lock(sem)
 {
 	WARN_ON(atomic_long_read(&sem->count) == RWSEM_UNLOCKED_VALUE);
 }
 
 static inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
+	__assumes_ctx_lock(sem)
 {
 	WARN_ON(!(atomic_long_read(&sem->count) & RWSEM_WRITER_LOCKED));
 }
@@ -148,7 +150,7 @@ extern bool is_rwsem_reader_owned(struct rw_semaphore *sem);
 
 #include <linux/rwbase_rt.h>
 
-struct rw_semaphore {
+context_lock_struct(rw_semaphore) {
 	struct rwbase_rt	rwbase;
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 	struct lockdep_map	dep_map;
@@ -180,11 +182,13 @@ static __always_inline int rwsem_is_locked(const struct rw_semaphore *sem)
 }
 
 static __always_inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
+	__assumes_ctx_lock(sem)
 {
 	WARN_ON(!rwsem_is_locked(sem));
 }
 
 static __always_inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
+	__assumes_ctx_lock(sem)
 {
 	WARN_ON(!rw_base_is_write_locked(&sem->rwbase));
 }
@@ -202,6 +206,7 @@ static __always_inline int rwsem_is_contended(struct rw_semaphore *sem)
 */
 
 static inline void rwsem_assert_held(const struct rw_semaphore *sem)
+	__assumes_ctx_lock(sem)
 {
 	if (IS_ENABLED(CONFIG_LOCKDEP))
 		lockdep_assert_held(sem);
@@ -210,6 +215,7 @@ static inline void rwsem_assert_held(const struct rw_semaphore *sem)
 }
 
 static inline void rwsem_assert_held_write(const struct rw_semaphore *sem)
+	__assumes_ctx_lock(sem)
 {
 	if (IS_ENABLED(CONFIG_LOCKDEP))
 		lockdep_assert_held_write(sem);
@@ -220,48 +226,66 @@ static inline void rwsem_assert_held_write(const struct rw_semaphore *sem)
 /*
  * lock for reading
  */
-extern void down_read(struct rw_semaphore *sem);
-extern int __must_check down_read_interruptible(struct rw_semaphore *sem);
-extern int __must_check down_read_killable(struct rw_semaphore *sem);
+extern void down_read(struct rw_semaphore *sem) __acquires_shared(sem);
+extern int __must_check down_read_interruptible(struct rw_semaphore *sem) __cond_acquires_shared(0, sem);
+extern int __must_check down_read_killable(struct rw_semaphore *sem) __cond_acquires_shared(0, sem);
 
 /*
  * trylock for reading -- returns 1 if successful, 0 if contention
  */
-extern int down_read_trylock(struct rw_semaphore *sem);
+extern int down_read_trylock(struct rw_semaphore *sem) __cond_acquires_shared(true, sem);
 
 /*
  * lock for writing
 */
-extern void down_write(struct rw_semaphore *sem);
-extern int __must_check down_write_killable(struct rw_semaphore *sem);
+extern void down_write(struct rw_semaphore *sem) __acquires(sem);
+extern int __must_check down_write_killable(struct rw_semaphore *sem) __cond_acquires(0, sem);
 
 /*
  * trylock for writing -- returns 1 if successful, 0 if contention
 */
-extern int down_write_trylock(struct rw_semaphore *sem);
+extern int down_write_trylock(struct rw_semaphore *sem) __cond_acquires(true, sem);
 
 /*
 * release a read lock
 */
-extern void up_read(struct rw_semaphore *sem);
+extern void up_read(struct rw_semaphore *sem) __releases_shared(sem);
 
 /*
 * release a write lock
 */
-extern void up_write(struct rw_semaphore *sem);
+extern void up_write(struct rw_semaphore *sem) __releases(sem);
 
-DEFINE_GUARD(rwsem_read, struct rw_semaphore *, down_read(_T), up_read(_T))
-DEFINE_GUARD_COND(rwsem_read, _try, down_read_trylock(_T))
-DEFINE_GUARD_COND(rwsem_read, _intr, down_read_interruptible(_T), _RET == 0)
-
-DEFINE_GUARD(rwsem_write, struct rw_semaphore *, down_write(_T), up_write(_T))
-DEFINE_GUARD_COND(rwsem_write, _try, down_write_trylock(_T))
-DEFINE_GUARD_COND(rwsem_write, _kill, down_write_killable(_T), _RET == 0)
+DEFINE_LOCK_GUARD_1(rwsem_read, struct rw_semaphore, down_read(_T->lock), up_read(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(rwsem_read, _try, down_read_trylock(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(rwsem_read, _intr, down_read_interruptible(_T->lock), _RET == 0)
+
+DECLARE_LOCK_GUARD_1_ATTRS(rwsem_read, __acquires_shared(_T), __releases_shared(*(struct rw_semaphore **)_T))
+#define class_rwsem_read_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(rwsem_read, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(rwsem_read_try, __acquires_shared(_T), __releases_shared(*(struct rw_semaphore **)_T))
+#define class_rwsem_read_try_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(rwsem_read_try, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(rwsem_read_intr, __acquires_shared(_T), __releases_shared(*(struct rw_semaphore **)_T))
+#define class_rwsem_read_intr_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(rwsem_read_intr, _T)
+
+DEFINE_LOCK_GUARD_1(rwsem_write, struct rw_semaphore, down_write(_T->lock), up_write(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(rwsem_write, _try, down_write_trylock(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(rwsem_write, _kill, down_write_killable(_T->lock), _RET == 0)
+
+DECLARE_LOCK_GUARD_1_ATTRS(rwsem_write, __acquires(_T), __releases(*(struct rw_semaphore **)_T))
+#define class_rwsem_write_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(rwsem_write, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(rwsem_write_try, __acquires(_T), __releases(*(struct rw_semaphore **)_T))
+#define class_rwsem_write_try_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(rwsem_write_try, _T)
+DECLARE_LOCK_GUARD_1_ATTRS(rwsem_write_kill, __acquires(_T), __releases(*(struct rw_semaphore **)_T))
+#define class_rwsem_write_kill_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(rwsem_write_kill, _T)
+
+DEFINE_LOCK_GUARD_1(rwsem_init, struct rw_semaphore, init_rwsem(_T->lock), /* */)
+DECLARE_LOCK_GUARD_1_ATTRS(rwsem_init, __acquires(_T), __releases(*(struct rw_semaphore **)_T))
+#define class_rwsem_init_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(rwsem_init, _T)
 
 /*
  * downgrade write lock to read lock
  */
-extern void downgrade_write(struct rw_semaphore *sem);
+extern void downgrade_write(struct rw_semaphore *sem) __releases(sem) __acquires_shared(sem);
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 /*
@@ -277,11 +301,11 @@ extern void downgrade_write(struct rw_semaphore *sem);
  * lockdep_set_class() at lock initialization time.
  * See Documentation/locking/lockdep-design.rst for more details.)
 */
-extern void down_read_nested(struct rw_semaphore *sem, int subclass);
-extern int __must_check down_read_killable_nested(struct rw_semaphore *sem, int subclass);
-extern void down_write_nested(struct rw_semaphore *sem, int subclass);
-extern int down_write_killable_nested(struct rw_semaphore *sem, int subclass);
-extern void _down_write_nest_lock(struct rw_semaphore *sem, struct lockdep_map *nest_lock);
+extern void down_read_nested(struct rw_semaphore *sem, int subclass) __acquires_shared(sem);
+extern int __must_check down_read_killable_nested(struct rw_semaphore *sem, int subclass) __cond_acquires_shared(0, sem);
+extern void down_write_nested(struct rw_semaphore *sem, int subclass) __acquires(sem);
+extern int down_write_killable_nested(struct rw_semaphore *sem, int subclass) __cond_acquires(0, sem);
+extern void _down_write_nest_lock(struct rw_semaphore *sem, struct lockdep_map *nest_lock) __acquires(sem);
 
 # define down_write_nest_lock(sem, nest_lock)		\
 do {							\
@@ -295,8 +319,8 @@ do { \
 * [ This API should be avoided as much as possible - the
 *   proper abstraction for this case is completions. ]
 */
-extern void down_read_non_owner(struct rw_semaphore *sem);
-extern void up_read_non_owner(struct rw_semaphore *sem);
+extern void down_read_non_owner(struct rw_semaphore *sem) __acquires_shared(sem);
+extern void up_read_non_owner(struct rw_semaphore *sem) __releases_shared(sem);
 #else
 # define down_read_nested(sem, subclass)		down_read(sem)
 # define down_read_killable_nested(sem, subclass)	down_read_killable(sem)
|
# define down_read_killable_nested(sem, subclass) down_read_killable(sem)
|
||||||
|
|||||||
@@ -2095,9 +2095,9 @@ static inline int _cond_resched(void)
 	_cond_resched(); \
 })

-extern int __cond_resched_lock(spinlock_t *lock);
-extern int __cond_resched_rwlock_read(rwlock_t *lock);
-extern int __cond_resched_rwlock_write(rwlock_t *lock);
+extern int __cond_resched_lock(spinlock_t *lock) __must_hold(lock);
+extern int __cond_resched_rwlock_read(rwlock_t *lock) __must_hold_shared(lock);
+extern int __cond_resched_rwlock_write(rwlock_t *lock) __must_hold(lock);

 #define MIGHT_RESCHED_RCU_SHIFT		8
 #define MIGHT_RESCHED_PREEMPT_MASK	((1U << MIGHT_RESCHED_RCU_SHIFT) - 1)
@@ -737,21 +737,13 @@ static inline int thread_group_empty(struct task_struct *p)
 #define delay_group_leader(p) \
 		(thread_group_leader(p) && !thread_group_empty(p))

-extern struct sighand_struct *__lock_task_sighand(struct task_struct *task,
-						  unsigned long *flags);
-
-static inline struct sighand_struct *lock_task_sighand(struct task_struct *task,
-						       unsigned long *flags)
-{
-	struct sighand_struct *ret;
-
-	ret = __lock_task_sighand(task, flags);
-	(void)__cond_lock(&task->sighand->siglock, ret);
-	return ret;
-}
+extern struct sighand_struct *lock_task_sighand(struct task_struct *task,
+						unsigned long *flags)
+	__acquires(&task->sighand->siglock);

 static inline void unlock_task_sighand(struct task_struct *task,
 				       unsigned long *flags)
+	__releases(&task->sighand->siglock)
 {
 	spin_unlock_irqrestore(&task->sighand->siglock, *flags);
 }
@@ -214,15 +214,19 @@ static inline struct vm_struct *task_stack_vm_area(const struct task_struct *t)
  * write_lock_irq(&tasklist_lock), neither inside nor outside.
  */
 static inline void task_lock(struct task_struct *p)
+	__acquires(&p->alloc_lock)
 {
 	spin_lock(&p->alloc_lock);
 }

 static inline void task_unlock(struct task_struct *p)
+	__releases(&p->alloc_lock)
 {
 	spin_unlock(&p->alloc_lock);
 }

-DEFINE_GUARD(task_lock, struct task_struct *, task_lock(_T), task_unlock(_T))
+DEFINE_LOCK_GUARD_1(task_lock, struct task_struct, task_lock(_T->lock), task_unlock(_T->lock))
+DECLARE_LOCK_GUARD_1_ATTRS(task_lock, __acquires(&_T->alloc_lock), __releases(&(*(struct task_struct **)_T)->alloc_lock))
+#define class_task_lock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(task_lock, _T)

 #endif /* _LINUX_SCHED_TASK_H */
@@ -66,6 +66,7 @@ extern void wake_up_q(struct wake_q_head *head);
 /* Spin unlock helpers to unlock and call wake_up_q with preempt disabled */
 static inline
 void raw_spin_unlock_wake(raw_spinlock_t *lock, struct wake_q_head *wake_q)
+	__releases(lock)
 {
 	guard(preempt)();
 	raw_spin_unlock(lock);
@@ -77,6 +78,7 @@ void raw_spin_unlock_wake(raw_spinlock_t *lock, struct wake_q_head *wake_q)

 static inline
 void raw_spin_unlock_irq_wake(raw_spinlock_t *lock, struct wake_q_head *wake_q)
+	__releases(lock)
 {
 	guard(preempt)();
 	raw_spin_unlock_irq(lock);
@@ -89,6 +91,7 @@ void raw_spin_unlock_irq_wake(raw_spinlock_t *lock, struct wake_q_head *wake_q)
 static inline
 void raw_spin_unlock_irqrestore_wake(raw_spinlock_t *lock, unsigned long flags,
 				     struct wake_q_head *wake_q)
+	__releases(lock)
 {
 	guard(preempt)();
 	raw_spin_unlock_irqrestore(lock, flags);
@@ -14,6 +14,7 @@
  */

 #include <linux/compiler.h>
+#include <linux/cleanup.h>
 #include <linux/kcsan-checks.h>
 #include <linux/lockdep.h>
 #include <linux/mutex.h>
@@ -832,6 +833,7 @@ static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s)
  * Return: count, to be passed to read_seqretry()
  */
 static inline unsigned read_seqbegin(const seqlock_t *sl)
+	__acquires_shared(sl) __no_context_analysis
 {
 	return read_seqcount_begin(&sl->seqcount);
 }
@@ -848,6 +850,7 @@ static inline unsigned read_seqbegin(const seqlock_t *sl)
  * Return: true if a read section retry is required, else false
  */
 static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
+	__releases_shared(sl) __no_context_analysis
 {
 	return read_seqcount_retry(&sl->seqcount, start);
 }
@@ -872,6 +875,7 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
  * _irqsave or _bh variants of this function instead.
  */
 static inline void write_seqlock(seqlock_t *sl)
+	__acquires(sl) __no_context_analysis
 {
 	spin_lock(&sl->lock);
 	do_write_seqcount_begin(&sl->seqcount.seqcount);
@@ -885,6 +889,7 @@ static inline void write_seqlock(seqlock_t *sl)
  * critical section of given seqlock_t.
  */
 static inline void write_sequnlock(seqlock_t *sl)
+	__releases(sl) __no_context_analysis
 {
 	do_write_seqcount_end(&sl->seqcount.seqcount);
 	spin_unlock(&sl->lock);
@@ -898,6 +903,7 @@ static inline void write_sequnlock(seqlock_t *sl)
  * other write side sections, can be invoked from softirq contexts.
  */
 static inline void write_seqlock_bh(seqlock_t *sl)
+	__acquires(sl) __no_context_analysis
 {
 	spin_lock_bh(&sl->lock);
 	do_write_seqcount_begin(&sl->seqcount.seqcount);
@@ -912,6 +918,7 @@ static inline void write_seqlock_bh(seqlock_t *sl)
  * write_seqlock_bh().
  */
 static inline void write_sequnlock_bh(seqlock_t *sl)
+	__releases(sl) __no_context_analysis
 {
 	do_write_seqcount_end(&sl->seqcount.seqcount);
 	spin_unlock_bh(&sl->lock);
@@ -925,6 +932,7 @@ static inline void write_sequnlock_bh(seqlock_t *sl)
  * other write sections, can be invoked from hardirq contexts.
  */
 static inline void write_seqlock_irq(seqlock_t *sl)
+	__acquires(sl) __no_context_analysis
 {
 	spin_lock_irq(&sl->lock);
 	do_write_seqcount_begin(&sl->seqcount.seqcount);
@@ -938,12 +946,14 @@ static inline void write_seqlock_irq(seqlock_t *sl)
  * seqlock_t write side section opened with write_seqlock_irq().
  */
 static inline void write_sequnlock_irq(seqlock_t *sl)
+	__releases(sl) __no_context_analysis
 {
 	do_write_seqcount_end(&sl->seqcount.seqcount);
 	spin_unlock_irq(&sl->lock);
 }

 static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
+	__acquires(sl) __no_context_analysis
 {
 	unsigned long flags;

@@ -976,6 +986,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
  */
 static inline void
 write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
+	__releases(sl) __no_context_analysis
 {
 	do_write_seqcount_end(&sl->seqcount.seqcount);
 	spin_unlock_irqrestore(&sl->lock, flags);
@@ -998,6 +1009,7 @@ write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
  * The opened read section must be closed with read_sequnlock_excl().
  */
 static inline void read_seqlock_excl(seqlock_t *sl)
+	__acquires_shared(sl) __no_context_analysis
 {
 	spin_lock(&sl->lock);
 }
@@ -1007,6 +1019,7 @@ static inline void read_seqlock_excl(seqlock_t *sl)
  * @sl: Pointer to seqlock_t
  */
 static inline void read_sequnlock_excl(seqlock_t *sl)
+	__releases_shared(sl) __no_context_analysis
 {
 	spin_unlock(&sl->lock);
 }
@@ -1021,6 +1034,7 @@ static inline void read_sequnlock_excl(seqlock_t *sl)
  * from softirq contexts.
  */
 static inline void read_seqlock_excl_bh(seqlock_t *sl)
+	__acquires_shared(sl) __no_context_analysis
 {
 	spin_lock_bh(&sl->lock);
 }
@@ -1031,6 +1045,7 @@ static inline void read_seqlock_excl_bh(seqlock_t *sl)
  * @sl: Pointer to seqlock_t
  */
 static inline void read_sequnlock_excl_bh(seqlock_t *sl)
+	__releases_shared(sl) __no_context_analysis
 {
 	spin_unlock_bh(&sl->lock);
 }
@@ -1045,6 +1060,7 @@ static inline void read_sequnlock_excl_bh(seqlock_t *sl)
  * hardirq context.
  */
 static inline void read_seqlock_excl_irq(seqlock_t *sl)
+	__acquires_shared(sl) __no_context_analysis
 {
 	spin_lock_irq(&sl->lock);
 }
@@ -1055,11 +1071,13 @@ static inline void read_seqlock_excl_irq(seqlock_t *sl)
  * @sl: Pointer to seqlock_t
  */
 static inline void read_sequnlock_excl_irq(seqlock_t *sl)
+	__releases_shared(sl) __no_context_analysis
 {
 	spin_unlock_irq(&sl->lock);
 }

 static inline unsigned long __read_seqlock_excl_irqsave(seqlock_t *sl)
+	__acquires_shared(sl) __no_context_analysis
 {
 	unsigned long flags;

@@ -1089,6 +1107,7 @@ static inline unsigned long __read_seqlock_excl_irqsave(seqlock_t *sl)
  */
 static inline void
 read_sequnlock_excl_irqrestore(seqlock_t *sl, unsigned long flags)
+	__releases_shared(sl) __no_context_analysis
 {
 	spin_unlock_irqrestore(&sl->lock, flags);
 }
@@ -1125,6 +1144,7 @@ read_sequnlock_excl_irqrestore(seqlock_t *sl, unsigned long flags)
  * parameter of the next read_seqbegin_or_lock() iteration.
  */
 static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq)
+	__acquires_shared(lock) __no_context_analysis
 {
 	if (!(*seq & 1))	/* Even */
 		*seq = read_seqbegin(lock);
@@ -1140,6 +1160,7 @@ static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq)
  * Return: true if a read section retry is required, false otherwise
  */
 static inline int need_seqretry(seqlock_t *lock, int seq)
+	__releases_shared(lock) __no_context_analysis
 {
 	return !(seq & 1) && read_seqretry(lock, seq);
 }
@@ -1153,6 +1174,7 @@ static inline int need_seqretry(seqlock_t *lock, int seq)
  * with read_seqbegin_or_lock() and validated by need_seqretry().
  */
 static inline void done_seqretry(seqlock_t *lock, int seq)
+	__no_context_analysis
 {
 	if (seq & 1)
 		read_sequnlock_excl(lock);
@@ -1180,6 +1202,7 @@ static inline void done_seqretry(seqlock_t *lock, int seq)
  */
 static inline unsigned long
 read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq)
+	__acquires_shared(lock) __no_context_analysis
 {
 	unsigned long flags = 0;

@@ -1205,6 +1228,7 @@ read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq)
  */
 static inline void
 done_seqretry_irqrestore(seqlock_t *lock, int seq, unsigned long flags)
+	__no_context_analysis
 {
 	if (seq & 1)
 		read_sequnlock_excl_irqrestore(lock, flags);
@@ -1225,6 +1249,7 @@ struct ss_tmp {
 };

 static __always_inline void __scoped_seqlock_cleanup(struct ss_tmp *sst)
+	__no_context_analysis
 {
 	if (sst->lock)
 		spin_unlock(sst->lock);
@@ -1254,6 +1279,7 @@ extern void __scoped_seqlock_bug(void);

 static __always_inline void
 __scoped_seqlock_next(struct ss_tmp *sst, seqlock_t *lock, enum ss_state target)
+	__no_context_analysis
 {
 	switch (sst->state) {
 	case ss_done:
@@ -1296,22 +1322,31 @@ __scoped_seqlock_next(struct ss_tmp *sst, seqlock_t *lock, enum ss_state target)
 	}
 }

+/*
+ * Context analysis no-op helper to release seqlock at the end of the for-scope;
+ * the alias analysis of the compiler will recognize that the pointer @s is an
+ * alias to @_seqlock passed to read_seqbegin(_seqlock) below.
+ */
+static __always_inline void __scoped_seqlock_cleanup_ctx(struct ss_tmp **s)
+	__releases_shared(*((seqlock_t **)s)) __no_context_analysis {}
+
 #define __scoped_seqlock_read(_seqlock, _target, _s)			\
 	for (struct ss_tmp _s __cleanup(__scoped_seqlock_cleanup) =	\
-	     { .state = ss_lockless, .data = read_seqbegin(_seqlock) }; \
+	     { .state = ss_lockless, .data = read_seqbegin(_seqlock) }, \
+	     *__UNIQUE_ID(ctx) __cleanup(__scoped_seqlock_cleanup_ctx) =\
+	     (struct ss_tmp *)_seqlock;					\
 	     _s.state != ss_done;					\
 	     __scoped_seqlock_next(&_s, _seqlock, _target))

 /**
- * scoped_seqlock_read (lock, ss_state) - execute the read side critical
- *				  section without manual sequence
- *				  counter handling or calls to other
- *				  helpers
- * @lock: pointer to seqlock_t protecting the data
- * @ss_state: one of {ss_lock, ss_lock_irqsave, ss_lockless} indicating
- *	      the type of critical read section
+ * scoped_seqlock_read() - execute the read-side critical section
+ *			   without manual sequence counter handling
+ *			   or calls to other helpers
+ * @_seqlock: pointer to seqlock_t protecting the data
+ * @_target: an enum ss_state: one of {ss_lock, ss_lock_irqsave, ss_lockless}
+ *	     indicating the type of critical read section
  *
- * Example:
+ * Example::
  *
  *	scoped_seqlock_read (&lock, ss_lock) {
  *		// read-side critical section
@@ -1323,4 +1358,8 @@ __scoped_seqlock_next(struct ss_tmp *sst, seqlock_t *lock, enum ss_state target)
 #define scoped_seqlock_read(_seqlock, _target) \
 	__scoped_seqlock_read(_seqlock, _target, __UNIQUE_ID(seqlock))

+DEFINE_LOCK_GUARD_1(seqlock_init, seqlock_t, seqlock_init(_T->lock), /* */)
+DECLARE_LOCK_GUARD_1_ATTRS(seqlock_init, __acquires(_T), __releases(*(seqlock_t **)_T))
+#define class_seqlock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(seqlock_init, _T)
+
 #endif /* __LINUX_SEQLOCK_H */
@@ -81,13 +81,14 @@ SEQCOUNT_LOCKNAME(mutex, struct mutex, true, mutex)
  * - Comments on top of seqcount_t
  * - Documentation/locking/seqlock.rst
  */
-typedef struct {
+context_lock_struct(seqlock) {
 	/*
 	 * Make sure that readers don't starve writers on PREEMPT_RT: use
 	 * seqcount_spinlock_t instead of seqcount_t. Check __SEQ_LOCK().
 	 */
 	seqcount_spinlock_t seqcount;
 	spinlock_t lock;
-} seqlock_t;
+};
+typedef struct seqlock seqlock_t;

 #endif /* __LINUX_SEQLOCK_TYPES_H */
@@ -212,7 +212,7 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
  * various methods are defined as nops in the case they are not
  * required.
  */
-#define raw_spin_trylock(lock)	__cond_lock(lock, _raw_spin_trylock(lock))
+#define raw_spin_trylock(lock)	_raw_spin_trylock(lock)

 #define raw_spin_lock(lock)	_raw_spin_lock(lock)

@@ -283,22 +283,11 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
 } while (0)
 #define raw_spin_unlock_bh(lock)	_raw_spin_unlock_bh(lock)

-#define raw_spin_trylock_bh(lock) \
-	__cond_lock(lock, _raw_spin_trylock_bh(lock))
-
-#define raw_spin_trylock_irq(lock) \
-({ \
-	local_irq_disable(); \
-	raw_spin_trylock(lock) ? \
-	1 : ({ local_irq_enable(); 0; }); \
-})
-
-#define raw_spin_trylock_irqsave(lock, flags) \
-({ \
-	local_irq_save(flags); \
-	raw_spin_trylock(lock) ? \
-	1 : ({ local_irq_restore(flags); 0; }); \
-})
+#define raw_spin_trylock_bh(lock)	_raw_spin_trylock_bh(lock)
+
+#define raw_spin_trylock_irq(lock)	_raw_spin_trylock_irq(lock)
+
+#define raw_spin_trylock_irqsave(lock, flags)	_raw_spin_trylock_irqsave(lock, &(flags))

 #ifndef CONFIG_PREEMPT_RT
 /* Include rwlock functions for !RT */
@@ -347,16 +336,19 @@ do { \
 #endif

 static __always_inline void spin_lock(spinlock_t *lock)
+	__acquires(lock) __no_context_analysis
 {
 	raw_spin_lock(&lock->rlock);
 }

 static __always_inline void spin_lock_bh(spinlock_t *lock)
+	__acquires(lock) __no_context_analysis
 {
 	raw_spin_lock_bh(&lock->rlock);
 }

 static __always_inline int spin_trylock(spinlock_t *lock)
+	__cond_acquires(true, lock) __no_context_analysis
 {
 	return raw_spin_trylock(&lock->rlock);
 }
@@ -364,14 +356,17 @@ static __always_inline int spin_trylock(spinlock_t *lock)
 #define spin_lock_nested(lock, subclass)			\
 do {								\
 	raw_spin_lock_nested(spinlock_check(lock), subclass);	\
+	__release(spinlock_check(lock)); __acquire(lock);	\
 } while (0)

 #define spin_lock_nest_lock(lock, nest_lock)				\
 do {									\
 	raw_spin_lock_nest_lock(spinlock_check(lock), nest_lock);	\
+	__release(spinlock_check(lock)); __acquire(lock);		\
 } while (0)

 static __always_inline void spin_lock_irq(spinlock_t *lock)
+	__acquires(lock) __no_context_analysis
 {
 	raw_spin_lock_irq(&lock->rlock);
 }
@@ -379,47 +374,57 @@ static __always_inline void spin_lock_irq(spinlock_t *lock)
 #define spin_lock_irqsave(lock, flags)				\
 do {								\
 	raw_spin_lock_irqsave(spinlock_check(lock), flags);	\
+	__release(spinlock_check(lock)); __acquire(lock);	\
 } while (0)

 #define spin_lock_irqsave_nested(lock, flags, subclass)			\
 do {									\
 	raw_spin_lock_irqsave_nested(spinlock_check(lock), flags, subclass); \
+	__release(spinlock_check(lock)); __acquire(lock);		\
 } while (0)

 static __always_inline void spin_unlock(spinlock_t *lock)
+	__releases(lock) __no_context_analysis
 {
 	raw_spin_unlock(&lock->rlock);
 }

 static __always_inline void spin_unlock_bh(spinlock_t *lock)
+	__releases(lock) __no_context_analysis
 {
 	raw_spin_unlock_bh(&lock->rlock);
 }

 static __always_inline void spin_unlock_irq(spinlock_t *lock)
+	__releases(lock) __no_context_analysis
 {
 	raw_spin_unlock_irq(&lock->rlock);
 }

 static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
+	__releases(lock) __no_context_analysis
 {
 	raw_spin_unlock_irqrestore(&lock->rlock, flags);
 }

 static __always_inline int spin_trylock_bh(spinlock_t *lock)
+	__cond_acquires(true, lock) __no_context_analysis
 {
 	return raw_spin_trylock_bh(&lock->rlock);
 }

 static __always_inline int spin_trylock_irq(spinlock_t *lock)
+	__cond_acquires(true, lock) __no_context_analysis
 {
 	return raw_spin_trylock_irq(&lock->rlock);
 }

-#define spin_trylock_irqsave(lock, flags) \
-({ \
-	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
-})
+static __always_inline bool _spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags)
+	__cond_acquires(true, lock) __no_context_analysis
+{
+	return raw_spin_trylock_irqsave(spinlock_check(lock), *flags);
+}
+#define spin_trylock_irqsave(lock, flags)	_spin_trylock_irqsave(lock, &(flags))

 /**
  * spin_is_locked() - Check whether a spinlock is locked.
@@ -497,23 +502,17 @@ static inline int rwlock_needbreak(rwlock_t *lock)
  * Decrements @atomic by 1.  If the result is 0, returns true and locks
  * @lock.  Returns false for all other cases.
  */
-extern int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);
-#define atomic_dec_and_lock(atomic, lock) \
-		__cond_lock(lock, _atomic_dec_and_lock(atomic, lock))
+extern int atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock) __cond_acquires(true, lock);

 extern int _atomic_dec_and_lock_irqsave(atomic_t *atomic, spinlock_t *lock,
-					unsigned long *flags);
-#define atomic_dec_and_lock_irqsave(atomic, lock, flags) \
-		__cond_lock(lock, _atomic_dec_and_lock_irqsave(atomic, lock, &(flags)))
+					unsigned long *flags) __cond_acquires(true, lock);
+#define atomic_dec_and_lock_irqsave(atomic, lock, flags)	_atomic_dec_and_lock_irqsave(atomic, lock, &(flags))

-extern int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock);
-#define atomic_dec_and_raw_lock(atomic, lock) \
-		__cond_lock(lock, _atomic_dec_and_raw_lock(atomic, lock))
+extern int atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock) __cond_acquires(true, lock);

 extern int _atomic_dec_and_raw_lock_irqsave(atomic_t *atomic, raw_spinlock_t *lock,
-					    unsigned long *flags);
-#define atomic_dec_and_raw_lock_irqsave(atomic, lock, flags) \
-		__cond_lock(lock, _atomic_dec_and_raw_lock_irqsave(atomic, lock, &(flags)))
+					    unsigned long *flags) __cond_acquires(true, lock);
+#define atomic_dec_and_raw_lock_irqsave(atomic, lock, flags)	_atomic_dec_and_raw_lock_irqsave(atomic, lock, &(flags))

 int __alloc_bucket_spinlocks(spinlock_t **locks, unsigned int *lock_mask,
 			     size_t max_size, unsigned int cpu_mult,
@@ -535,86 +534,144 @@ void free_bucket_spinlocks(spinlock_t *locks);
 DEFINE_LOCK_GUARD_1(raw_spinlock, raw_spinlock_t,
 		    raw_spin_lock(_T->lock),
 		    raw_spin_unlock(_T->lock))
+DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock, __acquires(_T), __releases(*(raw_spinlock_t **)_T))
+#define class_raw_spinlock_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(raw_spinlock, _T)
 
 DEFINE_LOCK_GUARD_1_COND(raw_spinlock, _try, raw_spin_trylock(_T->lock))
+DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_try, __acquires(_T), __releases(*(raw_spinlock_t **)_T))
+#define class_raw_spinlock_try_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_try, _T)
 
 DEFINE_LOCK_GUARD_1(raw_spinlock_nested, raw_spinlock_t,
 		    raw_spin_lock_nested(_T->lock, SINGLE_DEPTH_NESTING),
 		    raw_spin_unlock(_T->lock))
+DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_nested, __acquires(_T), __releases(*(raw_spinlock_t **)_T))
+#define class_raw_spinlock_nested_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_nested, _T)
 
 DEFINE_LOCK_GUARD_1(raw_spinlock_irq, raw_spinlock_t,
 		    raw_spin_lock_irq(_T->lock),
 		    raw_spin_unlock_irq(_T->lock))
+DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_irq, __acquires(_T), __releases(*(raw_spinlock_t **)_T))
+#define class_raw_spinlock_irq_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_irq, _T)
 
 DEFINE_LOCK_GUARD_1_COND(raw_spinlock_irq, _try, raw_spin_trylock_irq(_T->lock))
+DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_irq_try, __acquires(_T), __releases(*(raw_spinlock_t **)_T))
+#define class_raw_spinlock_irq_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_irq_try, _T)
 
 DEFINE_LOCK_GUARD_1(raw_spinlock_bh, raw_spinlock_t,
 		    raw_spin_lock_bh(_T->lock),
 		    raw_spin_unlock_bh(_T->lock))
+DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_bh, __acquires(_T), __releases(*(raw_spinlock_t **)_T))
+#define class_raw_spinlock_bh_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_bh, _T)
 
 DEFINE_LOCK_GUARD_1_COND(raw_spinlock_bh, _try, raw_spin_trylock_bh(_T->lock))
+DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_bh_try, __acquires(_T), __releases(*(raw_spinlock_t **)_T))
+#define class_raw_spinlock_bh_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_bh_try, _T)
 
 DEFINE_LOCK_GUARD_1(raw_spinlock_irqsave, raw_spinlock_t,
 		    raw_spin_lock_irqsave(_T->lock, _T->flags),
 		    raw_spin_unlock_irqrestore(_T->lock, _T->flags),
 		    unsigned long flags)
+DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_irqsave, __acquires(_T), __releases(*(raw_spinlock_t **)_T))
+#define class_raw_spinlock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_irqsave, _T)
 
 DEFINE_LOCK_GUARD_1_COND(raw_spinlock_irqsave, _try,
 			 raw_spin_trylock_irqsave(_T->lock, _T->flags))
+DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_irqsave_try, __acquires(_T), __releases(*(raw_spinlock_t **)_T))
+#define class_raw_spinlock_irqsave_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_irqsave_try, _T)
 
+DEFINE_LOCK_GUARD_1(raw_spinlock_init, raw_spinlock_t, raw_spin_lock_init(_T->lock), /* */)
+DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_init, __acquires(_T), __releases(*(raw_spinlock_t **)_T))
+#define class_raw_spinlock_init_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_init, _T)
+
 DEFINE_LOCK_GUARD_1(spinlock, spinlock_t,
 		    spin_lock(_T->lock),
 		    spin_unlock(_T->lock))
+DECLARE_LOCK_GUARD_1_ATTRS(spinlock, __acquires(_T), __releases(*(spinlock_t **)_T))
+#define class_spinlock_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(spinlock, _T)
 
 DEFINE_LOCK_GUARD_1_COND(spinlock, _try, spin_trylock(_T->lock))
+DECLARE_LOCK_GUARD_1_ATTRS(spinlock_try, __acquires(_T), __releases(*(spinlock_t **)_T))
+#define class_spinlock_try_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(spinlock_try, _T)
 
 DEFINE_LOCK_GUARD_1(spinlock_irq, spinlock_t,
 		    spin_lock_irq(_T->lock),
 		    spin_unlock_irq(_T->lock))
+DECLARE_LOCK_GUARD_1_ATTRS(spinlock_irq, __acquires(_T), __releases(*(spinlock_t **)_T))
+#define class_spinlock_irq_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(spinlock_irq, _T)
 
 DEFINE_LOCK_GUARD_1_COND(spinlock_irq, _try,
 			 spin_trylock_irq(_T->lock))
+DECLARE_LOCK_GUARD_1_ATTRS(spinlock_irq_try, __acquires(_T), __releases(*(spinlock_t **)_T))
+#define class_spinlock_irq_try_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(spinlock_irq_try, _T)
 
 DEFINE_LOCK_GUARD_1(spinlock_bh, spinlock_t,
 		    spin_lock_bh(_T->lock),
 		    spin_unlock_bh(_T->lock))
+DECLARE_LOCK_GUARD_1_ATTRS(spinlock_bh, __acquires(_T), __releases(*(spinlock_t **)_T))
+#define class_spinlock_bh_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(spinlock_bh, _T)
 
 DEFINE_LOCK_GUARD_1_COND(spinlock_bh, _try,
			 spin_trylock_bh(_T->lock))
+DECLARE_LOCK_GUARD_1_ATTRS(spinlock_bh_try, __acquires(_T), __releases(*(spinlock_t **)_T))
+#define class_spinlock_bh_try_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(spinlock_bh_try, _T)
 
 DEFINE_LOCK_GUARD_1(spinlock_irqsave, spinlock_t,
 		    spin_lock_irqsave(_T->lock, _T->flags),
 		    spin_unlock_irqrestore(_T->lock, _T->flags),
 		    unsigned long flags)
+DECLARE_LOCK_GUARD_1_ATTRS(spinlock_irqsave, __acquires(_T), __releases(*(spinlock_t **)_T))
+#define class_spinlock_irqsave_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(spinlock_irqsave, _T)
 
 DEFINE_LOCK_GUARD_1_COND(spinlock_irqsave, _try,
 			 spin_trylock_irqsave(_T->lock, _T->flags))
+DECLARE_LOCK_GUARD_1_ATTRS(spinlock_irqsave_try, __acquires(_T), __releases(*(spinlock_t **)_T))
+#define class_spinlock_irqsave_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spinlock_irqsave_try, _T)
 
+DEFINE_LOCK_GUARD_1(spinlock_init, spinlock_t, spin_lock_init(_T->lock), /* */)
+DECLARE_LOCK_GUARD_1_ATTRS(spinlock_init, __acquires(_T), __releases(*(spinlock_t **)_T))
+#define class_spinlock_init_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(spinlock_init, _T)
+
 DEFINE_LOCK_GUARD_1(read_lock, rwlock_t,
 		    read_lock(_T->lock),
 		    read_unlock(_T->lock))
+DECLARE_LOCK_GUARD_1_ATTRS(read_lock, __acquires(_T), __releases(*(rwlock_t **)_T))
+#define class_read_lock_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(read_lock, _T)
 
 DEFINE_LOCK_GUARD_1(read_lock_irq, rwlock_t,
 		    read_lock_irq(_T->lock),
 		    read_unlock_irq(_T->lock))
+DECLARE_LOCK_GUARD_1_ATTRS(read_lock_irq, __acquires(_T), __releases(*(rwlock_t **)_T))
+#define class_read_lock_irq_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(read_lock_irq, _T)
 
 DEFINE_LOCK_GUARD_1(read_lock_irqsave, rwlock_t,
 		    read_lock_irqsave(_T->lock, _T->flags),
 		    read_unlock_irqrestore(_T->lock, _T->flags),
 		    unsigned long flags)
+DECLARE_LOCK_GUARD_1_ATTRS(read_lock_irqsave, __acquires(_T), __releases(*(rwlock_t **)_T))
+#define class_read_lock_irqsave_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(read_lock_irqsave, _T)
 
 DEFINE_LOCK_GUARD_1(write_lock, rwlock_t,
 		    write_lock(_T->lock),
 		    write_unlock(_T->lock))
+DECLARE_LOCK_GUARD_1_ATTRS(write_lock, __acquires(_T), __releases(*(rwlock_t **)_T))
+#define class_write_lock_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(write_lock, _T)
 
 DEFINE_LOCK_GUARD_1(write_lock_irq, rwlock_t,
 		    write_lock_irq(_T->lock),
 		    write_unlock_irq(_T->lock))
+DECLARE_LOCK_GUARD_1_ATTRS(write_lock_irq, __acquires(_T), __releases(*(rwlock_t **)_T))
+#define class_write_lock_irq_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(write_lock_irq, _T)
 
 DEFINE_LOCK_GUARD_1(write_lock_irqsave, rwlock_t,
 		    write_lock_irqsave(_T->lock, _T->flags),
 		    write_unlock_irqrestore(_T->lock, _T->flags),
 		    unsigned long flags)
+DECLARE_LOCK_GUARD_1_ATTRS(write_lock_irqsave, __acquires(_T), __releases(*(rwlock_t **)_T))
+#define class_write_lock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(write_lock_irqsave, _T)
 
+DEFINE_LOCK_GUARD_1(rwlock_init, rwlock_t, rwlock_init(_T->lock), /* */)
+DECLARE_LOCK_GUARD_1_ATTRS(rwlock_init, __acquires(_T), __releases(*(rwlock_t **)_T))
+#define class_rwlock_init_constructor(_T)	WITH_LOCK_GUARD_1_ATTRS(rwlock_init, _T)
+
 #undef __LINUX_INSIDE_SPINLOCK_H
 #endif /* __LINUX_SPINLOCK_H */
 
@@ -34,8 +34,8 @@ unsigned long __lockfunc _raw_spin_lock_irqsave(raw_spinlock_t *lock)
 unsigned long __lockfunc
 _raw_spin_lock_irqsave_nested(raw_spinlock_t *lock, int subclass)
 								__acquires(lock);
-int __lockfunc _raw_spin_trylock(raw_spinlock_t *lock);
-int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock);
+int __lockfunc _raw_spin_trylock(raw_spinlock_t *lock) __cond_acquires(true, lock);
+int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock) __cond_acquires(true, lock);
 void __lockfunc _raw_spin_unlock(raw_spinlock_t *lock) __releases(lock);
 void __lockfunc _raw_spin_unlock_bh(raw_spinlock_t *lock) __releases(lock);
 void __lockfunc _raw_spin_unlock_irq(raw_spinlock_t *lock) __releases(lock);
@@ -84,6 +84,7 @@ _raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags)
 #endif
 
 static inline int __raw_spin_trylock(raw_spinlock_t *lock)
+	__cond_acquires(true, lock)
 {
 	preempt_disable();
 	if (do_raw_spin_trylock(lock)) {
@@ -94,6 +95,26 @@ static inline int __raw_spin_trylock(raw_spinlock_t *lock)
 	return 0;
 }
 
+static __always_inline bool _raw_spin_trylock_irq(raw_spinlock_t *lock)
+	__cond_acquires(true, lock)
+{
+	local_irq_disable();
+	if (_raw_spin_trylock(lock))
+		return true;
+	local_irq_enable();
+	return false;
+}
+
+static __always_inline bool _raw_spin_trylock_irqsave(raw_spinlock_t *lock, unsigned long *flags)
+	__cond_acquires(true, lock)
+{
+	local_irq_save(*flags);
+	if (_raw_spin_trylock(lock))
+		return true;
+	local_irq_restore(*flags);
+	return false;
+}
+
 /*
  * If lockdep is enabled then we use the non-preemption spin-ops
  * even on CONFIG_PREEMPTION, because lockdep assumes that interrupts are
@@ -102,6 +123,7 @@ static inline int __raw_spin_trylock(raw_spinlock_t *lock)
 #if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_DEBUG_LOCK_ALLOC)
 
 static inline unsigned long __raw_spin_lock_irqsave(raw_spinlock_t *lock)
+	__acquires(lock) __no_context_analysis
 {
 	unsigned long flags;
 
@@ -113,6 +135,7 @@ static inline unsigned long __raw_spin_lock_irqsave(raw_spinlock_t *lock)
 }
 
 static inline void __raw_spin_lock_irq(raw_spinlock_t *lock)
+	__acquires(lock) __no_context_analysis
 {
 	local_irq_disable();
 	preempt_disable();
@@ -121,6 +144,7 @@ static inline void __raw_spin_lock_irq(raw_spinlock_t *lock)
 }
 
 static inline void __raw_spin_lock_bh(raw_spinlock_t *lock)
+	__acquires(lock) __no_context_analysis
 {
 	__local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
 	spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
@@ -128,6 +152,7 @@ static inline void __raw_spin_lock_bh(raw_spinlock_t *lock)
 }
 
 static inline void __raw_spin_lock(raw_spinlock_t *lock)
+	__acquires(lock) __no_context_analysis
 {
 	preempt_disable();
 	spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
@@ -137,6 +162,7 @@ static inline void __raw_spin_lock(raw_spinlock_t *lock)
 #endif /* !CONFIG_GENERIC_LOCKBREAK || CONFIG_DEBUG_LOCK_ALLOC */
 
 static inline void __raw_spin_unlock(raw_spinlock_t *lock)
+	__releases(lock)
 {
 	spin_release(&lock->dep_map, _RET_IP_);
 	do_raw_spin_unlock(lock);
@@ -145,6 +171,7 @@ static inline void __raw_spin_unlock(raw_spinlock_t *lock)
 
 static inline void __raw_spin_unlock_irqrestore(raw_spinlock_t *lock,
 					    unsigned long flags)
+	__releases(lock)
 {
 	spin_release(&lock->dep_map, _RET_IP_);
 	do_raw_spin_unlock(lock);
@@ -153,6 +180,7 @@ static inline void __raw_spin_unlock_irqrestore(raw_spinlock_t *lock,
 }
 
 static inline void __raw_spin_unlock_irq(raw_spinlock_t *lock)
+	__releases(lock)
 {
 	spin_release(&lock->dep_map, _RET_IP_);
 	do_raw_spin_unlock(lock);
@@ -161,6 +189,7 @@ static inline void __raw_spin_unlock_irq(raw_spinlock_t *lock)
 }
 
 static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock)
+	__releases(lock)
 {
 	spin_release(&lock->dep_map, _RET_IP_);
 	do_raw_spin_unlock(lock);
@@ -168,6 +197,7 @@ static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock)
 }
 
 static inline int __raw_spin_trylock_bh(raw_spinlock_t *lock)
+	__cond_acquires(true, lock)
 {
 	__local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
 	if (do_raw_spin_trylock(lock)) {
@@ -24,68 +24,120 @@
  * flags straight, to suppress compiler warnings of unused lock
  * variables, and to add the proper checker annotations:
  */
-#define ___LOCK(lock) \
+#define ___LOCK_(lock) \
   do { __acquire(lock); (void)(lock); } while (0)
 
-#define __LOCK(lock) \
-  do { preempt_disable(); ___LOCK(lock); } while (0)
+#define ___LOCK_shared(lock) \
+  do { __acquire_shared(lock); (void)(lock); } while (0)
 
-#define __LOCK_BH(lock) \
-  do { __local_bh_disable_ip(_THIS_IP_, SOFTIRQ_LOCK_OFFSET); ___LOCK(lock); } while (0)
+#define __LOCK(lock, ...) \
+  do { preempt_disable(); ___LOCK_##__VA_ARGS__(lock); } while (0)
 
-#define __LOCK_IRQ(lock) \
-  do { local_irq_disable(); __LOCK(lock); } while (0)
+#define __LOCK_BH(lock, ...) \
+  do { __local_bh_disable_ip(_THIS_IP_, SOFTIRQ_LOCK_OFFSET); ___LOCK_##__VA_ARGS__(lock); } while (0)
 
-#define __LOCK_IRQSAVE(lock, flags) \
-  do { local_irq_save(flags); __LOCK(lock); } while (0)
+#define __LOCK_IRQ(lock, ...) \
+  do { local_irq_disable(); __LOCK(lock, ##__VA_ARGS__); } while (0)
+
+#define __LOCK_IRQSAVE(lock, flags, ...) \
+  do { local_irq_save(flags); __LOCK(lock, ##__VA_ARGS__); } while (0)
 
-#define ___UNLOCK(lock) \
+#define ___UNLOCK_(lock) \
   do { __release(lock); (void)(lock); } while (0)
 
-#define __UNLOCK(lock) \
-  do { preempt_enable(); ___UNLOCK(lock); } while (0)
+#define ___UNLOCK_shared(lock) \
+  do { __release_shared(lock); (void)(lock); } while (0)
+
+#define __UNLOCK(lock, ...) \
+  do { preempt_enable(); ___UNLOCK_##__VA_ARGS__(lock); } while (0)
 
-#define __UNLOCK_BH(lock) \
+#define __UNLOCK_BH(lock, ...) \
   do { __local_bh_enable_ip(_THIS_IP_, SOFTIRQ_LOCK_OFFSET); \
-       ___UNLOCK(lock); } while (0)
+       ___UNLOCK_##__VA_ARGS__(lock); } while (0)
 
-#define __UNLOCK_IRQ(lock) \
-  do { local_irq_enable(); __UNLOCK(lock); } while (0)
+#define __UNLOCK_IRQ(lock, ...) \
+  do { local_irq_enable(); __UNLOCK(lock, ##__VA_ARGS__); } while (0)
 
-#define __UNLOCK_IRQRESTORE(lock, flags) \
-  do { local_irq_restore(flags); __UNLOCK(lock); } while (0)
+#define __UNLOCK_IRQRESTORE(lock, flags, ...) \
+  do { local_irq_restore(flags); __UNLOCK(lock, ##__VA_ARGS__); } while (0)
 
 #define _raw_spin_lock(lock)			__LOCK(lock)
 #define _raw_spin_lock_nested(lock, subclass)	__LOCK(lock)
-#define _raw_read_lock(lock)			__LOCK(lock)
+#define _raw_read_lock(lock)			__LOCK(lock, shared)
 #define _raw_write_lock(lock)			__LOCK(lock)
 #define _raw_write_lock_nested(lock, subclass)	__LOCK(lock)
 #define _raw_spin_lock_bh(lock)			__LOCK_BH(lock)
-#define _raw_read_lock_bh(lock)			__LOCK_BH(lock)
+#define _raw_read_lock_bh(lock)			__LOCK_BH(lock, shared)
 #define _raw_write_lock_bh(lock)		__LOCK_BH(lock)
 #define _raw_spin_lock_irq(lock)		__LOCK_IRQ(lock)
-#define _raw_read_lock_irq(lock)		__LOCK_IRQ(lock)
+#define _raw_read_lock_irq(lock)		__LOCK_IRQ(lock, shared)
 #define _raw_write_lock_irq(lock)		__LOCK_IRQ(lock)
 #define _raw_spin_lock_irqsave(lock, flags)	__LOCK_IRQSAVE(lock, flags)
-#define _raw_read_lock_irqsave(lock, flags)	__LOCK_IRQSAVE(lock, flags)
+#define _raw_read_lock_irqsave(lock, flags)	__LOCK_IRQSAVE(lock, flags, shared)
 #define _raw_write_lock_irqsave(lock, flags)	__LOCK_IRQSAVE(lock, flags)
-#define _raw_spin_trylock(lock)			({ __LOCK(lock); 1; })
-#define _raw_read_trylock(lock)			({ __LOCK(lock); 1; })
-#define _raw_write_trylock(lock)		({ __LOCK(lock); 1; })
-#define _raw_spin_trylock_bh(lock)		({ __LOCK_BH(lock); 1; })
+
+static __always_inline int _raw_spin_trylock(raw_spinlock_t *lock)
+	__cond_acquires(true, lock)
+{
+	__LOCK(lock);
+	return 1;
+}
+
+static __always_inline int _raw_spin_trylock_bh(raw_spinlock_t *lock)
+	__cond_acquires(true, lock)
+{
+	__LOCK_BH(lock);
+	return 1;
+}
+
+static __always_inline int _raw_spin_trylock_irq(raw_spinlock_t *lock)
+	__cond_acquires(true, lock)
+{
+	__LOCK_IRQ(lock);
+	return 1;
+}
+
+static __always_inline int _raw_spin_trylock_irqsave(raw_spinlock_t *lock, unsigned long *flags)
+	__cond_acquires(true, lock)
+{
+	__LOCK_IRQSAVE(lock, *(flags));
+	return 1;
+}
+
+static __always_inline int _raw_read_trylock(rwlock_t *lock)
+	__cond_acquires_shared(true, lock)
+{
+	__LOCK(lock, shared);
+	return 1;
+}
+
+static __always_inline int _raw_write_trylock(rwlock_t *lock)
+	__cond_acquires(true, lock)
+{
+	__LOCK(lock);
+	return 1;
+}
+
+static __always_inline int _raw_write_trylock_irqsave(rwlock_t *lock, unsigned long *flags)
+	__cond_acquires(true, lock)
+{
+	__LOCK_IRQSAVE(lock, *(flags));
+	return 1;
+}
+
 #define _raw_spin_unlock(lock)			__UNLOCK(lock)
-#define _raw_read_unlock(lock)			__UNLOCK(lock)
+#define _raw_read_unlock(lock)			__UNLOCK(lock, shared)
 #define _raw_write_unlock(lock)			__UNLOCK(lock)
 #define _raw_spin_unlock_bh(lock)		__UNLOCK_BH(lock)
 #define _raw_write_unlock_bh(lock)		__UNLOCK_BH(lock)
-#define _raw_read_unlock_bh(lock)		__UNLOCK_BH(lock)
+#define _raw_read_unlock_bh(lock)		__UNLOCK_BH(lock, shared)
 #define _raw_spin_unlock_irq(lock)		__UNLOCK_IRQ(lock)
-#define _raw_read_unlock_irq(lock)		__UNLOCK_IRQ(lock)
+#define _raw_read_unlock_irq(lock)		__UNLOCK_IRQ(lock, shared)
 #define _raw_write_unlock_irq(lock)		__UNLOCK_IRQ(lock)
 #define _raw_spin_unlock_irqrestore(lock, flags) \
 					__UNLOCK_IRQRESTORE(lock, flags)
 #define _raw_read_unlock_irqrestore(lock, flags) \
-					__UNLOCK_IRQRESTORE(lock, flags)
+					__UNLOCK_IRQRESTORE(lock, flags, shared)
 #define _raw_write_unlock_irqrestore(lock, flags) \
 					__UNLOCK_IRQRESTORE(lock, flags)
 
@@ -36,10 +36,11 @@ extern void rt_spin_lock_nested(spinlock_t *lock, int subclass) __acquires(lock)
 extern void rt_spin_lock_nest_lock(spinlock_t *lock, struct lockdep_map *nest_lock) __acquires(lock);
 extern void rt_spin_unlock(spinlock_t *lock) __releases(lock);
 extern void rt_spin_lock_unlock(spinlock_t *lock);
-extern int rt_spin_trylock_bh(spinlock_t *lock);
-extern int rt_spin_trylock(spinlock_t *lock);
+extern int rt_spin_trylock_bh(spinlock_t *lock) __cond_acquires(true, lock);
+extern int rt_spin_trylock(spinlock_t *lock) __cond_acquires(true, lock);
 
 static __always_inline void spin_lock(spinlock_t *lock)
+	__acquires(lock)
 {
 	rt_spin_lock(lock);
 }
@@ -82,6 +83,7 @@ static __always_inline void spin_lock(spinlock_t *lock)
 	__spin_lock_irqsave_nested(lock, flags, subclass)
 
 static __always_inline void spin_lock_bh(spinlock_t *lock)
+	__acquires(lock)
 {
 	/* Investigate: Drop bh when blocking ? */
 	local_bh_disable();
@@ -89,6 +91,7 @@ static __always_inline void spin_lock_bh(spinlock_t *lock)
 }
 
 static __always_inline void spin_lock_irq(spinlock_t *lock)
+	__acquires(lock)
 {
 	rt_spin_lock(lock);
 }
@@ -101,45 +104,44 @@ static __always_inline void spin_lock_irq(spinlock_t *lock)
 } while (0)
 
 static __always_inline void spin_unlock(spinlock_t *lock)
+	__releases(lock)
 {
 	rt_spin_unlock(lock);
 }
 
 static __always_inline void spin_unlock_bh(spinlock_t *lock)
+	__releases(lock)
 {
 	rt_spin_unlock(lock);
 	local_bh_enable();
 }
 
 static __always_inline void spin_unlock_irq(spinlock_t *lock)
+	__releases(lock)
 {
 	rt_spin_unlock(lock);
 }
 
 static __always_inline void spin_unlock_irqrestore(spinlock_t *lock,
 						   unsigned long flags)
+	__releases(lock)
 {
 	rt_spin_unlock(lock);
 }
 
-#define spin_trylock(lock)			\
-	__cond_lock(lock, rt_spin_trylock(lock))
+#define spin_trylock(lock)		rt_spin_trylock(lock)
 
-#define spin_trylock_bh(lock)			\
-	__cond_lock(lock, rt_spin_trylock_bh(lock))
+#define spin_trylock_bh(lock)		rt_spin_trylock_bh(lock)
 
-#define spin_trylock_irq(lock)			\
-	__cond_lock(lock, rt_spin_trylock(lock))
+#define spin_trylock_irq(lock)		rt_spin_trylock(lock)
 
-#define spin_trylock_irqsave(lock, flags)	\
-({						\
-	int __locked;				\
-						\
-	typecheck(unsigned long, flags);	\
-	flags = 0;				\
-	__locked = spin_trylock(lock);		\
-	__locked;				\
-})
+static __always_inline bool _spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags)
+	__cond_acquires(true, lock)
+{
+	*flags = 0;
+	return rt_spin_trylock(lock);
+}
+#define spin_trylock_irqsave(lock, flags)	_spin_trylock_irqsave(lock, &(flags))
 
 #define spin_is_contended(lock)		(((void)(lock), 0))
 
|||||||
--- a/include/linux/spinlock_types.h
+++ b/include/linux/spinlock_types.h
@@ -14,7 +14,7 @@
 #ifndef CONFIG_PREEMPT_RT
 
 /* Non PREEMPT_RT kernels map spinlock to raw_spinlock */
-typedef struct spinlock {
+context_lock_struct(spinlock) {
 	union {
 		struct raw_spinlock rlock;
 
@@ -26,7 +26,8 @@ typedef struct spinlock {
 		};
 #endif
 	};
-} spinlock_t;
+};
+typedef struct spinlock spinlock_t;
 
 #define ___SPIN_LOCK_INITIALIZER(lockname)	\
 {						\
@@ -47,12 +48,13 @@ typedef struct spinlock {
 /* PREEMPT_RT kernels map spinlock to rt_mutex */
 #include <linux/rtmutex.h>
 
-typedef struct spinlock {
+context_lock_struct(spinlock) {
 	struct rt_mutex_base	lock;
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 	struct lockdep_map	dep_map;
 #endif
-} spinlock_t;
+};
+typedef struct spinlock spinlock_t;
 
 #define __SPIN_LOCK_UNLOCKED(name)	\
 {					\
--- a/include/linux/spinlock_types_raw.h
+++ b/include/linux/spinlock_types_raw.h
@@ -11,7 +11,7 @@
 
 #include <linux/lockdep_types.h>
 
-typedef struct raw_spinlock {
+context_lock_struct(raw_spinlock) {
 	arch_spinlock_t raw_lock;
 #ifdef CONFIG_DEBUG_SPINLOCK
 	unsigned int magic, owner_cpu;
@@ -20,7 +20,8 @@ typedef struct raw_spinlock {
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 	struct lockdep_map dep_map;
 #endif
-} raw_spinlock_t;
+};
+typedef struct raw_spinlock raw_spinlock_t;
 
 #define SPINLOCK_MAGIC		0xdead4ead
 
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -21,7 +21,7 @@
 #include <linux/workqueue.h>
 #include <linux/rcu_segcblist.h>
 
-struct srcu_struct;
+context_lock_struct(srcu_struct, __reentrant_ctx_lock);
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 
@@ -77,7 +77,7 @@ int init_srcu_struct_fast_updown(struct srcu_struct *ssp);
 #define SRCU_READ_FLAVOR_SLOWGP	(SRCU_READ_FLAVOR_FAST | SRCU_READ_FLAVOR_FAST_UPDOWN)
 					// Flavors requiring synchronize_rcu()
 					// instead of smp_mb().
-void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp);
+void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases_shared(ssp);
 
 #ifdef CONFIG_TINY_SRCU
 #include <linux/srcutiny.h>
@@ -131,14 +131,16 @@ static inline bool same_state_synchronize_srcu(unsigned long oldstate1, unsigned
 }
 
 #ifdef CONFIG_NEED_SRCU_NMI_SAFE
-int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp);
-void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases(ssp);
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires_shared(ssp);
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases_shared(ssp);
 #else
 static inline int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	return __srcu_read_lock(ssp);
 }
 static inline void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+	__releases_shared(ssp)
 {
 	__srcu_read_unlock(ssp, idx);
 }
@@ -210,6 +212,14 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
 
 #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 
+/*
+ * No-op helper to denote that ssp must be held. Because SRCU-protected pointers
+ * should still be marked with __rcu_guarded, and we do not want to mark them
+ * with __guarded_by(ssp) as it would complicate annotations for writers, we
+ * choose the following strategy: srcu_dereference_check() calls this helper
+ * that checks that the passed ssp is held, and then fake-acquires 'RCU'.
+ */
+static inline void __srcu_read_lock_must_hold(const struct srcu_struct *ssp) __must_hold_shared(ssp) { }
+
 /**
  * srcu_dereference_check - fetch SRCU-protected pointer for later dereferencing
@@ -223,9 +233,15 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
  * to 1. The @c argument will normally be a logical expression containing
  * lockdep_is_held() calls.
  */
 #define srcu_dereference_check(p, ssp, c) \
-	__rcu_dereference_check((p), __UNIQUE_ID(rcu), \
-				(c) || srcu_read_lock_held(ssp), __rcu)
+({ \
+	__srcu_read_lock_must_hold(ssp); \
+	__acquire_shared_ctx_lock(RCU); \
+	__auto_type __v = __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
+				(c) || srcu_read_lock_held(ssp), __rcu); \
+	__release_shared_ctx_lock(RCU); \
+	__v; \
+})
 
 /**
  * srcu_dereference - fetch SRCU-protected pointer for later dereferencing
@@ -268,7 +284,8 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
  * invoke srcu_read_unlock() from one task and the matching srcu_read_lock()
  * from another.
  */
-static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)
+static inline int srcu_read_lock(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	int retval;
 
@@ -304,7 +321,8 @@ static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)
  * contexts where RCU is watching, that is, from contexts where it would
  * be legal to invoke rcu_read_lock(). Otherwise, lockdep will complain.
  */
-static inline struct srcu_ctr __percpu *srcu_read_lock_fast(struct srcu_struct *ssp) __acquires(ssp)
+static inline struct srcu_ctr __percpu *srcu_read_lock_fast(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	struct srcu_ctr __percpu *retval;
 
@@ -344,7 +362,7 @@ static inline struct srcu_ctr __percpu *srcu_read_lock_fast(struct srcu_struct *
  * complain.
  */
 static inline struct srcu_ctr __percpu *srcu_read_lock_fast_updown(struct srcu_struct *ssp)
-__acquires(ssp)
+	__acquires_shared(ssp)
 {
 	struct srcu_ctr __percpu *retval;
 
@@ -360,7 +378,7 @@ __acquires(ssp)
  * See srcu_read_lock_fast() for more information.
  */
 static inline struct srcu_ctr __percpu *srcu_read_lock_fast_notrace(struct srcu_struct *ssp)
-__acquires(ssp)
+	__acquires_shared(ssp)
 {
 	struct srcu_ctr __percpu *retval;
 
@@ -381,7 +399,7 @@ static inline struct srcu_ctr __percpu *srcu_read_lock_fast_notrace(struct srcu_
  * and srcu_read_lock_fast(). However, the same definition/initialization
  * requirements called out for srcu_read_lock_safe() apply.
  */
-static inline struct srcu_ctr __percpu *srcu_down_read_fast(struct srcu_struct *ssp) __acquires(ssp)
+static inline struct srcu_ctr __percpu *srcu_down_read_fast(struct srcu_struct *ssp) __acquires_shared(ssp)
 {
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_PROVE_RCU) && in_nmi());
 	RCU_LOCKDEP_WARN(!rcu_is_watching(), "RCU must be watching srcu_down_read_fast().");
@@ -400,7 +418,8 @@ static inline struct srcu_ctr __percpu *srcu_down_read_fast(struct srcu_struct *
  * then none of the other flavors may be used, whether before, during,
  * or after.
  */
-static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp)
+static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	int retval;
 
@@ -412,7 +431,8 @@ static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp
 
 /* Used by tracing, cannot be traced and cannot invoke lockdep. */
 static inline notrace int
-srcu_read_lock_notrace(struct srcu_struct *ssp) __acquires(ssp)
+srcu_read_lock_notrace(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	int retval;
 
@@ -443,7 +463,8 @@ srcu_read_lock_notrace(struct srcu_struct *ssp) __acquires(ssp)
 * which calls to down_read() may be nested. The same srcu_struct may be
 * used concurrently by srcu_down_read() and srcu_read_lock().
 */
-static inline int srcu_down_read(struct srcu_struct *ssp) __acquires(ssp)
+static inline int srcu_down_read(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	WARN_ON_ONCE(in_nmi());
 	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL);
@@ -458,7 +479,7 @@ static inline int srcu_down_read(struct srcu_struct *ssp) __acquires(ssp)
 * Exit an SRCU read-side critical section.
 */
 static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx)
-	__releases(ssp)
+	__releases_shared(ssp)
 {
 	WARN_ON_ONCE(idx & ~0x1);
 	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL);
@@ -474,7 +495,7 @@ static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx)
 * Exit a light-weight SRCU read-side critical section.
 */
 static inline void srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
-	__releases(ssp)
+	__releases_shared(ssp)
 {
 	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST);
 	srcu_lock_release(&ssp->dep_map);
@@ -490,7 +511,7 @@ static inline void srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ct
 * Exit an SRCU-fast-updown read-side critical section.
 */
 static inline void
-srcu_read_unlock_fast_updown(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp) __releases(ssp)
+srcu_read_unlock_fast_updown(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp) __releases_shared(ssp)
 {
 	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST_UPDOWN);
 	srcu_lock_release(&ssp->dep_map);
@@ -504,7 +525,7 @@ srcu_read_unlock_fast_updown(struct srcu_struct *ssp, struct srcu_ctr __percpu *
 * See srcu_read_unlock_fast() for more information.
 */
 static inline void srcu_read_unlock_fast_notrace(struct srcu_struct *ssp,
-						 struct srcu_ctr __percpu *scp) __releases(ssp)
+						 struct srcu_ctr __percpu *scp) __releases_shared(ssp)
 {
 	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST);
 	__srcu_read_unlock_fast(ssp, scp);
@@ -519,7 +540,7 @@ static inline void srcu_read_unlock_fast_notrace(struct srcu_struct *ssp,
 * the same context as the maching srcu_down_read_fast().
 */
 static inline void srcu_up_read_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
-	__releases(ssp)
+	__releases_shared(ssp)
 {
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_PROVE_RCU) && in_nmi());
 	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST_UPDOWN);
@@ -535,7 +556,7 @@ static inline void srcu_up_read_fast(struct srcu_struct *ssp, struct srcu_ctr __
 * Exit an SRCU read-side critical section, but in an NMI-safe manner.
 */
 static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
-	__releases(ssp)
+	__releases_shared(ssp)
 {
 	WARN_ON_ONCE(idx & ~0x1);
 	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NMI);
@@ -545,7 +566,7 @@ static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
 
 /* Used by tracing, cannot be traced and cannot call lockdep. */
 static inline notrace void
-srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases(ssp)
+srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases_shared(ssp)
 {
 	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL);
 	__srcu_read_unlock(ssp, idx);
@@ -560,7 +581,7 @@ srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases(ssp)
 * the same context as the maching srcu_down_read().
 */
 static inline void srcu_up_read(struct srcu_struct *ssp, int idx)
-	__releases(ssp)
+	__releases_shared(ssp)
 {
 	WARN_ON_ONCE(idx & ~0x1);
 	WARN_ON_ONCE(in_nmi());
@@ -600,15 +621,21 @@ DEFINE_LOCK_GUARD_1(srcu, struct srcu_struct,
 		    _T->idx = srcu_read_lock(_T->lock),
 		    srcu_read_unlock(_T->lock, _T->idx),
 		    int idx)
+DECLARE_LOCK_GUARD_1_ATTRS(srcu, __acquires_shared(_T), __releases_shared(*(struct srcu_struct **)_T))
+#define class_srcu_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(srcu, _T)
 
 DEFINE_LOCK_GUARD_1(srcu_fast, struct srcu_struct,
 		    _T->scp = srcu_read_lock_fast(_T->lock),
 		    srcu_read_unlock_fast(_T->lock, _T->scp),
 		    struct srcu_ctr __percpu *scp)
+DECLARE_LOCK_GUARD_1_ATTRS(srcu_fast, __acquires_shared(_T), __releases_shared(*(struct srcu_struct **)_T))
+#define class_srcu_fast_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(srcu_fast, _T)
 
 DEFINE_LOCK_GUARD_1(srcu_fast_notrace, struct srcu_struct,
 		    _T->scp = srcu_read_lock_fast_notrace(_T->lock),
 		    srcu_read_unlock_fast_notrace(_T->lock, _T->scp),
 		    struct srcu_ctr __percpu *scp)
+DECLARE_LOCK_GUARD_1_ATTRS(srcu_fast_notrace, __acquires_shared(_T), __releases_shared(*(struct srcu_struct **)_T))
+#define class_srcu_fast_notrace_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(srcu_fast_notrace, _T)
 
 #endif
--- a/include/linux/srcutiny.h
+++ b/include/linux/srcutiny.h
@@ -73,6 +73,7 @@ void synchronize_srcu(struct srcu_struct *ssp);
 * index that must be passed to the matching srcu_read_unlock().
 */
 static inline int __srcu_read_lock(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	int idx;
 
@@ -80,6 +81,7 @@ static inline int __srcu_read_lock(struct srcu_struct *ssp)
 	idx = ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1;
 	WRITE_ONCE(ssp->srcu_lock_nesting[idx], READ_ONCE(ssp->srcu_lock_nesting[idx]) + 1);
 	preempt_enable();
+	__acquire_shared(ssp);
 	return idx;
 }
 
@@ -96,22 +98,26 @@ static inline struct srcu_ctr __percpu *__srcu_ctr_to_ptr(struct srcu_struct *ss
 }
 
 static inline struct srcu_ctr __percpu *__srcu_read_lock_fast(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	return __srcu_ctr_to_ptr(ssp, __srcu_read_lock(ssp));
 }
 
 static inline void __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
+	__releases_shared(ssp)
 {
 	__srcu_read_unlock(ssp, __srcu_ptr_to_ctr(ssp, scp));
 }
 
 static inline struct srcu_ctr __percpu *__srcu_read_lock_fast_updown(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	return __srcu_ctr_to_ptr(ssp, __srcu_read_lock(ssp));
 }
 
 static inline
 void __srcu_read_unlock_fast_updown(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
+	__releases_shared(ssp)
 {
 	__srcu_read_unlock(ssp, __srcu_ptr_to_ctr(ssp, scp));
 }
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -233,7 +233,7 @@ struct srcu_struct {
 #define DEFINE_STATIC_SRCU_FAST_UPDOWN(name) \
 	__DEFINE_SRCU(name, SRCU_READ_FLAVOR_FAST_UPDOWN, static)
 
-int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp);
+int __srcu_read_lock(struct srcu_struct *ssp) __acquires_shared(ssp);
 void synchronize_srcu_expedited(struct srcu_struct *ssp);
 void srcu_barrier(struct srcu_struct *ssp);
 void srcu_expedite_current(struct srcu_struct *ssp);
@@ -286,6 +286,7 @@ static inline struct srcu_ctr __percpu *__srcu_ctr_to_ptr(struct srcu_struct *ss
 * implementations of this_cpu_inc().
 */
 static inline struct srcu_ctr __percpu notrace *__srcu_read_lock_fast(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp);
 
@@ -294,6 +295,7 @@ static inline struct srcu_ctr __percpu notrace *__srcu_read_lock_fast(struct src
 	else
 		atomic_long_inc(raw_cpu_ptr(&scp->srcu_locks)); // Y, and implicit RCU reader.
 	barrier(); /* Avoid leaking the critical section. */
+	__acquire_shared(ssp);
 	return scp;
 }
 
@@ -308,7 +310,9 @@ static inline struct srcu_ctr __percpu notrace *__srcu_read_lock_fast(struct src
 */
 static inline void notrace
 __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
+	__releases_shared(ssp)
 {
+	__release_shared(ssp);
 	barrier(); /* Avoid leaking the critical section. */
 	if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
 		this_cpu_inc(scp->srcu_unlocks.counter); // Z, and implicit RCU reader.
@@ -326,6 +330,7 @@ __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
 */
 static inline
 struct srcu_ctr __percpu notrace *__srcu_read_lock_fast_updown(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp);
 
@@ -334,6 +339,7 @@ struct srcu_ctr __percpu notrace *__srcu_read_lock_fast_updown(struct srcu_struc
 	else
 		atomic_long_inc(raw_cpu_ptr(&scp->srcu_locks)); // Y, and implicit RCU reader.
 	barrier(); /* Avoid leaking the critical section. */
+	__acquire_shared(ssp);
 	return scp;
 }
 
@@ -348,7 +354,9 @@ struct srcu_ctr __percpu notrace *__srcu_read_lock_fast_updown(struct srcu_struc
 */
 static inline void notrace
 __srcu_read_unlock_fast_updown(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
+	__releases_shared(ssp)
 {
+	__release_shared(ssp);
 	barrier(); /* Avoid leaking the critical section. */
 	if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
 		this_cpu_inc(scp->srcu_unlocks.counter); // Z, and implicit RCU reader.
@@ -44,7 +44,7 @@ struct ww_class {
|
|||||||
unsigned int is_wait_die;
|
unsigned int is_wait_die;
|
||||||
};
|
};
|
||||||
|
|
||||||
struct ww_mutex {
|
context_lock_struct(ww_mutex) {
|
||||||
struct WW_MUTEX_BASE base;
|
struct WW_MUTEX_BASE base;
|
||||||
struct ww_acquire_ctx *ctx;
|
struct ww_acquire_ctx *ctx;
|
||||||
#ifdef DEBUG_WW_MUTEXES
|
#ifdef DEBUG_WW_MUTEXES
|
||||||
@@ -52,7 +52,7 @@ struct ww_mutex {
|
|||||||
#endif
|
#endif
|
||||||
};
|
};
|
||||||
|
|
||||||
struct ww_acquire_ctx {
|
context_lock_struct(ww_acquire_ctx) {
|
||||||
struct task_struct *task;
|
struct task_struct *task;
|
||||||
unsigned long stamp;
|
unsigned long stamp;
|
||||||
unsigned int acquired;
|
unsigned int acquired;
|
||||||
@@ -141,6 +141,7 @@ static inline void ww_mutex_init(struct ww_mutex *lock,
|
|||||||
*/
|
*/
|
||||||
static inline void ww_acquire_init(struct ww_acquire_ctx *ctx,
|
static inline void ww_acquire_init(struct ww_acquire_ctx *ctx,
|
||||||
struct ww_class *ww_class)
|
struct ww_class *ww_class)
|
||||||
|
__acquires(ctx) __no_context_analysis
|
||||||
{
|
{
|
||||||
ctx->task = current;
|
ctx->task = current;
|
||||||
ctx->stamp = atomic_long_inc_return_relaxed(&ww_class->stamp);
|
ctx->stamp = atomic_long_inc_return_relaxed(&ww_class->stamp);
|
||||||
@@ -179,6 +180,7 @@ static inline void ww_acquire_init(struct ww_acquire_ctx *ctx,
|
|||||||
* data structures.
|
* data structures.
|
||||||
*/
|
*/
|
||||||
static inline void ww_acquire_done(struct ww_acquire_ctx *ctx)
|
static inline void ww_acquire_done(struct ww_acquire_ctx *ctx)
|
||||||
|
__releases(ctx) __acquires_shared(ctx) __no_context_analysis
|
||||||
{
|
{
|
||||||
#ifdef DEBUG_WW_MUTEXES
|
#ifdef DEBUG_WW_MUTEXES
|
||||||
lockdep_assert_held(ctx);
|
lockdep_assert_held(ctx);
|
||||||
@@ -196,6 +198,7 @@ static inline void ww_acquire_done(struct ww_acquire_ctx *ctx)
|
|||||||
* mutexes have been released with ww_mutex_unlock.
|
* mutexes have been released with ww_mutex_unlock.
|
||||||
*/
|
*/
|
||||||
static inline void ww_acquire_fini(struct ww_acquire_ctx *ctx)
|
static inline void ww_acquire_fini(struct ww_acquire_ctx *ctx)
|
||||||
|
__releases_shared(ctx) __no_context_analysis
|
||||||
{
|
{
|
||||||
#ifdef CONFIG_DEBUG_LOCK_ALLOC
|
#ifdef CONFIG_DEBUG_LOCK_ALLOC
|
||||||
mutex_release(&ctx->first_lock_dep_map, _THIS_IP_);
|
mutex_release(&ctx->first_lock_dep_map, _THIS_IP_);
|
||||||
@@ -245,7 +248,8 @@ static inline void ww_acquire_fini(struct ww_acquire_ctx *ctx)
|
|||||||
*
|
*
|
||||||
* A mutex acquired with this function must be released with ww_mutex_unlock.
|
* A mutex acquired with this function must be released with ww_mutex_unlock.
|
||||||
*/
|
*/
|
||||||
extern int /* __must_check */ ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx);
|
extern int /* __must_check */ ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
|
||||||
|
__cond_acquires(0, lock) __must_hold(ctx);
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* ww_mutex_lock_interruptible - acquire the w/w mutex, interruptible
|
* ww_mutex_lock_interruptible - acquire the w/w mutex, interruptible
|
||||||
@@ -278,7 +282,8 @@ extern int /* __must_check */ ww_mutex_lock(struct ww_mutex *lock, struct ww_acq
  * A mutex acquired with this function must be released with ww_mutex_unlock.
  */
 extern int __must_check ww_mutex_lock_interruptible(struct ww_mutex *lock,
-						    struct ww_acquire_ctx *ctx);
+						    struct ww_acquire_ctx *ctx)
+	__cond_acquires(0, lock) __must_hold(ctx);
 
 /**
  * ww_mutex_lock_slow - slowpath acquiring of the w/w mutex
@@ -305,6 +310,7 @@ extern int __must_check ww_mutex_lock_interruptible(struct ww_mutex *lock,
  */
 static inline void
 ww_mutex_lock_slow(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+	__acquires(lock) __must_hold(ctx) __no_context_analysis
 {
 	int ret;
 #ifdef DEBUG_WW_MUTEXES
@@ -342,6 +348,7 @@ ww_mutex_lock_slow(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 static inline int __must_check
 ww_mutex_lock_slow_interruptible(struct ww_mutex *lock,
 				 struct ww_acquire_ctx *ctx)
+	__cond_acquires(0, lock) __must_hold(ctx)
 {
 #ifdef DEBUG_WW_MUTEXES
 	DEBUG_LOCKS_WARN_ON(!ctx->contending_lock);
@@ -349,10 +356,11 @@ ww_mutex_lock_slow_interruptible(struct ww_mutex *lock,
 	return ww_mutex_lock_interruptible(lock, ctx);
 }
 
-extern void ww_mutex_unlock(struct ww_mutex *lock);
+extern void ww_mutex_unlock(struct ww_mutex *lock) __releases(lock);
 
 extern int __must_check ww_mutex_trylock(struct ww_mutex *lock,
-					 struct ww_acquire_ctx *ctx);
+					 struct ww_acquire_ctx *ctx)
+	__cond_acquires(true, lock) __must_hold(ctx);
 
 /***
  * ww_mutex_destroy - mark a w/w mutex unusable
@@ -363,6 +371,7 @@ extern int __must_check ww_mutex_trylock(struct ww_mutex *lock,
  * this function is called.
  */
 static inline void ww_mutex_destroy(struct ww_mutex *lock)
+	__must_not_hold(lock)
 {
 #ifndef CONFIG_PREEMPT_RT
 	mutex_destroy(&lock->base);
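The annotations in the hunk above (`__must_hold()`, `__cond_acquires()`, `__guarded_by()`) build on Clang's capability analysis. A minimal user-space sketch of the same idea, assuming Clang's thread-safety attributes (`guarded_by`, `requires_capability`) and defining the macros as no-ops on other compilers — this is an illustration, not the kernel's actual header:

```c
/*
 * Sketch: map kernel-style context annotations onto Clang's
 * thread-safety (capability) attributes; no-ops elsewhere.
 */
#include <pthread.h>

#if defined(__clang__)
#define __guarded_by(l)	__attribute__((guarded_by(l)))
#define __must_hold(l)	__attribute__((requires_capability(l)))
#else
#define __guarded_by(l)
#define __must_hold(l)
#endif

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int counter __guarded_by(&lock);

/* Callers must already hold 'lock'; the analyzer checks this statically. */
static int counter_inc_locked(void) __must_hold(&lock);

static int counter_inc_locked(void)
{
	return ++counter;	/* guarded access, legal while lock is held */
}

int counter_inc(void)
{
	int v;

	pthread_mutex_lock(&lock);
	v = counter_inc_locked();	/* OK: lock held */
	pthread_mutex_unlock(&lock);
	return v;
}
```

Compiled with `clang -Wthread-safety`, calling `counter_inc_locked()` without the lock (or reading `counter` unlocked) produces a warning at build time, with no runtime cost.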
@@ -43,6 +43,8 @@ KASAN_SANITIZE_kcov.o := n
 KCSAN_SANITIZE_kcov.o := n
 UBSAN_SANITIZE_kcov.o := n
 KMSAN_SANITIZE_kcov.o := n
+
+CONTEXT_ANALYSIS_kcov.o := y
 CFLAGS_kcov.o := $(call cc-option, -fno-conserve-stack) -fno-stack-protector
 
 obj-y += sched/
@@ -55,13 +55,13 @@ struct kcov {
 	refcount_t		refcount;
 	/* The lock protects mode, size, area and t. */
 	spinlock_t		lock;
-	enum kcov_mode		mode;
+	enum kcov_mode		mode __guarded_by(&lock);
 	/* Size of arena (in long's). */
-	unsigned int		size;
+	unsigned int		size __guarded_by(&lock);
 	/* Coverage buffer shared with user space. */
-	void			*area;
+	void			*area __guarded_by(&lock);
 	/* Task for which we collect coverage, or NULL. */
-	struct task_struct	*t;
+	struct task_struct	*t __guarded_by(&lock);
 	/* Collecting coverage from remote (background) threads. */
 	bool			remote;
 	/* Size of remote area (in long's). */
@@ -391,6 +391,7 @@ void kcov_task_init(struct task_struct *t)
 }
 
 static void kcov_reset(struct kcov *kcov)
+	__must_hold(&kcov->lock)
 {
 	kcov->t = NULL;
 	kcov->mode = KCOV_MODE_INIT;
@@ -400,6 +401,7 @@ static void kcov_reset(struct kcov *kcov)
 }
 
 static void kcov_remote_reset(struct kcov *kcov)
+	__must_hold(&kcov->lock)
 {
 	int bkt;
 	struct kcov_remote *remote;
@@ -419,6 +421,7 @@ static void kcov_remote_reset(struct kcov *kcov)
 }
 
 static void kcov_disable(struct task_struct *t, struct kcov *kcov)
+	__must_hold(&kcov->lock)
 {
 	kcov_task_reset(t);
 	if (kcov->remote)
@@ -435,8 +438,11 @@ static void kcov_get(struct kcov *kcov)
 static void kcov_put(struct kcov *kcov)
 {
 	if (refcount_dec_and_test(&kcov->refcount)) {
-		kcov_remote_reset(kcov);
-		vfree(kcov->area);
+		/* Context-safety: no references left, object being destroyed. */
+		context_unsafe(
+			kcov_remote_reset(kcov);
+			vfree(kcov->area);
+		);
 		kfree(kcov);
 	}
 }
@@ -491,6 +497,7 @@ static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
 	unsigned long size, off;
 	struct page *page;
 	unsigned long flags;
+	void *area;
 
 	spin_lock_irqsave(&kcov->lock, flags);
 	size = kcov->size * sizeof(unsigned long);
@@ -499,10 +506,11 @@ static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
 		res = -EINVAL;
 		goto exit;
 	}
+	area = kcov->area;
 	spin_unlock_irqrestore(&kcov->lock, flags);
 	vm_flags_set(vma, VM_DONTEXPAND);
 	for (off = 0; off < size; off += PAGE_SIZE) {
-		page = vmalloc_to_page(kcov->area + off);
+		page = vmalloc_to_page(area + off);
 		res = vm_insert_page(vma, vma->vm_start + off, page);
 		if (res) {
 			pr_warn_once("kcov: vm_insert_page() failed\n");
@@ -522,10 +530,10 @@ static int kcov_open(struct inode *inode, struct file *filep)
 	kcov = kzalloc(sizeof(*kcov), GFP_KERNEL);
 	if (!kcov)
 		return -ENOMEM;
+	guard(spinlock_init)(&kcov->lock);
 	kcov->mode = KCOV_MODE_DISABLED;
 	kcov->sequence = 1;
 	refcount_set(&kcov->refcount, 1);
-	spin_lock_init(&kcov->lock);
 	filep->private_data = kcov;
 	return nonseekable_open(inode, filep);
 }
@@ -556,6 +564,7 @@ static int kcov_get_mode(unsigned long arg)
  * vmalloc fault handling path is instrumented.
  */
 static void kcov_fault_in_area(struct kcov *kcov)
+	__must_hold(&kcov->lock)
 {
 	unsigned long stride = PAGE_SIZE / sizeof(unsigned long);
 	unsigned long *area = kcov->area;
@@ -584,6 +593,7 @@ static inline bool kcov_check_handle(u64 handle, bool common_valid,
 
 static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
 			     unsigned long arg)
+	__must_hold(&kcov->lock)
 {
 	struct task_struct *t;
 	unsigned long flags, unused;
@@ -814,6 +824,7 @@ static inline bool kcov_mode_enabled(unsigned int mode)
 }
 
 static void kcov_remote_softirq_start(struct task_struct *t)
+	__must_hold(&kcov_percpu_data.lock)
 {
 	struct kcov_percpu_data *data = this_cpu_ptr(&kcov_percpu_data);
 	unsigned int mode;
@@ -831,6 +842,7 @@ static void kcov_remote_softirq_start(struct task_struct *t)
 }
 
 static void kcov_remote_softirq_stop(struct task_struct *t)
+	__must_hold(&kcov_percpu_data.lock)
 {
 	struct kcov_percpu_data *data = this_cpu_ptr(&kcov_percpu_data);
 
@@ -896,10 +908,12 @@ void kcov_remote_start(u64 handle)
 	/* Put in kcov_remote_stop(). */
 	kcov_get(kcov);
 	/*
-	 * Read kcov fields before unlock to prevent races with
-	 * KCOV_DISABLE / kcov_remote_reset().
+	 * Read kcov fields before unlocking kcov_remote_lock to prevent races
+	 * with KCOV_DISABLE and kcov_remote_reset(); cannot acquire kcov->lock
+	 * here, because it might lead to deadlock given kcov_remote_lock is
+	 * acquired _after_ kcov->lock elsewhere.
 	 */
-	mode = kcov->mode;
+	mode = context_unsafe(kcov->mode);
 	sequence = kcov->sequence;
 	if (in_task()) {
 		size = kcov->remote_size;
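The `kcov_mmap()` change above shows the standard way to use a `__guarded_by()` field outside its lock: copy it into a local while the lock is held, then use the local after unlocking. A small user-space sketch of the same pattern, with illustrative names (not kcov's), assuming the snapshotted pointer stays valid after unlock because the object is refcounted:

```c
/*
 * Snapshot-under-lock pattern: read guarded fields into locals while
 * holding the lock, then use the locals after releasing it.
 */
#include <pthread.h>
#include <stddef.h>

struct buf {
	pthread_mutex_t lock;
	void *area;	/* conceptually __guarded_by(&lock) */
	size_t size;	/* conceptually __guarded_by(&lock) */
};

/* Returns a snapshot of 'area' and stores 'size'; both read under the
 * lock, so a concurrent reset cannot tear the pair apart. */
static void *buf_snapshot_area(struct buf *b, size_t *size)
{
	void *area;

	pthread_mutex_lock(&b->lock);
	area = b->area;		/* guarded read, lock held */
	*size = b->size;
	pthread_mutex_unlock(&b->lock);
	return area;		/* local copy, usable after unlock */
}
```

With context analysis enabled, dereferencing `b->area` after the unlock would be flagged; the local `area` is not, which is exactly why the diff introduces `void *area;` in `kcov_mmap()`.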
@@ -1,4 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
+CONTEXT_ANALYSIS := y
+
 KCSAN_SANITIZE := n
 KCOV_INSTRUMENT := n
 UBSAN_SANITIZE := n
@@ -116,6 +116,7 @@ static DEFINE_RAW_SPINLOCK(report_lock);
  * been reported since (now - KCSAN_REPORT_ONCE_IN_MS).
  */
 static bool rate_limit_report(unsigned long frame1, unsigned long frame2)
+	__must_hold(&report_lock)
 {
 	struct report_time *use_entry = &report_times[0];
 	unsigned long invalid_before;
@@ -366,6 +367,7 @@ static int sym_strcmp(void *addr1, void *addr2)
 
 static void
 print_stack_trace(unsigned long stack_entries[], int num_entries, unsigned long reordered_to)
+	__must_hold(&report_lock)
 {
 	stack_trace_print(stack_entries, num_entries, 0);
 	if (reordered_to)
@@ -373,6 +375,7 @@ print_stack_trace(unsigned long stack_entries[], int num_entries, unsigned long
 }
 
 static void print_verbose_info(struct task_struct *task)
+	__must_hold(&report_lock)
 {
 	if (!task)
 		return;
@@ -389,6 +392,7 @@ static void print_report(enum kcsan_value_change value_change,
 			 const struct access_info *ai,
 			 struct other_info *other_info,
 			 u64 old, u64 new, u64 mask)
+	__must_hold(&report_lock)
 {
 	unsigned long reordered_to = 0;
 	unsigned long stack_entries[NUM_STACK_ENTRIES] = { 0 };
@@ -496,6 +500,7 @@ static void print_report(enum kcsan_value_change value_change,
 }
 
 static void release_report(unsigned long *flags, struct other_info *other_info)
+	__releases(&report_lock)
 {
 	/*
 	 * Use size to denote valid/invalid, since KCSAN entirely ignores
@@ -507,13 +512,11 @@ static void release_report(unsigned long *flags, struct other_info *other_info)
 
 /*
  * Sets @other_info->task and awaits consumption of @other_info.
- *
- * Precondition: report_lock is held.
- * Postcondition: report_lock is held.
  */
 static void set_other_info_task_blocking(unsigned long *flags,
					 const struct access_info *ai,
					 struct other_info *other_info)
+	__must_hold(&report_lock)
 {
 	/*
 	 * We may be instrumenting a code-path where current->state is already
@@ -572,6 +575,7 @@ static void set_other_info_task_blocking(unsigned long *flags,
 static void prepare_report_producer(unsigned long *flags,
				    const struct access_info *ai,
				    struct other_info *other_info)
+	__must_not_hold(&report_lock)
 {
 	raw_spin_lock_irqsave(&report_lock, *flags);
 
@@ -603,6 +607,7 @@ static void prepare_report_producer(unsigned long *flags,
 static bool prepare_report_consumer(unsigned long *flags,
				    const struct access_info *ai,
				    struct other_info *other_info)
+	__cond_acquires(true, &report_lock)
 {
 
 	raw_spin_lock_irqsave(&report_lock, *flags);
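`__cond_acquires(true, &report_lock)` above declares that `prepare_report_consumer()` holds `report_lock` only on the paths where it returns `true`. A user-space sketch of that trylock-style contract, with hypothetical names (not KCSAN's):

```c
/*
 * A __cond_acquires()-style function: the lock is held on return iff
 * the function returns true, so the caller's unlock is conditional too.
 */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t report_lock = PTHREAD_MUTEX_INITIALIZER;
static int reports_open;	/* conceptually guarded by report_lock */

/* Returns true with report_lock held; false with it released. */
static bool report_begin(int max_open)
{
	pthread_mutex_lock(&report_lock);
	if (reports_open >= max_open) {
		pthread_mutex_unlock(&report_lock);
		return false;	/* lock NOT held on this path */
	}
	reports_open++;
	return true;		/* lock held: caller must call report_end() */
}

/* Counterpart: releases the lock taken by a successful report_begin(). */
static void report_end(void)
{
	reports_open--;
	pthread_mutex_unlock(&report_lock);
}
```

The annotation lets the analyzer verify both halves: callers that use the protected state without checking the return value, or that forget the matching `report_end()` on the `true` path, are flagged at compile time.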
@@ -13,7 +13,8 @@
 #include <linux/slab.h>
 #include <linux/ww_mutex.h>
 
-static DEFINE_WD_CLASS(ww_class);
+static DEFINE_WD_CLASS(wd_class);
+static DEFINE_WW_CLASS(ww_class);
 struct workqueue_struct *wq;
 
 #ifdef CONFIG_DEBUG_WW_MUTEX_SLOWPATH
@@ -54,16 +55,16 @@ static void test_mutex_work(struct work_struct *work)
 	ww_mutex_unlock(&mtx->mutex);
 }
 
-static int __test_mutex(unsigned int flags)
+static int __test_mutex(struct ww_class *class, unsigned int flags)
 {
 #define TIMEOUT (HZ / 16)
 	struct test_mutex mtx;
 	struct ww_acquire_ctx ctx;
 	int ret;
 
-	ww_mutex_init(&mtx.mutex, &ww_class);
+	ww_mutex_init(&mtx.mutex, class);
 	if (flags & TEST_MTX_CTX)
-		ww_acquire_init(&ctx, &ww_class);
+		ww_acquire_init(&ctx, class);
 
 	INIT_WORK_ONSTACK(&mtx.work, test_mutex_work);
 	init_completion(&mtx.ready);
@@ -71,7 +72,7 @@ static int __test_mutex(unsigned int flags)
 	init_completion(&mtx.done);
 	mtx.flags = flags;
 
-	schedule_work(&mtx.work);
+	queue_work(wq, &mtx.work);
 
 	wait_for_completion(&mtx.ready);
 	ww_mutex_lock(&mtx.mutex, (flags & TEST_MTX_CTX) ? &ctx : NULL);
@@ -106,13 +107,13 @@ static int __test_mutex(unsigned int flags)
 #undef TIMEOUT
 }
 
-static int test_mutex(void)
+static int test_mutex(struct ww_class *class)
 {
 	int ret;
 	int i;
 
 	for (i = 0; i < __TEST_MTX_LAST; i++) {
-		ret = __test_mutex(i);
+		ret = __test_mutex(class, i);
 		if (ret)
 			return ret;
 	}
@@ -120,15 +121,15 @@ static int test_mutex(void)
 	return 0;
 }
 
-static int test_aa(bool trylock)
+static int test_aa(struct ww_class *class, bool trylock)
 {
 	struct ww_mutex mutex;
 	struct ww_acquire_ctx ctx;
 	int ret;
 	const char *from = trylock ? "trylock" : "lock";
 
-	ww_mutex_init(&mutex, &ww_class);
-	ww_acquire_init(&ctx, &ww_class);
+	ww_mutex_init(&mutex, class);
+	ww_acquire_init(&ctx, class);
 
 	if (!trylock) {
 		ret = ww_mutex_lock(&mutex, &ctx);
@@ -177,6 +178,7 @@ out:
 
 struct test_abba {
 	struct work_struct work;
+	struct ww_class *class;
 	struct ww_mutex a_mutex;
 	struct ww_mutex b_mutex;
 	struct completion a_ready;
@@ -191,7 +193,7 @@ static void test_abba_work(struct work_struct *work)
 	struct ww_acquire_ctx ctx;
 	int err;
 
-	ww_acquire_init_noinject(&ctx, &ww_class);
+	ww_acquire_init_noinject(&ctx, abba->class);
 	if (!abba->trylock)
 		ww_mutex_lock(&abba->b_mutex, &ctx);
 	else
@@ -217,23 +219,24 @@ static void test_abba_work(struct work_struct *work)
 	abba->result = err;
 }
 
-static int test_abba(bool trylock, bool resolve)
+static int test_abba(struct ww_class *class, bool trylock, bool resolve)
 {
 	struct test_abba abba;
 	struct ww_acquire_ctx ctx;
 	int err, ret;
 
-	ww_mutex_init(&abba.a_mutex, &ww_class);
-	ww_mutex_init(&abba.b_mutex, &ww_class);
+	ww_mutex_init(&abba.a_mutex, class);
+	ww_mutex_init(&abba.b_mutex, class);
 	INIT_WORK_ONSTACK(&abba.work, test_abba_work);
 	init_completion(&abba.a_ready);
 	init_completion(&abba.b_ready);
+	abba.class = class;
 	abba.trylock = trylock;
 	abba.resolve = resolve;
 
-	schedule_work(&abba.work);
+	queue_work(wq, &abba.work);
 
-	ww_acquire_init_noinject(&ctx, &ww_class);
+	ww_acquire_init_noinject(&ctx, class);
 	if (!trylock)
 		ww_mutex_lock(&abba.a_mutex, &ctx);
 	else
@@ -278,6 +281,7 @@ static int test_abba(bool trylock, bool resolve)
 
 struct test_cycle {
 	struct work_struct work;
+	struct ww_class *class;
 	struct ww_mutex a_mutex;
 	struct ww_mutex *b_mutex;
 	struct completion *a_signal;
@@ -291,7 +295,7 @@ static void test_cycle_work(struct work_struct *work)
 	struct ww_acquire_ctx ctx;
 	int err, erra = 0;
 
-	ww_acquire_init_noinject(&ctx, &ww_class);
+	ww_acquire_init_noinject(&ctx, cycle->class);
 	ww_mutex_lock(&cycle->a_mutex, &ctx);
 
 	complete(cycle->a_signal);
@@ -314,7 +318,7 @@ static void test_cycle_work(struct work_struct *work)
 	cycle->result = err ?: erra;
 }
 
-static int __test_cycle(unsigned int nthreads)
+static int __test_cycle(struct ww_class *class, unsigned int nthreads)
 {
 	struct test_cycle *cycles;
 	unsigned int n, last = nthreads - 1;
@@ -327,7 +331,8 @@ static int __test_cycle(unsigned int nthreads)
 	for (n = 0; n < nthreads; n++) {
 		struct test_cycle *cycle = &cycles[n];
 
-		ww_mutex_init(&cycle->a_mutex, &ww_class);
+		cycle->class = class;
+		ww_mutex_init(&cycle->a_mutex, class);
 		if (n == last)
 			cycle->b_mutex = &cycles[0].a_mutex;
 		else
@@ -367,13 +372,13 @@ static int __test_cycle(unsigned int nthreads)
 	return ret;
 }
 
-static int test_cycle(unsigned int ncpus)
+static int test_cycle(struct ww_class *class, unsigned int ncpus)
 {
 	unsigned int n;
 	int ret;
 
 	for (n = 2; n <= ncpus + 1; n++) {
-		ret = __test_cycle(n);
+		ret = __test_cycle(class, n);
 		if (ret)
 			return ret;
 	}
@@ -384,6 +389,7 @@ static int test_cycle(unsigned int ncpus)
 struct stress {
 	struct work_struct work;
 	struct ww_mutex *locks;
+	struct ww_class *class;
 	unsigned long timeout;
 	int nlocks;
 };
@@ -443,7 +449,7 @@ static void stress_inorder_work(struct work_struct *work)
 	int contended = -1;
 	int n, err;
 
-	ww_acquire_init(&ctx, &ww_class);
+	ww_acquire_init(&ctx, stress->class);
 retry:
 	err = 0;
 	for (n = 0; n < nlocks; n++) {
@@ -511,7 +517,7 @@ static void stress_reorder_work(struct work_struct *work)
 	order = NULL;
 
 	do {
-		ww_acquire_init(&ctx, &ww_class);
+		ww_acquire_init(&ctx, stress->class);
 
 		list_for_each_entry(ll, &locks, link) {
 			err = ww_mutex_lock(ll->lock, &ctx);
@@ -570,7 +576,7 @@ static void stress_one_work(struct work_struct *work)
 #define STRESS_ONE BIT(2)
 #define STRESS_ALL (STRESS_INORDER | STRESS_REORDER | STRESS_ONE)
 
-static int stress(int nlocks, int nthreads, unsigned int flags)
+static int stress(struct ww_class *class, int nlocks, int nthreads, unsigned int flags)
 {
 	struct ww_mutex *locks;
 	struct stress *stress_array;
@@ -588,7 +594,7 @@ static int stress(int nlocks, int nthreads, unsigned int flags)
 	}
 
 	for (n = 0; n < nlocks; n++)
-		ww_mutex_init(&locks[n], &ww_class);
+		ww_mutex_init(&locks[n], class);
 
 	count = 0;
 	for (n = 0; nthreads; n++) {
@@ -617,6 +623,7 @@ static int stress(int nlocks, int nthreads, unsigned int flags)
 		stress = &stress_array[count++];
 
 		INIT_WORK(&stress->work, fn);
+		stress->class = class;
 		stress->locks = locks;
 		stress->nlocks = nlocks;
 		stress->timeout = jiffies + 2*HZ;
@@ -635,12 +642,100 @@ static int stress(int nlocks, int nthreads, unsigned int flags)
 	return 0;
 }
 
-static int __init test_ww_mutex_init(void)
+static int run_tests(struct ww_class *class)
 {
 	int ncpus = num_online_cpus();
 	int ret, i;
 
-	printk(KERN_INFO "Beginning ww mutex selftests\n");
+	ret = test_mutex(class);
+	if (ret)
+		return ret;
 
+	ret = test_aa(class, false);
+	if (ret)
+		return ret;
+
+	ret = test_aa(class, true);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < 4; i++) {
+		ret = test_abba(class, i & 1, i & 2);
+		if (ret)
+			return ret;
+	}
+
+	ret = test_cycle(class, ncpus);
+	if (ret)
+		return ret;
+
+	ret = stress(class, 16, 2 * ncpus, STRESS_INORDER);
+	if (ret)
+		return ret;
+
+	ret = stress(class, 16, 2 * ncpus, STRESS_REORDER);
+	if (ret)
+		return ret;
+
+	ret = stress(class, 2046, hweight32(STRESS_ALL) * ncpus, STRESS_ALL);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int run_test_classes(void)
+{
+	int ret;
+
+	pr_info("Beginning ww (wound) mutex selftests\n");
+	ret = run_tests(&ww_class);
+	if (ret)
+		return ret;
+
+	pr_info("Beginning ww (die) mutex selftests\n");
+	ret = run_tests(&wd_class);
+	if (ret)
+		return ret;
+
+	pr_info("All ww mutex selftests passed\n");
+	return 0;
+}
+
+static DEFINE_MUTEX(run_lock);
+
+static ssize_t run_tests_store(struct kobject *kobj, struct kobj_attribute *attr,
+			       const char *buf, size_t count)
+{
+	if (!mutex_trylock(&run_lock)) {
+		pr_err("Test already running\n");
+		return count;
+	}
+
+	run_test_classes();
+	mutex_unlock(&run_lock);
+
+	return count;
+}
+
+static struct kobj_attribute run_tests_attribute =
+	__ATTR(run_tests, 0664, NULL, run_tests_store);
+
+static struct attribute *attrs[] = {
+	&run_tests_attribute.attr,
+	NULL, /* need to NULL terminate the list of attributes */
+};
+
+static struct attribute_group attr_group = {
+	.attrs = attrs,
+};
+
+static struct kobject *test_ww_mutex_kobj;
+
+static int __init test_ww_mutex_init(void)
+{
+	int ret;
+
 	prandom_seed_state(&rng, get_random_u64());
 
@@ -648,46 +743,30 @@ static int __init test_ww_mutex_init(void)
 	if (!wq)
 		return -ENOMEM;
 
-	ret = test_mutex();
-	if (ret)
-		return ret;
-
-	ret = test_aa(false);
-	if (ret)
-		return ret;
-
-	ret = test_aa(true);
-	if (ret)
-		return ret;
-
-	for (i = 0; i < 4; i++) {
-		ret = test_abba(i & 1, i & 2);
-		if (ret)
-			return ret;
+	test_ww_mutex_kobj = kobject_create_and_add("test_ww_mutex", kernel_kobj);
+	if (!test_ww_mutex_kobj) {
+		destroy_workqueue(wq);
+		return -ENOMEM;
 	}
 
-	ret = test_cycle(ncpus);
-	if (ret)
+	/* Create the files associated with this kobject */
+	ret = sysfs_create_group(test_ww_mutex_kobj, &attr_group);
+	if (ret) {
+		kobject_put(test_ww_mutex_kobj);
+		destroy_workqueue(wq);
 		return ret;
+	}
 
-	ret = stress(16, 2*ncpus, STRESS_INORDER);
-	if (ret)
-		return ret;
+	mutex_lock(&run_lock);
+	ret = run_test_classes();
+	mutex_unlock(&run_lock);
 
-	ret = stress(16, 2*ncpus, STRESS_REORDER);
-	if (ret)
-		return ret;
-
-	ret = stress(2046, hweight32(STRESS_ALL)*ncpus, STRESS_ALL);
-	if (ret)
-		return ret;
-
-	printk(KERN_INFO "All ww mutex selftests passed\n");
-	return 0;
+	return ret;
 }
 
 static void __exit test_ww_mutex_exit(void)
 {
+	kobject_put(test_ww_mutex_kobj);
 	destroy_workqueue(wq);
 }
 
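The restructuring above threads a `struct ww_class *` through every test so the same body runs against both the wait-die and wound-wait classes. Stripped of the kernel specifics, the shape is a runner parameterized over a class descriptor; this sketch uses hypothetical stand-in types, not the kernel's:

```c
/*
 * One test body, many lock classes: run each test against a class,
 * iterate over all classes, first failure wins.
 */
#include <stddef.h>

struct lock_class {
	const char *name;	/* e.g. "wound-wait" or "wait-die" */
};

/* Stand-in for a real test; fails (returns nonzero) on a nameless class. */
static int test_basic(const struct lock_class *class)
{
	return class->name == NULL;
}

/* Mirrors run_tests(): every test against one class, stop on error. */
static int run_tests_for(const struct lock_class *class)
{
	int ret;

	ret = test_basic(class);
	if (ret)
		return ret;
	return 0;
}

/* Mirrors run_test_classes(): iterate over all classes. */
static int run_all(const struct lock_class *classes, size_t n)
{
	size_t i;
	int ret;

	for (i = 0; i < n; i++) {
		ret = run_tests_for(&classes[i]);
		if (ret)
			return ret;
	}
	return 0;
}
```

Passing the class explicitly, rather than referencing a file-scope `ww_class`, is what lets the diff add a second `DEFINE_WW_CLASS()` without duplicating any test code.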
@@ -245,6 +245,7 @@ int devkmsg_sysctl_set_loglvl(const struct ctl_table *table, int write,
  * For console list or console->flags updates
  */
 void console_list_lock(void)
+	__acquires(&console_mutex)
 {
 	/*
 	 * In unregister_console() and console_force_preferred_locked(),
@@ -269,6 +270,7 @@ EXPORT_SYMBOL(console_list_lock);
  * Counterpart to console_list_lock()
  */
 void console_list_unlock(void)
+	__releases(&console_mutex)
 {
 	mutex_unlock(&console_mutex);
 }
@@ -1,5 +1,8 @@
 # SPDX-License-Identifier: GPL-2.0
 
+CONTEXT_ANALYSIS_core.o := y
+CONTEXT_ANALYSIS_fair.o := y
+
 # The compilers are complaining about unused variables inside an if(0) scope
 # block. This is daft, shut them up.
 ccflags-y += $(call cc-disable-warning, unused-but-set-variable)
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -396,6 +396,8 @@ static atomic_t sched_core_count;
 static struct cpumask sched_core_mask;
 
 static void sched_core_lock(int cpu, unsigned long *flags)
+	__context_unsafe(/* acquires multiple */)
+	__acquires(&runqueues.__lock) /* overapproximation */
 {
 	const struct cpumask *smt_mask = cpu_smt_mask(cpu);
 	int t, i = 0;
@@ -406,6 +408,8 @@ static void sched_core_lock(int cpu, unsigned long *flags)
 }
 
 static void sched_core_unlock(int cpu, unsigned long *flags)
+	__context_unsafe(/* releases multiple */)
+	__releases(&runqueues.__lock) /* overapproximation */
 {
 	const struct cpumask *smt_mask = cpu_smt_mask(cpu);
 	int t;
@@ -630,6 +634,7 @@ EXPORT_SYMBOL(__trace_set_current_state);
  */
 
 void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
+	__context_unsafe()
 {
 	raw_spinlock_t *lock;
 
@@ -655,6 +660,7 @@ void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
 }
 
 bool raw_spin_rq_trylock(struct rq *rq)
+	__context_unsafe()
 {
 	raw_spinlock_t *lock;
 	bool ret;
@@ -696,15 +702,16 @@ void double_rq_lock(struct rq *rq1, struct rq *rq2)
 	raw_spin_rq_lock(rq1);
 	if (__rq_lockp(rq1) != __rq_lockp(rq2))
 		raw_spin_rq_lock_nested(rq2, SINGLE_DEPTH_NESTING);
+	else
+		__acquire_ctx_lock(__rq_lockp(rq2)); /* fake acquire */
 
 	double_rq_clock_clear_update(rq1, rq2);
 }
 
 /*
- * __task_rq_lock - lock the rq @p resides on.
+ * ___task_rq_lock - lock the rq @p resides on.
  */
-struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-	__acquires(rq->lock)
+struct rq *___task_rq_lock(struct task_struct *p, struct rq_flags *rf)
 {
 	struct rq *rq;
 
@@ -727,9 +734,7 @@ struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
 /*
  * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
  */
-struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-	__acquires(p->pi_lock)
-	__acquires(rq->lock)
+struct rq *_task_rq_lock(struct task_struct *p, struct rq_flags *rf)
 {
 	struct rq *rq;
 
@@ -2431,6 +2436,7 @@ static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
  */
 static struct rq *move_queued_task(struct rq *rq, struct rq_flags *rf,
 				   struct task_struct *p, int new_cpu)
+	__must_hold(__rq_lockp(rq))
 {
 	lockdep_assert_rq_held(rq);
 
@@ -2477,6 +2483,7 @@ struct set_affinity_pending {
  */
 static struct rq *__migrate_task(struct rq *rq, struct rq_flags *rf,
 				 struct task_struct *p, int dest_cpu)
+	__must_hold(__rq_lockp(rq))
 {
 	/* Affinity changed (again). */
 	if (!is_cpu_allowed(p, dest_cpu))
@@ -2513,6 +2520,12 @@ static int migration_cpu_stop(void *data)
 	 */
 	flush_smp_call_function_queue();
 
+	/*
+	 * We may change the underlying rq, but the locks held will
+	 * appropriately be "transferred" when switching.
+	 */
+	context_unsafe_alias(rq);
+
 	raw_spin_lock(&p->pi_lock);
 	rq_lock(rq, &rf);
 
@@ -2624,6 +2637,8 @@ int push_cpu_stop(void *arg)
 	if (!lowest_rq)
 		goto out_unlock;
 
+	lockdep_assert_rq_held(lowest_rq);
+
 	// XXX validate p is still the highest prio task
 	if (task_rq(p) == rq) {
 		move_queued_task_locked(rq, lowest_rq, p);
@@ -2834,8 +2849,7 @@ void release_user_cpus_ptr(struct task_struct *p)
  */
 static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flags *rf,
 			    int dest_cpu, unsigned int flags)
-	__releases(rq->lock)
-	__releases(p->pi_lock)
+	__releases(__rq_lockp(rq), &p->pi_lock)
 {
 	struct set_affinity_pending my_pending = { }, *pending = NULL;
 	bool stop_pending, complete = false;
@@ -2990,8 +3004,7 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 					 struct affinity_context *ctx,
 					 struct rq *rq,
 					 struct rq_flags *rf)
-	__releases(rq->lock)
-	__releases(p->pi_lock)
+	__releases(__rq_lockp(rq), &p->pi_lock)
 {
 	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
 	const struct cpumask *cpu_valid_mask = cpu_active_mask;
@@ -4273,29 +4286,30 @@ static bool __task_needs_rq_lock(struct task_struct *p)
  */
 int task_call_func(struct task_struct *p, task_call_f func, void *arg)
 {
-	struct rq *rq = NULL;
 	struct rq_flags rf;
 	int ret;
 
 	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
 
-	if (__task_needs_rq_lock(p))
-		rq = __task_rq_lock(p, &rf);
+	if (__task_needs_rq_lock(p)) {
+		struct rq *rq = __task_rq_lock(p, &rf);
 
-	/*
-	 * At this point the task is pinned; either:
-	 * - blocked and we're holding off wakeups (pi->lock)
-	 * - woken, and we're holding off enqueue (rq->lock)
-	 * - queued, and we're holding off schedule (rq->lock)
-	 * - running, and we're holding off de-schedule (rq->lock)
-	 *
-	 * The called function (@func) can use: task_curr(), p->on_rq and
-	 * p->__state to differentiate between these states.
-	 */
-	ret = func(p, arg);
+		/*
+		 * At this point the task is pinned; either:
+		 * - blocked and we're holding off wakeups (pi->lock)
+		 * - woken, and we're holding off enqueue (rq->lock)
+		 * - queued, and we're holding off schedule (rq->lock)
+		 * - running, and we're holding off de-schedule (rq->lock)
+		 *
+		 * The called function (@func) can use: task_curr(), p->on_rq and
+		 * p->__state to differentiate between these states.
+		 */
+		ret = func(p, arg);
 
-	if (rq)
 		__task_rq_unlock(rq, p, &rf);
+	} else {
+		ret = func(p, arg);
+	}
 
 	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
 	return ret;
@@ -4972,6 +4986,8 @@ void balance_callbacks(struct rq *rq, struct balance_callback *head)
 
 static inline void
 prepare_lock_switch(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
+	__releases(__rq_lockp(rq))
+	__acquires(__rq_lockp(this_rq()))
 {
 	/*
 	 * Since the runqueue lock will be released by the next
@@ -4985,9 +5001,15 @@ prepare_lock_switch(struct rq *rq, struct task_struct *next, struct rq_flags *rf
 	/* this is a valid case when another task releases the spinlock */
 	rq_lockp(rq)->owner = next;
 #endif
+	/*
+	 * Model the rq reference switcheroo.
+	 */
+	__release(__rq_lockp(rq));
+	__acquire(__rq_lockp(this_rq()));
 }
 
 static inline void finish_lock_switch(struct rq *rq)
+	__releases(__rq_lockp(rq))
 {
 	/*
 	 * If we are tracking spinlock dependencies then we have to
@@ -5043,6 +5065,7 @@ static inline void kmap_local_sched_in(void)
 static inline void
 prepare_task_switch(struct rq *rq, struct task_struct *prev,
 		    struct task_struct *next)
+	__must_hold(__rq_lockp(rq))
 {
 	kcov_prepare_switch(prev);
 	sched_info_switch(rq, prev, next);
@@ -5073,7 +5096,7 @@ prepare_task_switch(struct rq *rq, struct task_struct *prev,
  * because prev may have moved to another CPU.
  */
 static struct rq *finish_task_switch(struct task_struct *prev)
-	__releases(rq->lock)
+	__releases(__rq_lockp(this_rq()))
 {
 	struct rq *rq = this_rq();
 	struct mm_struct *mm = rq->prev_mm;
@@ -5169,7 +5192,7 @@ static struct rq *finish_task_switch(struct task_struct *prev)
 * @prev: the thread we just switched away from.
  */
 asmlinkage __visible void schedule_tail(struct task_struct *prev)
-	__releases(rq->lock)
+	__releases(__rq_lockp(this_rq()))
 {
 	/*
 	 * New tasks start with FORK_PREEMPT_COUNT, see there and
@@ -5201,6 +5224,7 @@ asmlinkage __visible void schedule_tail(struct task_struct *prev)
 static __always_inline struct rq *
 context_switch(struct rq *rq, struct task_struct *prev,
 	       struct task_struct *next, struct rq_flags *rf)
+	__releases(__rq_lockp(rq))
 {
 	prepare_task_switch(rq, prev, next);
 
@@ -5869,6 +5893,7 @@ static void prev_balance(struct rq *rq, struct task_struct *prev,
  */
 static inline struct task_struct *
 __pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
+	__must_hold(__rq_lockp(rq))
 {
 	const struct sched_class *class;
 	struct task_struct *p;
@@ -5969,6 +5994,7 @@ static void queue_core_balance(struct rq *rq);
 
 static struct task_struct *
 pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
+	__must_hold(__rq_lockp(rq))
 {
 	struct task_struct *next, *p, *max;
 	const struct cpumask *smt_mask;
@@ -6277,6 +6303,7 @@ static bool steal_cookie_task(int cpu, struct sched_domain *sd)
 }
 
 static void sched_core_balance(struct rq *rq)
+	__must_hold(__rq_lockp(rq))
 {
 	struct sched_domain *sd;
 	int cpu = cpu_of(rq);
@@ -6422,6 +6449,7 @@ static inline void sched_core_cpu_dying(unsigned int cpu) {}
 
 static struct task_struct *
 pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
+	__must_hold(__rq_lockp(rq))
 {
 	return __pick_next_task(rq, prev, rf);
 }
@@ -8045,6 +8073,12 @@ static int __balance_push_cpu_stop(void *arg)
 	int cpu;
 
 	scoped_guard (raw_spinlock_irq, &p->pi_lock) {
+		/*
+		 * We may change the underlying rq, but the locks held will
+		 * appropriately be "transferred" when switching.
+		 */
+		context_unsafe_alias(rq);
+
 		cpu = select_fallback_rq(rq->cpu, p);
 
 		rq_lock(rq, &rf);
@@ -8068,6 +8102,7 @@ static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
 * effective when the hotplug motion is down.
  */
 static void balance_push(struct rq *rq)
+	__must_hold(__rq_lockp(rq))
 {
 	struct task_struct *push_task = rq->curr;
 
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2860,6 +2860,7 @@ static int preferred_group_nid(struct task_struct *p, int nid)
 }
 
 static void task_numa_placement(struct task_struct *p)
+	__context_unsafe(/* conditional locking */)
 {
 	int seq, nid, max_nid = NUMA_NO_NODE;
 	unsigned long max_faults = 0;
@@ -4781,7 +4782,8 @@ static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq)
 	return cfs_rq->avg.load_avg;
 }
 
-static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf);
+static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf)
+	__must_hold(__rq_lockp(this_rq));
 
 static inline unsigned long task_util(struct task_struct *p)
 {
@@ -6188,6 +6190,7 @@ next:
 * used to track this state.
  */
 static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun, unsigned long flags)
+	__must_hold(&cfs_b->lock)
 {
 	int throttled;
 
@@ -8909,6 +8912,7 @@ static void set_next_task_fair(struct rq *rq, struct task_struct *p, bool first)
 
 struct task_struct *
 pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
+	__must_hold(__rq_lockp(rq))
 {
 	struct sched_entity *se;
 	struct task_struct *p;
@@ -12842,6 +12846,7 @@ static inline void nohz_newidle_balance(struct rq *this_rq) { }
 * > 0 - success, new (fair) tasks present
  */
 static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf)
+	__must_hold(__rq_lockp(this_rq))
 {
 	unsigned long next_balance = jiffies + HZ;
 	int this_cpu = this_rq->cpu;
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1362,8 +1362,13 @@ static inline u32 sched_rng(void)
 	return prandom_u32_state(this_cpu_ptr(&sched_rnd_state));
 }
 
+static __always_inline struct rq *__this_rq(void)
+{
+	return this_cpu_ptr(&runqueues);
+}
+
 #define cpu_rq(cpu)		(&per_cpu(runqueues, (cpu)))
-#define this_rq()		this_cpu_ptr(&runqueues)
+#define this_rq()		__this_rq()
 #define task_rq(p)		cpu_rq(task_cpu(p))
 #define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
 #define raw_rq()		raw_cpu_ptr(&runqueues)
@@ -1430,6 +1435,7 @@ static inline raw_spinlock_t *rq_lockp(struct rq *rq)
 }
 
 static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
+	__returns_ctx_lock(rq_lockp(rq)) /* alias them */
 {
 	if (rq->core_enabled)
 		return &rq->core->__lock;
@@ -1529,6 +1535,7 @@ static inline raw_spinlock_t *rq_lockp(struct rq *rq)
 }
 
 static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
+	__returns_ctx_lock(rq_lockp(rq)) /* alias them */
 {
 	return &rq->__lock;
 }
@@ -1571,32 +1578,42 @@ static inline bool rt_group_sched_enabled(void)
 #endif /* !CONFIG_RT_GROUP_SCHED */
 
 static inline void lockdep_assert_rq_held(struct rq *rq)
+	__assumes_ctx_lock(__rq_lockp(rq))
 {
 	lockdep_assert_held(__rq_lockp(rq));
 }
 
-extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
-extern bool raw_spin_rq_trylock(struct rq *rq);
-extern void raw_spin_rq_unlock(struct rq *rq);
+extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
+	__acquires(__rq_lockp(rq));
+
+extern bool raw_spin_rq_trylock(struct rq *rq)
+	__cond_acquires(true, __rq_lockp(rq));
+
+extern void raw_spin_rq_unlock(struct rq *rq)
+	__releases(__rq_lockp(rq));
 
 static inline void raw_spin_rq_lock(struct rq *rq)
+	__acquires(__rq_lockp(rq))
 {
 	raw_spin_rq_lock_nested(rq, 0);
 }
 
 static inline void raw_spin_rq_lock_irq(struct rq *rq)
+	__acquires(__rq_lockp(rq))
 {
 	local_irq_disable();
 	raw_spin_rq_lock(rq);
 }
 
 static inline void raw_spin_rq_unlock_irq(struct rq *rq)
+	__releases(__rq_lockp(rq))
 {
 	raw_spin_rq_unlock(rq);
 	local_irq_enable();
 }
 
 static inline unsigned long _raw_spin_rq_lock_irqsave(struct rq *rq)
+	__acquires(__rq_lockp(rq))
 {
 	unsigned long flags;
 
@@ -1607,6 +1624,7 @@ static inline unsigned long _raw_spin_rq_lock_irqsave(struct rq *rq)
 }
 
 static inline void raw_spin_rq_unlock_irqrestore(struct rq *rq, unsigned long flags)
+	__releases(__rq_lockp(rq))
 {
 	raw_spin_rq_unlock(rq);
 	local_irq_restore(flags);
@@ -1855,18 +1873,16 @@ static inline void rq_repin_lock(struct rq *rq, struct rq_flags *rf)
 	rq->clock_update_flags |= rf->clock_update_flags;
 }
 
-extern
-struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-	__acquires(rq->lock);
+#define __task_rq_lock(...)	__acquire_ret(___task_rq_lock(__VA_ARGS__), __rq_lockp(__ret))
+extern struct rq *___task_rq_lock(struct task_struct *p, struct rq_flags *rf) __acquires_ret;
 
-extern
-struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-	__acquires(p->pi_lock)
-	__acquires(rq->lock);
+#define task_rq_lock(...)	__acquire_ret(_task_rq_lock(__VA_ARGS__), __rq_lockp(__ret))
+extern struct rq *_task_rq_lock(struct task_struct *p, struct rq_flags *rf)
+	__acquires(&p->pi_lock) __acquires_ret;
 
 static inline void
 __task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
-	__releases(rq->lock)
+	__releases(__rq_lockp(rq))
 {
 	rq_unpin_lock(rq, rf);
 	raw_spin_rq_unlock(rq);
@@ -1874,8 +1890,7 @@ __task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
 
 static inline void
 task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
-	__releases(rq->lock)
-	__releases(p->pi_lock)
+	__releases(__rq_lockp(rq), &p->pi_lock)
 {
 	__task_rq_unlock(rq, p, rf);
 	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
@@ -1885,6 +1900,8 @@ DEFINE_LOCK_GUARD_1(task_rq_lock, struct task_struct,
 		    _T->rq = task_rq_lock(_T->lock, &_T->rf),
 		    task_rq_unlock(_T->rq, _T->lock, &_T->rf),
 		    struct rq *rq; struct rq_flags rf)
+DECLARE_LOCK_GUARD_1_ATTRS(task_rq_lock, __acquires(_T->pi_lock), __releases((*(struct task_struct **)_T)->pi_lock))
+#define class_task_rq_lock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(task_rq_lock, _T)
 
 DEFINE_LOCK_GUARD_1(__task_rq_lock, struct task_struct,
 		    _T->rq = __task_rq_lock(_T->lock, &_T->rf),
@@ -1892,42 +1909,42 @@ DEFINE_LOCK_GUARD_1(__task_rq_lock, struct task_struct,
 		    struct rq *rq; struct rq_flags rf)
 
 static inline void rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
-	__acquires(rq->lock)
+	__acquires(__rq_lockp(rq))
 {
 	raw_spin_rq_lock_irqsave(rq, rf->flags);
 	rq_pin_lock(rq, rf);
 }
 
 static inline void rq_lock_irq(struct rq *rq, struct rq_flags *rf)
-	__acquires(rq->lock)
+	__acquires(__rq_lockp(rq))
 {
 	raw_spin_rq_lock_irq(rq);
 	rq_pin_lock(rq, rf);
 }
 
 static inline void rq_lock(struct rq *rq, struct rq_flags *rf)
-	__acquires(rq->lock)
+	__acquires(__rq_lockp(rq))
 {
 	raw_spin_rq_lock(rq);
 	rq_pin_lock(rq, rf);
 }
 
 static inline void rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
-	__releases(rq->lock)
+	__releases(__rq_lockp(rq))
 {
 	rq_unpin_lock(rq, rf);
 	raw_spin_rq_unlock_irqrestore(rq, rf->flags);
 }
 
 static inline void rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
-	__releases(rq->lock)
+	__releases(__rq_lockp(rq))
 {
 	rq_unpin_lock(rq, rf);
 	raw_spin_rq_unlock_irq(rq);
 }
 
 static inline void rq_unlock(struct rq *rq, struct rq_flags *rf)
-	__releases(rq->lock)
+	__releases(__rq_lockp(rq))
 {
 	rq_unpin_lock(rq, rf);
 	raw_spin_rq_unlock(rq);
@@ -1938,18 +1955,27 @@ DEFINE_LOCK_GUARD_1(rq_lock, struct rq,
 		    rq_unlock(_T->lock, &_T->rf),
 		    struct rq_flags rf)
 
+DECLARE_LOCK_GUARD_1_ATTRS(rq_lock, __acquires(__rq_lockp(_T)), __releases(__rq_lockp(*(struct rq **)_T)));
+#define class_rq_lock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rq_lock, _T)
+
 DEFINE_LOCK_GUARD_1(rq_lock_irq, struct rq,
 		    rq_lock_irq(_T->lock, &_T->rf),
 		    rq_unlock_irq(_T->lock, &_T->rf),
 		    struct rq_flags rf)
 
+DECLARE_LOCK_GUARD_1_ATTRS(rq_lock_irq, __acquires(__rq_lockp(_T)), __releases(__rq_lockp(*(struct rq **)_T)));
+#define class_rq_lock_irq_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rq_lock_irq, _T)
+
 DEFINE_LOCK_GUARD_1(rq_lock_irqsave, struct rq,
 		    rq_lock_irqsave(_T->lock, &_T->rf),
 		    rq_unlock_irqrestore(_T->lock, &_T->rf),
 		    struct rq_flags rf)
 
-static inline struct rq *this_rq_lock_irq(struct rq_flags *rf)
-	__acquires(rq->lock)
+DECLARE_LOCK_GUARD_1_ATTRS(rq_lock_irqsave, __acquires(__rq_lockp(_T)), __releases(__rq_lockp(*(struct rq **)_T)));
+#define class_rq_lock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rq_lock_irqsave, _T)
+
+#define this_rq_lock_irq(...)	__acquire_ret(_this_rq_lock_irq(__VA_ARGS__), __rq_lockp(__ret))
+static inline struct rq *_this_rq_lock_irq(struct rq_flags *rf) __acquires_ret
 {
 	struct rq *rq;
 
@@ -3077,8 +3103,20 @@ static inline void double_rq_clock_clear_update(struct rq *rq1, struct rq *rq2)
 #define DEFINE_LOCK_GUARD_2(name, type, _lock, _unlock, ...)			\
 	__DEFINE_UNLOCK_GUARD(name, type, _unlock, type *lock2; __VA_ARGS__)	\
 	static inline class_##name##_t class_##name##_constructor(type *lock, type *lock2) \
+	__no_context_analysis							\
 	{ class_##name##_t _t = { .lock = lock, .lock2 = lock2 }, *_T = &_t;	\
 	  _lock; return _t; }
+#define DECLARE_LOCK_GUARD_2_ATTRS(_name, _lock, _unlock1, _unlock2)		\
+	static inline class_##_name##_t class_##_name##_constructor(lock_##_name##_t *_T1, \
+								    lock_##_name##_t *_T2) _lock; \
+	static __always_inline void __class_##_name##_cleanup_ctx1(class_##_name##_t **_T1) \
+	__no_context_analysis _unlock1 { }					\
+	static __always_inline void __class_##_name##_cleanup_ctx2(class_##_name##_t **_T2) \
+	__no_context_analysis _unlock2 { }
+#define WITH_LOCK_GUARD_2_ATTRS(_name, _T1, _T2)				\
+	class_##_name##_constructor(_T1, _T2),					\
+	*__UNIQUE_ID(unlock1) __cleanup(__class_##_name##_cleanup_ctx1) = (void *)(_T1),\
+	*__UNIQUE_ID(unlock2) __cleanup(__class_##_name##_cleanup_ctx2) = (void *)(_T2)
 
 static inline bool rq_order_less(struct rq *rq1, struct rq *rq2)
 {
@@ -3106,7 +3144,8 @@ static inline bool rq_order_less(struct rq *rq1, struct rq *rq2)
 	return rq1->cpu < rq2->cpu;
 }
 
-extern void double_rq_lock(struct rq *rq1, struct rq *rq2);
+extern void double_rq_lock(struct rq *rq1, struct rq *rq2)
+	__acquires(__rq_lockp(rq1), __rq_lockp(rq2));
 
 #ifdef CONFIG_PREEMPTION
@@ -3119,9 +3158,8 @@ extern void double_rq_lock(struct rq *rq1, struct rq *rq2);
 * also adds more overhead and therefore may reduce throughput.
  */
 static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
-	__releases(this_rq->lock)
-	__acquires(busiest->lock)
-	__acquires(this_rq->lock)
+	__must_hold(__rq_lockp(this_rq))
+	__acquires(__rq_lockp(busiest))
 {
 	raw_spin_rq_unlock(this_rq);
 	double_rq_lock(this_rq, busiest);
@@ -3138,12 +3176,16 @@ static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
 * regardless of entry order into the function.
  */
 static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
-	__releases(this_rq->lock)
-	__acquires(busiest->lock)
-	__acquires(this_rq->lock)
+	__must_hold(__rq_lockp(this_rq))
+	__acquires(__rq_lockp(busiest))
 {
-	if (__rq_lockp(this_rq) == __rq_lockp(busiest) ||
-	    likely(raw_spin_rq_trylock(busiest))) {
+	if (__rq_lockp(this_rq) == __rq_lockp(busiest)) {
+		__acquire(__rq_lockp(busiest)); /* already held */
+		double_rq_clock_clear_update(this_rq, busiest);
+		return 0;
+	}
+
+	if (likely(raw_spin_rq_trylock(busiest))) {
 		double_rq_clock_clear_update(this_rq, busiest);
 		return 0;
 	}
@@ -3166,6 +3208,8 @@ static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
 * double_lock_balance - lock the busiest runqueue, this_rq is locked already.
  */
 static inline int double_lock_balance(struct rq *this_rq, struct rq *busiest)
+	__must_hold(__rq_lockp(this_rq))
+	__acquires(__rq_lockp(busiest))
 {
 	lockdep_assert_irqs_disabled();
 
@@ -3173,14 +3217,17 @@ static inline int double_lock_balance(struct rq *this_rq, struct rq *busiest)
 }
 
 static inline void double_unlock_balance(struct rq *this_rq, struct rq *busiest)
-	__releases(busiest->lock)
+	__releases(__rq_lockp(busiest))
 {
 	if (__rq_lockp(this_rq) != __rq_lockp(busiest))
 		raw_spin_rq_unlock(busiest);
+	else
+		__release(__rq_lockp(busiest)); /* fake release */
 	lock_set_subclass(&__rq_lockp(this_rq)->dep_map, 0, _RET_IP_);
 }
 
 static inline void double_lock(spinlock_t *l1, spinlock_t *l2)
+	__acquires(l1, l2)
{
|
{
|
||||||
if (l1 > l2)
|
if (l1 > l2)
|
||||||
swap(l1, l2);
|
swap(l1, l2);
|
||||||
@@ -3190,6 +3237,7 @@ static inline void double_lock(spinlock_t *l1, spinlock_t *l2)
|
|||||||
}
|
}
|
||||||
|
|
||||||
static inline void double_lock_irq(spinlock_t *l1, spinlock_t *l2)
|
static inline void double_lock_irq(spinlock_t *l1, spinlock_t *l2)
|
||||||
|
__acquires(l1, l2)
|
||||||
{
|
{
|
||||||
if (l1 > l2)
|
if (l1 > l2)
|
||||||
swap(l1, l2);
|
swap(l1, l2);
|
||||||
@@ -3199,6 +3247,7 @@ static inline void double_lock_irq(spinlock_t *l1, spinlock_t *l2)
|
|||||||
}
|
}
|
||||||
|
|
||||||
static inline void double_raw_lock(raw_spinlock_t *l1, raw_spinlock_t *l2)
|
static inline void double_raw_lock(raw_spinlock_t *l1, raw_spinlock_t *l2)
|
||||||
|
__acquires(l1, l2)
|
||||||
{
|
{
|
||||||
if (l1 > l2)
|
if (l1 > l2)
|
||||||
swap(l1, l2);
|
swap(l1, l2);
|
||||||
@@ -3208,6 +3257,7 @@ static inline void double_raw_lock(raw_spinlock_t *l1, raw_spinlock_t *l2)
|
|||||||
}
|
}
|
||||||
|
|
||||||
static inline void double_raw_unlock(raw_spinlock_t *l1, raw_spinlock_t *l2)
|
static inline void double_raw_unlock(raw_spinlock_t *l1, raw_spinlock_t *l2)
|
||||||
|
__releases(l1, l2)
|
||||||
{
|
{
|
||||||
raw_spin_unlock(l1);
|
raw_spin_unlock(l1);
|
||||||
raw_spin_unlock(l2);
|
raw_spin_unlock(l2);
|
||||||
@@ -3217,6 +3267,13 @@ DEFINE_LOCK_GUARD_2(double_raw_spinlock, raw_spinlock_t,
 		    double_raw_lock(_T->lock, _T->lock2),
 		    double_raw_unlock(_T->lock, _T->lock2))
 
+DECLARE_LOCK_GUARD_2_ATTRS(double_raw_spinlock,
+			   __acquires(_T1, _T2),
+			   __releases(*(raw_spinlock_t **)_T1),
+			   __releases(*(raw_spinlock_t **)_T2));
+#define class_double_raw_spinlock_constructor(_T1, _T2) \
+	WITH_LOCK_GUARD_2_ATTRS(double_raw_spinlock, _T1, _T2)
+
 /*
  * double_rq_unlock - safely unlock two runqueues
  *
@@ -3224,13 +3281,12 @@ DEFINE_LOCK_GUARD_2(double_raw_spinlock, raw_spinlock_t,
  * you need to do so manually after calling.
  */
 static inline void double_rq_unlock(struct rq *rq1, struct rq *rq2)
-	__releases(rq1->lock)
-	__releases(rq2->lock)
+	__releases(__rq_lockp(rq1), __rq_lockp(rq2))
 {
 	if (__rq_lockp(rq1) != __rq_lockp(rq2))
 		raw_spin_rq_unlock(rq2);
 	else
-		__release(rq2->lock);
+		__release(__rq_lockp(rq2)); /* fake release */
 	raw_spin_rq_unlock(rq1);
 }
 
@@ -1355,8 +1355,8 @@ int zap_other_threads(struct task_struct *p)
 	return count;
 }
 
-struct sighand_struct *__lock_task_sighand(struct task_struct *tsk,
+struct sighand_struct *lock_task_sighand(struct task_struct *tsk,
 					   unsigned long *flags)
 {
 	struct sighand_struct *sighand;
 
@@ -66,14 +66,7 @@ static const struct k_clock clock_realtime, clock_monotonic;
 #error "SIGEV_THREAD_ID must not share bit with other SIGEV values!"
 #endif
 
-static struct k_itimer *__lock_timer(timer_t timer_id);
+static struct k_itimer *lock_timer(timer_t timer_id);
 
-#define lock_timer(tid)							\
-({	struct k_itimer *__timr;					\
-	__cond_lock(&__timr->it_lock, __timr = __lock_timer(tid));	\
-	__timr;								\
-})
-
 static inline void unlock_timer(struct k_itimer *timr)
 {
 	if (likely((timr)))
@@ -85,7 +78,7 @@ static inline void unlock_timer(struct k_itimer *timr)
 
 #define scoped_timer				(scope)
 
-DEFINE_CLASS(lock_timer, struct k_itimer *, unlock_timer(_T), __lock_timer(id), timer_t id);
+DEFINE_CLASS(lock_timer, struct k_itimer *, unlock_timer(_T), lock_timer(id), timer_t id);
 DEFINE_CLASS_IS_COND_GUARD(lock_timer);
 
 static struct timer_hash_bucket *hash_bucket(struct signal_struct *sig, unsigned int nr)
@@ -600,7 +593,7 @@ COMPAT_SYSCALL_DEFINE3(timer_create, clockid_t, which_clock,
 }
 #endif
 
-static struct k_itimer *__lock_timer(timer_t timer_id)
+static struct k_itimer *lock_timer(timer_t timer_id)
 {
 	struct k_itimer *timr;
 
@@ -616,6 +616,36 @@ config DEBUG_FORCE_WEAK_PER_CPU
 	  To ensure that generic code follows the above rules, this
 	  option forces all percpu variables to be defined as weak.
 
+config WARN_CONTEXT_ANALYSIS
+	bool "Compiler context-analysis warnings"
+	depends on CC_IS_CLANG && CLANG_VERSION >= 220000
+	# Branch profiling re-defines "if", which messes with the compiler's
+	# ability to analyze __cond_acquires(..), resulting in false positives.
+	depends on !TRACE_BRANCH_PROFILING
+	default y
+	help
+	  Context Analysis is a language extension, which enables statically
+	  checking that required contexts are active (or inactive) by acquiring
+	  and releasing user-definable "context locks".
+
+	  Clang's name of the feature is "Thread Safety Analysis". Requires
+	  Clang 22 or later.
+
+	  Produces warnings by default. Select CONFIG_WERROR if you wish to
+	  turn these warnings into errors.
+
+	  For more details, see Documentation/dev-tools/context-analysis.rst.
+
+config WARN_CONTEXT_ANALYSIS_ALL
+	bool "Enable context analysis for all source files"
+	depends on WARN_CONTEXT_ANALYSIS
+	depends on EXPERT && !COMPILE_TEST
+	help
+	  Enable tree-wide context analysis. This is likely to produce a
+	  large number of false positives - enable at your own risk.
+
+	  If unsure, say N.
+
 endmenu # "Compiler options"
 
 menu "Generic Kernel Debugging Instruments"
@@ -2813,6 +2843,20 @@ config LINEAR_RANGES_TEST
 
 	  If unsure, say N.
 
+config CONTEXT_ANALYSIS_TEST
+	bool "Compiler context-analysis warnings test"
+	depends on EXPERT
+	help
+	  This builds the test for compiler-based context analysis. The test
+	  does not add executable code to the kernel, but is meant to test that
+	  common patterns supported by the analysis do not result in false
+	  positive warnings.
+
+	  When adding support for new context locks, it is strongly recommended
+	  to add supported patterns to this test.
+
+	  If unsure, say N.
+
 config CMDLINE_KUNIT_TEST
 	tristate "KUnit test for cmdline API" if !KUNIT_ALL_TESTS
 	depends on KUNIT
@@ -50,6 +50,8 @@ lib-$(CONFIG_MIN_HEAP) += min_heap.o
 lib-y	+= kobject.o klist.o
 obj-y	+= lockref.o
 
+CONTEXT_ANALYSIS_rhashtable.o := y
+
 obj-y += bcd.o sort.o parser.o debug_locks.o random32.o \
 	 bust_spinlocks.o kasprintf.o bitmap.o scatterlist.o \
 	 list_sort.o uuid.o iov_iter.o clz_ctz.o \
@@ -250,6 +252,7 @@ obj-$(CONFIG_POLYNOMIAL) += polynomial.o
 # Prevent the compiler from calling builtins like memcmp() or bcmp() from this
 # file.
 CFLAGS_stackdepot.o += -fno-builtin
+CONTEXT_ANALYSIS_stackdepot.o := y
 obj-$(CONFIG_STACKDEPOT) += stackdepot.o
 KASAN_SANITIZE_stackdepot.o := n
 # In particular, instrumenting stackdepot.c with KMSAN will result in infinite
@@ -331,4 +334,7 @@ obj-$(CONFIG_GENERIC_LIB_DEVMEM_IS_ALLOWED) += devmem_is_allowed.o
 
 obj-$(CONFIG_FIRMWARE_TABLE) += fw_table.o
 
+CONTEXT_ANALYSIS_test_context-analysis.o := y
+obj-$(CONFIG_CONTEXT_ANALYSIS_TEST) += test_context-analysis.o
+
 subdir-$(CONFIG_FORTIFY_SOURCE) += test_fortify
@@ -18,7 +18,7 @@
  * because the spin-lock and the decrement must be
  * "atomic".
  */
-int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
+int atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
 {
 	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
 	if (atomic_add_unless(atomic, -1, 1))
@@ -32,7 +32,7 @@ int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
 	return 0;
 }
-EXPORT_SYMBOL(_atomic_dec_and_lock);
+EXPORT_SYMBOL(atomic_dec_and_lock);
 
 int _atomic_dec_and_lock_irqsave(atomic_t *atomic, spinlock_t *lock,
 				 unsigned long *flags)
@@ -50,7 +50,7 @@ int _atomic_dec_and_lock_irqsave(atomic_t *atomic, spinlock_t *lock,
 }
 EXPORT_SYMBOL(_atomic_dec_and_lock_irqsave);
 
-int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock)
+int atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock)
 {
 	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
 	if (atomic_add_unless(atomic, -1, 1))
@@ -63,7 +63,7 @@ int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock)
 	raw_spin_unlock(lock);
 	return 0;
 }
-EXPORT_SYMBOL(_atomic_dec_and_raw_lock);
+EXPORT_SYMBOL(atomic_dec_and_raw_lock);
 
 int _atomic_dec_and_raw_lock_irqsave(atomic_t *atomic, raw_spinlock_t *lock,
 				     unsigned long *flags)
@@ -105,7 +105,6 @@ EXPORT_SYMBOL(lockref_put_return);
  * @lockref: pointer to lockref structure
  * Return: 1 if count updated successfully or 0 if count <= 1 and lock taken
  */
-#undef lockref_put_or_lock
 bool lockref_put_or_lock(struct lockref *lockref)
 {
 	CMPXCHG_LOOP(
@@ -358,6 +358,7 @@ static int rhashtable_rehash_table(struct rhashtable *ht)
 static int rhashtable_rehash_alloc(struct rhashtable *ht,
 				   struct bucket_table *old_tbl,
 				   unsigned int size)
+	__must_hold(&ht->mutex)
 {
 	struct bucket_table *new_tbl;
 	int err;
@@ -392,6 +393,7 @@ static int rhashtable_rehash_alloc(struct rhashtable *ht,
  * bucket locks or concurrent RCU protected lookups and traversals.
  */
 static int rhashtable_shrink(struct rhashtable *ht)
+	__must_hold(&ht->mutex)
 {
 	struct bucket_table *old_tbl = rht_dereference(ht->tbl, ht);
 	unsigned int nelems = atomic_read(&ht->nelems);
@@ -724,7 +726,7 @@ EXPORT_SYMBOL_GPL(rhashtable_walk_exit);
  * resize events and always continue.
  */
 int rhashtable_walk_start_check(struct rhashtable_iter *iter)
-	__acquires(RCU)
+	__acquires_shared(RCU)
 {
 	struct rhashtable *ht = iter->ht;
 	bool rhlist = ht->rhlist;
@@ -940,7 +942,6 @@ EXPORT_SYMBOL_GPL(rhashtable_walk_peek);
  * hash table.
  */
 void rhashtable_walk_stop(struct rhashtable_iter *iter)
-	__releases(RCU)
 {
 	struct rhashtable *ht;
 	struct bucket_table *tbl = iter->walker.tbl;
@@ -61,18 +61,18 @@ static unsigned int stack_bucket_number_order;
 /* Hash mask for indexing the table. */
 static unsigned int stack_hash_mask;
 
+/* The lock must be held when performing pool or freelist modifications. */
+static DEFINE_RAW_SPINLOCK(pool_lock);
 /* Array of memory regions that store stack records. */
-static void **stack_pools;
+static void **stack_pools __pt_guarded_by(&pool_lock);
 /* Newly allocated pool that is not yet added to stack_pools. */
 static void *new_pool;
 /* Number of pools in stack_pools. */
 static int pools_num;
 /* Offset to the unused space in the currently used pool. */
-static size_t pool_offset = DEPOT_POOL_SIZE;
+static size_t pool_offset __guarded_by(&pool_lock) = DEPOT_POOL_SIZE;
 /* Freelist of stack records within stack_pools. */
-static LIST_HEAD(free_stacks);
+static __guarded_by(&pool_lock) LIST_HEAD(free_stacks);
-/* The lock must be held when performing pool or freelist modifications. */
-static DEFINE_RAW_SPINLOCK(pool_lock);
 
 /* Statistics counters for debugfs. */
 enum depot_counter_id {
@@ -291,6 +291,7 @@ EXPORT_SYMBOL_GPL(stack_depot_init);
  * Initializes new stack pool, and updates the list of pools.
  */
 static bool depot_init_pool(void **prealloc)
+	__must_hold(&pool_lock)
 {
 	lockdep_assert_held(&pool_lock);
 
@@ -338,6 +339,7 @@ static bool depot_init_pool(void **prealloc)
 
 /* Keeps the preallocated memory to be used for a new stack depot pool. */
 static void depot_keep_new_pool(void **prealloc)
+	__must_hold(&pool_lock)
 {
 	lockdep_assert_held(&pool_lock);
 
@@ -357,6 +359,7 @@ static void depot_keep_new_pool(void **prealloc)
  * the current pre-allocation.
 */
 static struct stack_record *depot_pop_free_pool(void **prealloc, size_t size)
+	__must_hold(&pool_lock)
 {
 	struct stack_record *stack;
 	void *current_pool;
@@ -391,6 +394,7 @@ static struct stack_record *depot_pop_free_pool(void **prealloc, size_t size)
 
 /* Try to find next free usable entry from the freelist. */
 static struct stack_record *depot_pop_free(void)
+	__must_hold(&pool_lock)
 {
 	struct stack_record *stack;
 
@@ -428,6 +432,7 @@ static inline size_t depot_stack_record_size(struct stack_record *s, unsigned in
 /* Allocates a new stack in a stack depot pool. */
 static struct stack_record *
 depot_alloc_stack(unsigned long *entries, unsigned int nr_entries, u32 hash, depot_flags_t flags, void **prealloc)
+	__must_hold(&pool_lock)
 {
 	struct stack_record *stack = NULL;
 	size_t record_size;
@@ -486,6 +491,7 @@ depot_alloc_stack(unsigned long *entries, unsigned int nr_entries, u32 hash, dep
 }
 
 static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
+	__must_not_hold(&pool_lock)
 {
 	const int pools_num_cached = READ_ONCE(pools_num);
 	union handle_parts parts = { .handle = handle };
@@ -502,7 +508,8 @@ static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
 		return NULL;
 	}
 
-	pool = stack_pools[pool_index];
+	/* @pool_index either valid, or user passed in corrupted value. */
+	pool = context_unsafe(stack_pools[pool_index]);
 	if (WARN_ON(!pool))
 		return NULL;
 
@@ -515,6 +522,7 @@ static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
 
 /* Links stack into the freelist. */
 static void depot_free_stack(struct stack_record *stack)
+	__must_not_hold(&pool_lock)
 {
 	unsigned long flags;
 
 
 lib/test_context-analysis.c (new file, 598 lines)
@@ -0,0 +1,598 @@
|
|||||||
|
// SPDX-License-Identifier: GPL-2.0-only
|
||||||
|
/*
|
||||||
|
* Compile-only tests for common patterns that should not generate false
|
||||||
|
* positive errors when compiled with Clang's context analysis.
|
||||||
|
*/
|
||||||
|
|
||||||
|
#include <linux/bit_spinlock.h>
|
||||||
|
#include <linux/build_bug.h>
|
||||||
|
#include <linux/local_lock.h>
|
||||||
|
#include <linux/mutex.h>
|
||||||
|
#include <linux/percpu.h>
|
||||||
|
#include <linux/rcupdate.h>
|
||||||
|
#include <linux/rwsem.h>
|
||||||
|
#include <linux/seqlock.h>
|
||||||
|
#include <linux/spinlock.h>
|
||||||
|
#include <linux/srcu.h>
|
||||||
|
#include <linux/ww_mutex.h>
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Test that helper macros work as expected.
|
||||||
|
*/
|
||||||
|
static void __used test_common_helpers(void)
|
||||||
|
{
|
||||||
|
BUILD_BUG_ON(context_unsafe(3) != 3); /* plain expression */
|
||||||
|
BUILD_BUG_ON(context_unsafe((void)2; 3) != 3); /* does not swallow semi-colon */
|
||||||
|
BUILD_BUG_ON(context_unsafe((void)2, 3) != 3); /* does not swallow commas */
|
||||||
|
context_unsafe(do { } while (0)); /* works with void statements */
|
||||||
|
}
|
||||||
|
|
||||||
|
#define TEST_SPINLOCK_COMMON(class, type, type_init, type_lock, type_unlock, type_trylock, op) \
|
||||||
|
struct test_##class##_data { \
|
||||||
|
type lock; \
|
||||||
|
int counter __guarded_by(&lock); \
|
||||||
|
int *pointer __pt_guarded_by(&lock); \
|
||||||
|
}; \
|
||||||
|
static void __used test_##class##_init(struct test_##class##_data *d) \
|
||||||
|
{ \
|
||||||
|
guard(type_init)(&d->lock); \
|
||||||
|
d->counter = 0; \
|
||||||
|
} \
|
||||||
|
static void __used test_##class(struct test_##class##_data *d) \
|
||||||
|
{ \
|
||||||
|
unsigned long flags; \
|
||||||
|
d->pointer++; \
|
||||||
|
type_lock(&d->lock); \
|
||||||
|
op(d->counter); \
|
||||||
|
op(*d->pointer); \
|
||||||
|
type_unlock(&d->lock); \
|
||||||
|
type_lock##_irq(&d->lock); \
|
||||||
|
op(d->counter); \
|
||||||
|
op(*d->pointer); \
|
||||||
|
type_unlock##_irq(&d->lock); \
|
||||||
|
type_lock##_bh(&d->lock); \
|
||||||
|
op(d->counter); \
|
||||||
|
op(*d->pointer); \
|
||||||
|
type_unlock##_bh(&d->lock); \
|
||||||
|
type_lock##_irqsave(&d->lock, flags); \
|
||||||
|
op(d->counter); \
|
||||||
|
op(*d->pointer); \
|
||||||
|
type_unlock##_irqrestore(&d->lock, flags); \
|
||||||
|
} \
|
||||||
|
static void __used test_##class##_trylock(struct test_##class##_data *d) \
|
||||||
|
{ \
|
||||||
|
if (type_trylock(&d->lock)) { \
|
||||||
|
op(d->counter); \
|
||||||
|
type_unlock(&d->lock); \
|
||||||
|
} \
|
||||||
|
} \
|
||||||
|
static void __used test_##class##_assert(struct test_##class##_data *d) \
|
||||||
|
{ \
|
||||||
|
lockdep_assert_held(&d->lock); \
|
||||||
|
op(d->counter); \
|
||||||
|
} \
|
||||||
|
static void __used test_##class##_guard(struct test_##class##_data *d) \
|
||||||
|
{ \
|
||||||
|
{ guard(class)(&d->lock); op(d->counter); } \
|
||||||
|
{ guard(class##_irq)(&d->lock); op(d->counter); } \
|
||||||
|
{ guard(class##_irqsave)(&d->lock); op(d->counter); } \
|
||||||
|
}
|
||||||
|
|
||||||
|
#define TEST_OP_RW(x) (x)++
|
||||||
|
#define TEST_OP_RO(x) ((void)(x))
|
||||||
|
|
||||||
|
TEST_SPINLOCK_COMMON(raw_spinlock,
|
||||||
|
raw_spinlock_t,
|
||||||
|
raw_spinlock_init,
|
||||||
|
raw_spin_lock,
|
||||||
|
raw_spin_unlock,
|
||||||
|
raw_spin_trylock,
|
||||||
|
TEST_OP_RW);
|
||||||
|
static void __used test_raw_spinlock_trylock_extra(struct test_raw_spinlock_data *d)
|
||||||
|
{
|
||||||
|
unsigned long flags;
|
||||||
|
|
||||||
|
data_race(d->counter++); /* no warning */
|
||||||
|
|
||||||
|
if (raw_spin_trylock_irq(&d->lock)) {
|
||||||
|
d->counter++;
|
||||||
|
raw_spin_unlock_irq(&d->lock);
|
||||||
|
}
|
||||||
|
if (raw_spin_trylock_irqsave(&d->lock, flags)) {
|
||||||
|
d->counter++;
|
||||||
|
raw_spin_unlock_irqrestore(&d->lock, flags);
|
||||||
|
}
|
||||||
|
scoped_cond_guard(raw_spinlock_try, return, &d->lock) {
|
||||||
|
d->counter++;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
TEST_SPINLOCK_COMMON(spinlock,
|
||||||
|
spinlock_t,
|
||||||
|
spinlock_init,
|
||||||
|
spin_lock,
|
||||||
|
spin_unlock,
|
||||||
|
spin_trylock,
|
||||||
|
TEST_OP_RW);
|
||||||
|
static void __used test_spinlock_trylock_extra(struct test_spinlock_data *d)
|
||||||
|
{
|
||||||
|
unsigned long flags;
|
||||||
|
|
||||||
|
if (spin_trylock_irq(&d->lock)) {
|
||||||
|
d->counter++;
|
||||||
|
spin_unlock_irq(&d->lock);
|
||||||
|
}
|
||||||
|
if (spin_trylock_irqsave(&d->lock, flags)) {
|
||||||
|
d->counter++;
|
||||||
|
spin_unlock_irqrestore(&d->lock, flags);
|
||||||
|
}
|
||||||
|
scoped_cond_guard(spinlock_try, return, &d->lock) {
|
||||||
|
d->counter++;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
TEST_SPINLOCK_COMMON(write_lock,
|
||||||
|
rwlock_t,
|
||||||
|
rwlock_init,
|
||||||
|
write_lock,
|
||||||
|
write_unlock,
|
||||||
|
write_trylock,
|
||||||
|
TEST_OP_RW);
|
||||||
|
static void __used test_write_trylock_extra(struct test_write_lock_data *d)
|
||||||
|
{
|
||||||
|
unsigned long flags;
|
||||||
|
|
||||||
|
if (write_trylock_irqsave(&d->lock, flags)) {
|
||||||
|
d->counter++;
|
||||||
|
write_unlock_irqrestore(&d->lock, flags);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
TEST_SPINLOCK_COMMON(read_lock,
|
||||||
|
rwlock_t,
|
||||||
|
rwlock_init,
|
||||||
|
read_lock,
|
||||||
|
read_unlock,
|
||||||
|
read_trylock,
|
||||||
|
TEST_OP_RO);
|
||||||
|
|
||||||
|
struct test_mutex_data {
|
||||||
|
struct mutex mtx;
|
||||||
|
int counter __guarded_by(&mtx);
|
||||||
|
};
|
||||||
|
|
||||||
|
static void __used test_mutex_init(struct test_mutex_data *d)
|
||||||
|
{
|
||||||
|
guard(mutex_init)(&d->mtx);
|
||||||
|
d->counter = 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
static void __used test_mutex_lock(struct test_mutex_data *d)
|
||||||
|
{
|
||||||
|
mutex_lock(&d->mtx);
|
||||||
|
d->counter++;
|
||||||
|
mutex_unlock(&d->mtx);
|
||||||
|
mutex_lock_io(&d->mtx);
|
||||||
|
d->counter++;
|
||||||
|
mutex_unlock(&d->mtx);
|
||||||
|
}
|
||||||
|
|
||||||
|
static void __used test_mutex_trylock(struct test_mutex_data *d, atomic_t *a)
|
||||||
|
{
|
||||||
|
if (!mutex_lock_interruptible(&d->mtx)) {
|
||||||
|
d->counter++;
|
||||||
|
mutex_unlock(&d->mtx);
|
||||||
|
}
|
||||||
|
if (!mutex_lock_killable(&d->mtx)) {
|
||||||
|
d->counter++;
|
||||||
|
mutex_unlock(&d->mtx);
|
||||||
|
}
|
||||||
|
if (mutex_trylock(&d->mtx)) {
|
||||||
|
d->counter++;
|
||||||
|
mutex_unlock(&d->mtx);
|
||||||
|
}
|
||||||
|
if (atomic_dec_and_mutex_lock(a, &d->mtx)) {
|
||||||
|
d->counter++;
|
||||||
|
mutex_unlock(&d->mtx);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
static void __used test_mutex_assert(struct test_mutex_data *d)
|
||||||
|
{
|
||||||
|
lockdep_assert_held(&d->mtx);
|
||||||
|
d->counter++;
|
||||||
|
}
|
||||||
|
|
||||||
|
static void __used test_mutex_guard(struct test_mutex_data *d)
|
||||||
|
{
|
||||||
|
guard(mutex)(&d->mtx);
|
||||||
|
d->counter++;
|
||||||
|
}
|
||||||
|
|
||||||
|
static void __used test_mutex_cond_guard(struct test_mutex_data *d)
|
||||||
|
{
|
||||||
|
scoped_cond_guard(mutex_try, return, &d->mtx) {
|
||||||
|
d->counter++;
|
||||||
|
}
|
||||||
|
scoped_cond_guard(mutex_intr, return, &d->mtx) {
|
||||||
|
d->counter++;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
struct test_seqlock_data {
	seqlock_t sl;
	int counter __guarded_by(&sl);
};

static void __used test_seqlock_init(struct test_seqlock_data *d)
{
	guard(seqlock_init)(&d->sl);
	d->counter = 0;
}

static void __used test_seqlock_reader(struct test_seqlock_data *d)
{
	unsigned int seq;

	do {
		seq = read_seqbegin(&d->sl);
		(void)d->counter;
	} while (read_seqretry(&d->sl, seq));
}

static void __used test_seqlock_writer(struct test_seqlock_data *d)
{
	unsigned long flags;

	write_seqlock(&d->sl);
	d->counter++;
	write_sequnlock(&d->sl);

	write_seqlock_irq(&d->sl);
	d->counter++;
	write_sequnlock_irq(&d->sl);

	write_seqlock_bh(&d->sl);
	d->counter++;
	write_sequnlock_bh(&d->sl);

	write_seqlock_irqsave(&d->sl, flags);
	d->counter++;
	write_sequnlock_irqrestore(&d->sl, flags);
}

static void __used test_seqlock_scoped(struct test_seqlock_data *d)
{
	scoped_seqlock_read (&d->sl, ss_lockless) {
		(void)d->counter;
	}
}

struct test_rwsem_data {
	struct rw_semaphore sem;
	int counter __guarded_by(&sem);
};

static void __used test_rwsem_init(struct test_rwsem_data *d)
{
	guard(rwsem_init)(&d->sem);
	d->counter = 0;
}

static void __used test_rwsem_reader(struct test_rwsem_data *d)
{
	down_read(&d->sem);
	(void)d->counter;
	up_read(&d->sem);

	if (down_read_trylock(&d->sem)) {
		(void)d->counter;
		up_read(&d->sem);
	}
}

static void __used test_rwsem_writer(struct test_rwsem_data *d)
{
	down_write(&d->sem);
	d->counter++;
	up_write(&d->sem);

	down_write(&d->sem);
	d->counter++;
	downgrade_write(&d->sem);
	(void)d->counter;
	up_read(&d->sem);

	if (down_write_trylock(&d->sem)) {
		d->counter++;
		up_write(&d->sem);
	}
}

static void __used test_rwsem_assert(struct test_rwsem_data *d)
{
	rwsem_assert_held_nolockdep(&d->sem);
	d->counter++;
}

static void __used test_rwsem_guard(struct test_rwsem_data *d)
{
	{ guard(rwsem_read)(&d->sem); (void)d->counter; }
	{ guard(rwsem_write)(&d->sem); d->counter++; }
}

static void __used test_rwsem_cond_guard(struct test_rwsem_data *d)
{
	scoped_cond_guard(rwsem_read_try, return, &d->sem) {
		(void)d->counter;
	}
	scoped_cond_guard(rwsem_write_try, return, &d->sem) {
		d->counter++;
	}
}

struct test_bit_spinlock_data {
	unsigned long bits;
	int counter __guarded_by(__bitlock(3, &bits));
};

static void __used test_bit_spin_lock(struct test_bit_spinlock_data *d)
{
	/*
	 * Note, the analysis seems to have false negatives, because it won't
	 * precisely recognize the bit of the fake __bitlock() token.
	 */
	bit_spin_lock(3, &d->bits);
	d->counter++;
	bit_spin_unlock(3, &d->bits);

	bit_spin_lock(3, &d->bits);
	d->counter++;
	__bit_spin_unlock(3, &d->bits);

	if (bit_spin_trylock(3, &d->bits)) {
		d->counter++;
		bit_spin_unlock(3, &d->bits);
	}
}

/*
 * Test that we can mark a variable guarded by RCU, and we can dereference and
 * write to the pointer with RCU's primitives.
 */
struct test_rcu_data {
	long __rcu_guarded *data;
};

static void __used test_rcu_guarded_reader(struct test_rcu_data *d)
{
	rcu_read_lock();
	(void)rcu_dereference(d->data);
	rcu_read_unlock();

	rcu_read_lock_bh();
	(void)rcu_dereference(d->data);
	rcu_read_unlock_bh();

	rcu_read_lock_sched();
	(void)rcu_dereference(d->data);
	rcu_read_unlock_sched();
}

static void __used test_rcu_guard(struct test_rcu_data *d)
{
	guard(rcu)();
	(void)rcu_dereference(d->data);
}

static void __used test_rcu_guarded_updater(struct test_rcu_data *d)
{
	rcu_assign_pointer(d->data, NULL);
	RCU_INIT_POINTER(d->data, NULL);
	(void)unrcu_pointer(d->data);
}

static void wants_rcu_held(void) __must_hold_shared(RCU) { }
static void wants_rcu_held_bh(void) __must_hold_shared(RCU_BH) { }
static void wants_rcu_held_sched(void) __must_hold_shared(RCU_SCHED) { }

static void __used test_rcu_lock_variants(void)
{
	rcu_read_lock();
	wants_rcu_held();
	rcu_read_unlock();

	rcu_read_lock_bh();
	wants_rcu_held_bh();
	rcu_read_unlock_bh();

	rcu_read_lock_sched();
	wants_rcu_held_sched();
	rcu_read_unlock_sched();
}

static void __used test_rcu_lock_reentrant(void)
{
	rcu_read_lock();
	rcu_read_lock();
	rcu_read_lock_bh();
	rcu_read_lock_bh();
	rcu_read_lock_sched();
	rcu_read_lock_sched();

	rcu_read_unlock_sched();
	rcu_read_unlock_sched();
	rcu_read_unlock_bh();
	rcu_read_unlock_bh();
	rcu_read_unlock();
	rcu_read_unlock();
}

static void __used test_rcu_assert_variants(void)
{
	lockdep_assert_in_rcu_read_lock();
	wants_rcu_held();

	lockdep_assert_in_rcu_read_lock_bh();
	wants_rcu_held_bh();

	lockdep_assert_in_rcu_read_lock_sched();
	wants_rcu_held_sched();
}

struct test_srcu_data {
	struct srcu_struct srcu;
	long __rcu_guarded *data;
};

static void __used test_srcu(struct test_srcu_data *d)
{
	init_srcu_struct(&d->srcu);

	int idx = srcu_read_lock(&d->srcu);
	long *data = srcu_dereference(d->data, &d->srcu);
	(void)data;
	srcu_read_unlock(&d->srcu, idx);

	rcu_assign_pointer(d->data, NULL);
}

static void __used test_srcu_guard(struct test_srcu_data *d)
{
	{ guard(srcu)(&d->srcu); (void)srcu_dereference(d->data, &d->srcu); }
	{ guard(srcu_fast)(&d->srcu); (void)srcu_dereference(d->data, &d->srcu); }
	{ guard(srcu_fast_notrace)(&d->srcu); (void)srcu_dereference(d->data, &d->srcu); }
}

struct test_local_lock_data {
	local_lock_t lock;
	int counter __guarded_by(&lock);
};

static DEFINE_PER_CPU(struct test_local_lock_data, test_local_lock_data) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void __used test_local_lock_init(struct test_local_lock_data *d)
{
	guard(local_lock_init)(&d->lock);
	d->counter = 0;
}

static void __used test_local_lock(void)
{
	unsigned long flags;

	local_lock(&test_local_lock_data.lock);
	this_cpu_add(test_local_lock_data.counter, 1);
	local_unlock(&test_local_lock_data.lock);

	local_lock_irq(&test_local_lock_data.lock);
	this_cpu_add(test_local_lock_data.counter, 1);
	local_unlock_irq(&test_local_lock_data.lock);

	local_lock_irqsave(&test_local_lock_data.lock, flags);
	this_cpu_add(test_local_lock_data.counter, 1);
	local_unlock_irqrestore(&test_local_lock_data.lock, flags);

	local_lock_nested_bh(&test_local_lock_data.lock);
	this_cpu_add(test_local_lock_data.counter, 1);
	local_unlock_nested_bh(&test_local_lock_data.lock);
}

static void __used test_local_lock_guard(void)
{
	{ guard(local_lock)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); }
	{ guard(local_lock_irq)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); }
	{ guard(local_lock_irqsave)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); }
	{ guard(local_lock_nested_bh)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); }
}

struct test_local_trylock_data {
	local_trylock_t lock;
	int counter __guarded_by(&lock);
};

static DEFINE_PER_CPU(struct test_local_trylock_data, test_local_trylock_data) = {
	.lock = INIT_LOCAL_TRYLOCK(lock),
};

static void __used test_local_trylock_init(struct test_local_trylock_data *d)
{
	guard(local_trylock_init)(&d->lock);
	d->counter = 0;
}

static void __used test_local_trylock(void)
{
	local_lock(&test_local_trylock_data.lock);
	this_cpu_add(test_local_trylock_data.counter, 1);
	local_unlock(&test_local_trylock_data.lock);

	if (local_trylock(&test_local_trylock_data.lock)) {
		this_cpu_add(test_local_trylock_data.counter, 1);
		local_unlock(&test_local_trylock_data.lock);
	}
}

static DEFINE_WD_CLASS(ww_class);

struct test_ww_mutex_data {
	struct ww_mutex mtx;
	int counter __guarded_by(&mtx);
};

static void __used test_ww_mutex_lock_noctx(struct test_ww_mutex_data *d)
{
	if (!ww_mutex_lock(&d->mtx, NULL)) {
		d->counter++;
		ww_mutex_unlock(&d->mtx);
	}

	if (!ww_mutex_lock_interruptible(&d->mtx, NULL)) {
		d->counter++;
		ww_mutex_unlock(&d->mtx);
	}

	if (ww_mutex_trylock(&d->mtx, NULL)) {
		d->counter++;
		ww_mutex_unlock(&d->mtx);
	}

	ww_mutex_lock_slow(&d->mtx, NULL);
	d->counter++;
	ww_mutex_unlock(&d->mtx);

	ww_mutex_destroy(&d->mtx);
}

static void __used test_ww_mutex_lock_ctx(struct test_ww_mutex_data *d)
{
	struct ww_acquire_ctx ctx;

	ww_acquire_init(&ctx, &ww_class);

	if (!ww_mutex_lock(&d->mtx, &ctx)) {
		d->counter++;
		ww_mutex_unlock(&d->mtx);
	}

	if (!ww_mutex_lock_interruptible(&d->mtx, &ctx)) {
		d->counter++;
		ww_mutex_unlock(&d->mtx);
	}

	if (ww_mutex_trylock(&d->mtx, &ctx)) {
		d->counter++;
		ww_mutex_unlock(&d->mtx);
	}

	ww_mutex_lock_slow(&d->mtx, &ctx);
	d->counter++;
	ww_mutex_unlock(&d->mtx);

	ww_acquire_done(&ctx);
	ww_acquire_fini(&ctx);

	ww_mutex_destroy(&d->mtx);
}
--- a/mm/kfence/Makefile
+++ b/mm/kfence/Makefile
@@ -1,5 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 
+CONTEXT_ANALYSIS := y
+
 obj-y := core.o report.o
 
 CFLAGS_kfence_test.o := -fno-omit-frame-pointer -fno-optimize-sibling-calls
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -133,8 +133,8 @@ struct kfence_metadata *kfence_metadata __read_mostly;
 static struct kfence_metadata *kfence_metadata_init __read_mostly;
 
 /* Freelist with available objects. */
-static struct list_head kfence_freelist = LIST_HEAD_INIT(kfence_freelist);
-static DEFINE_RAW_SPINLOCK(kfence_freelist_lock); /* Lock protecting freelist. */
+DEFINE_RAW_SPINLOCK(kfence_freelist_lock); /* Lock protecting freelist. */
+static struct list_head kfence_freelist __guarded_by(&kfence_freelist_lock) = LIST_HEAD_INIT(kfence_freelist);
 
 /*
  * The static key to set up a KFENCE allocation; or if static keys are not used
@@ -254,6 +254,7 @@ static bool kfence_unprotect(unsigned long addr)
 }
 
 static inline unsigned long metadata_to_pageaddr(const struct kfence_metadata *meta)
+	__must_hold(&meta->lock)
 {
 	unsigned long offset = (meta - kfence_metadata + 1) * PAGE_SIZE * 2;
 	unsigned long pageaddr = (unsigned long)&__kfence_pool[offset];
@@ -289,6 +290,7 @@ static inline bool kfence_obj_allocated(const struct kfence_metadata *meta)
 static noinline void
 metadata_update_state(struct kfence_metadata *meta, enum kfence_object_state next,
 		      unsigned long *stack_entries, size_t num_stack_entries)
+	__must_hold(&meta->lock)
 {
 	struct kfence_track *track =
 		next == KFENCE_OBJECT_ALLOCATED ? &meta->alloc_track : &meta->free_track;
@@ -486,7 +488,7 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 	alloc_covered_add(alloc_stack_hash, 1);
 
 	/* Set required slab fields. */
-	slab = virt_to_slab((void *)meta->addr);
+	slab = virt_to_slab(addr);
 	slab->slab_cache = cache;
 	slab->objects = 1;
 
@@ -515,6 +517,7 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool zombie)
 {
 	struct kcsan_scoped_access assert_page_exclusive;
+	u32 alloc_stack_hash;
 	unsigned long flags;
 	bool init;
 
@@ -547,9 +550,10 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z
 	/* Mark the object as freed. */
 	metadata_update_state(meta, KFENCE_OBJECT_FREED, NULL, 0);
 	init = slab_want_init_on_free(meta->cache);
+	alloc_stack_hash = meta->alloc_stack_hash;
 	raw_spin_unlock_irqrestore(&meta->lock, flags);
 
-	alloc_covered_add(meta->alloc_stack_hash, -1);
+	alloc_covered_add(alloc_stack_hash, -1);
 
 	/* Check canary bytes for memory corruption. */
 	check_canary(meta);
@@ -594,6 +598,7 @@ static void rcu_guarded_free(struct rcu_head *h)
  * which partial initialization succeeded.
  */
 static unsigned long kfence_init_pool(void)
+	__context_unsafe(/* constructor */)
 {
 	unsigned long addr, start_pfn;
 	int i, rand;
@@ -1242,6 +1247,7 @@ bool kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs
 {
 	const int page_index = (addr - (unsigned long)__kfence_pool) / PAGE_SIZE;
 	struct kfence_metadata *to_report = NULL;
+	unsigned long unprotected_page = 0;
 	enum kfence_error_type error_type;
 	unsigned long flags;
 
@@ -1275,9 +1281,8 @@ bool kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs
 		if (!to_report)
 			goto out;
 
-		raw_spin_lock_irqsave(&to_report->lock, flags);
-		to_report->unprotected_page = addr;
 		error_type = KFENCE_ERROR_OOB;
+		unprotected_page = addr;
 
 		/*
 		 * If the object was freed before we took the lock we can still
@@ -1289,7 +1294,6 @@ bool kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs
 		if (!to_report)
 			goto out;
 
-		raw_spin_lock_irqsave(&to_report->lock, flags);
 		error_type = KFENCE_ERROR_UAF;
 		/*
 		 * We may race with __kfence_alloc(), and it is possible that a
@@ -1301,6 +1305,8 @@ bool kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs
 
 out:
 	if (to_report) {
+		raw_spin_lock_irqsave(&to_report->lock, flags);
+		to_report->unprotected_page = unprotected_page;
 		kfence_report_error(addr, is_write, regs, to_report, error_type);
 		raw_spin_unlock_irqrestore(&to_report->lock, flags);
 	} else {
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -34,6 +34,8 @@
 /* Maximum stack depth for reports. */
 #define KFENCE_STACK_DEPTH 64
 
+extern raw_spinlock_t kfence_freelist_lock;
+
 /* KFENCE object states. */
 enum kfence_object_state {
 	KFENCE_OBJECT_UNUSED,		/* Object is unused. */
@@ -53,7 +55,7 @@ struct kfence_track {
 
 /* KFENCE metadata per guarded allocation. */
 struct kfence_metadata {
-	struct list_head list;		/* Freelist node; access under kfence_freelist_lock. */
+	struct list_head list __guarded_by(&kfence_freelist_lock);	/* Freelist node. */
 	struct rcu_head rcu_head;	/* For delayed freeing. */
 
 	/*
@@ -91,13 +93,13 @@ struct kfence_metadata {
 	 * In case of an invalid access, the page that was unprotected; we
 	 * optimistically only store one address.
 	 */
-	unsigned long unprotected_page;
+	unsigned long unprotected_page __guarded_by(&lock);
 
 	/* Allocation and free stack information. */
-	struct kfence_track alloc_track;
-	struct kfence_track free_track;
+	struct kfence_track alloc_track __guarded_by(&lock);
+	struct kfence_track free_track __guarded_by(&lock);
 	/* For updating alloc_covered on frees. */
-	u32 alloc_stack_hash;
+	u32 alloc_stack_hash __guarded_by(&lock);
 #ifdef CONFIG_MEMCG
 	struct slabobj_ext obj_exts;
 #endif
@@ -141,6 +143,6 @@ enum kfence_error_type {
 void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *regs,
 			 const struct kfence_metadata *meta, enum kfence_error_type type);
 
-void kfence_print_object(struct seq_file *seq, const struct kfence_metadata *meta);
+void kfence_print_object(struct seq_file *seq, const struct kfence_metadata *meta) __must_hold(&meta->lock);
 
 #endif /* MM_KFENCE_KFENCE_H */
--- a/mm/kfence/report.c
+++ b/mm/kfence/report.c
@@ -106,6 +106,7 @@ found:
 
 static void kfence_print_stack(struct seq_file *seq, const struct kfence_metadata *meta,
 			       bool show_alloc)
+	__must_hold(&meta->lock)
 {
 	const struct kfence_track *track = show_alloc ? &meta->alloc_track : &meta->free_track;
 	u64 ts_sec = track->ts_nsec;
@@ -207,8 +208,6 @@ void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *r
 	if (WARN_ON(type != KFENCE_ERROR_INVALID && !meta))
 		return;
 
-	if (meta)
-		lockdep_assert_held(&meta->lock);
 	/*
 	 * Because we may generate reports in printk-unfriendly parts of the
 	 * kernel, such as scheduler code, the use of printk() could deadlock.
@@ -263,6 +262,7 @@ void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *r
 	stack_trace_print(stack_entries + skipnr, num_stack_entries - skipnr, 0);
 
 	if (meta) {
+		lockdep_assert_held(&meta->lock);
 		pr_err("\n");
 		kfence_print_object(NULL, meta);
 	}
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2213,8 +2213,8 @@ static pmd_t *walk_to_pmd(struct mm_struct *mm, unsigned long addr)
 	return pmd;
 }
 
-pte_t *__get_locked_pte(struct mm_struct *mm, unsigned long addr,
-			spinlock_t **ptl)
+pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr,
+		      spinlock_t **ptl)
 {
 	pmd_t *pmd = walk_to_pmd(mm, addr);
 
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -280,7 +280,7 @@ static unsigned long pmdp_get_lockless_start(void) { return 0; }
 static void pmdp_get_lockless_end(unsigned long irqflags) { }
 #endif
 
-pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
+pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
 {
 	unsigned long irqflags;
 	pmd_t pmdval;
@@ -332,13 +332,12 @@ pte_t *pte_offset_map_rw_nolock(struct mm_struct *mm, pmd_t *pmd,
 }
 
 /*
- * pte_offset_map_lock(mm, pmd, addr, ptlp), and its internal implementation
- * __pte_offset_map_lock() below, is usually called with the pmd pointer for
- * addr, reached by walking down the mm's pgd, p4d, pud for addr: either while
- * holding mmap_lock or vma lock for read or for write; or in truncate or rmap
- * context, while holding file's i_mmap_lock or anon_vma lock for read (or for
- * write). In a few cases, it may be used with pmd pointing to a pmd_t already
- * copied to or constructed on the stack.
+ * pte_offset_map_lock(mm, pmd, addr, ptlp) is usually called with the pmd
+ * pointer for addr, reached by walking down the mm's pgd, p4d, pud for addr:
+ * either while holding mmap_lock or vma lock for read or for write; or in
+ * truncate or rmap context, while holding file's i_mmap_lock or anon_vma lock
+ * for read (or for write). In a few cases, it may be used with pmd pointing to
+ * a pmd_t already copied to or constructed on the stack.
  *
  * When successful, it returns the pte pointer for addr, with its page table
  * kmapped if necessary (when CONFIG_HIGHPTE), and locked against concurrent
@@ -389,8 +388,8 @@ pte_t *pte_offset_map_rw_nolock(struct mm_struct *mm, pmd_t *pmd,
  * table, and may not use RCU at all: "outsiders" like khugepaged should avoid
  * pte_offset_map() and co once the vma is detached from mm or mm_users is zero.
  */
-pte_t *__pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
-			     unsigned long addr, spinlock_t **ptlp)
+pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
+			   unsigned long addr, spinlock_t **ptlp)
 {
 	spinlock_t *ptl;
 	pmd_t pmdval;
--- a/net/ipv4/tcp_sigpool.c
+++ b/net/ipv4/tcp_sigpool.c
@@ -257,7 +257,7 @@ void tcp_sigpool_get(unsigned int id)
 }
 EXPORT_SYMBOL_GPL(tcp_sigpool_get);
 
-int tcp_sigpool_start(unsigned int id, struct tcp_sigpool *c) __cond_acquires(RCU_BH)
+int tcp_sigpool_start(unsigned int id, struct tcp_sigpool *c) __cond_acquires(0, RCU_BH)
 {
 	struct crypto_ahash *hash;
 
--- a/rust/helpers/atomic.c
+++ b/rust/helpers/atomic.c
@@ -11,11 +11,6 @@
 
 #include <linux/atomic.h>
 
-// TODO: Remove this after INLINE_HELPERS support is added.
-#ifndef __rust_helper
-#define __rust_helper
-#endif
-
 __rust_helper int
 rust_helper_atomic_read(const atomic_t *v)
 {
@@ -1037,4 +1032,4 @@ rust_helper_atomic64_dec_if_positive(atomic64_t *v)
 }
 
 #endif /* _RUST_ATOMIC_API_H */
-// 615a0e0c98b5973a47fe4fa65e92935051ca00ed
+// e4edb6174dd42a265284958f00a7cea7ddb464b1
--- /dev/null
+++ b/rust/helpers/atomic_ext.c
@@ -0,0 +1,139 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <asm/barrier.h>
+#include <asm/rwonce.h>
+#include <linux/atomic.h>
+
+__rust_helper s8 rust_helper_atomic_i8_read(s8 *ptr)
+{
+	return READ_ONCE(*ptr);
+}
+
+__rust_helper s8 rust_helper_atomic_i8_read_acquire(s8 *ptr)
+{
+	return smp_load_acquire(ptr);
+}
+
+__rust_helper s16 rust_helper_atomic_i16_read(s16 *ptr)
+{
+	return READ_ONCE(*ptr);
+}
+
+__rust_helper s16 rust_helper_atomic_i16_read_acquire(s16 *ptr)
+{
+	return smp_load_acquire(ptr);
+}
+
+__rust_helper void rust_helper_atomic_i8_set(s8 *ptr, s8 val)
+{
+	WRITE_ONCE(*ptr, val);
+}
+
+__rust_helper void rust_helper_atomic_i8_set_release(s8 *ptr, s8 val)
+{
+	smp_store_release(ptr, val);
+}
+
+__rust_helper void rust_helper_atomic_i16_set(s16 *ptr, s16 val)
+{
+	WRITE_ONCE(*ptr, val);
+}
+
+__rust_helper void rust_helper_atomic_i16_set_release(s16 *ptr, s16 val)
+{
+	smp_store_release(ptr, val);
+}
+
+/*
+ * xchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
+ * architecture providing xchg() support for i8 and i16.
+ *
+ * The architectures that currently support Rust (x86_64, armv7,
+ * arm64, riscv, and loongarch) satisfy these requirements.
+ */
+__rust_helper s8 rust_helper_atomic_i8_xchg(s8 *ptr, s8 new)
+{
+	return xchg(ptr, new);
+}
+
+__rust_helper s16 rust_helper_atomic_i16_xchg(s16 *ptr, s16 new)
+{
+	return xchg(ptr, new);
+}
+
+__rust_helper s8 rust_helper_atomic_i8_xchg_acquire(s8 *ptr, s8 new)
+{
+	return xchg_acquire(ptr, new);
+}
+
+__rust_helper s16 rust_helper_atomic_i16_xchg_acquire(s16 *ptr, s16 new)
+{
+	return xchg_acquire(ptr, new);
+}
+
+__rust_helper s8 rust_helper_atomic_i8_xchg_release(s8 *ptr, s8 new)
+{
+	return xchg_release(ptr, new);
+}
+
+__rust_helper s16 rust_helper_atomic_i16_xchg_release(s16 *ptr, s16 new)
+{
+	return xchg_release(ptr, new);
+}
+
+__rust_helper s8 rust_helper_atomic_i8_xchg_relaxed(s8 *ptr, s8 new)
+{
+	return xchg_relaxed(ptr, new);
+}
+
+__rust_helper s16 rust_helper_atomic_i16_xchg_relaxed(s16 *ptr, s16 new)
+{
+	return xchg_relaxed(ptr, new);
+}
+
+/*
+ * try_cmpxchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
+ * architecture providing try_cmpxchg() support for i8 and i16.
+ *
+ * The architectures that currently support Rust (x86_64, armv7,
+ * arm64, riscv, and loongarch) satisfy these requirements.
+ */
+__rust_helper bool rust_helper_atomic_i8_try_cmpxchg(s8 *ptr, s8 *old, s8 new)
+{
+	return try_cmpxchg(ptr, old, new);
+}
+
+__rust_helper bool rust_helper_atomic_i16_try_cmpxchg(s16 *ptr, s16 *old, s16 new)
+{
+	return try_cmpxchg(ptr, old, new);
+}
+
+__rust_helper bool rust_helper_atomic_i8_try_cmpxchg_acquire(s8 *ptr, s8 *old, s8 new)
+{
+	return try_cmpxchg_acquire(ptr, old, new);
+}
+
+__rust_helper bool rust_helper_atomic_i16_try_cmpxchg_acquire(s16 *ptr, s16 *old, s16 new)
+{
+	return try_cmpxchg_acquire(ptr, old, new);
+}
+
+__rust_helper bool rust_helper_atomic_i8_try_cmpxchg_release(s8 *ptr, s8 *old, s8 new)
+{
+	return try_cmpxchg_release(ptr, old, new);
+}
+
+__rust_helper bool rust_helper_atomic_i16_try_cmpxchg_release(s16 *ptr, s16 *old, s16 new)
+{
+	return try_cmpxchg_release(ptr, old, new);
+}
+
+__rust_helper bool rust_helper_atomic_i8_try_cmpxchg_relaxed(s8 *ptr, s8 *old, s8 new)
+{
+	return try_cmpxchg_relaxed(ptr, old, new);
+}
+
+__rust_helper bool rust_helper_atomic_i16_try_cmpxchg_relaxed(s16 *ptr, s16 *old, s16 new)
+{
+	return try_cmpxchg_relaxed(ptr, old, new);
+}