/linux/Documentation/locking/
preempt-locking.rst
    35: protect these situations by disabling preemption around them.
    37: You can also use put_cpu() and get_cpu(), which will disable preemption.
    44: Under preemption, the state of the CPU must be protected. This is arch-
    51: preemption must be disabled around such regions.
    54: kernel_fpu_begin and kernel_fpu_end will disable and enable preemption.
    72: Data protection under preemption is achieved by disabling preemption for the
    86: preemption is not enabled.
    125: Preventing preemption using interrupt disabling
    132: in doubt, rely on locking or explicit preemption disabling.
    137: These may be used to protect from preemption, however, on exit, if preemption
    [all …]
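The preempt-locking.rst hits above describe the basic pattern: disable preemption around any per-CPU state that must not be touched across a migration. A minimal userspace sketch of that pattern, under stated assumptions: the preempt_disable()/preempt_enable() and percpu_stat below are illustrative stand-ins defined here, not the real kernel symbols.

```c
#include <assert.h>

/* Userspace stand-ins: the real kernel primitives manipulate the
 * task's preempt count; a plain counter models that bookkeeping here. */
static int preempt_count;

void preempt_disable(void) { preempt_count++; }
void preempt_enable(void)  { assert(preempt_count > 0); preempt_count--; }

/* Hypothetical per-CPU statistic, protected only by disabled preemption. */
unsigned long percpu_stat;

unsigned long bump_percpu_stat(void)
{
    preempt_disable();               /* no migration or preemption in this span */
    unsigned long v = ++percpu_stat;
    preempt_enable();                /* preemptible again */
    return v;
}
```

In the kernel, get_cpu()/put_cpu() bundle the same disable/enable step with fetching the current CPU number.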
|
locktypes.rst
    60: mechanisms, disabling preemption or interrupts are pure CPU local
    103: PI has limitations on non-PREEMPT_RT kernels due to preemption and
    106: PI clearly cannot preempt preemption-disabled or interrupt-disabled
    162: by disabling preemption or interrupts.
    220: preemption or interrupts is required, for example, to safely access
    247: Non-PREEMPT_RT kernels disable preemption to get this effect.
    250: preemption disabled. The lock disables softirq handlers and also
    251: prevents reentrancy due to task preemption.
    433: preemption. The following substitution works on both kernels::
    476: Acquiring a raw_spinlock_t disables preemption and possibly also
    [all …]
|
seqlock.rst
    47: preemption, preemption must be explicitly disabled before entering the
    72: /* Serialized context with disabled preemption */
    107: For lock types which do not implicitly disable preemption, preemption
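The seqlock.rst hits refer to the sequence-count read/retry protocol, whose writers must run in a serialized context with preemption disabled. A single-threaded userspace model of just the retry logic (seq, data_a/data_b, write_pair(), and read_pair() are illustrative names; real kernel seqcounts add the required memory barriers):

```c
/* Writer side: bump the sequence to odd before updating, even after. */
unsigned seq;
int data_a, data_b;

void write_pair(int a, int b)
{
    seq++;                       /* odd: write in progress */
    data_a = a;
    data_b = b;
    seq++;                       /* even: write complete */
}

/* Reader side: snapshot the sequence, copy the data, retry on change. */
void read_pair(int *a, int *b)
{
    unsigned start;
    do {
        while ((start = seq) & 1)
            ;                    /* writer active, wait for even */
        *a = data_a;
        *b = data_b;
    } while (seq != start);      /* a writer slipped in: retry */
}
```

The reader takes no lock at all, which is why the writer must not be preempted mid-update: a reader on the same CPU could otherwise spin forever on an odd sequence.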
|
hwspinlock.rst
    95: Upon a successful return from this function, preemption is disabled so
    111: Upon a successful return from this function, preemption and the local
    127: Upon a successful return from this function, preemption is disabled,
    178: Upon a successful return from this function, preemption is disabled so
    195: Upon a successful return from this function, preemption and the local
    211: Upon a successful return from this function, preemption is disabled,
    268: Upon a successful return from this function, preemption and local
    280: Upon a successful return from this function, preemption is reenabled,
|
ww-mutex-design.rst
    53: running transaction. Note that this is not the same as process preemption. A
    350: The Wound-Wait preemption is implemented with a lazy-preemption scheme:
    354: wounded status and retries. A great benefit of implementing preemption in
|
/linux/kernel/
Kconfig.preempt
    22: This is the traditional Linux preemption model, geared towards
    38: "explicit preemption points" to the kernel code. These new
    39: preemption points have been selected to reduce the maximum
    61: otherwise not be about to reach a natural preemption point.
    102: This option allows to define the preemption model on the kernel
    103: command line parameter and thus override the default preemption
|
/linux/Documentation/core-api/
local_ops.rst
    42: making sure that we modify it from within a preemption safe context. It is
    76: preemption already disabled. I suggest, however, to explicitly
    77: disable preemption anyway to make sure it will still work correctly on
    104: local atomic operations: it makes sure that preemption is disabled around write
    110: If you are already in a preemption-safe context, you can use
    161: * preemptible context (it disables preemption) :
|
this_cpu_ops.rst
    20: necessary to disable preemption or interrupts to ensure that the
    44: The following this_cpu() operations with implied preemption protection
    46: preemption and interrupts::
    111: reserved for a specific processor. Without disabling preemption in the
    143: preemption has been disabled. The pointer is then used to
    144: access local per cpu data in a critical section. When preemption
    231: preemption. If a per cpu variable is not used in an interrupt context
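The this_cpu_ops.rst hits contrast single-step this_cpu() operations, which carry implied preemption protection, with pointer-based per-CPU access, which requires preemption to stay disabled across the whole critical section. A userspace model of that distinction (NR_CPUS, counters[], current_cpu, and both functions are illustrative stand-ins; current_cpu fakes smp_processor_id()):

```c
enum { NR_CPUS = 4 };
unsigned long counters[NR_CPUS];   /* stand-in for a per-CPU variable */
int current_cpu;                   /* stand-in for smp_processor_id() */

/* this_cpu-style op: locating this CPU's slot and updating it happen
 * as one step, so there is no window for a migration to intervene. */
void this_cpu_inc_counter(void)
{
    counters[current_cpu]++;
}

/* Pointer-based access: in the kernel, the span from taking the pointer
 * to the final dereference must run with preemption disabled, or a
 * migration in between leaves p pointing at another CPU's slot. */
unsigned long read_my_counter(void)
{
    unsigned long *p = &counters[current_cpu];
    return *p;
}
```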
|
/linux/Documentation/RCU/
NMI-RCU.rst
    45: The do_nmi() function processes each NMI. It first disables preemption
    50: preemption is restored.
    95: CPUs complete any preemption-disabled segments of code that they were
    97: Since NMI handlers disable preemption, synchronize_rcu() is guaranteed
|
/linux/tools/lib/traceevent/Documentation/
libtraceevent-record_parse.txt
    41: The _tep_data_preempt_count()_ function gets the preemption count from the
    64: preemption count.
    95: /* Got the preemption count */
|
libtraceevent-event_print.txt
    33: current context, and preemption count.
    53: Field 4 is the preemption count.
|
/linux/Documentation/virt/kvm/devices/
arm-vgic.rst
    99: maximum possible 128 preemption levels. The semantics of the register
    100: indicate if any interrupts in a given preemption level are in the active
    103: Thus, preemption level X has one or more active interrupts if and only if:
    107: Bits for undefined preemption levels are RAZ/WI.
|
/linux/arch/arc/kernel/
entry-compact.S
    152: ; if L2 IRQ interrupted a L1 ISR, disable preemption
    157: ; -preemption off IRQ, user task in syscall picked to run
    172: ; bump thread_info->preempt_count (Disable preemption)
    367: ; decrement thread_info->preempt_count (re-enable preemption)
|
entry.S
    295: ; --- (Slow Path #1) task preemption ---
|
/linux/Documentation/translations/zh_CN/core-api/
local_ops.rst
    155: * preemptible context (it disables preemption) :
|
/linux/Documentation/arm/
kernel_mode_neon.rst
    14: preemption disabled
    58: * NEON/VFP code is executed with preemption disabled.
|
/linux/include/rdma/
opa_port_info.h
    321: } preemption;
|
/linux/Documentation/gpu/rfc/
i915_scheduler.rst
    43: * Features like timeslicing / preemption / virtual engines would
    56: preemption, timeslicing, etc... so it is possible for jobs to
|
/linux/Documentation/driver-api/
io-mapping.rst
    53: io_mapping_map_atomic_wc() has the side effect of disabling preemption and
|
/linux/Documentation/kernel-hacking/
locking.rst
    16: With the wide availability of HyperThreading, and preemption in the
    130: is set, then spinlocks simply disable preemption, which is sufficient to
    131: prevent any races. For most purposes, we can think of preemption as
    1132: these simply disable preemption so the reader won't go to sleep while
    1235: Now, because the 'read lock' in RCU is simply disabling preemption, a
    1236: caller which always has preemption disabled between calling
    1298: preemption disabled. This also means you need to be in user context:
    1399: preemption
    1405: preemption, even on UP.
|
/linux/kernel/rcu/
Kconfig
    84: only voluntary context switch (not preemption!), idle, and
    91: only context switch (including preemption) and user-mode
|
/linux/Documentation/devicetree/bindings/net/dsa/
ocelot.txt
    23: TSN frame preemption.
|
/linux/drivers/gpu/drm/i915/
Kconfig.profile
    59: How long to wait (in milliseconds) for a preemption event to occur
|
/linux/Documentation/trace/
ftrace-uses.rst
    135: ftrace_test_recursion_trylock() will disable preemption, and the
    194: with preemption disabled. If it is not set, then it is possible
|
ftrace.rst
    29: disabled and enabled, as well as for preemption and from a time
    757: time for which preemption is disabled.
    762: records the largest time for which irqs and/or preemption
    1544: When preemption is disabled, we may be able to receive
    1546: priority task must wait for preemption to be enabled again
    1551: which preemption was disabled. The control of preemptoff tracer
    1676: preemption disabled for the longest times is helpful. But
    1677: sometimes we would like to know when either preemption and/or
    1699: preemption is disabled. This total time is the time that we can
    1763: within the preemption points. We do see that it started with
    [all …]
|