/linux/Documentation/driver-api/thermal/cpu-idle-cooling.rst

    25: because of the OPP density, we can only choose an OPP with a power
    35: If we can remove the static and the dynamic leakage for a specific
    38: injection period, we can mitigate the temperature by modulating the
    49: idle state target residency, we lead to dropping the static and the
   132: - It is less than or equal to the latency we tolerate when the
   143: When we reach the thermal trip point, we have to sustain a specified
   144: power for a specific temperature but at this time we consume::
   151: because we don’t want to change the OPP. We can group the
   172: the idle injection we need. Alternatively if we have the idle
   173: injection duration, we can compute the running duration with::
   [all …]

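The computation hinted at by lines 172-173 is ordinary duty-cycle algebra. A toy sketch, assuming a target idle ratio in percent (this is the generic duty-cycle identity, not necessarily the document's exact formula)::

    #include <stdio.h>

    /* idle / (idle + running) = ratio/100
     * => running = idle * (100 - ratio) / ratio */
    static unsigned int running_duration_us(unsigned int idle_us,
                                            unsigned int idle_ratio_pct)
    {
            return idle_us * (100 - idle_ratio_pct) / idle_ratio_pct;
    }

    int main(void)
    {
            /* e.g. 10ms of idle injection at a 33% idle ratio */
            printf("%u us running\n", running_duration_us(10000, 33));
            return 0;
    }
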
/linux/Documentation/devicetree/bindings/pinctrl/sprd,pinctrl.txt

    12: to choose one function (like: UART0) for which system, since we
    15: There are too many different configurations that we can not list all
    16: of them, so we can not make every Spreadtrum-special configuration
    18: global configuration in future. Then we add one "sprd,control" to
    19: set these various global control configurations, and we need to use
    22: Moreover we recognise every field comprising one bit or several
    23: bits in one global control register as one pin, thus we should
    32: Now we have 4 systems for sleep mode on SC9860 SoC: AP system,
    52: kernel on SC9860 platform), then we can not select "sleep" state
    53: when the PUBCP system goes into deep sleep mode. Thus we introduce
   [all …]

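Lines 22-23 treat each bit-field of a global control register as one virtual pin. A minimal sketch of that idea, with illustrative names rather than the driver's actual structures::

    #include <stdint.h>

    /* one "pin" = one bit-field in a global control register */
    struct sprd_global_pin {
            uint32_t reg_off;       /* byte offset of the control register */
            uint32_t shift;         /* first bit of the field */
            uint32_t width;         /* number of bits in the field */
    };

    static void pin_write(volatile uint32_t *base,
                          const struct sprd_global_pin *p, uint32_t val)
    {
            uint32_t mask = ((1u << p->width) - 1) << p->shift;
            uint32_t v = base[p->reg_off / 4];

            /* plain read-modify-write of the field */
            v = (v & ~mask) | ((val << p->shift) & mask);
            base[p->reg_off / 4] = v;
    }
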
/linux/Documentation/x86/entry_64.rst

    58: so. If we mess that up even slightly, we crash.
    60: So when we have a secondary entry, already in kernel mode, we *must
    61: not* use SWAPGS blindly - nor must we forget doing a SWAPGS when it's
    87: If we are at an interrupt or user-trap/gate-alike boundary then we can
    89: whether SWAPGS was already done: if we see that we are a secondary
    90: entry interrupting kernel mode execution, then we know that the GS
    91: base has already been switched. If it says that we interrupted
    92: user-space execution then we must do the SWAPGS.
    94: But if we are in an NMI/MCE/DEBUG/whatever super-atomic entry context,
    96: stack but before we executed SWAPGS, then the only safe way to check
   [all …]

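A rough C rendering of the boundary check from lines 87-92; the real logic is x86 entry assembly, and pt_regs/native_swapgs() are kernel-internal names used here only for illustration::

    /* The saved CS selector's low two bits (the RPL) tell us whether
     * we interrupted user (3) or kernel (0) mode execution. */
    static void conditional_swapgs(const struct pt_regs *regs)
    {
            if ((regs->cs & 3) == 3)
                    native_swapgs();  /* came from user: switch GS base */
            /* else: secondary kernel entry, GS base is already per-CPU */
    }
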
/linux/Documentation/filesystems/xfs-delayed-logging-design.rst

   167: changes to the log buffers, we need to ensure that the object we are formatting
   196: Hence we avoid the need to lock items when we need to flush outstanding
   233: If we don't keep the vector around, we do not know where the region boundaries
   411: the log vector chaining. If we track by the log vectors, then we only need to
   452: To ensure that we can do this, we need to track all the checkpoint contexts
   467: are also committed to disk before the one we need to wait for. Therefore we
   490: amount of log space required as we add items to the commit item list, but we
   508: each, so in 1.5MB of directory buffers we'd have roughly 400 buffers and a
   530: permanent reservation on the space, but we still need to make sure we refresh
   608: CIL commit/flush lock. If we pin the object outside this lock, we cannot
   [all …]

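A simplified picture of the log vector chaining mentioned at lines 233 and 411: each logged item is formatted into its own buffer, and the vectors are chained so a checkpoint can be written without re-locking the items. The structure names are illustrative, loosely modelled on xfs_log_vec::

    struct log_iovec {
            void    *addr;          /* formatted copy of one region */
            int     len;
    };

    struct log_vec {
            struct log_vec   *next;     /* chains all items in one checkpoint */
            int              niovecs;   /* number of regions in this item */
            struct log_iovec *iovecs;   /* region array; keeping it around
                                         * records the region boundaries */
    };
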
/linux/Documentation/filesystems/directory-locking.rst

    10: When taking the i_rwsem on multiple non-directory objects, we
    16: 1) read access. Locking rules: caller locks directory we are accessing.
    29: lock it. If we need to lock both, lock them in inode pointer order.
    31: NB: we might get away with locking the source (and target in exchange
    55: lock it. If we need to lock both, do so in inode pointer order.
    69: First of all, at any moment we have a partial ordering of the
    82: the order until we had acquired all locks).
   111: would have a contended child and we had assumed that no object is its
   116: of its descendents is locked by cross-directory rename (otherwise we
   120: But locking rules for cross-directory rename guarantee that we do not
   [all …]

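The "inode pointer order" rule from lines 29 and 55 is simple to render; lock_two() here is a hypothetical helper (the VFS has its own variants), built on the kernel's inode locking primitives::

    static void lock_two(struct inode *a, struct inode *b)
    {
            if (a > b)
                    swap(a, b);     /* always lock the lower address first */
            inode_lock(a);
            if (b != a)
                    inode_lock_nested(b, I_MUTEX_NONDIR2);
    }
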
/linux/Documentation/filesystems/path-lookup.txt

    49: the path given by the name's starting point (which we know in advance -- eg.
    55: A parent, of course, must be a directory, and we must have appropriate
    81: in that bucket is then walked, and we do a full comparison of each entry
   148: However, when inserting object 2 onto a new list, we end up with this:
   206: With these two parts of the puzzle, we can do path lookups without taking
   256: | children:"npiggin" | we now recheck the d_seq of dentry0. Then we
   270: | children:NULL | its refcount because we're holding d_lock.
   284: When we reach a point where sleeping is required, or a filesystem callout
   295: * synchronize_rcu is called when unregistering a filesystem, so we can
   302: so we can load this tuple atomically, and also check whether any of its
   [all …]

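The d_seq rechecking at line 256 follows the usual seqcount pattern. A sketch, assuming the caller is inside rcu_read_lock(); peek_inode() is a hypothetical helper::

    static bool peek_inode(struct dentry *dentry, struct inode **inode)
    {
            unsigned seq;

            seq = read_seqcount_begin(&dentry->d_seq);  /* snapshot version */
            *inode = dentry->d_inode;
            /* fails (returns false) if a rename moved the dentry meanwhile,
             * in which case the walk must be retried or dropped to ref-walk */
            return !read_seqcount_retry(&dentry->d_seq, seq);
    }
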
/linux/Documentation/filesystems/xfs-self-describing-metadata.rst

    31: However, if we scale the filesystem up to 1PB, we now have 10x as much metadata
    43: magic number in the metadata block, we have no other way of identifying what it
    56: Hence we need to record more information into the metadata to allow us to
    58: of analysis. We can't protect against every possible type of error, but we can
    71: magic numbers. Hence we can change the on-disk format of all these objects to
    82: block. If we can verify the block contains the metadata it was intended to
   111: in the metadata we have no idea of the scope of the corruption. If we have an
   141: the LSN we can tell if the corrupted metadata all belonged to the same log
   171: need more discrimination of error type at higher levels, we can define new
   323: This will verify the internal structure of the metadata before we go any
   [all …]

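The extra information described above boils down to a common block header. Roughly the example header the document itself builds up (kernel big-endian type annotations)::

    struct ondisk_hdr {
            __be32  magic;          /* what kind of block this is */
            __be32  crc;            /* detects bit errors in the block */
            uuid_t  uuid;           /* which filesystem it belongs to */
            __be64  owner;          /* which object it belongs to */
            __be64  blkno;          /* where on disk it should be */
            __be64  lsn;            /* last transaction that touched it */
    };
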
/linux/Documentation/filesystems/idmappings.rst

    24: we're talking about an id in the upper or lower idmapset.
    49: Given that we are dealing with order isomorphisms plus the fact that we're
    50: dealing with subsets we can embed idmappings into each other, i.e. we can
    85: for simplicity. After that if we want to know what ``id`` maps to we can do
    88: - If we want to map from left to right::
    93: - If we want to map from right to left::
   109: idmapping. So we're mapping up in the first idmapping::
   154: with user namespaces. Since we mainly care about how idmappings work we're not
   202: If we've been given ``k11000`` from one idmapping we can map that id up in
   708: As we can see, we end up with an invertible and therefore information
   [all …]

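Lines 88 and 93 are straight interval arithmetic. A sketch using the document's ``u0:k10000:r10000`` example, so mapping left to right sends u22 to k10022; the struct is illustrative, not the kernel's::

    struct extent {
            unsigned long upper_first;  /* u0     in u0:k10000:r10000 */
            unsigned long lower_first;  /* k10000 */
            unsigned long count;        /* r10000 */
    };

    /* map from left (upper idmapset) to right (lower idmapset) */
    static long map_left_to_right(const struct extent *e, unsigned long id)
    {
            if (id < e->upper_first || id >= e->upper_first + e->count)
                    return -1;          /* id not covered by this extent */
            return id - e->upper_first + e->lower_first;
    }
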
/linux/tools/lib/perf/Documentation/libperf-counting.txt

    73: Once the setup is complete we start by defining specific events using the `struct perf_event_attr`.
    97: In this case we will monitor the current process, so we create a threads map with a single pid (0):
   110: Now we create libperf's event list, which will serve as a holder for the events we want:
   121: We create libperf's events for the attributes we defined earlier and add them to the list:
   156: so we need to enable the whole list explicitly (both events).
   158: From this moment events are counting and we can do our workload.
   160: When we are done we disable the events list.
   171: Now we need to get the counts from events; the following code iterates through the

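The whole flow the document walks through condenses to a few calls. A sketch using the libperf API the text describes, with error handling omitted::

    #include <stdint.h>
    #include <linux/perf_event.h>
    #include <perf/evlist.h>
    #include <perf/evsel.h>
    #include <perf/threadmap.h>

    static uint64_t count_cycles_once(void)
    {
            struct perf_event_attr attr = {
                    .type     = PERF_TYPE_HARDWARE,
                    .config   = PERF_COUNT_HW_CPU_CYCLES,
                    .disabled = 1,  /* start disabled, enable explicitly */
            };
            struct perf_counts_values counts = { .val = 0 };
            struct perf_thread_map *threads;
            struct perf_evlist *evlist;
            struct perf_evsel *evsel;

            threads = perf_thread_map__new_dummy();
            perf_thread_map__set_pid(threads, 0, 0);  /* pid 0: ourselves */

            evlist = perf_evlist__new();
            evsel = perf_evsel__new(&attr);
            perf_evlist__add(evlist, evsel);
            perf_evlist__set_maps(evlist, NULL, threads);

            perf_evlist__open(evlist);
            perf_evlist__enable(evlist);
            /* ... workload under measurement ... */
            perf_evlist__disable(evlist);

            perf_evsel__read(evsel, 0, 0, &counts);
            perf_evlist__close(evlist);
            return counts.val;
    }
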
/linux/Documentation/filesystems/ext4/orphan.rst

     9: would leak. Similarly if we truncate or extend the file, we may not be able
    10: to perform the operation in a single journalling transaction. In such a case we
    17: inode (we overload the i_dtime inode field for this). However this filesystem
    36: When a filesystem with the orphan file feature is writeably mounted, we set
    38: be valid orphan entries. In case we see this feature when mounting the
    39: filesystem, we read the whole orphan file and process all orphan inodes found
    40: there as usual. When cleanly unmounting the filesystem we remove the

/linux/Documentation/scheduler/schedutil.txt

     4: we know this is flawed, but it is the best workable approximation.
    10: With PELT we track some metrics across the various scheduler entities, from
    12: we use an Exponentially Weighted Moving Average (EWMA), each period (1024us)
    31: Using this we track 2 key metrics: 'running' and 'runnable'. 'Running'
    46: a big CPU, we allow architectures to scale the time delta with two ratios, one
    56: For more dynamic systems where the hardware is in control of DVFS we use
    58: For Intel specifically, we use:
    80: of DVFS and CPU type. IOW. we can transfer and compare them between CPUs.
   138: XXX IO-wait; when the update is due to a task wakeup from IO-completion we
   162: suppose we have a CPU saturated with 4 tasks, then when we migrate a task
   [all …]

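The decay mentioned at line 12 is easy to demonstrate numerically. A toy model, assuming the usual PELT constant y with y^32 = 0.5, so a period's contribution halves after 32 further periods::

    #include <stdio.h>

    static double pelt_update(double ewma, double sample)
    {
            const double y = 0.97857206;    /* y = 2^(-1/32) */
            return ewma * y + sample;       /* decay, then add this period */
    }

    int main(void)
    {
            double util = 0.0;

            for (int i = 0; i < 64; i++)
                    util = pelt_update(util, 1.0);  /* fully busy periods */
            printf("EWMA sum after 64 busy periods: %f\n", util);
            return 0;
    }
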
/linux/Documentation/arm64/perf.rst

    40: For a VHE host this attribute is ignored as we consider the host kernel to
    43: For a non-VHE host this attribute will exclude EL2 as we consider the
    61: Due to the overlapping exception levels between host and guests we cannot
    62: exclusively rely on the PMU's hardware exception filtering - therefore we
    66: For non-VHE systems we exclude EL2 for exclude_host - upon entering and
    67: exiting the guest we disable/enable the event as appropriate based on the
    70: For VHE systems we exclude EL1 for exclude_guest and exclude both EL0,EL2
    71: for exclude_host. Upon entering and exiting the guest we modify the event
    82: On non-VHE hosts we enable/disable counters on the entry/exit of host/guest
   [all …]

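The exclude_host/exclude_guest knobs referred to above are plain perf_event_attr bits; how the kernel honours them per VHE/non-VHE is exactly what these lines describe. A minimal sketch of requesting guest-only counting::

    #include <string.h>
    #include <linux/perf_event.h>

    static void init_guest_cycles(struct perf_event_attr *attr)
    {
            memset(attr, 0, sizeof(*attr));
            attr->size    = sizeof(*attr);
            attr->type    = PERF_TYPE_HARDWARE;
            attr->config  = PERF_COUNT_HW_CPU_CYCLES;
            attr->exclude_host = 1;   /* do not count while the host runs */
    }
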
/linux/drivers/block/paride/Transition-notes

     9: ps_spinlock. C is always preceded by B, since we can't reach it
    10: other than through B and we don't drop ps_spinlock between them.
    14: A and each B is preceded by either A or C. Moments when we enter
    37: * in ps_tq_int(): from the moment when we get ps_spinlock() to the
    73: we would have to be called for the PIA that got ->claimed_cont
    83: it is holding pd_lock. The only place within the area where we
    87: we were acquiring the lock, (1) would be already false, since
    89: If it was 0 before we tried to acquire pd_lock, (2) would be
   109: was acquiring ps_spinlock) or (2.3) (if it was set when we started to
   123: We don't need to reset it to NULL, since we are guaranteed that there
   [all …]

/linux/drivers/scsi/aic7xxx/aic79xx.seq

    183: * we detect case 1, we will properly defer the post of the SCB
    376: * order is preserved even if we batch.
    910: * out before we can test SDONE, we'll think that
   1109: * If we get one, we use the tag returned to find the proper
   1424: * line, or we just want to acknowledge the byte, then we do a dummy read
   1466: * Do we have any prefetch left???
   1475: /* Did we just finish fetching segs? */
   1613: * Since we are entering a data phase, we will
   1642: * unless we already know that we should be bitbucketing.
   1882: * FIFO. This status is the only way we can detect if we
   [all …]

/linux/drivers/scsi/aic7xxx/aic7xxx.seq

    211: /* The Target ID we were selected at */
    362: * when we have outstanding transactions, so we can safely
    364: * we start sending out transactions again.
    486: * we properly identified ourselves.
    735: /* Did we just finish fetching segs? */
    738: /* Are we actively fetching segments? */
    742: * Do we have any prefetch left???
   1407: * we aren't going to touch host memory.
   1874: * If we get one, we use the tag returned to find the proper
   1964: * using SCSIBUSL. When we have pulled the ATN line, or we just want to
   [all …]

/linux/Documentation/RCU/rculist_nulls.rst

    36: * reuse these objects before the RCU grace period, we
    39: if (obj->key != key) { // not the object we expected
   104: * we need to make sure obj->key is updated before obj->next
   115: Nothing special here, we can use a standard RCU hlist deletion.
   135: With hlist_nulls we can avoid extra smp_rmb() in lockless_lookup()
   138: For example, if we choose to store the slot number as the 'nulls'
   139: end-of-list marker for each slot of the hash table, we can detect
   143: is not the slot number, then we must restart the lookup at
   161: if (obj->key != key) { // not the object we expected
   168: * if the nulls value we got at the end of this lookup is
   [all …]

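A sketch of the restart rule from lines 138-143, using the real hlist_nulls API: if the nulls value we ended on is not our slot, the object we were traversing moved chains mid-walk and the lookup must restart. The caller is assumed to hold rcu_read_lock()::

    #include <linux/rculist_nulls.h>

    struct object {
            struct hlist_nulls_node node;
            unsigned int            key;
    };

    static struct object *lookup(struct hlist_nulls_head *table,
                                 unsigned int slot, unsigned int key)
    {
            struct hlist_nulls_node *n;
            struct object *obj;

    begin:
            hlist_nulls_for_each_entry_rcu(obj, n, &table[slot], node)
                    if (obj->key == key)
                            return obj;  /* caller revalidates under lock */

            /* ended on a nulls marker: make sure it was our chain's */
            if (get_nulls_value(n) != slot)
                    goto begin;
            return NULL;
    }
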
/linux/Documentation/power/freezing-of-tasks.rst

    22: we only consider hibernation, but the description also applies to suspend).
    33: it loop until PF_FROZEN is cleared for it. Then, we say that the task is
    80: - freezes all tasks (including kernel threads) because we can't freeze
    84: - thaws only kernel threads; this is particularly useful if we need to do
   101: IV. Why do we do that?
   107: hibernation. At the moment we have no simple means of checkpointing
   125: to allocate additional memory and we prevent them from doing that by
   139: "RJW:> Why we freeze tasks at all or why we freeze kernel threads?
   144: s2ram with some devices in the middle of a DMA. So we want to be able to
   167: running. Since we need to disable nonboot CPUs during the hibernation,
   [all …]

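The freezable kernel threads mentioned at line 84 correspond to a small amount of boilerplate in the thread itself. A sketch, where do_work() is a placeholder for the thread's real job::

    #include <linux/freezer.h>
    #include <linux/kthread.h>

    static void do_work(void);  /* hypothetical payload */

    static int worker_fn(void *data)
    {
            set_freezable();  /* kthreads are not freezable by default */
            while (!kthread_should_stop()) {
                    try_to_freeze();  /* park in the refrigerator on demand */
                    do_work();
            }
            return 0;
    }
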
/linux/Documentation/powerpc/pci_iov_resource_on_powernv.rst

    40: The following section provides a rough description of what we have on P8
    52: For DMA, MSIs and inbound PCIe error messages, we have a table (in
    91: reserved for MSIs but this is not a problem at this point; we just
    93: ignores that however and will forward in that space if we try).
   100: Now, this is the "main" window we use in Linux today (excluding
   116: bits which are not conveyed by PowerBus but we don't use this.
   134: Then we do the same thing as with M32, using the bridge alignment
   137: Since we cannot remap, we have two additional constraints:
   150: the best we found. So when any of the PEs freezes, we freeze the
   158: sense, but we haven't done it yet.
   [all …]

/linux/Documentation/sound/designs/jack-injection.rst

    10: validate ALSA userspace changes. For example, we change the audio
    11: profile switching code in pulseaudio, and we want to verify if the
    13: in this case, we could inject plugin or plugout events to an audio
    14: jack or to some audio jacks, we don't need to physically access the
    26: To inject events to audio jacks, we need to enable the jack injection
    28: change the state by hardware events anymore, we could inject plugin or
    30: ``status``, after we finish our test, we need to disable the jack

/linux/drivers/gpu/drm/i915/Kconfig.profile

    19: When listening to a foreign fence, we install a supplementary timer
    20: to ensure that we are always signaled and our userspace is able to
    31: On runtime suspend, as we suspend the device, we have to revoke
    34: the GGTT mmap can be very slow and so we impose a small hysteresis
    79: we may spend some time polling for its completion. As the IRQ may
    80: take a non-negligible time to set up, we do a short spin first to
    87: May be 0 to disable the initial spin. In practice, we estimate
    96: the GPU, we allow the innocent contexts also on the system to quiesce.
   109: When two user batches of equal priority are executing, we will

/linux/Documentation/block/deadline-iosched.rst

    20: service time for a request. As we focus mainly on read latencies, this is
    49: When we have to move requests from the io scheduler queue to the block
    50: device dispatch queue, we always give a preference to reads. However, we
    52: how many times we give preference to reads over writes. When that has been
    53: done writes_starved number of times, we dispatch some writes based on the
    68: that comes at basically 0 cost we leave that on. We simply disable the

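A sketch of the writes_starved policy from lines 49-53; the queue helpers here are hypothetical stand-ins, not the actual deadline scheduler code::

    #include <stdbool.h>
    #include <stddef.h>

    struct request;

    /* hypothetical stand-ins for the scheduler's internal queues */
    static bool have_reads(void);
    static bool have_writes(void);
    static struct request *next_read(void);
    static struct request *next_write(void);

    static int writes_starved = 2;  /* tunable: read batches per write batch */
    static int starved;            /* times writes have waited on reads */

    static struct request *pick_next(void)
    {
            if (have_reads() &&
                (!have_writes() || starved < writes_starved)) {
                    if (have_writes())
                            starved++;  /* writes waited one more time */
                    return next_read();
            }
            starved = 0;
            return have_writes() ? next_write() : NULL;
    }
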
/linux/Documentation/networking/fib_trie.rst

    37: verify that they actually do match the key we are searching for.
    72: fib_find_node(). Inserting a new node means we might have to run the
   107: slower than the corresponding fib_hash function, as we have to walk the
   124: trie, key segment by key segment, until we find a leaf. check_leaf() does
   127: If we find a match, we are done.
   129: If we don't find a match, we enter prefix matching mode. The prefix length,
   131: and we backtrack upwards through the trie trying to find a longest matching
   137: the child index until we find a match or the child index consists of nothing but
   140: At this point we backtrack (t->stats.backtrack++) up the trie, continuing to
   143: At this point we will repeatedly descend subtries to look for a match, and there

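A toy version of the longest-prefix-match walk from lines 124-143. Instead of physically backtracking like the LC-trie does, this plain binary trie remembers the best prefix seen on the way down, which yields the same answer::

    #include <stdint.h>
    #include <stddef.h>

    struct tnode {
            struct tnode *child[2];
            int          has_prefix;  /* a route terminates at this node */
    };

    static struct tnode *lpm(struct tnode *root, uint32_t key)
    {
            struct tnode *n = root, *best = NULL;

            for (int bit = 31; n && bit >= 0; bit--) {
                    if (n->has_prefix)
                            best = n;  /* longest matching prefix so far */
                    n = n->child[(key >> bit) & 1];
            }
            if (n && n->has_prefix)
                    best = n;
            return best;  /* NULL if no prefix matched */
    }
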
/linux/tools/perf/builtin-timechart.c

    410: struct wake_event *we = zalloc(sizeof(*we));
    412: if (!we)
    415: we->time = timestamp;
    416: we->waker = waker;
    420: we->waker = -1;
    422: we->wakee = wakee;
   1042: while (we) {
   1049: if (p->pid == we->waker || p->pid == we->wakee) {
   1090: svg_interrupt(we->time, to, we->backtrace);
   1092: svg_wakeline(we->time, from, to, we->backtrace);
   [all …]

/linux/Documentation/devicetree/bindings/i2c/i2c-arb-gpio-challenge.txt

    24: - OUR_CLAIM: output from us signaling to other hosts that we want the bus
    31: Let's say we want to claim the bus. We:
    35: 3. Check THEIR_CLAIMS. If none are asserted then we have the bus and we are
    44: - our-claim-gpio: The GPIO that we use to claim the bus.
    51: - wait-retry-us: we'll attempt another claim after this many microseconds.
    53: - wait-free-us: we'll give up after this many microseconds. Default is 50000 us.

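One plausible rendering of the claim sequence from line 31 onward, with hypothetical GPIO helpers; the timing constants mirror the wait-retry-us and wait-free-us properties, and the values here are only illustrative::

    #include <stdbool.h>

    #define WAIT_RETRY_US   3000   /* illustrative retry interval */
    #define WAIT_FREE_US    50000  /* give up after this long */

    /* hypothetical helpers for the claim GPIOs */
    static void set_our_claim(int asserted);
    static bool their_claims_asserted(void);
    static void sleep_us(unsigned int us);

    static bool claim_bus(void)
    {
            unsigned int waited = 0;

            while (waited < WAIT_FREE_US) {
                    set_our_claim(1);             /* 1. assert OUR_CLAIM */
                    sleep_us(10);                 /* 2. let it propagate */
                    if (!their_claims_asserted())
                            return true;          /* 3. the bus is ours */
                    set_our_claim(0);             /* back off... */
                    sleep_us(WAIT_RETRY_US);      /* ...and retry later */
                    waited += WAIT_RETRY_US;
            }
            return false;  /* gave up after wait-free-us */
    }
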
/linux/Documentation/virt/kvm/locking.rst

    52: tracking. That means we need to restore the saved R/X bits. This is
    56: write-protect. That means we just need to change the W bit of the spte.
    66: On fast page fault path, we will use cmpxchg to atomically set the spte W
    71: But we need to carefully check these cases:
    75: The mapping from gfn to pfn may be changed since we can only ensure the pfn
   115: For direct sp, we can easily avoid it since the spte of direct sp is fixed
   116: to gfn. For indirect sp, we disabled fast page fault for simplicity.
   126: Then, we can ensure the dirty bitmap is correctly set for a gfn.
   182: If the spte is updated from writable to readonly, we should flush all TLBs,
   255: devices, we put the blocked vCPU on the list blocked_vcpu_on_cpu
   [all …]

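The cmpxchg trick from line 66, sketched with C11 atomics and a made-up spte bit layout; KVM's real encoding and the surrounding checks are richer, per the cases listed above::

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define SPTE_W  (1ull << 1)  /* hypothetical write-permission bit */

    static bool fast_make_writable(_Atomic uint64_t *sptep)
    {
            uint64_t old = atomic_load(sptep);

            /* only set W if the spte is still exactly the one we saw;
             * a concurrent zap or update makes this fail and we retry
             * the fault the slow way */
            return atomic_compare_exchange_strong(sptep, &old, old | SPTE_W);
    }
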