Lines Matching refs:spinning
51 // idle P and there are no "spinning" worker threads. A worker thread is considered
52 // spinning if it is out of local work and did not find work in global run queue/
53 // netpoller; the spinning state is denoted in m.spinning and in sched.nmspinning.
54 // Threads unparked this way are also considered spinning; we don't do goroutine
55 // handoff so such threads are out of work initially. Spinning threads do some
56 // spinning, looking for work in per-P run queues, before parking. If a spinning
57 // thread finds work, it takes itself out of the spinning state and proceeds to
58 // execution. If it does not find work, it takes itself out of the spinning state
60 // If there is at least one spinning thread (sched.nmspinning>0), we don't unpark
61 // new threads when readying goroutines. To compensate for that, if the last spinning
62 // thread finds work and stops spinning, it must unpark a new spinning thread.
67 // spinning->non-spinning thread transition. This transition can race with submission
72 // The general pattern for spinning->non-spinning transition is: decrement nmspinning,
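The general pattern the header comment describes can be condensed into a small standalone model. This is only an illustrative sketch of the protocol, not the runtime's code: nmspinning, runq, submitWork, stopSpinning, and unparkWorker are made-up stand-ins for sched.nmspinning, the run queues, ready()/wakep(), and the real park/unpark machinery, and the StoreLoad-style barrier the real code relies on is elided (the channel and atomics give the demo enough ordering).

package main

import (
	"fmt"
	"sync/atomic"
)

var nmspinning int32           // counts spinning workers (models sched.nmspinning)
var runq = make(chan int, 128) // stands in for the global/per-P run queues

// Submission side of the race: after making work visible, observe nmspinning
// and unpark a worker only if nobody is already spinning.
func submitWork(g int) {
	runq <- g
	if atomic.LoadInt32(&nmspinning) == 0 {
		unparkWorker()
	}
}

// Spinning->non-spinning transition: decrement nmspinning first, then recheck
// the queues; if work raced in, take the spinning state back instead of parking.
func stopSpinning() (parked bool) {
	atomic.AddInt32(&nmspinning, -1)
	select {
	case g := <-runq:
		atomic.AddInt32(&nmspinning, 1) // undo: we found work after all
		fmt.Println("found work during recheck:", g)
		return false
	default:
		return true // safe to park: queues were empty after the decrement
	}
}

func unparkWorker() {
	atomic.AddInt32(&nmspinning, 1)
	fmt.Println("unparked a spinning worker")
}

func main() {
	atomic.AddInt32(&nmspinning, 1) // one worker is already spinning
	submitWork(1)                   // no unpark: nmspinning > 0
	fmt.Println("parked:", stopSpinning())
}

The two halves mirror each other: the submitter unparks only when nobody is spinning, and the would-be parker rechecks the queues after publishing its decrement, so at least one side notices work submitted during the race window.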
1642 if _g_.m.spinning {
1643 throw("stopm spinning")
1664 // startm's caller incremented nmspinning. Set the new M's spinning.
1665 getg().m.spinning = true
1671 // If spinning is set, the caller has incremented nmspinning and startm will
1672 // either decrement nmspinning or set m.spinning in the newly started M.
1674 func startm(_p_ *p, spinning bool) {
1680 if spinning {
1694 if spinning {
1695 // The caller incremented nmspinning, so set m.spinning in the new M.
1701 if mp.spinning {
1702 throw("startm: m is spinning")
1707 if spinning && !runqempty(_p_) {
1710 // The caller incremented nmspinning, so set m.spinning in the new M.
1711 mp.spinning = spinning
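Read together, these startm fragments spell out a small ownership contract for the nmspinning count. A minimal sketch of that contract, with hypothetical names (startM, havePIdle) rather than the runtime's internals: when spinning is true the caller has already incremented nmspinning, so every exit path must either hand the count to the new M by setting its spinning flag or give it back by decrementing.

package main

import (
	"fmt"
	"sync/atomic"
)

var nmspinning int32

type m struct{ spinning bool }

// startM models the contract only: spinning==true means the caller has already
// incremented nmspinning on our behalf.
func startM(spinning, havePIdle bool) *m {
	if !havePIdle {
		if spinning {
			// No P to run on: return the caller's increment instead of
			// leaking a phantom spinning M.
			atomic.AddInt32(&nmspinning, -1)
		}
		return nil
	}
	mp := &m{}
	if spinning {
		mp.spinning = true // hand the count over to the new M
	}
	return mp
}

func main() {
	atomic.AddInt32(&nmspinning, 1) // caller's increment, as in wakep
	mp := startM(true, true)
	fmt.Println("spinning M started:", mp.spinning, "nmspinning =", atomic.LoadInt32(&nmspinning))
}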
1733 // no local work, check that there are no spinning/idle M's,
1775 // be conservative about spinning threads
1838 if _g_.m.spinning {
1839 _g_.m.spinning = false
1962 // If the number of spinning M's is at least half the number of busy P's, block.
1965 if !_g_.m.spinning && 2*atomic.Load(&sched.nmspinning) >= procs-atomic.Load(&sched.npidle) {
1968 if !_g_.m.spinning {
1969 _g_.m.spinning = true
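This check throttles spinning: a thread that is not yet spinning blocks instead of starting to spin once the spinning M's reach half of the busy P's. A hypothetical standalone version of the predicate, with one worked case (procs=8, npidle=2, hence 6 busy P's): a third spinner is still allowed (2*2 = 4 < 6), a fourth is not (2*3 = 6 >= 6).

package main

import "fmt"

// shouldBlock is an illustrative copy of the throttle above, not a runtime API:
// a thread that is not yet spinning gives up and blocks once spinning M's
// reach half of the busy P's.
func shouldBlock(nmspinning, procs, npidle int32) bool {
	busyP := procs - npidle
	return 2*nmspinning >= busyP
}

func main() {
	const procs, npidle = 8, 2 // 6 busy P's
	for n := int32(0); n <= 3; n++ {
		fmt.Printf("nmspinning=%d -> block=%v\n", n, shouldBlock(n, procs, npidle))
	}
	// nmspinning=0..2 -> block=false, nmspinning=3 -> block=true
}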
2016 // Delicate dance: thread transitions from spinning to non-spinning state,
2023 // If we discover new work below, we need to restore m.spinning as a signal
2027 // the system is fully loaded so no spinning threads are required.
2029 wasSpinning := _g_.m.spinning
2030 if _g_.m.spinning {
2031 _g_.m.spinning = false
2047 _g_.m.spinning = true
2068 _g_.m.spinning = true
2081 if _g_.m.spinning {
2082 throw("findrunnable: netpoll with spinning")
2129 if !_g_.m.spinning {
2130 throw("resetspinning: not a spinning m")
2132 _g_.m.spinning = false
2217 if gp != nil && _g_.m.spinning {
2218 throw("schedule: spinning with local work")
2225 // This thread is going to run a goroutine and is not spinning anymore,
2226 // so if it was marked as spinning we need to reset it now and potentially
2227 // start a new spinning M.
2228 if _g_.m.spinning {
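resetspinning, which schedule calls just before running the goroutine it found, implements the compensation rule from the header comment: the last spinning thread that finds work must unpark a replacement. A simplified sketch under that assumption; resetSpinning and unparkWorker are illustrative names, and the real code additionally requires an idle P before waking anyone.

package main

import (
	"fmt"
	"sync/atomic"
)

var nmspinning int32

type m struct{ spinning bool }

func unparkWorker() { fmt.Println("unparking a new spinning worker") }

// resetSpinning models the rule: this M found work, so it stops spinning; if it
// was the last spinner, wake another worker so that other runnable goroutines
// are not left without a thread looking for them.
func resetSpinning(mp *m) {
	if !mp.spinning {
		panic("resetSpinning: not a spinning m")
	}
	mp.spinning = false
	if atomic.AddInt32(&nmspinning, -1) == 0 {
		// The runtime also checks that an idle P exists before waking anyone.
		unparkWorker()
	}
}

func main() {
	mp := &m{spinning: true}
	atomic.AddInt32(&nmspinning, 1)
	resetSpinning(mp) // last spinner found work -> compensate with an unpark
}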
4043 print(" M", mp.id, ": p=", id1, " curg=", id2, " mallocing=", mp.mallocing, " throwing=", mp.throwing, " preemptoff=", mp.preemptoff, ""+" locks=", mp.locks, " dying=", mp.dying, " helpgc=", mp.helpgc, " spinning=", mp.spinning, " blocked=", mp.blocked, " lockedg=", id3, "\n")
4474 // Active spinning for sync.Mutex.
4478 // sync.Mutex is cooperative, so we are conservative with spinning.
4481 // As opposed to runtime mutex we don't do passive spinning here,
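These last hits are from the helpers behind sync.Mutex's active spinning. As a rough illustration of what "conservative" bounded spinning looks like (a made-up spinLock type and activeSpin constant, not sync.Mutex's real implementation): spin only a handful of iterations, only on a multicore machine, and otherwise yield rather than burn CPU.

package main

import (
	"runtime"
	"sync/atomic"
)

const activeSpin = 4 // spin only a few iterations before giving up

type spinLock struct{ state int32 }

func (l *spinLock) Lock() {
	for i := 0; ; i++ {
		if atomic.CompareAndSwapInt32(&l.state, 0, 1) {
			return
		}
		// Active spinning only pays off on multicore machines and only for a
		// short, bounded time; a real implementation would execute PAUSE here.
		if i < activeSpin && runtime.NumCPU() > 1 {
			continue
		}
		runtime.Gosched() // stand-in for parking/blocking the waiter
	}
}

func (l *spinLock) Unlock() { atomic.StoreInt32(&l.state, 0) }

func main() {
	var l spinLock
	l.Lock()
	l.Unlock()
}

The runtime's own canSpin check is stricter still: per the surrounding source comments it also requires GOMAXPROCS>1, at least one other running P, and an empty local run queue, which is what "conservative with spinning" means here.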