  1. Jun 05, 2019
  2. Apr 09, 2019
    • treewide: Switch printk users from %pf and %pF to %ps and %pS, respectively · d75f773c
      Sakari Ailus authored
      %pF and %pf are functionally equivalent to the %pS and %ps conversion
      specifiers. The former are deprecated; therefore switch the current users
      to the preferred variants.
      
      The changes have been produced by the following command:
      
      	git grep -l '%p[fF]' | grep -v '^\(tools\|Documentation\)/' | \
      	while read i; do perl -i -pe 's/%pf/%ps/g; s/%pF/%pS/g;' $i; done
      
      And verifying the result.
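      
      For context, an illustrative comparison of the preferred specifiers (not
      from the patch; my_handler is a hypothetical function):
      
      	#include <linux/printk.h>
      
      	/* %ps prints the bare symbol name, %pS adds offset/size info */
      	static void report(void (*fn)(void))
      	{
      		pr_info("handler: %ps\n", fn);	/* e.g. "my_handler" */
      		pr_info("handler: %pS\n", fn);	/* e.g. "my_handler+0x0/0x20" */
      	}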
      
      Link: http://lkml.kernel.org/r/20190325193229.23390-1-sakari.ailus@linux.intel.com
      
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: sparclinux@vger.kernel.org
      Cc: linux-um@lists.infradead.org
      Cc: xen-devel@lists.xenproject.org
      Cc: linux-acpi@vger.kernel.org
      Cc: linux-pm@vger.kernel.org
      Cc: drbd-dev@lists.linbit.com
      Cc: linux-block@vger.kernel.org
      Cc: linux-mmc@vger.kernel.org
      Cc: linux-nvdimm@lists.01.org
      Cc: linux-pci@vger.kernel.org
      Cc: linux-scsi@vger.kernel.org
      Cc: linux-btrfs@vger.kernel.org
      Cc: linux-f2fs-devel@lists.sourceforge.net
      Cc: linux-mm@kvack.org
      Cc: ceph-devel@vger.kernel.org
      Cc: netdev@vger.kernel.org
      Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
      Acked-by: David Sterba <dsterba@suse.com> (for btrfs)
      Acked-by: Mike Rapoport <rppt@linux.ibm.com> (for mm/memblock.c)
      Acked-by: Bjorn Helgaas <bhelgaas@google.com> (for drivers/pci)
      Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      d75f773c
  3. Jan 31, 2019
    • async: Add support for queueing on specific NUMA node · 6be9238e
      Alexander Duyck authored
      
      Introduce four new variants of the async_schedule_ functions that allow
      scheduling on a specific NUMA node.
      
      The first two, async_schedule_near and async_schedule_near_domain, map to
      async_schedule and async_schedule_domain but provide NUMA-node-specific
      functionality. They replace the original functions, which were turned into
      inline wrappers that call the new functions with NUMA_NO_NODE.
      
      The second two, async_schedule_dev and async_schedule_dev_domain, provide
      NUMA-specific behavior when the data argument is a device and that device
      has a NUMA node other than NUMA_NO_NODE.
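      
      A minimal usage sketch (hypothetical driver code; only the new
      async_schedule_dev() API and standard driver-model types are assumed):
      
      	#include <linux/async.h>
      	#include <linux/device.h>
      
      	static void my_init_work(void *data, async_cookie_t cookie)
      	{
      		struct device *dev = data;
      
      		/* memory-heavy init now runs near the device's NUMA node */
      		dev_info(dev, "async init done (cookie %llu)\n",
      			 (unsigned long long)cookie);
      	}
      
      	static int my_probe(struct device *dev)
      	{
      		/* queued on dev_to_node(dev); any node if NUMA_NO_NODE */
      		async_schedule_dev(my_init_work, dev);
      		return 0;
      	}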
      
      The main motivation behind this is to address the need to be able to
      schedule device specific init work on specific NUMA nodes in order to
      improve performance of memory initialization.
      
      I have seen a significant improvement in initialization time for
      persistent memory as a result of this approach. In the case of 3TB of
      memory on a single node, the worst-case initialization time went from 36s
      down to about 26s, a 10s improvement. As such, the data shows a general
      benefit from affinitizing the async work to the node local to the device.
      
      Reviewed-by: Bart Van Assche <bvanassche@acm.org>
      Reviewed-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      6be9238e
  4. Feb 07, 2018
    • kernel/async.c: revert "async: simplify lowest_in_progress()" · 4f7e988e
      Rasmus Villemoes authored
      This reverts commit 92266d6e ("async: simplify lowest_in_progress()")
      which was simply wrong: In the case where domain is NULL, we now use the
      wrong offsetof() in the list_first_entry macro, so we don't actually
      fetch the ->cookie value, but rather the eight bytes located
      sizeof(struct list_head) further into the struct async_entry.
      
      On 64 bit, that's the data member, while on 32 bit, that's a u64 built
      from func and data in some order.
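      
      A sketch of the failure mode (struct layout abridged from kernel/async.c
      of that era): list_first_entry(head, type, member) subtracts
      offsetof(type, member), so naming the wrong list member shifts every
      subsequent field access.
      
      	struct async_entry {
      		struct list_head	domain_list;	/* on domain->pending */
      		struct list_head	global_list;	/* on async_global_pending */
      		struct work_struct	work;
      		async_cookie_t		cookie;
      		async_func_t		func;
      		void			*data;
      	};
      
      	/* correct for a node linked via global_list: */
      	e = list_first_entry(&async_global_pending,
      			     struct async_entry, global_list);
      	/* using domain_list instead makes e->cookie actually read the
      	 * memory sizeof(struct list_head) past the real cookie */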
      
      I think the bug happens to be harmless in practice: It obviously only
      affects callers which pass a NULL domain, and AFAICT the only such
      caller is
      
        async_synchronize_full() ->
        async_synchronize_full_domain(NULL) ->
        async_synchronize_cookie_domain(ASYNC_COOKIE_MAX, NULL)
      
      and the ASYNC_COOKIE_MAX means that in practice we end up waiting for
      the async_global_pending list to be empty - but it would break if
      somebody happened to pass (void*)-1 as the data element to
      async_schedule, and of course also if somebody ever does an
      async_synchronize_cookie_domain(, NULL) with a "finite" cookie value.
      
      Maybe the "harmless in practice" means this isn't -stable material.  But
      I'm not completely confident my quick git grep'ing is enough, and there
      might be affected code in one of the earlier kernels that has since been
      removed, so I'll leave the decision to the stable guys.
      
      Link: http://lkml.kernel.org/r/20171128104938.3921-1-linux@rasmusvillemoes.dk
      
      Fixes: 92266d6e "async: simplify lowest_in_progress()"
      Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Adam Wallis <awallis@codeaurora.org>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: <stable@vger.kernel.org>	[3.10+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4f7e988e
  5. May 23, 2017
  6. Nov 19, 2015
  7. Oct 10, 2014
  8. Mar 12, 2013
  9. Jan 25, 2013
  10. Jan 23, 2013
    • async: replace list of active domains with global list of pending items · 9fdb04cd
      Tejun Heo authored
      
      Global synchronization - async_synchronize_full() - is currently
      implemented by keeping a list of all active registered domains and
      syncing them one by one until no domain is active.
      
      While this isn't necessarily a complex scheme, it can easily be
      simplified by keeping a global list of the pending items of all
      registered active domains instead of a list of domains, and syncing
      the global pending list the same way a domain is synced.
      
      This patch replaces async_domains with async_global_pending and updates
      lowest_in_progress() to use the global pending list if @domain is
      %NULL.  async_synchronize_full_domain(NULL) is now allowed and
      equivalent to async_synchronize_full().  As no one is calling with a
      NULL domain, this doesn't affect any existing users.
      
      async_register_mutex is no longer necessary and dropped.
      
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Dan Williams <djbw@fb.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      9fdb04cd
    • async: keep pending tasks on async_domain and remove async_pending · 52722794
      Tejun Heo authored
      
      Async kept a single global pending list and per-domain running lists.
      When an async item is queued, it's put on the global pending list.
      The item is moved to the per-domain running list when its execution
      starts.
      
      At this point, this design complicates execution and synchronization
      without bringing any benefit.  The list only matters for
      synchronization which doesn't care whether a given async item is
      pending or executing.  Also, global synchronization is done by
      iterating through all active registered async_domains, so the global
      async_pending list doesn't help anything either.
      
      Rename async_domain->running to async_domain->pending, put async
      items directly there, and remove them when execution completes.  This
      simplifies lowest_in_progress() a lot - the first item on the pending
      list is the one with the lowest cookie, and async_run_entry_fn()
      doesn't have to mess with moving the item from pending to running.
      
      After the change, whether a domain is empty or not can be trivially
      determined by looking at async_domain->pending.  Remove
      async_domain->count and use list_empty() on pending instead.
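      
      A sketch of the simplified lookup this enables (member names as in the
      code of this era; the pending list stays sorted by cookie, so the head
      is the lowest in progress):
      
      	static async_cookie_t lowest_in_progress(struct async_domain *domain)
      	{
      		async_cookie_t ret = ASYNC_COOKIE_MAX;	/* "infinity" if empty */
      		unsigned long flags;
      
      		spin_lock_irqsave(&async_lock, flags);
      		if (!list_empty(&domain->pending))
      			ret = list_first_entry(&domain->pending,
      					       struct async_entry, list)->cookie;
      		spin_unlock_irqrestore(&async_lock, flags);
      		return ret;
      	}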
      
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Dan Williams <djbw@fb.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      52722794
    • async: use ULLONG_MAX for infinity cookie value · c68eee14
      Tejun Heo authored
      
      Currently, next_cookie is used as the infinity value.  In most cases,
      this should work fine but it theoretically could bring subtle behavior
      difference between async_synchronize_full() and
      async_synchronize_full_domain().
      
      async_synchronize_full() keeps waiting until there's no registered
      async_entry left regardless of what next_cookie was when the function
      was called.  It guarantees that the queue is completely drained at
      least once before returning.
      
      However, async_synchronize_full_domain() doesn't.  It synchronizes
      up to next_cookie, and if further async jobs are queued after the
      next_cookie value to synchronize is decided, they won't be waited for.
      
      For unrelated async jobs, the behavior difference doesn't matter;
      however, if async jobs which are related (nested or otherwise) to the
      executing ones are queued while synchronization is in progress, the
      resulting behavior difference could be problematic.
      
      This can be easily fixed by using ULLONG_MAX as the infinity value
      instead.  Define ASYNC_COOKIE_MAX as ULLONG_MAX and use it as the
      infinity value for synchronization.  This makes
      async_synchronize_full_domain() fully drain the domain at least once
      before returning, making its behavior match async_synchronize_full().
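      
      The gist of the change, slightly simplified:
      
      	#define ASYNC_COOKIE_MAX	ULLONG_MAX	/* infinity cookie */
      
      	void async_synchronize_full_domain(struct async_domain *domain)
      	{
      		/* now drains the domain completely at least once */
      		async_synchronize_cookie_domain(ASYNC_COOKIE_MAX, domain);
      	}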
      
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Dan Williams <djbw@fb.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      c68eee14
    • async: bring sanity to the use of words domain and running · 8723d503
      Tejun Heo authored
      
      In the beginning, running lists were literal struct list_heads.  Later
      on, struct async_domain was added.  For some reason, while the
      conversion substituted list_heads with async_domains, the variable
      names weren't fully converted.  In most places, "running" was used for
      struct async_domain while other places adopted the new "domain" name.
      
      The situation is made much worse by having async_domain's running list
      named "domain" and async_entry's field pointing to async_domain named
      "running".
      
      So, we end up with mix of "running" and "domain" for variable names
      for async_domain, with the field names of async_domain and async_entry
      swapped between "running" and "domain".
      
      It feels almost intentionally made to be as confusing as possible.
      Bring some sanity by
      
      * Renaming all async_domain variables "domain".
      
      * s/async_running/async_dfl_domain/
      
      * s/async_domain->domain/async_domain->running/
      
      * s/async_entry->running/async_entry->domain/
      
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Dan Williams <djbw@fb.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      8723d503
    • async: fix __lowest_in_progress() · f56c3196
      Tejun Heo authored
      
      Commit 083b804c ("async: use workqueue for worker pool") made it
      possible that async jobs are moved from pending to running out-of-order.
      While pending async jobs will be queued and dispatched for execution in
      the same order, nothing guarantees they'll enter "1) move self to the
      running queue" of async_run_entry_fn() in the same order.
      
      Before the conversion, async implemented its own worker pool.  An async
      worker, upon being woken up, fetches the first item from the pending
      list, which kept the executing lists sorted.  The conversion to
      workqueue was done by adding work_struct to each async_entry and async
      just schedules the work item.  The queueing and dispatching of such work
      items are still in order but now each worker thread is associated with a
      specific async_entry and moves that specific async_entry to the
      executing list.  So, depending on which worker reaches that point
      earlier, which is non-deterministic, we may end up moving an async_entry
      with larger cookie before one with smaller one.
      
      This broke __lowest_in_progress().  running->domain may not be properly
      sorted and is not guaranteed to contain lower cookies than the pending
      list when not empty.  Fix it by sort-inserting into the running list
      and always looking at both pending and running when trying to determine
      the lowest cookie.
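      
      Roughly, the fixed lookup takes the minimum over both lists (sketch,
      with member names as of this era):
      
      	static async_cookie_t __lowest_in_progress(struct async_domain *running)
      	{
      		async_cookie_t first_running = next_cookie;	/* infinity */
      		async_cookie_t first_pending = next_cookie;	/* ditto */
      		struct async_entry *entry;
      
      		if (!list_empty(&running->domain))
      			first_running = list_first_entry(&running->domain,
      					struct async_entry, list)->cookie;
      
      		list_for_each_entry(entry, &async_pending, list) {
      			if (entry->running == running) {
      				first_pending = entry->cookie;
      				break;
      			}
      		}
      		return min(first_running, first_pending);
      	}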
      
      Over time, the async synchronization implementation became quite messy.
      We'd better restructure it such that each async_entry is linked to two
      lists - one global and one per domain - and not moved when execution
      starts.  There's no reason to distinguish pending and running.  They
      behave the same for synchronization purposes.
      
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f56c3196
  11. Jan 18, 2013
  12. Jan 16, 2013
    • module, async: async_synchronize_full() on module init iff async is used · 774a1221
      Tejun Heo authored
      If the default iosched is built as module, the kernel may deadlock
      while trying to load the iosched module on device probe if the probing
      was running off async.  This is because async_synchronize_full() at
      the end of module init ends up waiting for the async job which
      initiated the module loading.
      
       async A				modprobe
      
       1. finds a device
       2. registers the block device
       3. request_module(default iosched)
      					4. modprobe in userland
      					5. load and init module
      					6. async_synchronize_full()
      
      Async A waits for modprobe to finish in request_module() and modprobe
      waits for async A to finish in async_synchronize_full().
      
      Because there's no easy way to track dependencies once control goes out
      to userland, implementing properly nested flushing is difficult.  For
      now, make module init perform async_synchronize_full() iff module init
      has queued async jobs, as suggested by Linus.
      
      This avoids the described deadlock because iosched module doesn't use
      async and thus wouldn't invoke async_synchronize_full().  This is
      hacky and incomplete.  It will deadlock if async module loading nests;
      however, this works around the known problem case and seems to be the
      best of bad options.
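      
      The mechanism, simplified (PF_USED_ASYNC is the task flag the patch
      introduces; it is cleared before a module's init is run):
      
      	/* in async_schedule(): remember this task queued async work */
      	current->flags |= PF_USED_ASYNC;
      
      	/* in the module load path, after running mod->init(): */
      	if (current->flags & PF_USED_ASYNC)
      		async_synchronize_full();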
      
      For more details, please refer to the following thread.
      
        http://thread.gmane.org/gmane.linux.kernel/1420814
      
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Alex Riesen <raa.lkml@gmail.com>
      Tested-by: Ming Lei <ming.lei@canonical.com>
      Tested-by: Alex Riesen <raa.lkml@gmail.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      774a1221
  13. Jul 20, 2012
  14. Jan 12, 2012
  15. Oct 31, 2011
  16. Sep 15, 2011
  17. Jun 15, 2011
  18. Jul 14, 2010
  19. Mar 30, 2010
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Tejun Heo authored
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
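      
      An illustrative (hypothetical) file showing the problem: it compiles
      only because of the implicit chain, without naming its real
      dependencies.
      
      	#include <linux/module.h>	/* drags in percpu.h -> slab.h -> gfp.h */
      
      	static int __init demo_init(void)
      	{
      		void *buf = kmalloc(64, GFP_KERNEL);	/* works only via the chain */
      
      		kfree(buf);
      		return 0;
      	}
      
      	/* after the sweep, the file states what it actually uses:
      	 *	#include <linux/slab.h>		for kmalloc()/kfree()
      	 */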
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming availability.  As this
      conversion needs to touch a large number of source files, the following
      script is used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there.  i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition while adding it to implementation .h or
         embedding .c file was more appropriate for others.  This step added
         inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from the tests in
      step 6, I'm fairly confident about the coverage of this conversion
      patch.  If there is a breakage, it's likely to be something in one of
      the arch headers which should be easily discoverable on most builds of
      the specific arch.
      
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
  20. Jun 08, 2009
    • async: Fix lack of boot-time console due to insufficient synchronization · 3af968e0
      Linus Torvalds authored
      Our async work synchronization was broken by "async: make sure
      independent async domains can't accidentally entangle" (commit
      d5a877e8), because it would report
      the wrong lowest active async ID when there was both running and
      pending async work.
      
      This caused things like not being able to read the root filesystem,
      resulting in missing console devices and the inability to run 'init',
      causing a boot-time panic.
      
      This fixes it by properly returning the lowest pending async ID: if
      there is any running async work, that will have a lower ID than any
      pending work, and we should _not_ look at the pending work list.
      
      There were alternative patches from Jaswinder and James, but this one
      also cleans up the code by removing the pointless 'ret' variable and
      the unnecessary testing for an empty list around 'for_each_entry()' (if
      the list is empty, the for_each_entry() thing just won't execute).
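      
      The resulting function, roughly (a sketch per the description above):
      if anything is running it necessarily has the lowest cookie, so the
      pending list is only scanned when the running list is empty.
      
      	static async_cookie_t __lowest_in_progress(struct list_head *running)
      	{
      		struct async_entry *entry;
      
      		if (!list_empty(running)) {
      			entry = list_first_entry(running,
      						 struct async_entry, list);
      			return entry->cookie;
      		}
      
      		list_for_each_entry(entry, &async_pending, list)
      			if (entry->running == running)
      				return entry->cookie;
      
      		return next_cookie;	/* "infinity" value */
      	}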
      
      Fixes-bug: http://bugzilla.kernel.org/show_bug.cgi?id=13474
      
      Reported-and-tested-by: Chris Clayton <chris2553@googlemail.com>
      Cc: Jaswinder Singh Rajput <jaswinder@kernel.org>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3af968e0
  21. May 24, 2009
  22. Mar 28, 2009
  23. Feb 08, 2009
  24. Feb 05, 2009
    • kernel/async.c: fix printk warnings · 58763a29
      Andrew Morton authored
      
      alpha:
      
      kernel/async.c: In function 'run_one_entry':
      kernel/async.c:141: warning: format '%lli' expects type 'long long int', but argument 2 has type 'async_cookie_t'
      kernel/async.c:149: warning: format '%lli' expects type 'long long int', but argument 2 has type 'async_cookie_t'
      kernel/async.c:149: warning: format '%lld' expects type 'long long int', but argument 4 has type 's64'
      kernel/async.c: In function 'async_synchronize_cookie_special':
      kernel/async.c:250: warning: format '%lli' expects type 'long long int', but argument 3 has type 's64'
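      
      The usual fix pattern for these (a sketch; async_cookie_t is a u64
      typedef, which is not 'long long' on all architectures, so cast
      explicitly when printing with %lli/%lld):
      
      	printk(KERN_DEBUG "calling  %lli_%pF @ %i\n",
      	       (long long)entry->cookie, entry->func,
      	       task_pid_nr(current));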
      
      Cc: Arjan van de Ven <arjan@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      58763a29
  25. Jan 13, 2009
  26. Jan 09, 2009
  27. Jan 08, 2009
  28. Jan 07, 2009
    • async: don't do the initcall stuff post boot · ad160d23
      Arjan van de Ven authored
      
      While tracking the asynchronous calls during boot using the
      initcall_debug convention is useful, doing it once the kernel is done
      booting is actually bad now that we use asynchronous operations post
      boot as well...
      
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      ad160d23
    • async: Asynchronous function calls to speed up kernel boot · 22a9d645
      Arjan van de Ven authored
      
      Right now, most of the kernel boot is strictly synchronous, such that
      various hardware delays are done sequentially.
      
      In order to make the kernel boot faster, this patch introduces
      infrastructure to allow doing some of the initialization steps
      asynchronously, which will hide significant portions of the hardware delays
      in practice.
      
      In order to not change device order and other similar observables, this
      patch does NOT do full parallel initialization.
      
      Rather, it operates more in the way an out-of-order CPU does; the work
      may be done out of order and asynchronously, but the observable effects
      (instruction retiring for the CPU) are still done in the original
      sequence.
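      
      A minimal usage sketch (hypothetical caller; only the APIs this patch
      introduces are assumed):
      
      	#include <linux/async.h>
      
      	static void slow_hw_init(void *data, async_cookie_t cookie)
      	{
      		/* hardware delays happen here, off the synchronous path */
      	}
      
      	static int my_probe(void)
      	{
      		async_schedule(slow_hw_init, NULL);
      		return 0;	/* boot continues; init runs in parallel */
      	}
      
      	/* anyone depending on the observable effects flushes first,
      	 * e.g. with async_synchronize_full(); */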
      
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      22a9d645