  Mar 20, 2006
    • [SPARC64]: Optimized TSB table initialization. · bb8646d8
      David S. Miller authored
      
      We only need to write an invalid tag every 16 bytes,
      so taking advantage of this can save many instructions
      compared to the simple memset() call we make now.
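
      In rough C terms the trick looks like this (the 16-byte entry
      layout follows from the description above; the invalid-tag
      constant is an illustrative placeholder, not the kernel's
      exact value):

          #include <stdint.h>

          struct tsb_entry {                      /* one 16-byte TSB entry */
                  uint64_t tag;
                  uint64_t pte;
          };

          #define TSB_TAG_INVALID (1UL << 46)     /* assumed invalid marker */

          static void tsb_init(struct tsb_entry *tsb, unsigned long nentries)
          {
                  unsigned long i;

                  /* While the tag is invalid the pte word is never
                   * examined, so one store per 16-byte entry suffices
                   * instead of memset()ing the whole table.
                   */
                  for (i = 0; i < nentries; i++)
                          tsb[i].tag = TSB_TAG_INVALID;
          }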
      
      A prefetching implementation is provided for sun4u,
      and a block-init store version is provided for Niagara.
      
      The next trick is to be able to perform an init and
      a copy_tsb() in parallel when growing a TSB table.
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Fix and re-enable dynamic TSB sizing. · 7a1ac526
      David S. Miller authored
      
      This is good for up to a 50% performance improvement in some test
      cases.
      The problem has been the race conditions, and hopefully I've plugged
      them all up here.
      
      1) There was a serious race in switch_mm() wrt. lazy TLB
         switching to and from kernel threads.
      
         We could erroneously skip a tsb_context_switch() and thus
         use a stale TSB across a TSB grow event.
      
         There is a big comment now in that function describing
         exactly how it can happen.
      
      2) All code paths that do something with the TSB need to be
         guarded with the mm->context.lock spinlock (see the sketch
         after this list).  This makes page table flushing paths
         properly synchronize with both TSB growing and TLB context
         changes.
      
      3) TSB growing events are moved to the end of successful fault
         processing.  Previously it was in update_mmu_cache() but
         that is deadlock prone.  At the end of do_sparc64_fault()
         we hold no spinlocks that could deadlock the TSB grow
         sequence.  We also have dropped the address space semaphore.
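
      A hedged sketch of the discipline from point 2 (the context
      field names follow this log's description; the invalidation
      store stands in for whatever the real flush or insert does):

          static void flush_one_tsb_entry(struct mm_struct *mm,
                                          unsigned long vaddr)
          {
                  unsigned long flags, hash;

                  spin_lock_irqsave(&mm->context.lock, flags);

                  /* Capture the TSB pointer and size under the lock so
                   * a concurrent TSB grow cannot swap the table out
                   * underneath us.
                   */
                  hash = (vaddr >> PAGE_SHIFT) & (mm->context.tsb_nentries - 1);
                  mm->context.tsb[hash].tag = TSB_TAG_INVALID;

                  spin_unlock_irqrestore(&mm->context.lock, flags);
          }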
      
      While we're here, add prefetching to the copy_tsb() routine
      and put it in assembler into the tsb.S file.  This piece of
      code is quite time critical.
      
      There are some small negative side effects to this code which
      can be improved upon.  In particular we grab the mm->context.lock
      even for the tsb insert done by update_mmu_cache() now and that's
      a bit excessive.  We can get rid of that locking, and the same
      lock taking in flush_tsb_user(), by disabling PSTATE_IE around
      the whole operation including the capturing of the tsb pointer
      and tsb_nentries value.  That would work because anyone growing
      the TSB won't free up the old TSB until all cpus respond to the
      TSB change cross call.
      
      I'm not quite so confident in that optimization to put it in
      right now, but eventually we might be able to and the description
      is here for reference.
      
      This code seems very solid now.  It passes several parallel GCC
      bootstrap builds, and our favorite "nut cruncher" stress test which is
      a full "make -j8192" build of a "make allmodconfig" kernel.  That puts
      about 256 processes on each cpu's run queue, makes lots of process cpu
      migrations occur, causes lots of page table and TLB flushing activity,
      incurs many context version number changes, and it swaps the machine
      real far out to disk even though there is 16GB of ram on this test
      system. :-)
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Simplify TSB insert checks. · 74ae9987
      David S. Miller authored
      
      Don't try to avoid putting non-base page sized entries
      into the user TSB.  It actually costs us more to check
      this than it helps.
      
      Eventually we'll have a multiple TSB scheme for user
      processes.  Once a process starts using larger pages,
      we'll allocate and use such a TSB.
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Fix _PAGE_EXEC handling. · 45f791eb
      David S. Miller authored
      
      First of all, use the known _PAGE_EXEC_{4U,4V} value instead
      of loading _PAGE_EXEC from memory.  We either know which one
      to use by context, or we can code patch the test.
      
      Next, we need to check executability of a PTE in the generic
      TSB miss handler.
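
      In C terms the new test amounts to something like this (the
      real check lives in the assembler miss handler; the helper
      shown here is purely illustrative):

          /* Return non-zero if the PTE permits execution, using the
           * exec bit that matches the cpu type.
           */
          static int pte_executable(unsigned long pte, int is_sun4v)
          {
                  unsigned long exec_bit = is_sun4v ? _PAGE_EXEC_4V
                                                    : _PAGE_EXEC_4U;

                  return (pte & exec_bit) != 0;
          }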
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: More TLB/TSB handling fixes. · 8b234274
      David S. Miller authored
      
      The SUN4V convention with non-shared TSBs is that the context
      bit of the TAG is clear.  So we have to choose an "invalid"
      bit and initialize new TSBs appropriately.  Otherwise a zero
      TAG looks "valid".
      
      Make sure, for the window fixup cases, that we use the right
      global registers, that we don't trample on the live global
      registers used by etrap/rtrap handling (%g2 and %g6), and that
      we put the missing virtual address properly in %g5.
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Fix some SUN4V TLB handling bugs. · 6c8927c9
      David S. Miller authored
      
      1) Add error return checking for TLB load hypervisor
         calls (sketched after this list).
      
      2) Don't fallthru to dtlb tsb miss handler from itlb tsb
         miss handler, oops.
      
      3) On window fixups, propagate fault information to fixup
         handler correctly.
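
      For point 1, the shape of the check is roughly as follows
      (sun4v_dtlb_load() and sun4v_tlb_error() are hypothetical
      names standing in for the real wrappers; HV_EOK is the
      hypervisor's "no error" status):

          static void dtlb_load_checked(unsigned long vaddr, unsigned long pte)
          {
                  unsigned long status;

                  /* Hypothetical wrapper around the TLB-load
                   * hypervisor call; the status comes back in %o0.
                   */
                  status = sun4v_dtlb_load(vaddr, pte);
                  if (status != HV_EOK)
                          sun4v_tlb_error(status);        /* hypothetical */
          }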
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Do not write garbage into %pstate in tsb_context_switch(). · a7b31bac
      David S. Miller authored
      
      For SUN4V, we were clobbering %o5 to do the hypervisor call.
      This clobbers the saved %pstate value and we end up writing
      garbage into that register as a result.  Oops.
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Deal with PTE layout differences in SUN4V. · c4bce90e
      David S. Miller authored
      
      Yes, you heard it right, they changed the PTE layout for
      SUN4V.  Ho hum...
      
      This is the simple and inefficient way to support this.
      It'll get optimized, don't worry.
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Simplify sun4v TLB handling using macros. · 36a68e77
      David S. Miller authored
      
      There was also a bug in sun4v_itlb_miss: it loaded the
      MMU Fault Status base into %g3 instead of %g2.
      
      This pointed out a fast path for TSB miss processing:
      since we have the MMU Fault Status base in %g2, we can
      use that to quickly load up the PGD phys address.
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Fix hypervisor call arg passing. · 164c220f
      David S. Miller authored
      
      The function number goes in %o5, and the args go in %o0 --> %o4.
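
      A hedged sketch of that convention as C with inline assembler
      (the wrapper name is illustrative; 0x80 is the sun4v fast-trap
      number, and the status comes back in %o0):

          static inline unsigned long sun4v_fast_call(unsigned long func,
                                                      unsigned long arg0,
                                                      unsigned long arg1)
          {
                  register unsigned long o5 asm("o5") = func;  /* function number */
                  register unsigned long o0 asm("o0") = arg0;  /* args from %o0 on */
                  register unsigned long o1 asm("o1") = arg1;

                  __asm__ __volatile__("ta 0x80"          /* hypervisor fast trap */
                                       : "+r" (o0), "+r" (o1), "+r" (o5)
                                       : /* no other inputs */
                                       : "memory");
                  return o0;                              /* status in %o0 */
          }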
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • 618e9ed9
    • [SPARC64]: Implement sun4v TSB miss handlers. · aa9143b9
      David S. Miller authored
      
      When we register a TSB with the hypervisor, so that it or the
      hardware can handle TLB misses and do the TSB walk for us, the
      hypervisor traps down to these handlers when it incurs a TSB miss.
      
      Processing is simple: we load the missing virtual address and context,
      and do a full page table walk.
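
      In C terms, the "full page table walk" is the usual descent
      through the levels (a minimal sketch using the generic page
      table accessors; the real handlers do this in assembler):

          static pte_t *walk_page_table(struct mm_struct *mm,
                                        unsigned long address)
          {
                  pgd_t *pgd = pgd_offset(mm, address);
                  pud_t *pud;
                  pmd_t *pmd;

                  if (pgd_none(*pgd))
                          return NULL;
                  pud = pud_offset(pgd, address);
                  if (pud_none(*pud))
                          return NULL;
                  pmd = pmd_offset(pud, address);
                  if (pmd_none(*pmd))
                          return NULL;
                  return pte_offset_map(pmd, address);
          }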
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Initial sun4v TLB miss handling infrastructure. · d257d5da
      David S. Miller authored
      
      Things are a little tricky because, unlike sun4u, we have
      to:
      
      1) do a hypervisor trap to do the TLB load.
      2) do the TSB lookup calculations by hand
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Sanitize %pstate writes for sun4v. · 45fec05f
      David S. Miller authored
      
      If we're just switching between different alternate global
      sets, nop it out on sun4v.  Also, get rid of all of the
      alternate global save/restore in the OBP CIF trampoline code.
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Refine register window trap handling. · 314ef685
      David S. Miller authored
      
      When saving and restoring trap state, do the window spill/fill
      handling inline so that we never trap deeper than 2 trap levels.
      This is important for chips like Niagara.
      
      The window fixup code is massively simplified, and many more
      improvements are now possible.
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Add explicit register args to trap state loading macros. · ffe483d5
      David S. Miller authored
      
      This, as well as making the code cleaner, allows a simplification in
      the TSB miss handling path.
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Access TSB with physical addresses when possible. · 517af332
      David S. Miller authored
      
      This way we don't need to lock the TSB into the TLB.
      The trick is that every TSB load/store is registered into
      a special instruction patch section.  The default uses
      virtual addresses, and the patch instructions use physical
      address load/stores.
      
      We can't do this on all chips because only cheetah+ and later
      have the physical variant of the atomic quad load.
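
      A hedged sketch of the patch-section mechanism (the entry
      layout and names are illustrative; the idea is that every
      patchable load/store is recorded in a special section and
      rewritten once at boot on cheetah+ and later):

          struct tsb_patch_entry {
                  unsigned int addr;      /* location of the insn to patch */
                  unsigned int insn;      /* physical-address variant */
          };

          extern struct tsb_patch_entry __tsb_patch_start[], __tsb_patch_end[];

          static void patch_tsb_accesses(void)    /* run once at boot */
          {
                  struct tsb_patch_entry *p;

                  for (p = __tsb_patch_start; p < __tsb_patch_end; p++) {
                          unsigned int *insn = (unsigned int *)(unsigned long)p->addr;

                          *insn = p->insn;        /* install physical variant */
                          flushi(insn);           /* flush patched insn (arch helper) */
                  }
          }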
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Fix too early reference to %g6 · 9bc657b2
      David S. Miller authored
      
      %g6 is not necessarily set to current_thread_info()
      at sparc64_realfault_common.  So store the fault
      code and address after we invoke etrap and %g6 is
      properly set up.
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Kill PROM locked TLB entry preservation code. · 3487d1d4
      David S. Miller authored
      
      It is totally unnecessary complexity.  After we take over
      the trap table, we handle all PROM tlb misses fully.
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Use sparc64_highest_unlocked_tlb_ent in __tsb_context_switch() · 6b6d0172
      David S. Miller authored
      
      Instead of ugly hard-coded value.
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Add infrastructure for dynamic TSB sizing. · 98c5584c
      David S. Miller authored
      
      This also cleans up tsb_context_switch().  The assembler
      routine is now __tsb_context_switch() and the former is
      an inline function that picks out the bits from the mm_struct
      and passes them into the assembler code as arguments.
      
      setup_tsb_params() computes the locked TLB entry to map the
      TSB.  Later when we support using the physical address quad
      load instructions of Cheetah+ and later, we'll simply use
      the physical address for the TSB register value and set
      the map virtual and PTE both to zero.
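
      The shape of the split, roughly (the exact context fields and
      argument list here are illustrative):

          static inline void tsb_context_switch(struct mm_struct *mm)
          {
                  /* Pick the pieces out of the mm_struct and hand them
                   * to the assembler routine as plain arguments.
                   */
                  __tsb_context_switch(__pa(mm->pgd),
                                       mm->context.tsb,
                                       mm->context.tsb_reg_val,
                                       mm->context.tsb_map_vaddr,
                                       mm->context.tsb_map_pte);
          }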
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: TSB refinements. · 09f94287
      David S. Miller authored
      
      Move {init_new,destroy}_context() out of line.
      
      Do not put huge pages into the TSB, only base page size translations.
      There are some clever things we could do here, but for now let's be
      correct instead of fancy.
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Eliminate all usage of hard-coded trap globals. · 56fb4df6
      David S. Miller authored
      
      UltraSPARC has special sets of global registers which are switched to
      for certain trap types.  There is one set for MMU related traps, one
      set of Interrupt Vector processing, and another set (called the
      Alternate globals) for all other trap types.
      
      For what seems like forever we've hard coded the values in some of
      these trap registers.  Some examples include:
      
      1) Interrupt Vector global %g6 holds the current processor's interrupt
         work struct where received interrupts are managed for IRQ handler
         dispatch.
      
      2) MMU global %g7 holds the base of the page tables of the currently
         active address space.
      
      3) Alternate global %g6 held the current_thread_info() value.
      
      Such hardcoding has resulted in some serious issues in many areas.
      There are some code sequences where having another register available
      would help clean up the implementation.  Taking traps such as
      cross-calls from the OBP firmware requires some tricky code sequences
      wherein we have to save away and restore all of the special sets of
      global registers when we enter/exit OBP.
      
      We were also using the IMMU TSB register on SMP to hold the per-cpu
      area base address, which doesn't work any longer now that we actually
      use the TSB facility of the cpu.
      
      The implementation is pretty straightforward.  One tricky bit is
      getting the current processor ID as that is different on different cpu
      variants.  We use a stub with a fancy calling convention which we
      patch at boot time.  The calling convention is that the stub is
      branched to with the (PC - 4) to return to in register %g1.  The cpu
      number is left in %g6.  This stub can be invoked by using the
      __GET_CPUID macro.
      
      We use an array of per-cpu trap state to store the current thread and
      physical address of the current address space's page tables.  The
      TRAP_LOAD_THREAD_REG loads %g6 with the current thread from this
      table; it uses __GET_CPUID and also clobbers %g1.
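
      The per-cpu trap state table amounts to something like this
      (field names are illustrative; the contents are exactly what
      the description above calls for):

          struct trap_per_cpu {
                  struct thread_info *thread;     /* current thread */
                  unsigned long pgd_paddr;        /* phys addr of page tables */
          };

          static struct trap_per_cpu trap_block[NR_CPUS];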
      
      TRAP_LOAD_IRQ_WORK is used by the interrupt vector processing to load
      the current processor's IRQ software state into %g6.  It also uses
      __GET_CPUID and clobbers %g1.
      
      Finally, TRAP_LOAD_PGD_PHYS loads the physical address base of the
      current address space's page tables into %g7; it clobbers %g1 and uses
      __GET_CPUID.
      
      Many refinements are possible, as well as some tuning, with this stuff
      in place.
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [SPARC64]: Move away from virtual page tables, part 1. · 74bf4312
      David S. Miller authored
      
      We now use the TSB hardware assist features of the UltraSPARC
      MMUs.
      
      SMP is currently knowingly broken, we need to find another place
      to store the per-cpu base pointers.  We hid them away in the TSB
      base register, and that obviously will not work any more :-)
      
      Another known broken case is non-8KB base page size.
      
      Also noticed that flush_tlb_all() is not referenced anywhere, only
      the internal __flush_tlb_all() (local cpu only) is used by the
      sparc64 port, so we can get rid of flush_tlb_all().
      
      The kernel gets its own 8KB TSB (swapper_tsb) and each address space
      gets its own private 8K TSB.  Later we can add code to dynamically
      increase the size of per-process TSB as the RSS grows.  An 8KB TSB is
      good enough for up to about a 4MB RSS, after which the TSB starts to
      incur many capacity and conflict misses.
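
      (The arithmetic: an 8KB TSB at 16 bytes per entry holds 512
      entries, and 512 entries x 8KB base pages covers 4MB of
      mappings before capacity evictions must begin.)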
      
      We even accumulate OBP translations into the kernel TSB.
      
      Another area for refinement is large page size support.  We could use
      a secondary address space TSB to handle those.
      
      Signed-off-by: David S. Miller <davem@davemloft.net>