- Nov 20, 2023
-
Johannes Berg authored
[ Upstream commit 56cfb8ce1f7f6c4e5ca571a2ec0880e131cd0311 ] There may sometimes be reasons to actually run the work if it's pending; add flush functions for both regular and delayed wiphy work that will do this. Signed-off-by:
Johannes Berg <johannes.berg@intel.com> Stable-dep-of: eadfb54756ae ("wifi: mac80211: move sched-scan stop work to wiphy work") Signed-off-by:
Sasha Levin <sashal@kernel.org>
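As a hedged usage sketch of the new helpers (the driver context and its work items below are hypothetical; per the commit, the wiphy mutex is assumed to be held around the flush):

  /* Force-run pending wiphy work before teardown. 'wdev_priv' and
   * its work items are illustrative stand-ins. */
  wiphy_lock(wiphy);
  wiphy_work_flush(wiphy, &wdev_priv->update_work);           /* regular work */
  wiphy_delayed_work_flush(wiphy, &wdev_priv->timeout_work);  /* delayed work */
  wiphy_unlock(wiphy);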
-
Harshitha Prem authored
[ Upstream commit d48f55e773dcce8fcf9e587073452a4944011b11 ] When max virtual AP interfaces are configured in all the bands with ACS and hostapd restart is done every 60s, a crash is observed at random times because uninitialized peer fragments are handled with a packet fragment id of 0. "__fls" has undefined behavior if its argument is 0, so add handling for that case. Tested-on: QCN9274 hw2.0 PCI WLAN.WBE.1.0.1-00029-QCAHKSWPL_SILICONZ-1 Fixes: d8899132 ("wifi: ath12k: driver for Qualcomm Wi-Fi 7 devices") Signed-off-by:
Harshitha Prem <quic_hprem@quicinc.com> Signed-off-by:
Kalle Valo <quic_kvalo@quicinc.com> Link: https://lore.kernel.org/r/20230821130343.29495-3-quic_hprem@quicinc.com Signed-off-by:
Sasha Levin <sashal@kernel.org>
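A minimal sketch of the class of guard the fix adds; the helper name and bitmap parameter are hypothetical, only the __fls() contract (undefined for 0) is from the kernel:

  #include <linux/bitops.h>

  /* __fls() is undefined for a zero argument, so bail out first. */
  static u32 highest_frag_slot(unsigned long frag_bitmap)
  {
          if (!frag_bitmap)       /* uninitialized peer fragments */
                  return 0;
          return __fls(frag_bitmap);
  }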
-
Anup Patel authored
[ Upstream commit f99b926f6543faeadba1b4524d8dc9c102489135 ] Multi-socket systems have a separate PLIC in each socket, so __plic_init() is invoked for each PLIC. __plic_init() registers syscore operations, which obviously fails on the second invocation. Move it into the already existing condition for installing the CPU hotplug state so it is only invoked once when the first PLIC is initialized. [ tglx: Massaged changelog ] Fixes: e80f0b6a ("irqchip/irq-sifive-plic: Add syscore callbacks for hibernation") Signed-off-by:
Anup Patel <apatel@ventanamicro.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20231025142820.390238-4-apatel@ventanamicro.com Signed-off-by:
Sasha Levin <sashal@kernel.org>
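A minimal sketch of the one-time-setup pattern the fix applies, assuming an existing plic_irq_syscore_ops; the guard flag stands in for the driver's existing first-PLIC condition:

  #include <linux/syscore_ops.h>

  static bool plic_global_setup_done;

  static void plic_global_setup(void)
  {
          if (plic_global_setup_done)
                  return;
          /* must run once for the first PLIC only, not per PLIC */
          register_syscore_ops(&plic_irq_syscore_ops);
          plic_global_setup_done = true;
  }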
-
Chen Yu authored
[ Upstream commit a0b0bad10587ae2948a7c36ca4ffc206007fbcf3 ] When a CPU is about to be offlined, x86 validates that all active interrupts which are targeted to this CPU can be migrated to the remaining online CPUs. If not, the offline operation is aborted. The validation uses irq_matrix_allocated() to retrieve the number of vectors which are allocated on the outgoing CPU. The returned number of allocated vectors includes also vectors which are associated to managed interrupts. That's overaccounting because managed interrupts are:

  - not migrated when the affinity mask of the interrupt targets only the outgoing CPU
  - migrated to another CPU, but in that case the vector is already pre-allocated on the potential target CPUs and must not be taken into account.

As a consequence the check whether the remaining online CPUs have enough capacity for migrating the allocated vectors from the outgoing CPU might fail incorrectly. Let irq_matrix_allocated() return only the number of allocated non-managed interrupts to make this validation check correct. [ tglx: Amend changelog and fixup kernel-doc comment ] Fixes: 2f75d9e1 ("genirq: Implement bitmap matrix allocator") Reported-by:
Wendy Wang <wendy.wang@intel.com> Signed-off-by:
Chen Yu <yu.c.chen@intel.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20231020072522.557846-1-yu.c.chen@intel.com Signed-off-by:
Sasha Levin <sashal@kernel.org>
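A hedged sketch of the accounting change; the field names follow the matrix allocator's per-CPU map as described above, and this should be read as an approximation of the upstream diff rather than a verbatim copy:

  /* Report only non-managed vectors: managed ones are pre-reserved
   * on their potential target CPUs and never compete for free slots. */
  unsigned int irq_matrix_allocated(struct irq_matrix *m)
  {
          struct cpumap *cm = this_cpu_ptr(m->maps);

          return cm->allocated - cm->managed_allocated;
  }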
-
Kees Cook authored
[ Upstream commit 0e108725f6cc5b3be9e607f89c9fbcbb236367b7 ] Arnd noticed we have a case where a shorter source string is being copied into a destination byte array, but this results in a strnlen() call that exceeds the size of the source. This is seen with -Wstringop-overread:

  In file included from ../include/linux/uuid.h:11,
                   from ../include/linux/mod_devicetable.h:14,
                   from ../include/linux/cpufeature.h:12,
                   from ../arch/x86/coco/tdx/tdx.c:7:
  ../arch/x86/coco/tdx/tdx.c: In function 'tdx_panic.constprop':
  ../include/linux/string.h:284:9: error: 'strnlen' specified bound 64 exceeds source size 60 [-Werror=stringop-overread]
    284 |  memcpy_and_pad(dest, _dest_len, src, strnlen(src, _dest_len), pad); \
        |  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ../arch/x86/coco/tdx/tdx.c:124:9: note: in expansion of macro 'strtomem_pad'
    124 |         strtomem_pad(message.str, msg, '\0');
        |         ^~~~~~~~~~~~

Use the smaller of the two buffer sizes when calling strnlen(). When src length is unknown (SIZE_MAX), it is adjusted to use dest length, which is what the original code did. Reported-by:
Arnd Bergmann <arnd@arndb.de> Fixes: dfbafa70 ("string: Introduce strtomem() and strtomem_pad()") Tested-by:
Arnd Bergmann <arnd@arndb.de> Cc: Andy Shevchenko <andy@kernel.org> Cc: linux-hardening@vger.kernel.org Signed-off-by:
Kees Cook <keescook@chromium.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
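A self-contained userspace sketch of the idea, assuming nothing beyond libc: bound strnlen() by the smaller of the two buffer sizes so the read can never exceed the source object.

  #include <string.h>

  #define MIN(a, b) ((a) < (b) ? (a) : (b))

  static void copy_and_pad(char *dest, size_t dest_len,
                           const char *src, size_t src_len)
  {
          size_t n = strnlen(src, MIN(dest_len, src_len));

          memcpy(dest, src, n);
          memset(dest + n, '\0', dest_len - n);   /* pad the remainder */
  }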
-
Reinette Chatre authored
[ Upstream commit 41efa431244f6498833ff8ee8dde28c4924c5479 ] The IMS related functions (pci_create_ims_domain(), pci_ims_alloc_irq(), and pci_ims_free_irq()) are not declared when CONFIG_PCI_MSI is disabled. Provide definitions of these functions for use when callers are compiled with CONFIG_PCI_MSI disabled. Fixes: 0194425a ("PCI/MSI: Provide IMS (Interrupt Message Store) support") Fixes: c9e5bea2 ("PCI/MSI: Provide pci_ims_alloc/free_irq()") Signed-off-by:
Reinette Chatre <reinette.chatre@intel.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/14ff656899a3757453f8584c1109d7a9b98fa258.1697564731.git.reinette.chatre@intel.com Signed-off-by:
Sasha Levin <sashal@kernel.org>
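A hedged sketch of the stub pattern for the CONFIG_PCI_MSI=n case; the return conventions (false / an -ENOSYS msi_map / a no-op) follow my reading of the upstream fix and should be checked against the tree:

  #ifndef CONFIG_PCI_MSI
  static inline bool pci_create_ims_domain(struct pci_dev *pdev,
                  const struct msi_domain_template *template,
                  unsigned int hwsize, void *data)
  {
          return false;   /* IMS unavailable without PCI/MSI support */
  }

  static inline struct msi_map pci_ims_alloc_irq(struct pci_dev *pdev,
                  union msi_instance_cookie *icookie,
                  const struct irq_affinity_desc *affdesc)
  {
          struct msi_map map = { .index = -ENOSYS };

          return map;
  }

  static inline void pci_ims_free_irq(struct pci_dev *pdev, struct msi_map map)
  {
  }
  #endif /* !CONFIG_PCI_MSI */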
-
Binbin Wu authored
[ Upstream commit 29060633411a02f6f2dd9d5245919385d69d81f0 ] Zero out the buffer for readlink() since readlink() does not append a terminating null byte to the buffer. Also change the buffer length passed to readlink() to 'PATH_MAX - 1' to ensure the resulting string is always null terminated. Fixes: 833c12ce ("selftests/x86/lam: Add inherit test cases for linear-address masking") Signed-off-by:
Binbin Wu <binbin.wu@linux.intel.com> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Reviewed-by:
Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Link: https://lore.kernel.org/r/20231016062446.695-1-binbin.wu@linux.intel.com Signed-off-by:
Sasha Levin <sashal@kernel.org>
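A self-contained userspace sketch of the two-part pattern the fix applies (a zeroed buffer plus a length that reserves room for the terminator); the target path is just an example:

  #include <limits.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
          char path[PATH_MAX];

          memset(path, 0, sizeof(path));  /* readlink() won't NUL-terminate */
          if (readlink("/proc/self/exe", path, sizeof(path) - 1) < 0)
                  return 1;
          printf("%s\n", path);           /* guaranteed NUL-terminated */
          return 0;
  }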
-
Peter Zijlstra authored
[ Upstream commit f06cc667f79909e9175460b167c277b7c64d3df0 ] Namhyung reported that bd275681 ("perf: Rewrite core context handling") regresses context switch overhead when perf-cgroup is in use together with 'slow' PMUs like uncore. Specifically, perf_cgroup_switch()'s perf_ctx_disable() / ctx_sched_out() etc. all iterate the full list of active PMUs for that CPU, even if they don't have cgroup events. Previously there was cgrp_cpuctx_list which linked the relevant PMUs together, but that got lost in the rework. Instead of re-introducing a similar list, let the perf_event_pmu_context iteration skip those that do not have cgroup events. This avoids growing multiple versions of the perf_event_pmu_context iteration. Measured performance (on a slightly different patch):

  Before) $ taskset -c 0 ./perf bench sched pipe -l 10000 -G AAA,BBB
  # Running 'sched/pipe' benchmark:
  # Executed 10000 pipe operations between two processes
       Total time: 0.901 [sec]
        90.128700 usecs/op
            11095 ops/sec

  After)  $ taskset -c 0 ./perf bench sched pipe -l 10000 -G AAA,BBB
  # Running 'sched/pipe' benchmark:
  # Executed 10000 pipe operations between two processes
       Total time: 0.065 [sec]
         6.560100 usecs/op
           152436 ops/sec

Fixes: bd275681 ("perf: Rewrite core context handling") Reported-by:
Namhyung Kim <namhyung@kernel.org> Debugged-by:
Namhyung Kim <namhyung@kernel.org> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20231009210425.GC6307@noisy.programming.kicks-ass.net Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Jiasheng Jiang authored
[ Upstream commit a19d48f7c5d57c0f0405a7d4334d1d38fe9d3c1c ] Add check for the return value of kstrdup() and return the error if it fails in order to avoid NULL pointer dereference. Fixes: 563ca40d ("pstore/platform: Switch pstore_info::name to const") Signed-off-by:
Jiasheng Jiang <jiasheng@iscas.ac.cn> Link: https://lore.kernel.org/r/20230623022706.32125-1-jiasheng@iscas.ac.cn Signed-off-by:
Kees Cook <keescook@chromium.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
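A minimal sketch of the pattern; the struct and function names are illustrative stand-ins, the point is the kstrdup()/-ENOMEM convention:

  static int pstore_set_name(struct my_pstore_priv *p, const char *name)
  {
          p->name = kstrdup(name, GFP_KERNEL);
          if (!p->name)   /* kstrdup() can fail: don't dereference NULL */
                  return -ENOMEM;
          return 0;
  }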
-
Paul E. McKenney authored
[ Upstream commit f44075ecafb726830e63d33fbca29413149eeeb8 ] The ->idt_seq and ->recv_jiffies variables added by: 1a3ea611 ("x86/nmi: Accumulate NMI-progress evidence in exc_nmi()") ... place the exit-time check of the bottom bit of ->idt_seq after the this_cpu_dec_return() that re-enables NMI nesting. This can result in the following sequence of events on a given CPU in kernels built with CONFIG_NMI_CHECK_CPU=y:

  o An NMI arrives, and ->idt_seq is incremented to an odd number. In addition, nmi_state is set to NMI_EXECUTING==1.
  o The NMI is processed.
  o The this_cpu_dec_return(nmi_state) zeroes nmi_state and returns NMI_EXECUTING==1, thus opting out of the "goto nmi_restart".
  o Another NMI arrives and ->idt_seq is incremented to an even number, triggering the warning.

But all is just fine, at least assuming we don't get so many closely spaced NMIs that the stack overflows or some such. Experience on the fleet indicates that the MTBF of this false positive is about 70 years. Or, for those who are not quite that patient, the MTBF appears to be about one per week per 4,000 systems. Fix this false-positive warning by moving the "nmi_restart" label before the initial ->idt_seq increment/check and moving the this_cpu_dec_return() to follow the final ->idt_seq increment/check. This way, all nested NMIs that get past the NMI_NOT_RUNNING check get a clean ->idt_seq slate. And if they don't get past that check, they will set nmi_state to NMI_LATCHED, which will cause the this_cpu_dec_return(nmi_state) to restart. Fixes: 1a3ea611 ("x86/nmi: Accumulate NMI-progress evidence in exc_nmi()") Reported-by:
Chris Mason <clm@fb.com> Signed-off-by:
Paul E. McKenney <paulmck@kernel.org> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://lore.kernel.org/r/0cbff831-6e3d-431c-9830-ee65ee7787ff@paulmck-laptop Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Ivaylo Dimitrov authored
[ Upstream commit 12590d4d0e331d3cb9e6b3494515cd61c8a6624e ] clk_get_rate() might sleep, and that prevents dm-timer based PWM from being used from atomic context. Fix that by getting the fclk rate in probe() and using a notifier in case the rate changes. Fixes: af04aa85 ("ARM: OMAP: Move dmtimer driver out of plat-omap to drivers under clocksource") Signed-off-by:
Ivaylo Dimitrov <ivo.g.dimitrov.75@gmail.com> Reviewed-by:
Tony Lindgren <tony@atomide.com> Signed-off-by:
Daniel Lezcano <daniel.lezcano@linaro.org> Link: https://lore.kernel.org/r/1696312220-11550-1-git-send-email-ivo.g.dimitrov.75@gmail.com Signed-off-by:
Sasha Levin <sashal@kernel.org>
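A hedged sketch of the probe-time caching plus rate-change notifier described above; the driver struct is hypothetical, while the clk notifier API (clk_notifier_register(), POST_RATE_CHANGE) is the standard one:

  #include <linux/clk.h>
  #include <linux/notifier.h>

  struct my_timer {
          struct clk *fclk;
          unsigned long fclk_rate;        /* cached, safe in atomic context */
          struct notifier_block fclk_nb;
  };

  static int my_fclk_notify(struct notifier_block *nb, unsigned long event,
                            void *data)
  {
          struct clk_notifier_data *cnd = data;
          struct my_timer *t = container_of(nb, struct my_timer, fclk_nb);

          if (event == POST_RATE_CHANGE)
                  t->fclk_rate = cnd->new_rate;   /* keep the cache fresh */
          return NOTIFY_OK;
  }

  /* in probe(), where sleeping is fine:
   *   t->fclk_rate = clk_get_rate(t->fclk);
   *   t->fclk_nb.notifier_call = my_fclk_notify;
   *   clk_notifier_register(t->fclk, &t->fclk_nb);
   */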
-
Frederic Weisbecker authored
[ Upstream commit 4a8e65b0c348e42107c64381e692e282900be361 ] SRCU callbacks acceleration might fail if the preceding callbacks advance also fails. This can happen when the following steps are met:

  1) The RCU_WAIT_TAIL segment has callbacks (say for gp_num 8) and the RCU_NEXT_READY_TAIL also has callbacks (say for gp_num 12).
  2) The grace period for RCU_WAIT_TAIL is observed as started but not yet completed, so rcu_seq_current() returns 4 + SRCU_STATE_SCAN1 = 5.
  3) This value is passed to rcu_segcblist_advance(), which can't move any segment forward and fails.
  4) srcu_gp_start_if_needed() still proceeds with callback acceleration. But then the call to rcu_seq_snap() observes the grace period for the RCU_WAIT_TAIL segment (gp_num 8) as completed and the subsequent one for the RCU_NEXT_READY_TAIL segment as started (ie: 8 + SRCU_STATE_SCAN1 = 9), so it returns a snapshot of the next grace period, which is 16.
  5) The value of 16 is passed to rcu_segcblist_accelerate(), but the freshly enqueued callback in RCU_NEXT_TAIL can't move to RCU_NEXT_READY_TAIL, which already has callbacks for a previous grace period (gp_num = 12). So acceleration fails.
  6) Note that in all these steps, srcu_invoke_callbacks() hadn't had a chance to run.

Then some very bad outcome may happen if the following happens:

  7) Some other CPU races and starts the grace period number 16 before the CPU handling the previous steps had a chance. Therefore srcu_gp_start() isn't called on the latter sdp to fix the acceleration leak from the previous steps with a new pair of calls to advance/accelerate.
  8) The grace period 16 completes and srcu_invoke_callbacks() is finally called. All the callbacks from previous grace periods (8 and 12) are correctly advanced and executed, but callbacks in RCU_NEXT_READY_TAIL still remain. Then rcu_segcblist_accelerate() is called with a snapshot of 20.
  9) Since nothing started the grace period number 20, callbacks stay unhandled.

This has been reported in real load:

  [3144162.608392] INFO: task kworker/136:12:252684 blocked for more than 122 seconds.
  [3144162.615986] Tainted: G O K 5.4.203-1-tlinux4-0011.1 #1
  [3144162.623053] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [3144162.631162] kworker/136:12 D 0 252684 2 0x90004000
  [3144162.631189] Workqueue: kvm-irqfd-cleanup irqfd_shutdown [kvm]
  [3144162.631192] Call Trace:
  [3144162.631202]  __schedule+0x2ee/0x660
  [3144162.631206]  schedule+0x33/0xa0
  [3144162.631209]  schedule_timeout+0x1c4/0x340
  [3144162.631214]  ? update_load_avg+0x82/0x660
  [3144162.631217]  ? raw_spin_rq_lock_nested+0x1f/0x30
  [3144162.631218]  wait_for_completion+0x119/0x180
  [3144162.631220]  ? wake_up_q+0x80/0x80
  [3144162.631224]  __synchronize_srcu.part.19+0x81/0xb0
  [3144162.631226]  ? __bpf_trace_rcu_utilization+0x10/0x10
  [3144162.631227]  synchronize_srcu+0x5f/0xc0
  [3144162.631236]  irqfd_shutdown+0x3c/0xb0 [kvm]
  [3144162.631239]  ? __schedule+0x2f6/0x660
  [3144162.631243]  process_one_work+0x19a/0x3a0
  [3144162.631244]  worker_thread+0x37/0x3a0
  [3144162.631247]  kthread+0x117/0x140
  [3144162.631247]  ? process_one_work+0x3a0/0x3a0
  [3144162.631248]  ? __kthread_cancel_work+0x40/0x40
  [3144162.631250]  ret_from_fork+0x1f/0x30

Fix this by taking the snapshot for acceleration _before_ the read of the current grace period number. The only side effect of this solution is that callback advancing then happens _after_ the full barrier in rcu_seq_snap(). This is not a problem because that barrier only cares about:

  1) Ordering accesses of the update side before call_srcu() so they don't bleed.
  2) Seeing all the accesses prior to the grace period of the current gp_num.

The only things callback advancing needs to be ordered against are carried by snp locking. Reported-by:
Yong He <alexyonghe@tencent.com> Co-developed-by:
Yong He <alexyonghe@tencent.com> Signed-off-by:
Yong He <alexyonghe@tencent.com> Co-developed-by:
Joel Fernandes (Google) <joel@joelfernandes.org> Signed-off-by:
Joel Fernandes (Google) <joel@joelfernandes.org> Co-developed-by:
Neeraj upadhyay <Neeraj.Upadhyay@amd.com> Signed-off-by:
Neeraj upadhyay <Neeraj.Upadhyay@amd.com> Link: http://lore.kernel.org/CANZk6aR+CqZaqmMWrC2eRRPY12qAZnDZLwLnHZbNi=xXMB401g@mail.gmail.com Fixes: da915ad5 ("srcu: Parallelize callback handling") Signed-off-by:
Frederic Weisbecker <frederic@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Thomas Gleixner authored
[ Upstream commit 965e05ff8af98c44f9937366715c512000373164 ] The SMT control mechanism got added as speculation attack vector mitigation. The implemented logic relies on the primary thread mask to be set up properly. This turns out to be an issue with XEN/PV guests because their CPU hotplug mechanics do not enumerate APICs and therefore the mask is never correctly populated. This went unnoticed so far because by chance XEN/PV ends up with smp_num_siblings == 2. So cpu_smt_control stays at its default value CPU_SMT_ENABLED and the primary thread mask is never evaluated in the context of CPU hotplug. This stopped "working" with the upcoming overhaul of the topology evaluation which legitimately provides a fake topology for XEN/PV. That sets smp_num_siblings to 1, which causes the CPU hotplug core to refuse to bring up the APs. This happens because cpu_smt_control is set to CPU_SMT_NOT_SUPPORTED which causes cpu_bootable() to evaluate the unpopulated primary thread mask with the conclusion that all non-boot CPUs are not valid to be plugged. The core code has already been made more robust against this kind of fail, but the primary thread mask really wants to be populated to avoid other issues all over the place. Just fake the mask by pretending that all XEN/PV vCPUs are primary threads, which is consistent because all of XEN/PV's topology is fake or non-existent. Fixes: 6a4d2657 ("x86/smp: Provide topology_is_primary_thread()") Fixes: f54d4434 ("x86/apic: Provide cpu_primary_thread mask") Reported-by:
Juergen Gross <jgross@suse.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Tested-by:
Juergen Gross <jgross@suse.com> Tested-by:
Sohil Mehta <sohil.mehta@intel.com> Tested-by:
Michael Kelley <mikelley@microsoft.com> Tested-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by:
Zhang Rui <rui.zhang@intel.com> Acked-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20230814085112.210011520@linutronix.de Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Thomas Gleixner authored
[ Upstream commit d91bdd96b55cc3ce98d883a60f133713821b80a6 ] The SMT control mechanism got added as speculation attack vector mitigation. The implemented logic relies on the primary thread mask to be set up properly. This turns out to be an issue with XEN/PV guests because their CPU hotplug mechanics do not enumerate APICs and therefore the mask is never correctly populated. This went unnoticed so far because by chance XEN/PV ends up with smp_num_siblings == 2. So smt_hotplug_control stays at its default value CPU_SMT_ENABLED and the primary thread mask is never evaluated in the context of CPU hotplug. This stopped "working" with the upcoming overhaul of the topology evaluation which legitimately provides a fake topology for XEN/PV. That sets smp_num_siblings to 1, which causes the CPU hotplug core to refuse to bring up the APs. This happens because smt_hotplug_control is set to CPU_SMT_NOT_SUPPORTED which causes cpu_smt_allowed() to evaluate the unpopulated primary thread mask with the conclusion that all non-boot CPUs are not valid to be plugged. Make cpu_smt_allowed() more robust and take CPU_SMT_NOT_SUPPORTED and CPU_SMT_NOT_IMPLEMENTED into account. Rename it to cpu_bootable() while at it as that makes it more clear what the function is about. The primary mask issue on x86 XEN/PV needs to be addressed separately as there are users outside of the CPU hotplug code too. Fixes: 05736e4a ("cpu/hotplug: Provide knobs to control SMT") Reported-by:
Juergen Gross <jgross@suse.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Tested-by:
Juergen Gross <jgross@suse.com> Tested-by:
Sohil Mehta <sohil.mehta@intel.com> Tested-by:
Michael Kelley <mikelley@microsoft.com> Tested-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by:
Zhang Rui <rui.zhang@intel.com> Acked-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20230814085112.149440843@linutronix.de Signed-off-by:
Sasha Levin <sashal@kernel.org>
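A hedged reconstruction of the resulting check, simplified for illustration; the booted-once detail is x86-specific and the exact body should be read from kernel/cpu.c:

  static bool cpu_bootable(unsigned int cpu)
  {
          if (cpu_smt_control == CPU_SMT_ENABLED)
                  return true;

          /* SMT not supported or not implemented: all CPUs are bootable,
           * regardless of the (possibly unpopulated) primary thread mask. */
          if (cpu_smt_control == CPU_SMT_NOT_SUPPORTED ||
              cpu_smt_control == CPU_SMT_NOT_IMPLEMENTED)
                  return true;

          if (topology_is_primary_thread(cpu))
                  return true;

          /* x86 still boots every CPU once so CR4.MCE gets set */
          return !cpumask_test_cpu(cpu, &cpus_booted_once_mask);
  }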
-
Yuntao Wang authored
[ Upstream commit 001470fed5959d01faecbd57fcf2f60294da0de1 ] Since the size value is added to the base address to yield the last valid byte address of the GDT, the current size value of startup_gdt_descr is incorrect (too large by one); fix it. [ mingo: This probably never mattered, because startup_gdt[] is only used in a very controlled fashion - but make it consistent nevertheless. ] Fixes: 866b556e ("x86/head/64: Install startup GDT") Signed-off-by:
Yuntao Wang <ytcoode@gmail.com> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://lore.kernel.org/r/20230807084547.217390-1-ytcoode@gmail.com Signed-off-by:
Sasha Levin <sashal@kernel.org>
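The descriptor convention in one line, as a hedged sketch (startup_gdt is the existing array; struct desc_ptr is the x86 descriptor-table pointer):

  /* The GDT limit field holds the address of the LAST valid byte,
   * so it is size - 1, not size. */
  static struct desc_ptr startup_gdt_descr = {
          .size    = sizeof(startup_gdt) - 1,
          .address = 0,
  };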
-
Adam Dunlap authored
[ Upstream commit f79936545fb122856bd78b189d3c7ee59928c751 ] Previously, if copy_from_kernel_nofault() was called before boot_cpu_data.x86_virt_bits was set up, then it would trigger undefined behavior due to a shift by 64. This ended up causing boot failures in the latest version of ubuntu2204 in the gcp project when using SEV-SNP. Specifically, this function is called during an early #VC handler which is triggered by a CPUID to check if NX is implemented. Fixes: 1aa9aa8e ("x86/sev-es: Setup GHCB-based boot #VC handler") Suggested-by:
Dave Hansen <dave.hansen@linux.intel.com> Signed-off-by:
Adam Dunlap <acdunlap@google.com> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Tested-by:
Jacob Xu <jacobhxu@google.com> Link: https://lore.kernel.org/r/20230912002703.3924521-2-acdunlap@google.com Signed-off-by:
Sasha Levin <sashal@kernel.org>
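A userspace sketch of the UB class being avoided; the helper is hypothetical and only illustrates that shifting a 64-bit value by 64 or more is undefined:

  #include <stdint.h>

  static uint64_t va_mask(unsigned int virt_bits)
  {
          if (virt_bits == 0 || virt_bits >= 64)  /* e.g. not yet set up */
                  return UINT64_MAX;
          return (UINT64_C(1) << virt_bits) - 1;  /* shift is well-defined */
  }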
-
Waiman Long authored
[ Upstream commit 6fcdb0183bf024a70abccb0439321c25891c708d ] Commit a86ce680 ("cgroup/cpuset: Extract out CS_CPU_EXCLUSIVE & CS_SCHED_LOAD_BALANCE handling") adds a new helper function update_partition_sd_lb() to update the load balance state of the cpuset. However the new load balance is determined by just looking at whether the cpuset is a valid isolated partition root or not. That is not enough if the cpuset is not a valid partition root but its parent is in the isolated state (load balance off). Update the function to set the new state to be the same as its parent in this case like what has been done in commit c8c92620 ("cgroup/cpuset: Inherit parent's load balance state in v2"). Fixes: a86ce680 ("cgroup/cpuset: Extract out CS_CPU_EXCLUSIVE & CS_SCHED_LOAD_BALANCE handling") Signed-off-by:
Waiman Long <longman@redhat.com> Signed-off-by:
Tejun Heo <tj@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Alison Schofield authored
[ Upstream commit 8f1004679987302b155f14b966ca6d4335814fcb ] Commit fd49f99c ("ACPI: NUMA: Add a node and memblk for each CFMWS not in SRAT") did not account for the case where the BIOS only partially describes a CFMWS Window in the SRAT. That means the omitted address ranges, of a partially described CFMWS Window, do not get assigned to a NUMA node. Replace the call to phys_to_target_node() with numa_add_memblks(). Numa_add_memblks() searches an HPA range for existing memblk(s) and extends those memblk(s) to fill the entire CFMWS Window. Extending the existing memblks is a simple strategy that reuses SRAT defined proximity domains from part of a window to fill out the entire window, based on the knowledge* that all of a CFMWS window is of a similar performance class. *Note that this heuristic will evolve when CFMWS Windows present a wider range of characteristics. The extension of the proximity domain, implemented here, is likely a step in developing a more sophisticated performance profile in the future. There is no change in behavior when the SRAT does not describe the CFMWS Window at all. In that case, a new NUMA node with a single memblk covering the entire CFMWS Window is created. Fixes: fd49f99c ("ACPI: NUMA: Add a node and memblk for each CFMWS not in SRAT") Reported-by:
Derick Marks <derick.w.marks@intel.com> Suggested-by:
Dan Williams <dan.j.williams@intel.com> Signed-off-by:
Alison Schofield <alison.schofield@intel.com> Signed-off-by:
Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by:
Dan Williams <dan.j.williams@intel.com> Tested-by:
Derick Marks <derick.w.marks@intel.com> Link: https://lore.kernel.org/all/eaa0b7cffb0951a126223eef3cbe7b55b8300ad9.1689018477.git.alison.schofield%40intel.com Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Alison Schofield authored
[ Upstream commit 8f012db27c9516be1a7aca93ea4a6ca9c75056c9 ] numa_fill_memblks() fills in the gaps in numa_meminfo memblks over a physical address range. The ACPI driver will use numa_fill_memblks() to implement a new Linux policy that prescribes extending proximity domains in a portion of a CFMWS window to the entire window. Dan Williams offered this explanation of the policy: A CFMWS is an ACPI data structure that indicates *potential* locations where CXL memory can be placed. It is the playground where the CXL driver has free rein to establish regions. That space can be populated by BIOS created regions, or driver created regions, after hotplug or other reconfiguration. When BIOS creates a region in a CXL Window it additionally describes that subset of the Window range in the other typical ACPI tables SRAT, SLIT, and HMAT. The rationale for BIOS not pre-describing the entire CXL Window in SRAT, SLIT, and HMAT is that it can not predict the future. I.e. there is nothing stopping higher or lower performance devices being placed in the same Window. Compare that to ACPI memory hotplug that just onlines additional capacity in the proximity domain with little freedom for dynamic performance differentiation. That leaves the OS with a choice, should unpopulated window capacity match the proximity domain of an existing region, or should it allocate a new one? This patch takes the simple position of minimizing proximity domain proliferation by reusing any proximity domain intersection for the entire Window. If the Window has no intersections then allocate a new proximity domain. Note that SRAT, SLIT and HMAT information can be enumerated dynamically in a standard way from device provided data. Think of CXL as the end of ACPI needing to describe memory attributes, CXL offers a standard discovery model for performance attributes, but Linux still needs to interoperate with the old regime. Reported-by:
Derick Marks <derick.w.marks@intel.com> Suggested-by:
Dan Williams <dan.j.williams@intel.com> Signed-off-by:
Alison Schofield <alison.schofield@intel.com> Signed-off-by:
Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by:
Dan Williams <dan.j.williams@intel.com> Tested-by:
Derick Marks <derick.w.marks@intel.com> Link: https://lore.kernel.org/all/ef078a6f056ca974e5af85997013c0fda9e3326d.1689018477.git.alison.schofield%40intel.com Stable-dep-of: 8f1004679987 ("ACPI/NUMA: Apply SRAT proximity domain to entire CFMWS window") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Ben Wolsieffer authored
[ Upstream commit c73801ae4f22b390228ebf471d55668e824198b6 ] On no-MMU, all futexes are treated as private because there is no need to map a virtual address to physical to match the futex across processes. This doesn't quite work though, because private futexes include the current process's mm_struct as part of their key. This makes it impossible for one process to wake up a shared futex being waited on in another process. Fix this bug by excluding the mm_struct from the key. With a single address space, the futex address is already a unique key. Fixes: 784bdf3b ("futex: Assume all mappings are private on !MMU systems") Signed-off-by:
Ben Wolsieffer <ben.wolsieffer@hefring.com> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Acked-by:
Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Darren Hart <dvhart@infradead.org> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: André Almeida <andrealmeid@igalia.com> Link: https://lore.kernel.org/r/20231019204548.1236437-2-ben.wolsieffer@hefring.com Signed-off-by:
Sasha Levin <sashal@kernel.org>
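A hedged sketch of the key construction on !MMU per the description above; field names follow the futex key union, and the surrounding get_futex_key() logic is elided:

  if (!IS_ENABLED(CONFIG_MMU)) {
          /* Single address space: the address alone is a unique key,
           * so don't mix in the per-process mm_struct. */
          key->private.mm = NULL;
          key->private.address = address;
          return 0;
  }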
-
Josh Poimboeuf authored
[ Upstream commit eeb9f34df065f42f0c9195b322ba6df420c9fc92 ] CONFIG_CPU_SRSO isn't dependent on CONFIG_CPU_UNRET_ENTRY (AMD Retbleed), so the two features are independently configurable. Fix several issues for the (presumably rare) case where CONFIG_CPU_SRSO is enabled but CONFIG_CPU_UNRET_ENTRY isn't. Fixes: fb3bd914 ("x86/srso: Add a Speculative RAS Overflow mitigation") Signed-off-by:
Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Signed-off-by:
Borislav Petkov (AMD) <bp@alien8.de> Acked-by:
Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/299fb7740174d0f2335e91c58af0e9c242b4bac1.1693889988.git.jpoimboe@kernel.org Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Josh Poimboeuf authored
[ Upstream commit dc6306ad5b0dda040baf1fde3cfd458e6abfc4da ] The SRSO default safe-ret mitigation is reported as "mitigated" even if microcode hasn't been updated. That's wrong because userspace may still be vulnerable to SRSO attacks due to IBPB not flushing branch type predictions. Report the safe-ret + !microcode case as vulnerable. Also report the microcode-only case as vulnerable as it leaves the kernel open to attacks. Fixes: fb3bd914 ("x86/srso: Add a Speculative RAS Overflow mitigation") Signed-off-by:
Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Signed-off-by:
Borislav Petkov (AMD) <bp@alien8.de> Acked-by:
Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/a8a14f97d1b0e03ec255c81637afdf4cf0ae9c99.1693889988.git.jpoimboe@kernel.org Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Josh Poimboeuf authored
[ Upstream commit de9f5f7b06a5b7adbfdd8016f011120a4e928add ] When overriding the requested mitigation with IBPB due to retbleed=ibpb, print the mitigation in the usual format instead of a custom error message. Signed-off-by:
Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Signed-off-by:
Borislav Petkov (AMD) <bp@alien8.de> Acked-by:
Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/ec3af919e267773d896c240faf30bfc6a1fd6304.1693889988.git.jpoimboe@kernel.org Stable-dep-of: dc6306ad5b0d ("x86/srso: Fix vulnerability reporting for missing microcode") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Josh Poimboeuf authored
[ Upstream commit 1d1142ac51307145dbb256ac3535a1d43a1c9800 ] Make the SBPB check more robust against the (possible) case where future HW has SRSO fixed but doesn't have the SRSO_NO bit set. Fixes: 1b5277c0 ("x86/srso: Add SRSO_NO support") Signed-off-by:
Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Signed-off-by:
Borislav Petkov (AMD) <bp@alien8.de> Acked-by:
Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/cee5050db750b391c9f35f5334f8ff40e66c01b9.1693889988.git.jpoimboe@kernel.org Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Jingbo Xu authored
[ Upstream commit 6654408a33e6297d8e1d2773409431d487399b95 ] The cgwb cleanup routine will try to release the dying cgwb by switching the attached inodes. It fetches the attached inodes from wb->b_attached list, omitting the fact that inodes only with dirty timestamps reside in wb->b_dirty_time list, which is the case when lazytime is enabled. This causes an enormous number of zombie memory cgroups when lazytime is enabled, as inodes with dirty timestamps can not be switched to a live cgwb for a long time. It is reasonable not to switch cgwb for inodes with dirty data, as otherwise it may break the bandwidth restrictions. However since the writeback of inode metadata is not accounted for, let's also switch inodes with dirty timestamps to avoid zombie memory and block cgroups when lazytime is enabled. Fixes: c22d70a1 ("writeback, cgroup: release dying cgwbs by switching attached inodes") Reviewed-by:
Jan Kara <jack@suse.cz> Signed-off-by:
Jingbo Xu <jefflexu@linux.alibaba.com> Link: https://lore.kernel.org/r/20231014125511.102978-1-jefflexu@linux.alibaba.com Acked-by:
Tejun Heo <tj@kernel.org> Signed-off-by:
Christian Brauner <brauner@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Reuben Hawkins authored
[ Upstream commit 7116c0af4b8414b2f19fdb366eea213cbd9d91c2 ] Readahead was factored to call generic_fadvise. That refactor added an S_ISREG restriction which broke readahead on block devices. In addition to S_ISREG, this change checks S_ISBLK to fix block device readahead. There is no change in behavior with any file type besides block devices in this change. Fixes: 3d8f7615 ("vfs: implement readahead(2) using POSIX_FADV_WILLNEED") Signed-off-by:
Reuben Hawkins <reubenhwk@gmail.com> Link: https://lore.kernel.org/r/20231003015704.2415-1-reubenhwk@gmail.com Reviewed-by:
Amir Goldstein <amir73il@gmail.com> Signed-off-by:
Christian Brauner <brauner@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
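The shape of the check after the fix, as a hedged sketch; the exact call site is in the readahead path and may differ slightly:

  /* Readahead makes sense for regular files AND block devices. */
  if (!S_ISREG(file_inode(file)->i_mode) &&
      !S_ISBLK(file_inode(file)->i_mode))
          return -EINVAL;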
-
Trond Myklebust authored
[ Upstream commit d59b3515ab021e010fdc58a8f445ea62dd2f7f4c ] The nfsd_open code handles EOPENSTALE correctly, by retrying the call to fh_verify() and __nfsd_open(). However the filecache just drops the error on the floor, and immediately returns nfserr_stale to the caller. This patch ensures that we propagate the EOPENSTALE code back to nfsd_file_do_acquire, and that we handle it correctly. Fixes: 65294c1f ("nfsd: add a new struct file caching facility to nfsd") Signed-off-by:
Trond Myklebust <trond.myklebust@hammerspace.com> Reviewed-by:
Jeff Layton <jlayton@kernel.org> Message-Id: <20230911183027.11372-1-trond.myklebust@hammerspace.com> Signed-off-by:
Chuck Lever <chuck.lever@oracle.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Peter Zijlstra authored
[ Upstream commit f0498d2a54e7966ce23cd7c7ff42c64fa0059b07 ] Kuyo reported sporadic failures on a sched_setaffinity() vs CPU hotplug stress-test -- notably affine_move_task() remains stuck in wait_for_completion(), leading to a hung-task detector warning. Specifically, it was reported that stop_one_cpu_nowait(.fn = migration_cpu_stop) returns false -- this stopper is responsible for the matching complete(). The race scenario is:

  CPU0                                  CPU1

                                        // doing _cpu_down()
  __set_cpus_allowed_ptr()
    task_rq_lock();
                                        takedown_cpu()
                                          stop_machine_cpuslocked(take_cpu_down..)

                                        <PREEMPT: cpu_stopper_thread()
                                          MULTI_STOP_PREPARE
                                          ...
    __set_cpus_allowed_ptr_locked()
      affine_move_task()
        task_rq_unlock();

                        <PREEMPT: cpu_stopper_thread()>
                          ack_state()
                            MULTI_STOP_RUN
                              take_cpu_down()
                                __cpu_disable();
                                stop_machine_park();
                                  stopper->enabled = false;
                        >
    >
  stop_one_cpu_nowait(.fn = migration_cpu_stop);
    if (stopper->enabled)               // false!!!

That is, by doing stop_one_cpu_nowait() after dropping rq-lock, the stopper thread gets a chance to preempt and allows the cpu-down for the target CPU to complete. OTOH, since stop_one_cpu_nowait() / cpu_stop_queue_work() needs to issue a wakeup, it must not be run under the scheduler locks. Solve this apparent contradiction by keeping preemption disabled over the unlock + queue_stopper combination:

  preempt_disable();
  task_rq_unlock(...);
  if (!stop_pending)
          stop_one_cpu_nowait(...)
  preempt_enable();

This respects the lock ordering constraints while still avoiding the above race. That is, if we find the CPU is online under rq-lock, the targeted stop_one_cpu_nowait() must succeed. Apply this pattern to all similar stop_one_cpu_nowait() invocations. Fixes: 6d337eab ("sched: Fix migrate_disable() vs set_cpus_allowed_ptr()") Reported-by:
"Kuyo Chang (張建文)" <Kuyo.Chang@mediatek.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by:
"Kuyo Chang (張建文)" <Kuyo.Chang@mediatek.com> Link: https://lkml.kernel.org/r/20231010200442.GA16515@noisy.programming.kicks-ass.net Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Aaron Plattner authored
[ Upstream commit e959c279d391c10b35ce300fb4b0fe3b98e86bd2 ] If objtool runs into a problem that causes it to exit early, the overall tool still returns a status code of 0, which causes the build to continue as if nothing went wrong. Note this only affects early errors, as later errors are still ignored by check(). Fixes: b51277eb ("objtool: Ditch subcommands") Signed-off-by:
Aaron Plattner <aplattner@nvidia.com> Link: https://lore.kernel.org/r/cb6a28832d24b2ebfafd26da9abb95f874c83045.1696355111.git.aplattner@nvidia.com Signed-off-by:
Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Qais Yousef authored
[ Upstream commit 23c9519def98ee0fa97ea5871535e9b136f522fc ] find_energy_efficient_cpu() bails out early if effective util of the task is 0 as the delta at this point will be zero and there's nothing for EAS to do. When uclamp is being used, this could lead to wrong decisions when uclamp_max is set to 0. In this case the task is capped to performance point 0, but it is actually running and consuming energy and we can benefit from EAS energy calculations. Rework the condition so that it bails out when both util and uclamp_min are 0. We can do that without needing to use uclamp_task_util(); remove it. Fixes: d81304bc ("sched/uclamp: Cater for uclamp in find_energy_efficient_cpu()'s early exit condition") Signed-off-by:
Qais Yousef (Google) <qyousef@layalina.io> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Reviewed-by:
Vincent Guittot <vincent.guittot@linaro.org> Reviewed-by:
Dietmar Eggemann <dietmar.eggemann@arm.com> Acked-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20230916232955.2099394-3-qyousef@layalina.io Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Qais Yousef authored
[ Upstream commit 6b00a40147653c8ea748e8f4396510f252763364 ] When uclamp_max is being used, the util of the task could be higher than the spare capacity of the CPU, but due to the uclamp_max value we force-fit it there. The way the condition for checking max_spare_cap in find_energy_efficient_cpu() was constructed, it ignored any CPU that has its spare_cap less than or _equal_ to max_spare_cap. Since we initialize max_spare_cap to 0, this led to never setting max_spare_cap_cpu and hence ending up never performing compute_energy() for this cluster and missing an opportunity for a better energy efficient placement to honour the uclamp_max setting:

  max_spare_cap = 0;
  cpu_cap = capacity_of(cpu) - cpu_util(p);  // 0 if cpu_util(p) is high

  ...

  util_fits_cpu(...);  // will return true if uclamp_max forces it to fit

  ...

  // this logic will fail to update max_spare_cap_cpu if cpu_cap is 0
  if (cpu_cap > max_spare_cap) {
          max_spare_cap = cpu_cap;
          max_spare_cap_cpu = cpu;
  }

prev_spare_cap suffers from a similar problem. Fix the logic by converting the variables into long and treating a -1 value as 'not populated' instead of 0, which is a viable and correct spare capacity value. We need to be careful that signed comparison is used when comparing with cpu_cap in one of the conditions. Fixes: 1d42509e ("sched/fair: Make EAS wakeup placement consider uclamp restrictions") Signed-off-by:
Qais Yousef (Google) <qyousef@layalina.io> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Reviewed-by:
Vincent Guittot <vincent.guittot@linaro.org> Reviewed-by:
Dietmar Eggemann <dietmar.eggemann@arm.com> Acked-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20230916232955.2099394-2-qyousef@layalina.io Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
David Howells authored
[ Upstream commit 066baf92bed934c9fb4bcee97a193f47aa63431c ] copy_mc_to_user() has the destination marked __user on powerpc, but not on x86; the latter results in a sparse warning in lib/iov_iter.c. Fix this by applying the tag on x86 too. Fixes: ec6347bb ("x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}()") Signed-off-by:
David Howells <dhowells@redhat.com> Link: https://lore.kernel.org/r/20230925120309.1731676-3-dhowells@redhat.com cc: Dan Williams <dan.j.williams@intel.com> cc: Thomas Gleixner <tglx@linutronix.de> cc: Ingo Molnar <mingo@redhat.com> cc: Borislav Petkov <bp@alien8.de> cc: Dave Hansen <dave.hansen@linux.intel.com> cc: "H. Peter Anvin" <hpa@zytor.com> cc: Alexander Viro <viro@zeniv.linux.org.uk> cc: Jens Axboe <axboe@kernel.dk> cc: Christoph Hellwig <hch@lst.de> cc: Christian Brauner <christian@brauner.io> cc: Matthew Wilcox <willy@infradead.org> cc: Linus Torvalds <torvalds@linux-foundation.org> cc: David Laight <David.Laight@ACULAB.COM> cc: x86@kernel.org cc: linux-block@vger.kernel.org cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org Signed-off-by:
Christian Brauner <brauner@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Chengming Zhou authored
[ Upstream commit c0490bc9bb62d9376f3dd4ec28e03ca0fef97152 ] We don't need to maintain per-queue leaf_cfs_rq_list on !SMP, since it's used for cfs_rq load tracking & balancing on SMP. But sched debug interface uses it to print per-cfs_rq stats. This patch fixes the !SMP version of cfs_rq_is_decayed(), so the per-queue leaf_cfs_rq_list is also maintained correctly on !SMP, to fix the warning in assert_list_leaf_cfs_rq(). Fixes: 0a00a354 ("sched/fair: Delete useless condition in tg_unthrottle_up()") Reported-by:
Leo Yu-Chi Liang <ycliang@andestech.com> Signed-off-by:
Chengming Zhou <zhouchengming@bytedance.com> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Tested-by:
Leo Yu-Chi Liang <ycliang@andestech.com> Reviewed-by:
Vincent Guittot <vincent.guittot@linaro.org> Closes: https://lore.kernel.org/all/ZN87UsqkWcFLDxea@swlinux02/ Link: https://lore.kernel.org/r/20230913132031.2242151-1-chengming.zhou@linux.dev Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Yury Norov authored
[ Upstream commit 8ab63d418d4339d996f80d02a00dbce0aa3ff972 ] When CONFIG_NUMA is enabled, sched_numa_find_nth_cpu() searches for a CPU in sched_domains_numa_masks. The masks include only online CPUs, so effectively offline CPUs are skipped. When CONFIG_NUMA is disabled, the fallback function should be consistent. Fixes: cd7f5535 ("sched: add sched_numa_find_nth_cpu()") Signed-off-by:
Yury Norov <yury.norov@gmail.com> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Cc: Mel Gorman <mgorman@suse.de> Link: https://lore.kernel.org/r/20230819141239.287290-5-yury.norov@gmail.com Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Yury Norov authored
[ Upstream commit 617f2c38cb5ce60226042081c09e2ee3a90d03f8 ] When the node provided by the user is CPU-less, the corresponding record in sched_domains_numa_masks is not set. Trying to dereference it in the following code leads to a kernel crash. To avoid it, start searching from the nearest node with CPUs. Fixes: cd7f5535 ("sched: add sched_numa_find_nth_cpu()") Reported-by:
Yicong Yang <yangyicong@hisilicon.com> Reported-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Yury Norov <yury.norov@gmail.com> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Reviewed-by:
Yicong Yang <yangyicong@hisilicon.com> Cc: Mel Gorman <mgorman@suse.de> Link: https://lore.kernel.org/r/20230819141239.287290-4-yury.norov@gmail.com Closes: https://lore.kernel.org/lkml/CAAH8bW8C5humYnfpW3y5ypwx0E-09A3QxFE1JFzR66v+mO4XfA@mail.gmail.com/T/ Closes: https://lore.kernel.org/lkml/ZMHSNQfv39HN068m@yury-ThinkPad/T/#mf6431cb0b7f6f05193c41adeee444bc95bf2b1c4 Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Yury Norov authored
[ Upstream commit b1f099b1cf51d553c510c6c8141c27d9ba7ea1fe ] The function in fact searches for the nearest node to a given one, based on the N_ONLINE state. This is a common pattern to search for a nearest node. This patch converts numa_map_to_online_node() to numa_nearest_node() so that others won't need to opencode the logic. Signed-off-by:
Yury Norov <yury.norov@gmail.com> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Cc: Mel Gorman <mgorman@suse.de> Link: https://lore.kernel.org/r/20230819141239.287290-2-yury.norov@gmail.com Stable-dep-of: 617f2c38cb5c ("sched/topology: Fix sched_numa_find_nth_cpu() in CPU-less case") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Zev Weiss authored
commit 920057ad521dc8669e534736c2a12c14ec9fb2d7 upstream. In the regmap conversion in commit 4ef27745 ("hwmon: (nct6775) Convert register access to regmap API") I reused the 'reg' variable for all three register reads in the fan speed calculation loop in nct6775_update_device(), but failed to notice that the value from the first one (data->REG_FAN[i]) is actually used in the call to nct6775_select_fan_div() at the end of the loop body. Since that patch the register value passed to nct6775_select_fan_div() has been (conditionally) incorrectly clobbered with the value of a different register than intended, which has in at least some cases resulted in fan speeds being adjusted down to zero. Fix this by using dedicated temporaries for the two intermediate register reads instead of 'reg'. Signed-off-by:
Zev Weiss <zev@bewilderbeest.net> Fixes: 4ef27745 ("hwmon: (nct6775) Convert register access to regmap API") Reported-by:
Thomas Zajic <zlatko@gmx.at> Tested-by:
Thomas Zajic <zlatko@gmx.at> Cc: stable@vger.kernel.org # v5.19+ Link: https://lore.kernel.org/r/20230929200822.964-2-zev@bewilderbeest.net Signed-off-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
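A hedged sketch of the shape of the fix: dedicated temporaries keep the first read alive for the fan-divider call. Register access names and the regmap error handling are simplified stand-ins:

  u16 fan_reg, fan_min_reg;

  fan_reg     = nct6775_read(data, data->REG_FAN[i]);
  fan_min_reg = nct6775_read(data, data->REG_FAN_MIN[i]);

  /* ... rpm calculation using fan_reg / fan_min_reg ... */

  /* fan_reg is no longer clobbered by the intermediate reads */
  nct6775_select_fan_div(dev, data, i, fan_reg);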
-
- Nov 08, 2023
-
Greg Kroah-Hartman authored
Link: https://lore.kernel.org/r/20231106130257.903265688@linuxfoundation.org Tested-by:
Ronald Warsow <rwarsow@gmx.de> Tested-by:
SeongJae Park <sj@kernel.org> Tested-by:
Florian Fainelli <florian.fainelli@broadcom.com> Tested-by:
Allen Pais <apais@linux.microsoft.com> Tested-by:
Rudi Heitbaum <rudi@heitbaum.com> Tested-by:
Bagas Sanjaya <bagasdotme@gmail.com> Tested-by:
Ron Economos <re@w6rz.net> Tested-by:
Takeshi Ogasawara <takeshi.ogasawara@futuring-girl.com> Tested-by:
Shuah Khan <skhan@linuxfoundation.org> Tested-by:
Ricardo B. Marliere <ricardo@marliere.net> Tested-by:
Guenter Roeck <linux@roeck-us.net> Tested-by:
Linux Kernel Functional Testing <lkft@linaro.org> Tested-by:
Jon Hunter <jonathanh@nvidia.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mark Hasemeyer authored
commit 7dd692217b861a8292ff8ac2c9d4458538fd6b96 upstream. Some Chromebooks do not populate the product family DMI value resulting in firmware load failures. Add another quirk detection entry that looks for "Google" in the BIOS version. Theoretically, PRODUCT_FAMILY could be replaced with BIOS_VERSION, but it is left as a quirk to be conservative. Cc: stable@vger.kernel.org Signed-off-by:
Mark Hasemeyer <markhas@chromium.org> Acked-by:
Curtis Malainey <cujomalainey@chromium.org> Link: https://lore.kernel.org/r/20231020145953.v1.1.Iaf5702dc3f8af0fd2f81a22ba2da1a5e15b3604c@changeid Signed-off-by:
Mark Brown <broonie@kernel.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
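Both this entry and the next add BIOS-version-based DMI matching; a hedged sketch of what such a quirk entry looks like (table name and callback are illustrative, and DMI_MATCH does substring matching on the field):

  static const struct dmi_system_id sof_chromebook_quirks[] = {
          {
                  .callback = chromebook_quirk_cb,   /* hypothetical */
                  .ident = "Google Chromebook (BIOS version match)",
                  .matches = {
                          DMI_MATCH(DMI_BIOS_VERSION, "Google"),
                  },
          },
          {}
  };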
-
Mark Hasemeyer authored
commit 7c05b44e1a50d9cbfc4f731dddc436a24ddc129a upstream. Some Jasperlake Chromebooks overwrite the system vendor DMI value to the name of the OEM that manufactured the device. This breaks Chromebook quirk detection as it expects the system vendor to be "Google". Add another quirk detection entry that looks for "Google" in the BIOS version. Cc: stable@vger.kernel.org Signed-off-by:
Mark Hasemeyer <markhas@chromium.org> Reviewed-by:
Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Link: https://lore.kernel.org/r/20231018235944.1860717-1-markhas@chromium.org Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-