- Oct 02, 2012
David Howells authored
Partition the header include path flags into two sets, one for kernelspace builds and one for userspace builds. Add the following directories to build after the ordinary include directories so that #include will pick up the UAPI header directly if the kernel header has been moved there.

The userspace set (represented by the USERINCLUDE make variable) contains:

    -I $(srctree)/arch/$(hdr-arch)/include/uapi
    -I arch/$(hdr-arch)/include/generated/uapi
    -I $(srctree)/include/uapi
    -I include/generated/uapi
    -include $(srctree)/include/linux/kconfig.h

and the kernelspace set (represented by the LINUXINCLUDE make variable) contains:

    -I $(srctree)/arch/$(hdr-arch)/include
    -I arch/$(hdr-arch)/include/generated
    -I $(srctree)/include
    -I include            -- if not building in the source tree

plus everything in the USERINCLUDE set. Then use USERINCLUDE in building the x86 boot code.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Dave Jones <davej@redhat.com>
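
A hypothetical header pair illustrates the layout this supports (foo.h is an invented example, not part of the patch): the kernelspace header keeps kernel-only declarations and pulls the exported half in from uapi/, which the LINUXINCLUDE paths resolve for kernel builds, while userspace builds see only the UAPI copy through the USERINCLUDE paths.

    /* include/linux/foo.h -- hypothetical kernelspace header */
    #ifndef _LINUX_FOO_H
    #define _LINUX_FOO_H

    #include <uapi/linux/foo.h>     /* the userspace-exported half */

    /* kernel-only declarations stay here */
    int foo_register(void);

    #endif /* _LINUX_FOO_H */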
- Sep 18, 2012
Suresh Siddha authored
xsaveopt/xrstor support optimized state save/restore by tracking the INIT state and MODIFIED state during context-switch. Enable eagerfpu by default for processors supporting xsaveopt. It can be disabled by passing the "eagerfpu=off" boot parameter.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/1347300665-6209-3-git-send-email-suresh.b.siddha@intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
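
A minimal sketch of how such a boot parameter is typically wired up (the variable and enum names here are illustrative, not necessarily the patch's own):

    static enum { EAGER_AUTO, EAGER_ON, EAGER_OFF } eagerfpu = EAGER_AUTO;

    static int __init eager_fpu_setup(char *s)
    {
            if (!strcmp(s, "on"))
                    eagerfpu = EAGER_ON;
            else if (!strcmp(s, "off"))
                    eagerfpu = EAGER_OFF;
            return 1;       /* parameter handled */
    }
    __setup("eagerfpu=", eager_fpu_setup);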
Suresh Siddha authored
Decouple the non-lazy/eager fpu restore policy from the existence of the xsave feature. Introduce a synthetic CPUID flag to represent the eagerfpu policy. The "eagerfpu=on" boot parameter will enable the policy.

Requested-by: H. Peter Anvin <hpa@zytor.com>
Requested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/1347300665-6209-2-git-send-email-suresh.b.siddha@intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
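
A sketch of what a synthetic flag amounts to, assuming the bit position shown (Linux-defined synthetic flags live in capability word 3): once forced, the flag behaves like any hardware CPUID bit.

    #define X86_FEATURE_EAGER_FPU   (3*32 + 29)  /* set via eagerfpu= (bit illustrative) */

    /* forcing the cap makes cpu_has()/static_cpu_has() honor the policy */
    if (eagerfpu == EAGER_ON)
            setup_force_cpu_cap(X86_FEATURE_EAGER_FPU);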
- Sep 09, 2012
H. Peter Anvin authored
Add CPUID feature bit for Supervisor Mode Access Prevention (SMAP).

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Link: http://lkml.kernel.org/n/tip-ethzcr5nipikl6hd5q8ssepq@git.kernel.org
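
SMAP is CPUID.(EAX=07H,ECX=0):EBX[20], and the leaf-7 EBX flags live in capability word 9 in this era's cpufeature.h, so the addition amounts to roughly one define that later code can then test like any other feature (sketch):

    #define X86_FEATURE_SMAP        (9*32 + 20)  /* Supervisor Mode Access Prevention */

    if (boot_cpu_has(X86_FEATURE_SMAP))
            pr_info("SMAP available\n");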
- Jul 20, 2012
H. Peter Anvin authored
Add the RDSEED and ADX features documented in section 9.1 of the Intel Architecture Instruction Set Extensions Programming Reference, document 319433, version 013b, available from http://software.intel.com/en-us/avx/

The PREFETCHW bit is already supported in Linux under the name 3DNOWPREFETCH.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/n/tip-lgr6482ufk1bvxzvc2hr8qbp@git.kernel.org
- Jun 25, 2012
H. Peter Anvin authored
It makes sense to label "Digital Thermal Sensor" as "DTS", but unfortunately the string "dts" was already used for "Debug Store", and /proc/cpuinfo is a user space ABI. Therefore, rename this to "dtherm".

This conflict went into mainline via the hwmon tree without any x86 maintainer ack, and without any kind of hint in the subject.

    a4659053 x86/hwmon: fix initialization of coretemp

Reported-by: Jean Delvare <khali@linux-fr.org>
Link: http://lkml.kernel.org/r/4FE34BCB.5050305@linux.intel.com
Cc: Jan Beulich <JBeulich@suse.com>
Cc: <stable@vger.kernel.org> v2.6.36..v3.4
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
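
Since the /proc/cpuinfo strings are derived from these macros by the capflags generator script, the rename is essentially one define (a sketch of the change):

    /* was: #define X86_FEATURE_DTS (0*32 + 22) -- "dts" clashed with Debug Store */
    #define X86_FEATURE_DTHERM      (0*32 + 22)  /* Digital Thermal Sensor */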
- Feb 22, 2012
H. Peter Anvin authored
Add CPU features from the Intel Architecture Instruction Set Extensions Programming Reference version 012A (Feb 2012), document number 319433-012A.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
- Jan 27, 2012
Thomas Renninger authored
It is rather similar to the CPB (boost capability) feature and has existed since fam10h (it can be looked up in AMD's BKDG). The feature is needed for powernow-k8 to clean up its init functions and to provide proper autoloading matching with the new x86cpu modalias feature.

Cc: Kay Sievers <kay.sievers@vrfy.org>
Cc: Dave Jones <davej@redhat.com>
Cc: Borislav Petkov <bp@amd64.org>
Signed-off-by: Thomas Renninger <trenn@suse.de>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
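
A sketch of the autoloading this enables, assuming the flag in question is the hardware P-state one (X86_FEATURE_HW_PSTATE is an assumption here; see the commit itself for the actual definition): a driver publishes a CPU-feature match table, and udev loads the module from the x86cpu modalias.

    #include <asm/cpu_device_id.h>

    static const struct x86_cpu_id powernow_k8_ids[] = {
            X86_FEATURE_MATCH(X86_FEATURE_HW_PSTATE),
            {}
    };
    MODULE_DEVICE_TABLE(x86cpu, powernow_k8_ids);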
- Jan 26, 2012
Andreas Herrmann authored
That is the last one missing for those CPUs. Others were recently added with commits fb215366 (KVM: expose latest Intel cpu new features (BMI1/BMI2/FMA/AVX2) to guest) and 969df4b8 (x86: Report cpb and eff_freq_ro flags correctly).

Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Link: http://lkml.kernel.org/r/20120120163823.GC24508@alberich.amd.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- Dec 27, 2011
Liu, Jinsong authored
Intel's latest CPUs add 6 new features; refer to http://software.intel.com/file/36945. The new feature CPUID bits are listed below:

    1. FMA      CPUID.EAX=01H:ECX.FMA[bit 12]
    2. MOVBE    CPUID.EAX=01H:ECX.MOVBE[bit 22]
    3. BMI1     CPUID.EAX=07H,ECX=0H:EBX.BMI1[bit 3]
    4. AVX2     CPUID.EAX=07H,ECX=0H:EBX.AVX2[bit 5]
    5. BMI2     CPUID.EAX=07H,ECX=0H:EBX.BMI2[bit 8]
    6. LZCNT    CPUID.EAX=80000001H:ECX.LZCNT[bit 5]

This patch exposes these features to the guest. Among them, FMA/MOVBE/LZCNT have already been defined, and MOVBE/LZCNT have already been exposed. This patch defines BMI1/AVX2/BMI2, and exposes FMA/BMI1/AVX2/BMI2 to the guest.

Signed-off-by: Liu, Jinsong <jinsong.liu@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
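
On the KVM side, guest-visible features are collected into per-word masks that get ANDed against what the host CPU actually reports. A sketch modeled on the word-9 (leaf 7 EBX) mask of that era -- the exact membership is an assumption, not a quote from the patch:

    #define F(x) bit(X86_FEATURE_##x)       /* KVM's shorthand for feature bits */

    static const u32 kvm_supported_word9_x86_features =
            F(FSGSBASE) | F(BMI1) | F(AVX2) | F(SMEP) | F(BMI2) | F(ERMS);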
- Sep 25, 2011
Liu, Jinsong authored
These pre-definitions prepare for KVM TSC deadline timer emulation, but are not themselves KVM-specific.

Signed-off-by: Liu, Jinsong <jinsong.liu@intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
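
The pre-definitions amount to a feature bit and an MSR address, roughly as below (TSC-deadline support is CPUID.01H:ECX[24]; the MSR address is architectural):

    #define X86_FEATURE_TSC_DEADLINE_TIMER  (4*32 + 24)  /* "tsc_deadline_timer" */
    #define MSR_IA32_TSCDEADLINE            0x000006e0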
- Sep 15, 2011
Linus Torvalds authored
On x86-64, they were just wasteful: with the explicitly added (now unnecessary) padding, the size of the alternatives structure was 16 bytes, and an alignment of 8 bytes didn't hurt much. However, it was still silly, since the natural size and alignment for the structure is actually just 12 bytes, 4-byte aligned since commit 59e97e4d ("x86: Make alternative instruction pointers relative"). So removing the padding, and removing the extra alignment, is just a good idea.

On x86-32, the alignment of 4 bytes was correct, but was incorrectly hardcoded as 8 bytes in <asm/alternative-asm.h>. That header file used to be an x86-64-only header file, but various unification efforts have made it be used for x86-32 too (ie the unification of rwlock and rwsem). That in turn caused x86-32 boot failures, because the extra alignment would result in random zero-filled words in the altinstructions section, causing oopses early at boot when doing alternative instruction replacement.

So just remove all the alignment noise entirely. It's wrong, and it's unnecessary. The section itself is already properly aligned by the linker scripts, and all additions to the section had better be of the proper 12-byte format, keeping it aligned. So if the align directive were to ever make a difference, that would be an indication of a serious bug to begin with.

Reported-by: Werner Landgraf <w.landgraf@ru.r>
Acked-by: Andrew Lutomirski <luto@mit.edu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
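
For reference, the structure as it stands after the cleanup -- 12 bytes, naturally 4-byte aligned, with both instruction pointers stored as relative s32 offsets per commit 59e97e4d:

    struct alt_instr {
            s32 instr_offset;       /* original instruction, relative offset */
            s32 repl_offset;        /* replacement instruction, relative offset */
            u16 cpuid;              /* cpufeature bit gating the replacement */
            u8  instrlen;           /* length of original instruction */
            u8  replacementlen;     /* length of replacement, <= instrlen */
    };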
- Aug 22, 2011
Arun Thomas authored
This patch adds a flag for Process-Context Identifiers (PCIDs), aka Address Space Identifiers (ASIDs), aka tagged TLB support.

Signed-off-by: Arun Thomas <arun.thomas@gmail.com>
Link: http://lkml.kernel.org/r/1313782943-3898-1-git-send-email-arun.thomas@gmail.com
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
- Aug 10, 2011
Mathias Krause authored
This is an assembler implementation of the SHA1 algorithm using the Supplemental SSE3 (SSSE3) instructions or, when available, the Advanced Vector Extensions (AVX).

Testing with the tcrypt module shows the raw hash performance is up to 2.3 times faster than the C implementation, using 8k data blocks on a Core 2 Duo T5500. For the smallest data set (16 bytes) it is still 25% faster.

Since this implementation uses SSE/YMM registers it cannot safely be used in every situation, e.g. while an IRQ interrupts a kernel thread. The implementation falls back to the generic SHA1 variant if using the SSE/YMM registers is not possible.

With this algorithm I was able to increase the throughput of a single IPsec link from 344 Mbit/s to 464 Mbit/s on a Core 2 Quad CPU using the SSSE3 variant -- a speedup of +34.8%.

Saving and restoring SSE/YMM state might make the actual throughput fluctuate when there are FPU intensive userland applications running. For example, measuring the performance using iperf2 directly on the machine under test gives wobbling numbers because iperf2 uses the FPU for each packet to check if the reporting interval has expired (in the above test I got min/max/avg: 402/484/464 MBit/s).

Using this algorithm on an IPsec gateway gives much more reasonable and stable numbers, albeit not as high as in the directly connected case. Here is the result from an RFC 2544 test run with an EXFO Packet Blazer FTB-8510:

    frame size   sha1-generic    sha1-ssse3     delta
      64 byte     37.5 MBit/s    37.5 MBit/s    0.0%
     128 byte     56.3 MBit/s    62.5 MBit/s  +11.0%
     256 byte     87.5 MBit/s   100.0 MBit/s  +14.3%
     512 byte    131.3 MBit/s   150.0 MBit/s  +14.2%
    1024 byte    162.5 MBit/s   193.8 MBit/s  +19.3%
    1280 byte    175.0 MBit/s   212.5 MBit/s  +21.4%
    1420 byte    175.0 MBit/s   218.7 MBit/s  +25.0%
    1518 byte    150.0 MBit/s   181.2 MBit/s  +20.8%

The throughput for the largest frame size is lower than for the previous size because the IP packets need to be fragmented in this case to make their way through the IPsec tunnel.

Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Maxim Locktyukhin <maxim.locktyukhin@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
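
A sketch of the fallback decision in such glue code (the helper name __sha1_ssse3_update is hypothetical; the real glue lives in arch/x86/crypto):

    static int sha1_ssse3_update(struct shash_desc *desc, const u8 *data,
                                 unsigned int len)
    {
            if (!irq_fpu_usable())          /* e.g. IRQ interrupted a kernel thread */
                    return crypto_sha1_update(desc, data, len); /* generic C path */

            kernel_fpu_begin();             /* SSE/YMM state now safe to clobber */
            __sha1_ssse3_update(desc, data, len);   /* hypothetical helper */
            kernel_fpu_end();
            return 0;
    }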
- Jul 13, 2011
Andy Lutomirski authored
This saves a few bytes on x86-64 and means that future patches can apply alternatives to unrelocated code.

Signed-off-by: Andy Lutomirski <luto@mit.edu>
Link: http://lkml.kernel.org/r/ff64a6b9a1a3860ca4a7b8b6dc7b4754f9491cd7.1310563276.git.luto@mit.edu
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
- Jun 25, 2011
Christoph Lameter authored
A simple implementation that only supports the word size and does not have a fallback mode (that would require a spinlock). Add 32 and 64 bit support for cmpxchg_double. cmpxchg_double uses the cmpxchg8b or cmpxchg16b instruction on x86 processors to compare and swap 2 machine words. This allows lockless algorithms to move more context information through critical sections.

Set a flag CONFIG_CMPXCHG_DOUBLE to signal that support for double word cmpxchg detection has been built into the kernel. Note that each subsystem using cmpxchg_double has to implement a fallback mechanism as long as we offer support for processors that do not implement cmpxchg_double.

Reviewed-by: H. Peter Anvin <hpa@zytor.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Christoph Lameter <cl@linux.com>
Link: http://lkml.kernel.org/r/20110601172614.173427964@linux.com
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
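
A usage sketch, assuming the post-patch API: both words must sit in one naturally aligned double-word (a cmpxchg8b/cmpxchg16b requirement), and the call succeeds only if both old values matched and the pair was replaced atomically.

    struct pair {
            unsigned long lo, hi;
    } __attribute__((aligned(2 * sizeof(unsigned long))));

    static bool update_pair(struct pair *p,
                            unsigned long old_lo, unsigned long old_hi,
                            unsigned long new_lo, unsigned long new_hi)
    {
            /* true if both words matched and were swapped atomically */
            return cmpxchg_double(&p->lo, &p->hi,
                                  old_lo, old_hi, new_lo, new_hi);
    }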
- May 24, 2011
Kees Cook authored
The Intel manual changed the name of the CPUID bit to match the instruction name. We should follow suit for sanity's sake. (See Intel SDM Volume 2, Table 3-20 "Feature Information Returned in the ECX Register".)

[ hpa: we can only do this at this time because there are currently no CPUs with this feature on the market, hence this is pre-hardware enabling. However, Cc:'ing stable so that stable can present a consistent ABI. ]

Signed-off-by: Kees Cook <kees.cook@canonical.com>
Link: http://lkml.kernel.org/r/20110524232926.GA27728@outflux.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: <stable@kernel.org> v2.6.36-39
- May 18, 2011
Fenghua Yu authored
Add support for newly documented SMEP (Supervisor Mode Execution Protection) CPU feature flag. SMEP prevents the CPU in kernel-mode to jump to an executable page that has the user flag set in the PTE. This prevents the kernel from executing user-space code accidentally or maliciously, so it for example prevents kernel exploits from jumping to specially prepared user-mode shell code. [ hpa: added better description by Ingo Molnar ] Signed-off-by:
Fenghua Yu <fenghua.yu@intel.com> LKML-Reference: <1305683069-25394-2-git-send-email-fenghua.yu@intel.com> Signed-off-by:
H. Peter Anvin <hpa@linux.intel.com>
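
A sketch of the enablement path, modeled on that era's setup code (X86_CR4_SMEP is CR4 bit 20; the real function also honors a disable switch):

    static __init void setup_smep(struct cpuinfo_x86 *c)
    {
            if (cpu_has(c, X86_FEATURE_SMEP))
                    set_in_cr4(X86_CR4_SMEP);
    }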
- May 17, 2011
Fenghua Yu authored
Intel processors are adding enhancements to REP MOVSB/STOSB, and the use of REP MOVSB/STOSB for optimal memcpy/memset or similar functions is recommended. Enhancement availability is indicated by CPUID.7.0.EBX[9] (Enhanced REP MOVSB/STOSB).

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Link: http://lkml.kernel.org/r/1305671358-14478-2-git-send-email-fenghua.yu@intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
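
Illustrative only -- with ERMS a bare REP MOVSB is competitive with unrolled copy loops for most sizes, which is what makes such a simple implementation attractive:

    static inline void *memcpy_erms_sketch(void *dst, const void *src, size_t n)
    {
            void *ret = dst;

            asm volatile("rep movsb"
                         : "+D" (dst), "+S" (src), "+c" (n)
                         : : "memory");
            return ret;
    }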
- Mar 29, 2011
Christoph Lameter authored
Add this_cpu_has(), which determines if the current cpu has a certain ability using a segment prefix and a bit test operation. For that we need to add bit operations to x86's percpu.h.

Many uses of cpu_has use a pointer passed to a function to determine the current flags. That is no longer necessary after this patch. However, this patch only converts the straightforward cases where cpu_has is used with this_cpu_ptr. The rest is work for later.

-tj: Rolled up patch to add x86_ prefix and use percpu_read() instead of percpu_read_stable().

Signed-off-by: Christoph Lameter <cl@linux.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Tejun Heo <tj@kernel.org>
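
Usage sketch (the feature bit is just an example): no cpuinfo_x86 pointer is needed, and the test compiles down to a gs-prefixed bit test against the current CPU's capability words.

    if (this_cpu_has(X86_FEATURE_ARAT))
            pr_debug("APIC timer keeps running in deep C-states on this cpu\n");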
- Feb 16, 2011
Robert Richter authored
This patch adds support for AMD family 15h core counters. There are major changes compared to family 10h. First, there is a new perfctr msr range for up to 6 counters. Northbridge counters are separate now. This patch only adds support for core counters. Second, certain events may only be scheduled on certain counters. For this we need to extend the event scheduling and constraints.

We use cpu feature flags to calculate family 15h msr address offsets. This way we later can implement a faster ALTERNATIVE() version for this.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20110215135210.GB5874@erda.amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
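
A sketch of the new addressing scheme, assuming the usual family 15h MSR numbers: control and counter registers are interleaved pairs, so the offset for counter i is simply 2*i.

    #define MSR_F15H_PERF_CTL       0xc0010200
    #define MSR_F15H_PERF_CTR       0xc0010201

    static inline unsigned int f15h_perf_ctr_addr(unsigned int index)
    {
            return MSR_F15H_PERF_CTR + 2 * index;   /* six counters: index 0..5 */
    }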
- Sep 24, 2010
Jan Beulich authored
Using cpuid_eax() to determine feature availability on any CPU other than the current one is invalid. And feature availability should also be checked in the hotplug code path.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Cc: Rudolf Marek <r.marek@assembler.cz>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Guenter Roeck <guenter.roeck@ericsson.com>
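
The shape of the fix (the feature bit here is illustrative): consult the cached per-cpu capability bits of the CPU being probed instead of executing CPUID on whichever CPU the driver happens to run on.

    static bool cpu_has_thermal_sensor(unsigned int cpu)
    {
            /* not: cpuid_eax(6) & 0x01 -- that queries the *current* CPU */
            return cpu_has(&cpu_data(cpu), X86_FEATURE_DTHERM);
    }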
- Sep 13, 2010
Tetsuo Handa authored
Gcc 3.x generates a warning

    arch/x86/include/asm/cpufeature.h: In function `__static_cpu_has':
    arch/x86/include/asm/cpufeature.h:326: warning: asm operand 1 probably doesn't match constraints

on each file. But static_cpu_has() for gcc 3.x does not need __static_cpu_has().

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
LKML-Reference: <201008300127.o7U1RC6Z044051@www262.sakura.ne.jp>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
- Sep 08, 2010
Andre Przywara authored
The recently updated CPUID specification names new SVM feature bits. Add them to the list of reported features.

Signed-off-by: Andre Przywara <andre.przywara@amd.com>
LKML-Reference: <1283778860-26843-5-git-send-email-andre.przywara@amd.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Andre Przywara authored
AMD's public CPUID specification has been updated and some bits have got names. Add them to properly describe new CPU features.

Signed-off-by: Andre Przywara <andre.przywara@amd.com>
LKML-Reference: <1283778860-26843-3-git-send-email-andre.przywara@amd.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Andre Przywara authored
The AMD SSE5 feature set as it was originally defined has been replaced by some extensions to the AVX instruction set. Thus the bit formerly advertised as SSE5 is re-used for one of these extensions (XOP). Although this changes the /proc/cpuinfo output, it is not user visible, as there are no CPUs (yet) having this feature. To avoid confusion this should be added to the stable series, too.

Cc: stable@kernel.org [.32.x, .34.x, .35.x]
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
LKML-Reference: <1283778860-26843-2-git-send-email-andre.przywara@amd.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
- Aug 02, 2010
Michal Schmidt authored
Accommodate the original C1E-aware idle routine to the different times during boot when the BIOS enables C1E. While at it, remove the synthetic CPUID flag in favor of a single global setting which denotes C1E status on the system.

[ hpa: changed c1e_enabled to be a bool; clarified cpu bit 3:21 comment ]

Signed-off-by: Michal Schmidt <mschmidt@redhat.com>
LKML-Reference: <20100727165335.GA11630@aftab>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
- Jul 30, 2010
Fenghua Yu authored
Add package level thermal and power limit feature support. The two MSRs and features are new starting with Intel's Sandy Bridge processor. Please check the Intel 64 and IA-32 Architectures SDM Vol 3A, 14.5.6 Power Limit Notification and 14.6 Package Level Thermal Management.

This patch also fixes a bug in which the THERM_INT_LOW_ENABLE and THERM_INT_HIGH_ENABLE bits were defined in reverse.

[ hpa: fixed up against current tip:x86/cpu ]

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
LKML-Reference: <1280448826-12004-2-git-send-email-fenghua.yu@intel.com>
Reviewed-by: Len Brown <len.brown@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
- Jul 20, 2010
H. Peter Anvin authored
Clean up the formatting in cpufeature.h, and remove an unnecessary name override.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <tip-*@git.kernel.org>
- Jul 19, 2010
Suresh Siddha authored
Add cpu feature bit support for the XSAVEOPT instruction.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <20100719230205.523204988@sbs-t61.sc.intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
- Jul 08, 2010
H. Peter Anvin authored
Intel has defined CPUID leaf 7 as the next set of feature flags (see the AVX specification, version 007). Add support for this new feature flags word.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
LKML-Reference: <tip-*@vger.kernel.org>
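
The enumeration side then amounts to roughly one more cpuid_count() in get_cpu_cap() (a sketch; word 9 is where this era stores the leaf-7 EBX flags):

    if (c->cpuid_level >= 0x00000007) {
            u32 eax, ebx, ecx, edx;

            cpuid_count(0x00000007, 0, &eax, &ebx, &ecx, &edx);
            c->x86_capability[9] = ebx;     /* capability word 9: leaf 7 EBX */
    }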
- Jul 07, 2010
H. Peter Anvin authored
We already have cpufeature indices above 255, so use a 16-bit number for the alternatives index. This consumes a padding field and so doesn't add any size, but it means that abusing the padding field to create assembly errors on overflow no longer works. We can retain the test simply by redirecting it to the .discard section, however.

[ v3: updated to include open-coded locations ]

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
LKML-Reference: <tip-f88731e3068f9d1392ba71cc9f50f035d26a0d4f@git.kernel.org>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
H. Peter Anvin authored
Add support for the newly documented F16C (16-bit floating point conversions) and RDRND (RDRAND instruction) CPU feature flags.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
- Jun 16, 2010
Venkatesh Pallipadi authored
The new IA32_ENERGY_PERF_BIAS MSR allows system software to give hardware a hint whether OS policy favors more power saving, or more performance. This allows the OS to have some influence on internal hardware power/performance tradeoffs where the OS has previously had no influence.

The support for this feature is indicated by CPUID.06H.ECX.bit3, as documented in the Intel Architectures Software Developer's Manual. This patch discovers support of this feature and displays it as "epb" in /proc/cpuinfo.

Signed-off-by: Venkatesh Pallipadi <venki@google.com>
LKML-Reference: <alpine.LFD.2.00.1006032310160.6669@localhost.localdomain>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
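
A sketch of how software acts on the hint: the MSR's low 4-bit field runs from 0 (maximum performance) to 15 (maximum power saving), with 6 as the conventional "normal" setting.

    #define MSR_IA32_ENERGY_PERF_BIAS       0x000001b0

    static void energy_perf_bias_set_normal(void)
    {
            u64 epb;

            rdmsrl(MSR_IA32_ENERGY_PERF_BIAS, epb);
            wrmsrl(MSR_IA32_ENERGY_PERF_BIAS, (epb & ~0xfULL) | 6);
    }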
- May 27, 2010
H. Peter Anvin authored
gcc 3 is too braindamaged to be able to compile static_cpu_has() -- apparently it can't tell that a constant passed to an inline function is still a constant -- so if we're using gcc 3, just use the dynamic test. This is bad for performance, but if you care about performance, don't use an ancient, known-to-optimize-poorly compiler.

Reported-and-tested-by: Eric Dumazet <eric.dumazet@gmail.com>
LKML-Reference: <4BF2FF82.7090005@zytor.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
- May 12, 2010
H. Peter Anvin authored
For CPU-feature-specific code that touches performance-critical paths, introduce a static patching version of [boot_]cpu_has(). This is run at alternatives time and is therefore not appropriate for most initialization code, but on the other hand initialization code is generally not performance critical. On gcc 4.5+ this uses the new "asm goto" feature.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Avi Kivity <avi@redhat.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <1273135546-29690-2-git-send-email-avi@redhat.com>
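
Usage sketch (the helper functions are hypothetical): the branch is resolved once, when alternatives are patched, instead of being re-tested on every call.

    if (static_cpu_has(X86_FEATURE_XMM2))
            fast_sse2_path();       /* hypothetical helpers */
    else
            generic_path();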
- Apr 09, 2010
Borislav Petkov authored
By semi-popular demand, this adds the Core Performance Boost feature flag to /proc/cpuinfo. A possible use case for this is userspace tools like cpufreq-aperf, for example, so that they don't have to jump through hoops of accessing "/dev/cpu/%d/cpuid" in order to check for CPB hw support, or call cpuid from userspace.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <1270065406-1814-2-git-send-email-bp@amd64.org>
Reviewed-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
- Feb 13, 2010
Joerg Roedel authored
This patch adds code to the cpu initialization path to detect the extended virtualization features of AMD cpus and show them in /proc/cpuinfo.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
LKML-Reference: <1260792521-15212-1-git-send-email-joerg.roedel@amd.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
- Dec 16, 2009
Andreas Herrmann authored
Use the NodeId MSR to get the NodeId and the number of nodes per processor.

Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
LKML-Reference: <20091216144355.GB28798@alberich.amd.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
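
A sketch of the decoding, modeled on the fam10h topology code: the MSR carries the node id in bits 2:0 and (nodes per processor - 1) in bits 5:3.

    #define MSR_FAM10H_NODE_ID      0xc001100c

    static void read_node_topology(unsigned int *node_id, unsigned int *nodes)
    {
            u64 value;

            rdmsrl(MSR_FAM10H_NODE_ID, value);
            *node_id = value & 7;
            *nodes   = ((value >> 3) & 7) + 1;
    }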
- Oct 19, 2009
Huang Ying authored
PCLMULQDQ is used to accelerate the most time-consuming part of GHASH, carry-less multiplication. More information about PCLMULQDQ can be found at:

http://software.intel.com/en-us/articles/carry-less-multiplication-and-its-usage-for-computing-the-gcm-mode/

Because PCLMULQDQ changes XMM state, its usage must be enclosed with kernel_fpu_begin/end, which can be used only in process context. Hence the acceleration is implemented as crypto_ahash; that is, a request arriving in soft IRQ context will be deferred to the cryptd kernel thread.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>