#ifndef _ASM_X86_CMPXCHG_32_H
#define _ASM_X86_CMPXCHG_32_H

/*
 * Note: if you use set64_bit(), __cmpxchg64(), or their variants, you
 *       need to test for the feature in boot_cpu_data.
 */
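
/*
 * Illustrative sketch (not part of the original header), assuming the
 * usual boot_cpu_has()/X86_FEATURE_CX8 test from <asm/cpufeature.h>:
 * a caller that cannot rely on CONFIG_X86_CMPXCHG64 might gate its
 * cmpxchg8b-based fast path on the CX8 feature bit at run time.  The
 * function name is made up for the example.
 */
#if 0	/* example only, not compiled */
static inline int example_have_cmpxchg8b(void)
{
	/* CX8 is the CPUID feature bit for the cmpxchg8b instruction. */
	return boot_cpu_has(X86_FEATURE_CX8);
}
#endif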

/*
 * CMPXCHG8B only writes to the target if we had the previous
 * value in registers, otherwise it acts as a read and gives us the
 * "new previous" value.  That is why there is a loop.  Preloading
 * EDX:EAX is a performance optimization: in the common case it means
 * we need only one locked operation.
 *
 * A SIMD/3DNOW!/MMX/FPU 64-bit store here would require at the very
 * least an FPU save and/or %cr0.ts manipulation.
 *
 * cmpxchg8b must be used with the lock prefix here to allow the
 * instruction to be executed atomically.  We need the reader side to
 * see a coherent 64-bit value.
 */
static inline void set_64bit(volatile u64 *ptr, u64 value)
{
	u32 low  = value;
	u32 high = value >> 32;
	u64 prev = *ptr;

	asm volatile("\n1:\t"
		     LOCK_PREFIX "cmpxchg8b %0\n\t"
		     "jnz 1b"
		     : "=m" (*ptr), "+A" (prev)
		     : "b" (low), "c" (high)
		     : "memory");
}

#define __HAVE_ARCH_CMPXCHG 1
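
/*
 * Illustrative sketch (not part of the original header): publishing a
 * 64-bit value that concurrent readers must never see half-written.
 * The variable and function names are made up for the example.
 */
#if 0	/* example only, not compiled */
static volatile u64 example_last_stamp;

static inline void example_publish_stamp(u64 stamp)
{
	/* One locked cmpxchg8b in the common case; never a torn store. */
	set_64bit(&example_last_stamp, stamp);
}
#endif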

#ifdef CONFIG_X86_CMPXCHG64
#define cmpxchg64(ptr, o, n)						\
	((__typeof__(*(ptr)))__cmpxchg64((ptr), (unsigned long long)(o), \
					 (unsigned long long)(n)))
#define cmpxchg64_local(ptr, o, n)					\
	((__typeof__(*(ptr)))__cmpxchg64_local((ptr), (unsigned long long)(o), \
					       (unsigned long long)(n)))
#endif
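
/*
 * Illustrative sketch (not part of the original header): the usual
 * compare-and-swap retry loop built on cmpxchg64(), here recording a
 * 64-bit maximum.  The function name is made up for the example.
 */
#if 0	/* example only, not compiled */
static inline void example_track_max64(u64 *max, u64 val)
{
	u64 old = *max;

	while (val > old) {
		u64 prev = cmpxchg64(max, old, val);

		if (prev == old)
			break;		/* we installed val */
		old = prev;		/* lost the race; re-check */
	}
}
#endif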

static inline u64 __cmpxchg64(volatile u64 *ptr, u64 old, u64 new)
{
	u64 prev;
	asm volatile(LOCK_PREFIX "cmpxchg8b %1"
		     : "=A" (prev),
		       "+m" (*ptr)
		     : "b" ((u32)new),
		       "c" ((u32)(new >> 32)),
		       "0" (old)
		     : "memory");
	return prev;
}
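
/*
 * Illustrative sketch (not part of the original header): a 64-bit
 * fetch-and-add built on __cmpxchg64().  Real callers would normally go
 * through the cmpxchg64() wrapper; the function name is made up.
 */
#if 0	/* example only, not compiled */
static inline u64 example_add64(volatile u64 *counter, u64 delta)
{
	u64 old, prev;

	do {
		old = *counter;
		prev = __cmpxchg64(counter, old, old + delta);
	} while (prev != old);	/* another CPU won the race; retry */

	return old + delta;
}
#endif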

static inline u64 __cmpxchg64_local(volatile u64 *ptr, u64 old, u64 new)
{
	u64 prev;
	asm volatile("cmpxchg8b %1"
		     : "=A" (prev),
		       "+m" (*ptr)
		     : "b" ((u32)new),
		       "c" ((u32)(new >> 32)),
		       "0" (old)
		     : "memory");
	return prev;
}
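
/*
 * Illustrative sketch (not part of the original header): __cmpxchg64_local()
 * omits the LOCK prefix, so it is only atomic with respect to the local
 * CPU.  A caller must therefore keep the data CPU-local (e.g. per-CPU
 * data with preemption disabled).  The name below is made up.
 */
#if 0	/* example only, not compiled */
static inline int example_update_local_stamp(volatile u64 *stamp, u64 new)
{
	u64 old = *stamp;

	/* Succeeds only if no local interrupt handler changed it meanwhile. */
	return __cmpxchg64_local(stamp, old, new) == old;
}
#endif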

#ifndef CONFIG_X86_CMPXCHG64
/*
 * Building a kernel capable of running on 80386 and 80486. It may be
 * necessary to simulate cmpxchg8b on the 80386 and 80486 CPUs.
 */
					
						
							| 
									
										
										
										
											2007-05-08 00:35:02 -07:00
										 |  |  | 
 | 
					
						
							| 
									
										
										
										
											2009-09-30 17:07:54 +02:00
										 |  |  | #define cmpxchg64(ptr, o, n)					\
 | 
					
						
							|  |  |  | ({								\ | 
					
						
							|  |  |  | 	__typeof__(*(ptr)) __ret;				\ | 
					
						
							|  |  |  | 	__typeof__(*(ptr)) __old = (o);				\ | 
					
						
							|  |  |  | 	__typeof__(*(ptr)) __new = (n);				\ | 
					
						
							| 
									
										
										
										
											2010-02-24 10:54:23 +01:00
										 |  |  | 	alternative_io(LOCK_PREFIX_HERE				\ | 
					
						
							|  |  |  | 			"call cmpxchg8b_emu",			\ | 
					
						
							| 
									
										
										
										
											2009-09-30 17:07:54 +02:00
										 |  |  | 			"lock; cmpxchg8b (%%esi)" ,		\ | 
					
						
							|  |  |  | 		       X86_FEATURE_CX8,				\ | 
					
						
							|  |  |  | 		       "=A" (__ret),				\ | 
					
						
							|  |  |  | 		       "S" ((ptr)), "0" (__old),		\ | 
					
						
							|  |  |  | 		       "b" ((unsigned int)__new),		\ | 
					
						
							|  |  |  | 		       "c" ((unsigned int)(__new>>32))		\ | 
					
						
							|  |  |  | 		       : "memory");				\ | 
					
						
							|  |  |  | 	__ret; }) | 
					
						

#define cmpxchg64_local(ptr, o, n)				\
({								\
	__typeof__(*(ptr)) __ret;				\
	__typeof__(*(ptr)) __old = (o);				\
	__typeof__(*(ptr)) __new = (n);				\
	alternative_io("call cmpxchg8b_emu",			\
		       "cmpxchg8b (%%esi)",			\
		       X86_FEATURE_CX8,				\
		       "=A" (__ret),				\
		       "S" ((ptr)), "0" (__old),		\
		       "b" ((unsigned int)__new),		\
		       "c" ((unsigned int)(__new>>32))		\
		       : "memory");				\
	__ret; })
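
/*
 * Illustrative sketch (not part of the original header): callers use
 * cmpxchg64() the same way in both configurations; here the default
 * "call cmpxchg8b_emu" is patched into an inline lock cmpxchg8b at boot
 * on CPUs that have X86_FEATURE_CX8.  The function name is made up.
 */
#if 0	/* example only, not compiled */
static inline int example_claim_slot(u64 *slot, u64 id)
{
	/* Non-zero if we atomically changed the slot from 0 to id. */
	return cmpxchg64(slot, 0ULL, id) == 0ULL;
}
#endif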
					
						
#endif

#define system_has_cmpxchg_double() cpu_has_cx8
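
/*
 * Illustrative sketch (not part of the original header): choosing a
 * double-word-cmpxchg fast path only when the CPU supports it.  Both
 * helper names below are hypothetical.
 */
#if 0	/* example only, not compiled */
static inline void example_update(void)
{
	if (system_has_cmpxchg_double())
		example_cmpxchg_double_update();	/* cmpxchg8b-based */
	else
		example_spinlock_update();		/* locking fallback */
}
#endif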

#endif /* _ASM_X86_CMPXCHG_32_H */