Talk:XOR swap algorithm

Pascal implementation

I believe the Pascal implementation was wrong, although it's been a very long time since I worked in pure Pascal (I'm not even sure whether Turbo Pascal 4.0 added Borland features that weren't part of the real language). The problem I fixed was that Pascal does not treat "var" parameters as pointers - they're something different again. The original code compared X and Y by value, not by address. That would work if it were:

type
 PInteger = ^Integer;
procedure XorSwap(X, Y: PInteger);
begin
  if X <> Y then
  begin
    X^ := X^ xor Y^;
    Y^ := X^ xor Y^;
    X^ := X^ xor Y^
  end
end;

but as the parameters were passed as var integers, it would be their values that were compared. I think I've fixed it by using the @ operator, although I'm not sure it's Pascal I'm thinking of... it may be supposed to be Addr(X) <> Addr(Y); correct me if I'm wrong. Themania 04:56, 26 February 2007 (UTC)

I don't think standard Pascal has an "address-of" operator; you seem to be trying to use "@" like the unary "&" operator in C. In regular Pascal, it looks like "@" is a synonym for "^", which is the postfix pointer-dereference operator.[1] Anyway, the original code was definitely correct, so I've put it back. (It is more conservative than it needs to be, because, as you observed, it compares the values of X and Y instead of their addresses; but that's not wrong. Still, I wouldn't object to simply removing the Pascal example. Pseudocode and assembly are probably enough, actually.) --Quuxplusone 08:40, 26 February 2007 (UTC)
You're right of course that comparing by value would be fair. I just think it's a little bit misleading, however - at first glance at the article it made me wonder whether XOR swap works if you pass it two different variables with the same value - the answer is of course yes. The comment above it does state that the memory locations are checked first - so it's a bit of a contradiction to what's actually shown. Personally I'm in favour of removing everything but the pseudocode and resulting ops (/assembler)... but if the Pascal implementation is to stay, I'd like it replaced with Addr(x) <> Addr(y), where Addr is a language-defined function: http://web.mit.edu/sunsoft_v5.1/www/pascal/lang_ref/ref_lex.doc.html . The @ symbol I was using is valid only in Borland Pascal, it seems. http://www.moorecad.com/standardpascal/iso7185.txt and http://www.moorecad.com/standardpascal/iso10206.txt are where I looked; "address", "addr" and "@" all reveal no matches.
I know @ is address-of in the Borland dialects of Pascal; also, in the Borland dialects (* and { are NOT aliases. Does anyone have actual information about standard Pascal beyond a document of unknown origins that admits the information it gives "may be wrong or incomplete"? Plugwash 14:42, 26 February 2007 (UTC)
I've just done a fair amount of looking up - apparently Addr was added by Borland, so Quuxplusone was quite right... but then, from the same document, it sounds like Pascal is not even supposed to support short-circuit evaluation, so it could be argued that ISO standard Pascal is dead... or as dead as one could hope. Still, it looks like there's no way to get the address of a variable - perhaps for portability reasons? So the current implementation would be the only valid one to match.

See "Why Pascal Is Not My Favorite Language". :) It's far and away the clearest tutorial pseudocodish-yet-compilable mainstream language, though, so it's a pity that certain ideas can't be expressed in it. Perhaps pseudocode is the way to go; but that would require a clear and unambiguous pseudocode notation for "address-of". Python uses "is" to compare references, which is nice, but it uses "^" instead of "xor", which is less nice, and I think it might have worse weirdnesses. I'm looking at how to rework the whole article to use one of: a consistent pseudocode, Pascal, [x]or Python. --Quuxplusone 05:11, 28 February 2007 (UTC)[reply]

I think that would be a great improvement. Agree with your comments about Pascal there 100% =). As you said above, the current implementation is in no way wrong - but I still stand by it being a bit misleading, considering it directly follows the statement "Note that we do not swap the integers passed immediately, but first check if their memory locations are distinct." One possible solution would be to simply remove that line, or to comment the Pascal code as to why we perform the check, but -shrug-. Better to let you come up with the better wording/pseudocode =) Themania 16:46, 1 March 2007 (UTC)
Someone's commented it now. I can't come up with anything better at the moment; it turns out Python has a Java-style object model, which means no pass-by-reference (since ints aren't objects) and no pass-by-pointer (since there's no address-of operator). As a C fan, I'd like to keep the caveat about sequence points, but if the Pascal goes, I suppose the entire "Implementations" section should go. --Quuxplusone 18:59, 3 March 2007 (UTC)
Technically, Python ints are objects; their value is simply immutable. You can subclass ints in Python. -- intgr [talk] 01:16, 10 November 2007 (UTC)

The article is wrong. In Pascal, to get the address of a statically allocated variable, you simply put '@' in front of the variable name, like so: @var_name. As in: if not(@x = @y) then ... —Preceding unsigned comment added by 86.104.2.236 (talk) 22:37, 9 November 2007 (UTC)
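
For comparison, here is a sketch of the same guard in C, where the unary & operator plays the role being debated for Pascal's @/Addr; the function name and layout are illustrative only:

void XorSwap(int *x, int *y)
{
    if (x != y)          /* compares the addresses, not the values */
    {
        *x ^= *y;
        *y ^= *x;
        *x ^= *y;
    }
}

/* At the call site, & yields the address: XorSwap(&a, &b); */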

Code snippets, algorithm vs. pattern philosophy

Do we really need separate articles for code in each language? Once you've read a couple, it's pretty obvious how to do it in another. Otherwise we'll end up with articles on XOR swap in { Forth, LISP, MIXAL, SNOBOL, C#, Eiffel, Smalltalk ... }

Just a suggestion - if one wishes to keep the code snippets, maybe they should be merged into this single article?
This is an encyclopedia; it is meant to provide information. Many of the algorithm articles have long listings of code in different languages, and people who read this article will want to instantly find the code in their own preferred language. I believe that for the users' sake every language possible should be included; after all, an encyclopedia is about teaching people and providing information. Tell me what you think, and please, someone add the GML version back. I believe it is quite useful and that this algorithm should be posted in many languages.

71.28.13.158 22:33, 27 November 2006 (UTC)

The whole 'Efficiency' discussion is plain wrong; x86 optimisation is too complex for these statements. Sometimes using a temp is more efficient, the XCHG instruction is not very efficient on the Pentium (it creates register dependencies), and so forth. Also, I don't see the relevance of putting various implementations and discussions here about how it would be done in x86, C, Pascal and who knows what.

... another point: 'Basically, this trick should never be used in real code' is biased; who's to say what's fit for 'real' code, or what 'real' code is? Personally I've used it in commercial (real?) production code, more than once, and I know of several other cases; this code is still used AFAIK in some commercial products from 'serious' companies. It is not obfuscated, and I'd be surprised if IOCCC judges would be impressed. I know of no XOR-swapping compiler, and I think the last paragraph should be removed or altered.

I agree, it's POV. Also, code is not obscure unless it is uncommented - this section can simply have the preceding comment /* swap X and Y */ and it becomes clear - what's the problem? GRAHAMUK 01:20, 21 Oct 2003 (UTC)

Shouldn't XOR swapping be regarded as more of a pattern than an algorithm? Regarding production code and use: it's really a very straightforward use of the plain parity nature of XOR, as used also in RAID, for example, or in any parity checking. To some math- or hardware-oriented people this doesn't seem at all obfuscated or hard to analyse.

An algorithm is a finite sequence of steps that performs some task. The XOR swap is a finite sequence of steps that swaps two variables. It's an algorithm. Dysprosia 09:44, 24 Oct 2003 (UTC)
But what about its pattern-ish use, as in XOR-blitting a sprite onto a computer screen, for example (as patented by IBM)? The pattern of symmetric difference is used there quite trivially. Probably my usage of the term 'pattern' is skewed here; what I mean is that it's useful in the middle of other algorithms. I can give you some code examples, from production code of computer games, using XOR swapping embedded in velocity calculations, or in specific optimised mul routines on the Z80. It is an 'algorithm' in the same manner that adding together 2 numbers is.

So is adding two numbers a pattern? :) Define pattern, in any case - I still think algorithm is the best way to describe this. We still also use algorithms in the middle of other algorithms however; and I don't think "pattern" to describe a section of code is a widely used term - we may end up confusing people... Dysprosia 08:19, 28 Oct 2003 (UTC)
I will try to avoid a semantic discussion about patterns; however, adding two numbers is certainly frequent enough, even to be embedded in silicon. The term 'pattern' is confusing for it, so I would describe them now more as 'idioms' than algorithms or patterns. Anyway, let's please avoid this discussion.

Btw: I still find the code snippets heavy on the eye, and irrelevant to the page. For anyone familiar with a language's syntax, writing an XOR swap should be trivial, and I see no reason to turn lots of pages into '100 bottles of beer in 99999 different languages'.

I agree a little with you, but the issue should be discussed here before it is removed. Dysprosia 09:18, 28 Oct 2003 (UTC)
So that's what we're doing, isn't it? Aren't there some more global discussions about that? Programming-pages Wikipedia guidelines?

Of course addition is an algorithm, e.g.:

add a 0 = a
add a b = add (succ a) (pred b)

We just often forget this, since it is not usually necessary to remind ourselves how a particular addition is done (the one here only works for natural numbers...). There are several different algorithms for addition, and many for multiplication. These are not all trivial.

Re the example here, XOR is not the only way to swap two numbers, but it is just quite an easy one to understand since it only uses one operation. If the numbers to be swapped are in registers A,B and have initial values a,b the following also works:

A <- A - B (A contains a-b)
B <- B + A (B contains a)
A <- B - A (A contains a - (a-b) = b)

The reason the XOR method would be preferred is that it applies to the whole length of the numbers being swapped. The algorithm given here might not work if there are problems with the carry bit/overflow situations, whereas the XOR algorithm should always work. David Martland 09:36, 28 Oct 2003 (UTC)
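
To sketch David's point in C: using unsigned arithmetic sidesteps the carry/overflow concern, because unsigned wraparound is well-defined in C (signed overflow would be undefined behaviour). The function name is illustrative:

void AddSwap(unsigned *a, unsigned *b)
{
    if (a != b)            /* guard against aliasing, as with XOR */
    {
        *a = *a - *b;      /* A contains a-b */
        *b = *b + *a;      /* B contains b + (a-b) = a */
        *a = *b - *a;      /* A contains a - (a-b) = b */
    }
}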

I have found that it is easier to see what is going on by using this addition reasoning, since XOR is the same as bitwise addition modulo 2. (While I'm here, I'd like to suggest emphasising the special case bug a bit more, and mentioning it in the introductory material.) PML.
Addition is not one algorithm; there are plenty of ways to do it, the same way GCD isn't an algorithm, but Euclid's algorithm is a GCD algorithm.
So, the other addition algorithms are still algorithms, it doesn't make them invalid in any way. The actual number that is the gcd of two numbers isn't an algorithm, but the method you use to calculate the gcd is. Dysprosia 10:44, 28 Oct 2003 (UTC)

I'm sorry, I'm a bit off topic, but why do so many programming-related pages on Wikipedia look like a cross between a programming guide and a flame war? I couldn't find any general guidelines specific to programming or science in the FAQs. Shouldn't pages like this one talk more about the history, sociology or prevalence of their subjects, instead of recommending whether or not to use them, or debating whether or not they are efficient? How should code examples look in Wikipedia? Too many programming pages start by demonstrating how good or bad a certain technique or programming language is or is not...

Yes, you are a bit off topic :) Only messing. Anyway, I don't see how this article looks like a flame war; the discussion down the bottom is a discussion, and can't really be a flame war because it's supposed to be NPOV.
What is wrong with discussing the efficiency of an algorithm? Many studies are done into the efficiency and practicability of algorithms. Dysprosia 10:34, 5 Nov 2003 (UTC)
For what it is worth, I did a test of the XOR swapping method versus using a temporary variable when swapping 32-bit integers on a P4. Using a temp var was nearly four times as fast. Bubba73 18:56, July 14, 2005 (UTC)

Machine instructions

First:

However, all x86 microprocessors have an XCHG instruction which does the same work on its operands more efficiently than the above sequence of XORs.

Then:

In actual code, the need to exchange the contents of two variables occurs frequently. At least one machine, the Datacraft (later Harris) 6024 series did provide such a capability (...)

So x86 processors support this operation, but later ones might not? Are there others? I'm not an expert on what instructions CPUs support, so I won't edit. Fredrik 01:45, 5 Mar 2004 (UTC)

Ignorance (of the X86), stupidity, and carelessness (failure to read the preceding article) on my part, I'm afraid. Dpbsmith 01:58, 5 Mar 2004 (UTC)
So, we are left with the puzzle that we have a commonly needed operation, that is efficiently implemented in the most popular machine architecture available, but has no compact way to express it in a high-level language. Do X86 environments typically provide macros or library calls or any other way to tell the compiler that you're trying to swap two variables and the XCHG instruction should be used?
The x86 XCHG afaik can only swap integers. Swapping in higher level languages takes a bit more effort. Dysprosia 10:32, 5 Mar 2004 (UTC)
How the heck is an integer different from any other kind of data (with the same number of bits)? Dpbsmith 11:00, 5 Mar 2004 (UTC)
I apologize - what I mean is, for example, if you have, say, a C structure or union, XCHG on its own isn't going to work - you either have to copy the memory block the struct/union occupies, or, copy each component of the struct one by one, making sure you're not violating/changing any extra alignment padding. Dysprosia 11:24, 5 Mar 2004 (UTC)
There's no need for special macros. Most modern C compilers are good at optimization - they can be expected to use XCHG if it's useful to do so. (But it's not useful as frequently as you might think. The exchange of two ints can often be avoided completely by being careful about which registers are used to store intermediate results, and compilers such as GCC are clever enough to do this.) --Zundark 13:39, 5 Mar 2004 (UTC)
There's no need for a ++ operator either; the good old ALGOL a := a + 1 expresses the intention well enough and most compiler optimizers can handle it. That doesn't mean ++ wasn't a good idea.
More to the point... I added the section on machines with hardware swap capability, but I'm now unsure whether or not it belongs. I still think it does have some relevance, but it's a bit marginal, and if someone wants to delete it, I'm not going to try to restore it. Dpbsmith 13:47, 5 Mar 2004 (UTC)
I've been told that ++ exists in C because it maps nicely to a PDP-11 opcode for increment. Dysprosia 13:54, 5 Mar 2004 (UTC)
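
To illustrate Zundark's point above, a minimal sketch of the plain temporary-variable swap; a modern optimising compiler (e.g. GCC at -O2) can typically keep all three values in registers and may emit an XCHG or no swap code at all, so no special macro is needed (the function name is hypothetical):

static inline void SwapInts(int *a, int *b)
{
    int tmp = *a;    /* an ordinary temp swap; the compiler is   */
    *a = *b;         /* free to reduce this to register renaming */
    *b = tmp;        /* or a single XCHG, whichever is cheaper   */
}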

Parallelisation

One paragraph in the article states that modern computers can do parallel computation, and hence other swap algorithms may be faster than the XOR swap. Can someone explain how other swapping techniques achieve parallel processing?

Take the vanilla approach for example.

var tmp1 <- var A
var A <- var B
var B <- var tmp1

Regardless of how you slice and dice the steps and the memory accesses to the registers, you still have to wait till each data transfer is complete before you can do the next step; even if you parallelize everything, these operations need to be synchronized. How would parallelism benefit? Am I missing something?

Here's a parallel version:
tmp1 <- A  ||  tmp2 <- B
B <- tmp1  ||  A <- tmp2
-- Smjg 16:52, 12 Apr 2005 (UTC)
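
Written out in C for concreteness: in the two-temporary version the loads are mutually independent and so are the stores, so a superscalar processor can overlap them, whereas the three XORs of the XOR swap form a strict serial dependency chain. The function name is illustrative:

void SwapParallel(int *a, int *b)
{
    int tmp1 = *a;   /* these two loads are independent...  */
    int tmp2 = *b;
    *b = tmp1;       /* ...and so are these two stores      */
    *a = tmp2;
}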

The XOR algorithm was commonly known to assembly language programmers many years ago. It gets the best benefit at register level, when memory access is eliminated and the use of an intermediate register is eliminated. I don't see why someone would want to do this trick in high-level languages, because this kind of optimization can be automated by a good code generator in a compiler.

Modern PC processors use register renaming, so one register in your machine code may map to different physical registers on the CPU. What matters is the dependency chain - that is, which instructions depend on which other instructions - not really what registers the programmer decided to call the results. Plugwash 10:29, 22 March 2006 (UTC)

Generalisation

Has anyone tried generalising the xor swap algorithm to cyclic permutations of three or more values?

It is possible to write the 3-cycle as five ^= statements

x ^= y;
y ^= x;
z ^= x;
x ^= z;
z ^= x ^ y;

but that's still six XOR operations, the same as swapping y<->z and then x<->y.

The interesting question is: Is it possible, for any n, to cycle n variables in place using fewer than 3n operations?

You need a temporary register for storing the result of x ^ y. That means no register space is saved with this algorithm. --Graue 16:59, 18 September 2005 (UTC)
By the commutativity of ^, z ^= x; z ^= y; is the same as z ^= x ^ y but avoids the temporary. Note that this makes steps 3-5 the same as z<->x, and as step 6 is z ^= y, this means that you can write steps 3-6 as x ^= y; z<->x; and thus steps 1-6 as x<->y; z<->x; --80.175.250.218 12:15, 30 January 2007 (UTC)
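
A quick C check of the five-statement 3-cycle given above, confirming that it sends (x, y, z) = (a, b, c) to (c, a, b):

#include <assert.h>

int main(void)
{
    unsigned x = 1, y = 2, z = 3;
    x ^= y;
    y ^= x;
    z ^= x;
    x ^= z;
    z ^= x ^ y;
    assert(x == 3 && y == 1 && z == 2);   /* (1,2,3) -> (3,1,2) */
    return 0;
}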

Caveat is wrong

The caveat section says:

One would assume that "swapping a variable with itself" effectively does nothing. The standard, intuitive algorithm that uses a temporary variable indeed comes out that way, but the XOR algorithm will xor the variable with itself, thus setting it to zero instead.

But this is wrong. Why don't we let A = B = 5, and see what happens:

  1. A := A xor B
    A = 5 xor 5 = 0
  2. B := A xor B
    B = 0 xor 5 = 5
  3. A := A xor B
    A = 0 xor 5 = 5

Sure, there's a zero involved, but then the other property of XOR kicks in: a number XORed with 0 is the number you started with. I'm going to remove this section. MShonle 19:20, 14 August 2005 (UTC)

Ah, looking at this a little further, the case where this does break is when X and Y are aliased to the same memory location. A brief sentence could mention this; it doesn't need to be a section. MShonle 21:49, 14 August 2005 (UTC)
Same variable means same memory location. Variable != value. Fredrik Johansson - talk - contribs 21:05, 17 January 2006 (UTC)
If we're going to have a whole article about this trick, and are going to get into the details of how it works, a section could easily be dedicated to pointing out this flaw and working it out. Aliasing can occur in many situations, starting with random shuffles or sorting algorithms that use a sentinel. If your basic swap operation breaks in such cases, you're in for a world of hurt. This deserves more than just a passing mention. 199.111.193.97 04:48, 22 March 2006 (UTC)
I'm adding to the article to point out that the technique works even if the values are equal. It's not entirely clear from the article that value equality does not cause a problem; only aliasing does. -Chinmay —Preceding unsigned comment added by 59.163.212.199 (talk) 08:43, 22 January 2008 (UTC)
The article says both "However, the problem still remains that if x and y use the same storage location, the value stored in that location will be zeroed out by the first XOR instruction, and then remain zero..." and "...but first checks if their memory locations are distinct. This will remove problems caused by possible aliasing.". There's also a subsection entitled "Aliasing". This seems sufficient to me. Oli Filth(talk) 09:46, 22 January 2008 (UTC)

eL: on close observation, the code sample "if(*x!=*y)" tests the values and not the locations; it should be "if(x!=y)"... fixed. —Preceding unsigned comment added by 196.212.27.12 (talk) 15:21, 4 February 2010 (UTC)
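
For the record, a minimal C demonstration of the aliasing failure discussed in this section (the unguarded function is hypothetical, for illustration only):

#include <stdio.h>

void XorSwapUnguarded(int *x, int *y)
{
    *x ^= *y;    /* if x == y, this zeroes the shared location...  */
    *y ^= *x;    /* ...and XORing with 0 leaves it 0 from here on  */
    *x ^= *y;
}

int main(void)
{
    int a = 5;
    XorSwapUnguarded(&a, &a);    /* same address for both arguments */
    printf("%d\n", a);           /* prints 0, not 5 */
    return 0;
}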

application?

When would you wish to swap values (say, on an abacus)? Why can one not simply use B instead of A when needed? Even when doing something like sorting, say: (a) check if A > B; (b) if so, then swap A and B

you would still need some register to store the result of A-B, etc.

Swap is the fundamental operation of sorting and other algorithms. —Ben FrantzDale 08:01, 9 February 2007 (UTC)

Too much strength given to the xor swap.

I'm going to edit this page shortly, but I thought I'd ask for some comments first.

Firstly, "Another advantage is that this algorithm is actually easier to remember and write without mistakes than the standard temporary variable swap algorithm.."

Surely that is not true? You have to remember to write first to the variable you copied into the temp. That is all you have to remember. Surely that's easier than what is written down.

Also, one very important thing I think is missing from this article: when using an optimising compiler for a language like C, C++ or Java, a temporary is almost always better.

This is because any optimising compiler will turn:

int temp = a;
a = b;
b = temp;

into:

Load a into register 1
Load b into register 2
Put register 1 into memory location b
Put register 2 into memory location a

and if a and b are already in register, the "temporary variable swap" will turn into no code at all, and the compiler will just note that the locations of the variables have swapped around.

If it is ever better to swap the variables using the xor trick, then your compiler will know about it :) —Preceding unsigned comment added by Mrjeff (talkcontribs)

Yep. This is mentioned in the article, but I agree that it needs work. People keep adding little snippets of (mis?)information about assembly programming on various platforms, the history of C, and whatnot, which really ought to be consolidated into coherent sections of the article. For example, there could be one section on the abstract algorithm, one section on actual (documented) uses, and one section on reasons it should never be used. :) However, it needs to be worked out so that people don't keep reading the first paragraph, discovering that it doesn't mention how many CPU cycles XOR swap takes on a PDP-42, and adding that in. There needs to be a clearly demarcated section for random machine-language exhumations. --Quuxplusone 05:52, 16 April 2006 (UTC)

Proposed merge of XOR and ADD/SUB article pairs

I propose to briefly summarize Swap by addition and subtraction as a section of the XOR swap algorithm article, and change the former into a redirect. The two methods share everything in common except for the particular arithmetic operator they choose to use.

At the same time, I propose to briefly summarize Subtraction edge as a section of the XOR linked list article, and change the former into a redirect. The two methods, blah, blah, blah.

Discuss here; I'll perform the merges in a few days if I hear no strong objections. --Quuxplusone 03:39, 28 November 2006 (UTC)

Sure. Ideally this would involve renaming the articles to something more inclusive, but I can't think of anything. Deco 05:54, 28 November 2006 (UTC)
I don't think they should be merged, as they are different methods after all. Also, in practice, swap by addition and subtraction is not done... However, it would be good to have them linked under the see-also section. --Cyktsui 23:33, 6 December 2006 (UTC)
XOR swapping is probably the more commonly discussed trick for swapping, and is also linked with XOR linked list. This page is also more likely to be found by someone searching for XOR and swap. Does anyone have a historical reference for XOR swap? Was XOR swap conceived before arithmetic swaps? cbenz 14 December 2006 —The preceding unsigned comment was added by 66.92.80.211 (talk) 23:17, 14 December 2006 (UTC).

Merge complete, now that I finally got around to it. (Cyktsui, duly noted, but then many things which are not quite the same often share a Wikipedia article.) --Quuxplusone 05:44, 18 December 2006 (UTC)

N.B.

From the point of view of logic, the add-subtract algorithm requires additional storage, whereas XOR doesn't. --VKokielov 18:09, 30 April 2007 (UTC)

...no, not really; you can build any circuit you like. --VKokielov 18:50, 30 April 2007 (UTC)
You could say that the additional storage, in this case, is the additional space it takes to store the table corresponding to subtraction. Since, first, this table is always on hand, and, second, it is stored once, there may still be an advantage. --VKokielov 18:54, 30 April 2007 (UTC)
You seem to be arguing with yourself, VKokielov! :) FYI, modern computers typically use two's complement arithmetic, in which the operation "subtract X from Y" is equivalent to the operation "add NOT(X) to Y, and then add 1 to the result". As you seem to have realized, the addition algorithm can be performed in hardware very easily; see the Wikipedia article Adder (electronics) for some more details.
Bottom line: Addition requires no more and no less "additional storage" than XOR. And in binary arithmetic, there are no "tables" to memorize, the way we memorize times tables for decimal arithmetic. In binary, there are only two digits, so any "table" would only have four entries — it's actually easier to encode the rules of arithmetic directly in the hardware than to bother with external tables. Hope this helps! --Quuxplusone 04:48, 1 May 2007 (UTC)
obviously there are two of us.  ;) Well, no...but I'm less prudent these days...I first talk and then think. --VKokielov 13:07, 1 May 2007 (UTC)
"add NOT(X) to Y, and then add 1 to the result" <-- or rather add NOT(X) to Y while feeding 1 to carry-in. Plugwash (talk) 12:23, 25 January 2010 (UTC)

Short note

I've written a small note about this subject. It can be found here: Swapping the values of two variables. Do you think any of its content could be posted here?

Thanks. --jff 15:32, 11 July 2007 (UTC)

Looks to me like most of its content is posted here. :) You demonstrate the XOR-swap trick and the "swap by addition and subtraction" trick, and mention that they can be generalized to use other operators instead of xor/add/subtract. That's basically what the Wikipedia article says at the moment, right? (The WP article goes off on other tangents also, of course.) --Quuxplusone 02:12, 12 July 2007 (UTC)
Well, the XOR-swap trick in the note is just an instantiation. Why not use "bitwise equality"? And why does the article mention that XOR is commutative? It doesn't matter for the problem! The only important properties are associativity and unitpotency! Also, the note considers a generalisation: why use just one operator? IMO, it contributes something new :) Thanks for your comment! --jff 12:13, 12 July 2007 (UTC)

Removed code-tags...

...because they caused layout problems. If they were there for a good reason, please reinsert them, but try to do it in such a way that these problems don't return. Shinobu 17:02, 19 August 2007 (UTC)

Group theoretic proof

Please don't delete the group-theoretic proof of why this works. It's very important to show WHY the XOR operation behaves in this way. The other proofs here simply show that it will work, but the group-theoretic proof also says why. This proof clearly shows how the L4a property - each element is its own inverse - enables the XOR function to be used as its own inverse. I spent a lot of time on this proof, and it's not right to remove it when it explains clearly something that is currently explained here poorly. Leave both. Wikipedia is not running low on HDD space.

Furthermore, this group-theoretic proof is formally rigorous; the others are not. The register-based proof in the table is loose with its logical inference: steps are grouped and not clearly explained, and there is no mention of the essential property of XOR, that each string is its own inverse. That little feature enables this swap algorithm to be formally proven; a proof without reference to that property is not formally rigorous.

76.109.12.53 (talk) 15:51, 25 April 2008 (UTC)

Hi,
First, please note that just because we can add arbitrary amounts of information to pages, it doesn't mean that we should. Articles should be concise and to the point; there is no need for redundant material, nor the introduction of material of unnecessary complexity.
Back to the point in hand. I appreciate that the existing "proof" is perhaps not as clear as it could be, but I'm not sure that "defining a division theorem of XOR" or proving the "XOR quotient" will improve any reader's understanding of what's going on; it's simply an excursion via an unnecessary layer of abstraction. It's certainly not pertinent to describe it as a "group theoretic" proof, as it relies on a property that isn't that of groups in general (Abelian or not). I've tidied up the proof somewhat, but I'm not sure any more needs to be said than:
(albeit via references to the appropriate properties).
However, this is essentially what's already described in the "reduction" table. I await your response! Oli Filth(talk) 19:02, 25 April 2008 (UTC)
It's been a few days, so I've removed the group-theoretic stuff, as I've already improved the table. The relevance to abelian groups is still mentioned, but in a footnote, as really an interesting aside, not part of the main flow of the proof. Oli Filth(talk) 19:04, 28 April 2008 (UTC)

Why can't you just leave it there? Is the proof incorrect? If you don't like the comparison to multiplication & division, then offer a better metaphor. Similarly, you are welcome to offer a better name than "division theorem". This is a characteristic of XOR that applies to more than just registers, so there is good reason for the higher abstraction. A formal proof is warranted by the fact that this swap will work for any pair of equal-length bit strings (a.k.a. integers). Call it group theory, number theory or theoretical computer science; it's all important when attempting to prove this sort of result. As for the proof by register example: what does it have to say about values with more bits than A? How can we prove that XOR swap is valid for them if they do not fit in a register? Lots of integers will be too big to fit in your register, but we still want to prove that XOR swap is valid for them. Remember, because we're working with bit strings, this is really a proof about the integers. 76.109.12.53 (talk) 05:02, 29 April 2008 (UTC)

As I've already said, information being correct is not a reason for its retention; the whole point of an encyclopaedia article is to target the article at the reader by being selective in the material presented. I'm arguing that the "formal" proof is both redundant and distracting, and therefore should not be retained.
By "redundant", I mean that as I've already said, I fail to see what the "formal" proof offers that is not already covered. Your example of "what does it have to say about values with more bits than A?" is irrelevant; this article is a discussion of an efficient software algorithm, explicitly in the context of hardware variables (i.e. registers); in a real-world context, the values we're trying to swap will never be longer than the registers they're stored in! Besides which, what further insight does the "formal" proof lend on this matter? And at any rate, the basic concept of why the algorithm works is not complicated; the few lines I've quoted above (which are what the "register" example boils down to) are enough to clearly demonstrate what's going on. It ought to be self-evident that this works only for values that fit within the registers, but that the lengths of the registers do not affect the concept.
As for "distracting"; in my opinion, this diversion via group theory (or whatever we choose to call it) is unnecessarily complicated. I'm fairly well-versed in maths, and it took me a while to see what the proof was getting at, what with its reference to division theorems, etc., and the fact that it doesn't explain what Y represents, nor why B being the "division quotient of A and Y" is in any way relevant to the problem in hand. Goodness knows what the typical reader will make of this. Granted, these are things we could fix, but by that point, it'll have become highly verbose, and really will only have presented the existing proof cast in a slightly different format (if we were to substitute A^B for Y at all points, they would be identical, save the superfluous references to A-1). The point is, none of this is required to demonstrate that the algorithm always works, so why keep it?
I would suggest that the material you've added may be more suitable for the XOR article, as that article is more geared towards theory and maths. This article, however, isn't; the maths is really totally outside the scope of what the article is about. Oli Filth(talk) 08:11, 29 April 2008 (UTC)[reply]
Again, as there has been no reply for some days now, I've reverted the reversion for the reasons given above. Oli Filth(talk) 20:56, 1 May 2008 (UTC)[reply]

It should be noted that, speaking about groups, commutativity is not the point here. In particular, the reference to _abelian_ groups is pointless.

(This can be seen easily starting from the usual group axioms: A1, associativity of +; A2, existence of an identity 0 (A+0=A for any A); A3, existence of inverse elements (for A, there's B such that A+B=B+A=0). We find that 0 is unique: another identity 0' satisfies 0'+0' = 0', thus 0 + (0'+0') = 0, which is also equal to (0+0') + 0' = 0'. This 0 is also a left identity: 0+A = (A+B)+A = A+(B+A) = A (here B comes from A3). Also, a left inverse corresponds to a unique right inverse: if A+B=0 and B' satisfies B'+A=0, then B' = B'+(A+B) = (B'+A)+B = B; in particular, inverse elements are unique. In fact, if we take a (not a priori abelian) group of involutions (i.e. A+A=0 for all A), we find that A+B is its own inverse _and_, by associativity, B+A is also an inverse of A+B; thus A+B and B+A coincide - all without L1.)

I suggest the removal of the misleading commutative-group proof, since, due to L4, it only works for XOR, where only 4 cases need to be checked. Instead I suggest adding a link to groups_(mathematics), not citing the group axioms, and mentioning (proving, if somebody insists) that for any group, the operations x <- x * y; y <- x * (y^-1); x <- (y^-1) * x; do the job. This wouldn't only reveal the concept (instead of filling a table with a calculation), but would also fit the case mentioned in the section "Variations" (where L4 doesn't even hold). (felixs|not registered) 92.228.58.83 (talk) 19:23, 20 February 2011 (UTC)
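
For what it's worth, the three steps suggested above do check out in any group (G, *), abelian or not; a short derivation, starting from x = a, y = b:

\begin{align*}
x &\leftarrow x * y      && x = a * b \\
y &\leftarrow x * y^{-1} && y = (a * b) * b^{-1} = a \\
x &\leftarrow y^{-1} * x && x = a^{-1} * (a * b) = b
\end{align*}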

Sequence points

The body of this function is sometimes seen incorrectly shortened to if (x != y) *x^=*y^=*x^=*y;. This may not obtain desired result because of a lack of sequence points; no evaluation order was given for the assignments.

Even if this is true, I don't really understand it. I get the general idea of sequence points and why things like x = foo(y++, y++) do not work predictably, but assignment operators always evaluate right-to-left; x = y = z = 0 is a very common and predictable idiom. The only difference here is we're using compound assignments and pointers, and I don't see how that changes things. So what's the situation here? - furrykef (Talk at me) 17:15, 27 July 2008 (UTC)

x = y = z = 0 is fine, because there are no side effects that can possibly affect the outcome. However, in the example above, *x is altered twice, and *y is accessed twice but altered once. The C spec only guarantees that new values ("side effects") are completed at sequence points, and there are none within the XOR example. For instance, there is no guarantee that *y has taken on its new value by the time that the second (i.e. left-most) *x^=*y is evaluated. Oli Filth(talk|contribs) 20:01, 27 July 2008 (UTC)
I see. Thanks. :) - furrykef (Talk at me) 22:43, 28 July 2008 (UTC)
The problem here is not evaluation order. The problem is that modifying an lvalue twice without an intervening sequence point is undefined. --Mellum (talk) 21:29, 30 July 2008 (UTC)
Surely the problem is all about evaluation order! (Although what you said is another way of putting it.) From the C99 spec (I don't have the older C spec to hand, but I imagine it's phrased similarly):
"Evaluation of an expression may produce side effects. At certain specified points in the execution sequence called sequence points, all side effects of previous evaluations shall be complete and no side effects of subsequent evaluations shall have taken place." Oli Filth(talk|contribs) 21:49, 30 July 2008 (UTC)
If the problem were evaluation order, then there would be only a small number of well-defined different results, depending on the actual evaluation order. This is not the case here, since the behavior is undefined; anything might happen, such as the program crashing. Since this expression does not have defined behavior, it is meaningless to talk about the evaluation order of subexpressions. --Mellum (talk) 17:42, 31 July 2008 (UTC)
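
To make the contrast concrete, a sketch of the undefined one-liner next to the well-defined form (function names are illustrative):

void BadSwap(int *x, int *y)
{
    if (x != y)
        *x ^= *y ^= *x ^= *y;   /* undefined: *x is modified twice
                                   with no intervening sequence point */
}

void GoodSwap(int *x, int *y)
{
    if (x != y)
    {
        *x ^= *y;   /* each statement is a full expression,     */
        *y ^= *x;   /* so a sequence point separates the writes */
        *x ^= *y;
    }
}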

Reasons for use in practice

I'm adding back in the "citation needed" tags around the section about reasons for using the xor swap. I've never come across a time when it's actually been used in practice, and I'm fairly confident that using it is always a bad idea. If you can give examples of when it's a good idea, by all means add in citations, and I will be satisfied.

I've rewritten the section to reflect what I know about it, although I still don't have any references. It's rarely ever useful in practice and I'm not aware of any compiler that generates it. I think the section as it stood was quite misleading. Dcoetzee 08:33, 9 March 2009 (UTC)

CglDW1 (talk) 15:43, 30 December 2012 (UTC) XOR swap is interesting, but not useful. The XOR-swap C function uses more stack memory and has a longer runtime than the use of a temporary variable. Minimum stack used: 2 pointers and 1 return address. This solution is at minimum three times worse than the use of a temporary variable. Languages other than assembler or C have too much overhead, so XOR swap is never useful there. If you ever use XOR swap, use this C macro or a comparable assembler macro:

#define xor_swap(X, Y) \
  do {                 \
    (X) ^= (Y);        \
    (Y) ^= (X);        \
    (X) ^= (Y);        \
  } while (0)

If you swap an address with itself, the value is set to 0. I would use XOR swap only on a system with under sixty-four bytes of RAM - thus never.

"distinct memory addresses"

The introduction notes that the variables need to be stored at "distinct memory addresses", but that is insufficient: for instance, if you have one 32-bit value at 0x0000 and the other at 0x0001 (counting in 8-bit units), this does not work. --94.222.150.6 (talk) 01:09, 28 July 2010 (UTC)

I have edited the article to clarify this; hopefully it satisfies you. XOR swap algorithm (as of now), click on "prev" link to see changes made. PleaseStand (talk) 19:37, 28 July 2010 (UTC)
Very good, thank you! --94.222.150.6 (talk) —Preceding undated comment added 03:36, 29 July 2010 (UTC).
No need to clarify anything, the whole concept of swapping two variables is undefined if the variables only partially overlap. Think about it. What value should each involved byte of memory take after the operation? -- Anonymous 22:01, 12 December 2010 (UTC) —Preceding unsigned comment added by 79.186.6.204 (talk)
True (unless the result is stored in two new variables, even "worse" in terms of memory consumption than what the XOR swap attempts to avoid). I still don't see a problem with just including "distinct, non-overlapping" at the beginning of the article; it is just clarifying our assumptions, just as mathematicians include wording such as "for all real numbers" in theorems. PleaseStand (talk) 22:40, 13 December 2010 (UTC)
I don't think we need the disclaimer. It's no more necessary than a disclaimer that "both the memory addresses must exist"! Oli Filth(talk|contribs) 23:22, 13 December 2010 (UTC)
I think that now, most of us agree that for practical purposes, "non-overlapping" is irrelevant, and I have returned the lead back to roughly how it was before this discussion started. PleaseStand (talk) 00:00, 14 December 2010 (UTC)

Hardware example illustrating the inherent complexity of XOR swap

When I first encountered this as a question in a job interview in 2001, I analysed the usefulness of the principle (http://sw-amt.ws/var-swap/VARIABLEN-TAUSCH.html, in German). The conclusion was that the algorithm is only ever useful in assembler, within a very constrained context. In C it actually confuses the compiler, and for interpreted languages it is outright preposterous.

To demonstrate the silliness of the entire XOR-swapping business, I used a somewhat ironic technique to exchange two signals in hardware without crossing wires:

 X ---+----XOR-->  Q1 (==Y)
       \   /
        XOR
       /   \
 Y ---+----XOR-->  Q2 (==X)

The swapping property can be read off directly from the truth table:

X  Y | X^Y | Q1 = X^(X^Y) | Q2 = Y^(X^Y)
0  0 |  0  |      0       |      0
0  1 |  1  |      1       |      0
1  0 |  1  |      0       |      1
1  1 |  0  |      1       |      1
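
The same three-gate network, checked bitwise in C (the values are arbitrary):

#include <assert.h>

int main(void)
{
    unsigned x = 0xDEADBEEF, y = 0x01234567;
    unsigned m  = x ^ y;    /* the middle XOR gate    */
    unsigned q1 = x ^ m;    /* upper output, equals y */
    unsigned q2 = y ^ m;    /* lower output, equals x */
    assert(q1 == y && q2 == x);
    return 0;
}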

This technique successfully avoids crossing two wires:

X ---+   +--->  Y
      \ /
       /
      / \
Y ---+   +-->  X

Wolfmanx (talk) 00:42, 8 January 2017 (UTC)