I've always thought of two's complement arithmetic (on, say, a 32-bit machine) as modulo 2^32 arithmetic with two qualifications:

1) Hardware flags are set when overflow/underflow occurs.

2) When testing the values of numbers (x > 0, x > y, etc.), 2^32 is (conceptually) subtracted from each operand whose value is >= 2^31 for the purposes of the test.

To my mind this model is easy to understand and explains why two's complement is used (it is easy to implement). For example (on an 8-bit machine), 127 + 1 = 128 (in hex, 7F + 1 = 80), but whenever 128 (hex 80) is used in a test it is treated as if its value were 128 - 256 = -128 (in hex, 80 - 100 = -80).

Note that (in C) when unsigned ints are used, the results of arithmetic are bit-for-bit the same. The only difference is that the rules for comparisons change: genuine modulo 2^32 arithmetic is used on unsigned values. Perhaps compiler writers out there can verify this.

Hope this helps.

Ralph Boland