Perhaps one thing that is missing in this conversation is the fact that
when the ALU adds one to any memory value, the same thing always happens.
Basic integer math doesn't care about sign. There is no concept, at the
ALU level, of "signed" or "unsigned". It's all just bits being added.
In four bits (binary):
0000
plus one is
0001
plus one is
0010
plus one is
0011
plus one is
0100
plus one is
0101
plus one is
0110
plus one is
0111
plus one is (also causes the overflow flag to be set in the CPU - as
signed numbers, this is +7 plus one wrapping around to -8)
1000
plus one is
1001
plus one is
1010
plus one is
1011
plus one is
1100
plus one is
1101
plus one is
1110
plus one is
1111
plus one is (also causes the carry flag to be set in the CPU - the add
carries out of the top bit; the overflow flag is NOT set, since as
signed numbers this is just -1 plus one making 0)
0000
plus one is
0001
...etc...
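Here is a small C sketch (my illustration, a toy model, not real CPU
code) that simulates the 4-bit counter above and computes the carry and
overflow flags the same way the ALU hardware does:

#include <stdio.h>

/* print the low 4 bits of v, most significant bit first */
static void print4(unsigned v)
{
    for (int bit = 3; bit >= 0; bit--)
        putchar('0' + ((v >> bit) & 1));
}

int main(void)
{
    unsigned r = 0;                     /* our simulated 4-bit register */
    for (int i = 0; i < 18; i++) {
        unsigned raw = r + 1;           /* the add, keeping the 5th (carry) bit */
        unsigned sum = raw & 0xF;       /* the ALU result: only 4 bits survive */
        int carry    = (raw >> 4) & 1;  /* carry out of the top bit */
        /* overflow: both addends had the same sign bit (the addend 0001
           has a 0 top bit) and the result's sign bit differs - the
           standard two's-complement overflow formula */
        int overflow = (int)(((~(r ^ 1u) & (r ^ sum)) >> 3) & 1u);
        print4(r);
        printf(" + 0001 = ");
        print4(sum);
        printf("   carry=%d overflow=%d\n", carry, overflow);
        r = sum;
    }
    return 0;
}

Run it and you will see overflow=1 only on the 0111 step and carry=1
only on the 1111 step - exactly matching the counting above.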
For addition (or subtraction) it doesn't matter if the variable is signed
or unsigned - when you say "x = x + 1" the ALU just adds one to the bits
in memory and sets some CPU flags. Signed, unsigned, same thing.
In C, with 32-bit int:

#include <stdio.h>

int main(void)
{
    signed int sx = 2000000000;
    signed int sy = 2000000001;
    signed int sz = sx + sy;    /* signed overflow: technically undefined in C,
                                   but it wraps on ordinary two's-complement CPUs */
    unsigned int ux = 2000000000;
    unsigned int uy = 2000000001;
    unsigned int uz = ux + uy;  /* unsigned overflow: defined to wrap mod 2^32 */
    printf("sz is 0x%X which could be %d or %u\n", sz, sz, sz);
    printf("uz is 0x%X which could be %d or %u\n", uz, uz, uz);
    return 0;
}
Output (the first value on each line is in hexadecimal):
sz is 0xEE6B2801 which could be -294967295 or 4000000001
uz is 0xEE6B2801 which could be -294967295 or 4000000001
The hex value 0xEE6B2801 when turned into 32-bit binary looks like this:
11101110011010110010100000000001
See - absolutely no difference in the two bit patterns in memory. The bit
patterns in memory for signed and unsigned math are the same, because
there is no difference in what the ALU does. The ALU doesn't care.
Addition and subtraction don't care about sign.
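We can even have a program check this for itself. A small sketch (the
cast that rebuilds the sz pattern is implementation-defined in C, but
behaves as shown on ordinary two's-complement machines):

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* the same sums as above, written as the final bit pattern */
    signed int   sz = (signed int)4000000001u;
    unsigned int uz = 4000000001u;
    /* memcmp compares the raw bytes in memory */
    printf("same bits in memory? %s\n",
           memcmp(&sz, &uz, sizeof sz) == 0 ? "yes" : "no");
    return 0;
}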
As you see above, once we have some bits in memory, we can tell our
program to interpret the bits in either an unsigned manner ("%u" - all
the bit patterns represent non-negative numbers) or in a signed manner
("%d" - about half of the bit patterns will be displayed as negative,
with leading minus).
Signed/unsigned doesn't matter in addition or subtraction. Where a
signed/unsigned declaration plays a big part is in *comparing* numbers
(and in bit-shifting; but save that for another day). If we compare the bit
pattern 0xEE6B2801 (above) with zero, is the bit pattern less than zero
or much greater than zero? *That* answer depends on whether we declared
the memory location (the variable) to be signed or unsigned.
If we declare a 32-bit variable as unsigned, numeric comparisons
treat all 4,294,967,296 bit patterns from zero up to 0xFFFFFFFF
(11111111111111111111111111111111) as non-negative numbers (zero to
4,294,967,295 for 32-bit numbers). Comparisons (unsigned) will never
say any of these patterns are less than zero.
If we declare a 32-bit variable as signed, we are saying to our compiler
that numeric comparisons using that variable will treat exactly half of
those 4,294,967,296 bit patterns as negative numbers. Any bit pattern
in a signed variable that has the leftmost bit (the sign bit) set
will be interpreted and compared as "less than zero".
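The arithmetic behind the two readings is plain odometer math: when the
sign bit is on, the signed value is the unsigned value minus 2^32. A
quick sketch:

#include <stdio.h>

int main(void)
{
    unsigned int bits = 0xEE6B2801u;    /* the bit pattern from above */
    long long as_unsigned = bits;       /* reads as 4000000001 */
    long long as_signed = as_unsigned - 4294967296LL;  /* subtract 2^32 */
    printf("unsigned reading: %lld\n", as_unsigned);
    printf("signed reading:   %lld\n", as_signed);     /* -294967295 */
    return 0;
}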
More C code (testing the same sz and uz as above):

if ( sz < 0 )
    printf("sz is less than zero\n");
else
    printf("sz is not less than zero\n");

if ( uz < 0 )
    printf("uz is less than zero\n");
else
    printf("uz is not less than zero\n");
Output:
sz is less than zero
uz is not less than zero
Remember that sz and uz contain exactly the same bit patterns!
Because we declared the sz variable to be signed, the code generated
by the compiler to test whether the bit pattern in sz is less than zero
tests the sign bit (leftmost bit) of the sz memory, notices that it is on
('1') and declares the bit pattern as "less than zero" (signed). All the
bit patterns with the sign bit on will be interpreted as negative numbers.
Because we declared the uz variable to be unsigned, the code generated
by the compiler to test whether the bit pattern in uz is less than zero
has no work to do at all - unsigned numbers are never less than zero.
The compiler arranges to print: "uz is not less than zero"
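Here is a sketch of what those two tests amount to - not the compiler's
literal output, just the equivalent logic on a two's-complement machine:

#include <stdio.h>

int main(void)
{
    signed int sz = (signed int)0xEE6B2801u;  /* the pattern from above */
    /* "sz < 0" asks the same question as "is bit 31 (the sign bit) on?" */
    int negative = (int)(((unsigned int)sz >> 31) & 1u);
    printf("sign bit = %d, so sz %s less than zero\n",
           negative, negative ? "is" : "is not");
    /* for an unsigned variable there is nothing to test: "uz < 0" is
       always false, and compilers typically fold it away (many warn) */
    return 0;
}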
As a programmer, declaring variables as signed/unsigned is just a
convenient way of having the compiler conspire with us to treat half
the bit patterns as "less than zero". For math (adding/subtracting),
the declaration of signed/unsigned doesn't make any difference.
For comparisons (and bit-shifting), it does.
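For a small taste of the bit-shifting difference deferred above:
right-shifting a negative signed value is implementation-defined in C,
but most compilers shift in copies of the sign bit, while unsigned
right shifts always shift in zeros. A sketch:

#include <stdio.h>

int main(void)
{
    signed int   s = (signed int)0xEE6B2801u;  /* sign bit on */
    unsigned int u = 0xEE6B2801u;              /* same bits */
    printf("s >> 4 = 0x%X\n", (unsigned int)(s >> 4)); /* typically 0xFEE6B280 */
    printf("u >> 4 = 0x%X\n", u >> 4);                 /* always    0x0EE6B280 */
    return 0;
}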
See also "odometer math" in:
http://elearning.algonquincollege.com/coursemat/pincka/dat2343/lectures.f03/04-Unsigned-Binary-Encoding.htm
http://elearning.algonquincollege.com/coursemat/pincka/dat2343/lectures.f03/06-Signed-Binary-Encoding.htm
--
| Ian! D. Allen - idallen [ at ] idallen [ dot ] ca - Ottawa, Ontario, Canada
| Home Page: http://idallen.com/ Contact Improv: http://contactimprov.ca/
| College professor (Open Source / Linux) via: http://teaching.idallen.com/
| Defend digital freedom: http://eff.org/ and have fun: http://fools.ca/