On Wed, 2007-09-19 at 14:07 -0400, Bart Trojanowski wrote:
> * William Case <billlinux [ at ] rogers [ dot ] com> [070919 13:19]:
> > 
> > Given that I am using an AMD Athlon 64 x2 CPU:
> > 
> > And given that "sizeof(int)" returns 8 bytes or maximum of
>                  ^^^^^^^^^^^^^^^^^^^^^^
> 
> I think you mean 4.  32 bits.  On Linux, int has always been 32 bits.

Yep.  I have a little C program here that I wrote to check type sizes;
I cut and pasted the 'double' line by mistake.

> > 4,294,967,295; while "INT_MAX" returns 2,147,483,647.  I understand
> > that "INT_MAX" shows one less bit for the case of signed integers,
> > which use the most significant bit to indicate the sign.  1 == minus,
> > 0 == plus.
> > 
> > But it seems to me that the distinction between "sizeof(int)" and
> > "INT_MAX" is no longer important, except for those people who have
> > to learn how to program for mainframes, or for those who are
> > programming on a desktop but are concerned that their program
> > someday, someway, may get ported to a mainframe. -- This, although
> > written like a statement, is a question.
> 
> I don't know how mainframes differ wrt integer size.
> 
> It matters for *all* people that have integer values that go over
> 2^31.
> 
> > Two's complement is none other than inverting the bit representation
> > of a decimal number (i.e. 1's become 0's and 0's become 1's) and
> > adding 1, followed by adding the two numbers (operands).  When
> > subtracting or using negative numbers this makes such obvious sense
> > it is almost trivial.  But, boy, can the academics ever mess it up.
> > (As usual.)
> 
> Two's complement is a way to represent positive and negative values.
> You can represent absolute values (unsigned int) without two's
> complement... because unsigned int does not have a sign bit.

Sorry Bart, but that is not the way I read it.  It seems to me the
compiler only has to change values that will be subtracted, either
because one is a negative value or because one is preceded by the
subtraction operator.  During optimization, if there is both a negative
quantity and a subtraction, or the addition of two negative quantities,
then the operation is left as addition.  At least that was why I asked
in my previous post: "I am trying to confirm that subtracting a
negatively signed integer (or floating point) is *always* a double
negative resulting in addition".

> > I gather you are saying that, if there is a subtraction operation in
> > some C code, it is the *compiler* that alters the value of the
> > number to be subtracted (or negated) into its two's complement
> > binary representation, and that is the value stored in memory. ??
> 
> Not always.  You can perform subtraction on two unsigned ints just
> fine.

Well then, if my ALU uses two's complement arithmetic, at some point one
of the unsigned ints has to be transformed into a two's complement
representation.  Or I am very confused.  Isn't the subtraction operator
just used by the compiler as a signal to transform the subtrahend (the
number to be subtracted) into its two's complement form?

> I said that the sign bit only matters if you are encoding or decoding
> the value.  And for updating the CPU flags register, which includes
> things like the sign bit and the overflow bit.

That makes sense; you have to have that info stored somewhere so that
it can be reattached, as it were, to the outcome.
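Just to convince myself, I threw together another little scratch test
(my own throwaway code, so take the names and the exact output with a
grain of salt -- this is just what I see with gcc on this Athlon box).
It checks that subtracting b gives exactly the same bit pattern as
adding ~b + 1, and that the one pattern reads differently depending on
whether you treat it as signed or unsigned:

/* scratch test: is subtracting b the same, bit for bit, as adding
 * the two's complement of b (~b + 1)? */
#include <stdio.h>

int main(void)
{
    unsigned int a = 17, b = 42;

    unsigned int by_subtract   = a - b;          /* plain subtraction    */
    unsigned int by_complement = a + (~b + 1u);  /* add two's complement */

    printf("a - b        = 0x%08x\n", by_subtract);
    printf("a + (~b + 1) = 0x%08x\n", by_complement);

    /* one bit pattern, two readings */
    printf("as unsigned: %u\n", by_subtract);
    printf("as signed:   %d\n", (int)by_subtract);  /* -25 here; strictly
                                                       speaking this cast is
                                                       implementation-defined */
    return 0;
}

Both of the first two printfs show 0xffffffe7 here, and that one
pattern comes out as 4294967271 unsigned or -25 signed: the bits are
the same, only the interpretation changes.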
> > Once stored as a two's complement representation, when fetched for
> > use in the ALU, the ALU just performs a normal add instruction.
> 
> Correct.  The ALU does not know whether the value is to be interpreted
> as signed or unsigned.
> 
> Only the programmer and compiler know that.
> 
> Although, if you read any assembly you can often infer that
> information.
> 
> > I.e., for me, all I would probably ever use is an unsigned integer
> > (or short or float or double) and the subtract operator -- correct??
> 
> Well, don't mix float and double in this.  Float and double always
> have a sign bit, iirc.
> 
> Only integer types can be unsigned (iirc).

OK.  I won't mix them in.  I will go back and re-read about floating
point.

-- 
Regards Bill
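P.S.  While I was poking at this, I also wrote a quick check of the
"float and double always have a sign bit" point.  Again, this is just
my own scratch test, and it assumes IEEE 754 doubles (which is what
this box uses), so take it for what it is:

/* peek at the top bit of a double's storage -- the IEEE 754 sign bit */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    double values[] = { 5.0, -5.0, 0.0, -0.0 };
    size_t i;

    for (i = 0; i < sizeof(values) / sizeof(values[0]); i++) {
        uint64_t bits;
        memcpy(&bits, &values[i], sizeof bits);   /* copy out the raw bytes */
        printf("% g   sign bit = %d\n", values[i], (int)(bits >> 63));
    }
    return 0;
}

Here even -0.0 comes out with the sign bit set, so the sign bit really
is always there for doubles, unlike with unsigned integers.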