Re: [OCLUG-Tech] Double checking re: twos complement & signed types ??

  • Subject: Re: [OCLUG-Tech] Double checking re: twos complement & signed types ??
  • From: Bart Trojanowski <bart-oclug [ at ] jukie [ dot ] net>
  • Date: Wed, 19 Sep 2007 14:07:24 -0400
* William Case <billlinux [ at ] rogers [ dot ] com> [070919 13:19]:
> > > 
> > > Given that I am using an AMD Athlon 64 x2 CPU:
> > > 
> And given that "sizeof(int)" returns 8 bytes or maximum of
                  ^^^^^^^^^^^^^^^^^^^^^^

I think you mean 4.  32 bits.  On Linux, int has always been 32-bit.

> 4,294,967,295; while "INT_MAX" returns 2,147,483,647.  I understand that
> "INT_MAX" shows one less bit for the case of signed integers which use
> the most significant bit to indicate the sign. 1 == minus, 0 == plus
>
> But, it seems to me that the distinction between "sizeof(int)" and
> "INT_MAX" is no longer important except for those people who have to
> learn how to program for mainframes.  Or, for those who are programming
> on desktop but are concerned that their program someday, someway, may get
> ported to a mainframe.   -- This, although written like a statement, is
> a question.

I don't know how mainframes differ wrt integer size.

It matters for *all* people whose integer values go over 2^31 - 1.

> Two's complement is none other than inverting the bit representation of
> a decimal number (i.e. 1's become 0's; and 0's become 1's) and adding 1,
> followed by adding the two numbers (operands).  When subtracting or
> using negative numbers this makes such obvious sense it is almost
> trivial.  But, boy, can the academics ever mess it up.  (As usual)

Two's complement is a way to represent both positive and negative values.
You can represent absolute values (unsigned int) without two's
complement... because unsigned int does not have a sign bit.

> I gather you are saying, that if there is a subtraction operation in
> some C code, it is the *compiler* that alters the value of the number to
> be subtracted (or negated) into it's two's complement binary
> representation and that is the value stored in memory.  ??

Not always.  You can perform subtraction on two unsigned ints just
fine.

I said that the sign bit only matters when you are encoding or decoding
the value, and when updating the CPU flags register, which includes
things like the sign flag and overflow flag.

> Once stored as a two's complement representation, when fetched for use
> in the ALU, the ALU just performs a normal add instruction.

Correct.  The ALU does not know whether the value is to be interpreted
as signed or unsigned.

Only the programmer and compiler know that.

Although, if you read any assembly you can often infer that information.

> I.E. For me, all I would probably ever use is an unsigned integer (or
> short or float or double) and the subtract operator -- correct??

Well, don't mix float and double into this.  Float and double always
have a sign bit, iirc.

Only integer types can be unsigned (iirc).

-Bart

-- 
				WebSig: http://www.jukie.net/~bart/sig/