
16-bit 2's complement to 12-bit 2's complement conversion

 
Hans Wedemeyer
Joined: 15 Sep 2003
Posts: 226
Posted: Mon Jun 06, 2005 7:57 pm

Sounds easy, and perhaps I'm doing something wrong.

0111111111111111 = 32767
0000000000000001 = 1
0000000000000000 = 0
1111111111111111 = -1
1000000000000000 = -32768

If the value is shifted right four places to make it 12-bit, it looks like the sign bit is preserved and the conversion is complete.

The original values from 1 through 15 will be lost; this is fine and acceptable.

0000011111111111 = 2047
0000000000000001 = 1
0000000000000000 = 0
0000111111111111 = -1
0000100000000000 = -2048
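
A minimal sketch of the shift just described, for reference (raw and out12 are hypothetical names; C leaves right-shifting a negative signed value implementation-defined, so the mask pins down the low 12 bits either way):

Code:

signed int16 raw;      // 16-bit two's-complement input
signed int16 out12;    // low 12 bits hold the 12-bit two's-complement result

raw   = -32768;                 // 1000000000000000
out12 = (raw >> 4) & 0x0FFF;    // 0000100000000000 = -2048 as a 12-bit value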


I think it looks correct, but an old program that displays 12-bit 2's complement data from an ADC chokes on this converted data!

What did I do wrong here?
PCM programmer
Joined: 06 Sep 2003
Posts: 21708
Posted: Mon Jun 06, 2005 11:52 pm

I'm not sure I understand your problem.
I would assume that you're getting a 12-bit signed value from
your ADC and you want to display it. So it seems to me that
you actually want to sign-extend it to a 16-bit value, and then
give that to printf() to display it. Is that correct?
Printf won't work with 12-bit signed values. It has to be
8, 16 or 32 bits.
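
A minimal sketch of that approach (adc12 is a hypothetical name for the raw reading; bit_test() is the CCS built-in, and %ld is the CCS printf format for a signed int16):

Code:

signed int16 value;
int16 adc12;                  // hypothetical raw 12-bit reading, top nibble zero

value = (signed int16)adc12;
if (bit_test(value, 11))      // bit 11 is the 12-bit sign bit
   value |= 0xF000;           // sign-extend into the top nibble
printf("%ld", value);         // now prints the correct signed value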
Guest
Posted: Tue Jun 07, 2005 1:15 am

What kind of shifting? There are two types: arithmetic and logical shift. Logical shift is the "unsigned" version: it simply shifts the bits and fills the vacated positions with zeros.
Arithmetic shift preserves the sign bit: it copies the previous MSB to the new MSB.
So after the 4th shift you have to check bit 11 of your number. If it is 1, then set the 4 MSBs to 1:
if (bit_test(mynumber,11)) mynumber |= 0xF000;
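
A minimal sketch of the difference between the two shifts (x and u are hypothetical names; C leaves the right shift of a negative signed value implementation-defined, so the "arithmetic" line is not guaranteed on every compiler):

Code:

signed int16 x;
int16 u;                // unsigned view of the same bit pattern

x = -2048;              // 1111100000000000
u = (int16)x >> 4;      // logical shift:    0000111110000000, sign lost
x = x >> 4;             // arithmetic shift: 1111111110000000 = -128, sign kept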
Hans Wedemeyer
Joined: 15 Sep 2003
Posts: 226
Posted: Tue Jun 07, 2005 11:45 am

PCM programmer wrote:
I'm not sure I understand your problem.
I would assume that you're getting a 12-bit signed value from
your ADC and you want to display it. So it seems to me that
you actually want to sign-extend it to a 16-bit value, and then
give that to printf() to display it. Is that correct?
Printf won't work with 12-bit signed values. It has to be
8, 16 or 32 bits.


Thanks for the reply:
At the moment the data is received as 16-bit 2's complement (two bytes).
I need to convert the data to 12-bit 2's complement.
It looked like a simple shift right would do the trick. Maybe the results are correct; it's just that the distortion of chopping the lower 4 bits is not what I expected.
Hans Wedemeyer
Joined: 15 Sep 2003
Posts: 226
Posted: Tue Jun 07, 2005 11:47 am

Anonymous wrote:
What kind of shifting? There are two types: arithmetic and logical shift. Logical shift is the "unsigned" version: it simply shifts the bits and fills the vacated positions with zeros.
Arithmetic shift preserves the sign bit: it copies the previous MSB to the new MSB.
So after the 4th shift you have to check bit 11 of your number. If it is 1, then set the 4 MSBs to 1:
if (bit_test(mynumber,11)) mynumber |= 0xF000;

Well, I'm trying to emulate the 12-bit data output by a Linear 12-bit ADC, and it does nothing with the upper nibble.
I think my conversion is correct; I just have to figure out if the distortion is acceptable.
Thanks.
sseidman
Joined: 14 Mar 2005
Posts: 159
Posted: Tue Jun 07, 2005 11:52 am

Hans Wedemeyer wrote:

Well, I'm trying to emulate the 12-bit data output by a Linear 12-bit ADC, and it does nothing with the upper nibble.
I think my conversion is correct; I just have to figure out if the distortion is acceptable.
Thanks.


Is there something wrong with just doing an integer divide by 2^4, and letting the compiler handle the details? Shifting signed integers could be asking for trouble.
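
A minimal sketch of that suggestion (names are hypothetical). One caveat: signed division truncates toward zero, while an arithmetic shift rounds toward negative infinity, so the two can differ by one for negative inputs:

Code:

signed int16 raw;
signed int16 out12;

raw   = -17;
out12 = raw / 16;     // division truncates toward zero: -17 / 16 = -1
                      // an arithmetic -17 >> 4 would give -2 instead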

Scott
bfemmel
Joined: 18 Jul 2004
Posts: 40
Location: San Carlos, CA.
Posted: Tue Jun 07, 2005 12:05 pm

So if I understand the problem correctly, your 12th bit (bit 11) will be the sign bit, to emulate the output from a signed 12-bit ADC. If you use a shift you will lose the data in the lower bits, so that leaves setting the bits yourself. You will need to test the sign bit and either set or clear the 12th bit. I am not sure what needs to be done with the top four bits; that depends on your application, but try something like this.
Code:

signed int16 dataOut;

// negative: set bit 11 and the top four bits; otherwise clear them
dataOut = (dataOut < 0) ? (dataOut | 0b1111100000000000)
                        : (dataOut & 0b0000011111111111);

This should set or clear the bits you are interested in while leaving the others intact.

Hope this helps.

- Bruce