bitmask confusion

 
pmuldoon



Joined: 26 Sep 2003
Posts: 218
Location: Northern Indiana


bitmask confusion
Posted: Wed Nov 12, 2014 11:36 am

CCS PCH C Compiler, Version 5.032,

I'm hoping somebody can enlighten me as to what I'm doing wrong.

I'm trying to set/clear a bit in a 32-bit variable. What I've discovered is that the upper bits are getting cleared when I only want to clear the desired bit.
I've tried different ways and found that if I type cast the constant, it works. But I didn't think that would be necessary. Can anyone tell me what the correct method in proper C would be, and is the compiler handling things correctly? Why is it clearing the upper bits?

To save time I've just shown the necessary lines and the associated listing snippet. I even substituted the literal value for the define name just for fun.

Code:

#define LED_POWER         0x00000020

unsigned int32                           LedMask;

   LedMask &= ~LED_POWER;
   LedMask &= ~(unsigned int32)LED_POWER;
   LedMask &= 0xffffffdf;
   LedMask &= ~0x00000020;



    .................... LedMask &= ~LED_POWER;
    050FE: BCF 2E.5
    05100: CLRF 2F
    05102: CLRF 30
    05104: CLRF 31
    .................... LedMask &= ~(unsigned int32)LED_POWER;
    05106: BCF 2E.5
    .................... LedMask &= 0xffffffdf;
    05108: BCF 2E.5
    .................... LedMask &= ~0x00000020;
    0510A: BCF 2E.5
    0510C: CLRF 2F
    0510E: CLRF 30
    05110: CLRF 31
Ttelmah



Joined: 11 Mar 2010
Posts: 19539


Posted: Wed Nov 12, 2014 12:37 pm

A #define is not a constant. It is a text substitution macro.

If you want a const, then use a const:

const int32 Led_Power=0x20;

This will then have a 'size' inherent in it.

Then:
Code:

const int32 LED_POWER = 0x20;

unsigned int32                           LedMask;

   LedMask |= LED_POWER; //turns the bit on
   LedMask &= (~LED_POWER); //turns the bit off

....................    LedMask |= LED_POWER; //turns the bit on
0014:  BSF    04.5
....................    LedMask &= (~LED_POWER); //turns the bit off
0016:  BCF    04.5
PCM programmer



Joined: 06 Sep 2003
Posts: 21708


Posted: Wed Nov 12, 2014 1:06 pm

The compiler is treating 0x00000020 as an 8-bit value. It has to be cast to an int32 to make it work. The test program shown below has this output:
Quote:

~0x00000020 = 000000df
~0x00000020 = ffffffdf

Code:
#include <18F4520.h>
#fuses INTRC_IO, BROWNOUT, PUT, NOWDT
#use delay(clock=4M)
#use rs232(baud=9600, UART1, ERRORS)

//============================
void main()
{

printf("~0x00000020 = %08lx \r", ~0x00000020);

printf("~0x00000020 = %08lx \r", ~(int32)0x00000020);

while(TRUE);
}
 
Ttelmah



Joined: 11 Mar 2010
Posts: 19539


Posted: Wed Nov 12, 2014 2:04 pm

Yes, this is one of the classic differences between a #define and a const. The #define is 'sizeless', while the const has a type/size associated with it.

I must admit I'm surprised that the compiler doesn't use the context and work out the size of the other object involved in the expression, but it is performing the bitwise NOT at compile time, and using the default type for this....

It will of course accept it correctly as 32-bit if you force this in the define:

#define LED_MASK 0x20LL
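
(A minimal sketch of how that suffixed define slots into the original code, assuming the LED_POWER/LedMask names from the first post; the LL suffix forcing the literal up to 32 bits is as described above, not re-verified here.)
Code:

#define LED_POWER   0x20LL        // LL suffix: the literal is now 32-bit

unsigned int32   LedMask;

   LedMask |=  LED_POWER;         // sets bit 5 only
   LedMask &= ~LED_POWER;         // ~ is evaluated at 32 bits, so the upper bits are untouched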
pmuldoon



Joined: 26 Sep 2003
Posts: 218
Location: Northern Indiana


Posted: Wed Nov 12, 2014 2:25 pm

Thanks, guys.
That clears it up for me.

And thanks T for reminding me of the LL suffix. I did have a vague recollection of "casting" defines that way, but I couldn't think of what to google to look that up.

I guess it's time I learn to use 'const'.
That makes the most sense and looks clear and unambiguous.
Ttelmah



Joined: 11 Mar 2010
Posts: 19539


Posted: Wed Nov 12, 2014 2:34 pm

There are also a lot of other ways you can do this.

bit_set() and bit_clear() (a small sketch of these follows the #bit example below).

Or use a #bit.

So:
Code:

unsigned int32                           LedMask;
#bit LED_POWER = LedMask.5

   LED_POWER=TRUE;
   LED_POWER=FALSE;   
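
(And a minimal sketch of the bit_set()/bit_clear() route, assuming bit 5 as in the earlier posts; these are the CCS built-ins named above.)
Code:

unsigned int32   LedMask;

   bit_set(LedMask, 5);     // turns the LED_POWER bit on
   bit_clear(LedMask, 5);   // turns it off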
RF_Developer



Joined: 07 Feb 2011
Posts: 839


Posted: Thu Nov 13, 2014 5:13 am

Ttelmah wrote:

I must admit I'm surprised that the compiler doesn't use the context and work out the size of the other object involved in the expression, but it is performing the bitwise NOT at compile time, and using the default type for this....


It's correct, defined C behaviour. All integer literals, unless there's a type specifier such as LL etc., and whether #defined or not, are assumed to be of type int. Int in CCS C for most processors is unsigned 8-bit (signed? 16-bit on the PIC24s?). This dates right back to the earliest C implementations, where there were only a handful of basic types and char was just a variant on int: 'A' is the same as 65, 0x41, 0101 and 0b01000001, assuming ASCII coding of course. (I don't recall a definition of character coding in earlier Cs, as it was hardware dependent. These days, 'A' in C# is not ASCII coded, it's Unicode and is not just one byte, and you need to call a conversion routine to convert ASCII into "char".) In some cases you could even have two characters in an int, so 'AB' was a valid character literal.

The point is that without the LL or whatever, ALL Cs should interpret a literal such as 0x00000020 as an int, whatever an int is on that implementation of C. For most CCS Cs that means UInt8. C++ and C# and so on are not the same language and define literals differently, though they share some of the same syntax. By the way, there is no binary representation in C#, and the syntax for hex and octal is different to that of C. Strong typing means that characters are treated as being fundamentally different from, and incompatible with, integers, rather than being just a representation of them.
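
(To make that concrete, a minimal hosted-C sketch, assuming a desktop compiler where int is 32 bits: there the bare literal already behaves the way the original poster expected, whereas under CCS, with its 8-bit default int, the same expression produced the 000000df result shown earlier. Illustrative only, not CCS code.)
Code:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t mask = 0xFFFFFFFFu;

    /* On this implementation the literal 0x20 is a 32-bit int, so ~0x20 is
       0xFFFFFFDF and only bit 5 is cleared. */
    mask &= ~0x20;

    printf("%08lX\n", (unsigned long)mask);   /* prints FFFFFFDF */
    return 0;
}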

As a historical note, PDP-11s - with which C is often closely associated, even though C was not developed on or for them, having only been ported to the series early on - tended to use octal a lot, despite being 16-bit machines. Addresses in assembler and when debugging were always in octal, for example.

DEC also used their own coding for some strings, called Radix-50, which packed three upper-case-only characters - letters, digits and barely a handful of punctuation - into 16 bits. Radix-50 was used in many DEC operating systems' file systems. Its use created the six-letter names with three-character file extensions that we are still vaguely familiar with today, even though names have since been lengthened and the distinctiveness of the extension - it was stored separately from the name, with the dot between them implied rather than stored - has largely been dropped. IBM systems used their own eight-bit coding, EBCDIC, which came in even more flavours than ASCII! So we should not assume that 'A' has always and everywhere encoded to the same integer value. Thankfully these days it always does... except in C#, where it's different (though compatible) due to Unicode.


Ttelmah



Joined: 11 Mar 2010
Posts: 19539


Posted: Thu Nov 13, 2014 5:37 am

Agreed.
However, they will of course be promoted to a higher type when a maths operation is applied, if there is a second operand of a higher type involved.

The key point is that the bitwise NOT is a unary operator (only one operand), so no promotion takes place when this operator is applied. It does take place immediately afterwards, on the result.
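
(A minimal sketch of that ordering, reusing the LedMask variable from the first post; the comments reflect the listings shown earlier in the thread.)
Code:

unsigned int32   LedMask;

   // The NOT is done first, at the literal's default 8-bit size, and only then
   // is the result widened to match LedMask - so the upper 24 bits get cleared.
   LedMask &= ~0x20;

   // Cast first, so the NOT is performed at 32 bits: only bit 5 is cleared.
   LedMask &= ~(unsigned int32)0x20;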
pmuldoon



Joined: 26 Sep 2003
Posts: 218
Location: Northern Indiana


Posted: Thu Nov 13, 2014 6:35 am

Again, thanks guys.
RF, you are a true C historian! If I were grading your reply I would have to give it an ASCII 'A'.

And that is a good point about the default int type. I believe it was the PCD compiler that changed the int default from unsigned to signed, and that threw me for a loop. Being a bit dyslexic, I've been confused about what the default is ever since, so since then I've always explicitly stated signed or unsigned when defining variables. I think the extra clarity makes up for the wordiness, just like using longer, more descriptive variable names. (I do have to cater a bit to my dyslexia and short memory!)

And you're absolutely right, T. I could have saved myself some grief by using bit_set() and bit_clear(). I just like to get into habits that are not compiler-specific - at least for this simple kind of stuff. And had I not done that, I'd have missed out on all these great replies and not learned anything.

Oh, and thanks, guys, for not yelling at me for not posting a simple, compilable program that demonstrates the problem.