Trampas Guest
BYTE and delay_us(x) function
Posted: Mon Aug 09, 2004 7:20 am
First off, I know that I am doing things CCS never intended...
The first thing is that I have my own data type definitions, where I define a BYTE as a signed int8. In CCS's device header files, BYTE is defined as an unsigned value, so I remove the definition from the device header file so that my typedef will work. This may be a source of problems for me, but I am not sure...
I have noticed that in the latest revision of the compiler, the delay_us() function stopped working as expected. That is, I did a delay_us(200) and the result was not 200 us, but more like just a couple. So I figured CCS may have been using a signed value for the delay_us(x) argument, but I am not sure. I replaced my call with delay_us((UBYTE)200) and now things seem to be normal.
I noticed this problem with compiler 3.206; I think 3.151 worked without the unsigned typecast.
Maybe someone can answer how delay_us(x) is implemented? Is it just a macro? Would it be possible to make the code a little neater by including the macros in the header files? That is, have a compiler-generated definition for FOSC, set via the #use delay(clock=40000000) pragma, and then implement delay_us and delay_ms as macros that the user can modify.
Thanks
Trampas

Trampas Guest
Posted: Mon Aug 09, 2004 6:41 pm
After more testing, I found that the delay_us() function does not work correctly in 3.191 but appears to in 3.206. This might be a problem with interrupt context saving, or something else...
As you guys can imagine, I am now playing compiler craps, trying to find a version of the compiler that compiles and executes code correctly, as 3.207 does not appear to work either...
Trampas
PCM programmer
Joined: 06 Sep 2003 Posts: 21708
Posted: Mon Aug 09, 2004 7:16 pm
Quote: "The first thing is that I have my own data type definitions, where I define a BYTE as a signed int8. In CCS's device header files, BYTE is defined as an unsigned value, so I remove the definition from the device header file so that my typedef will work. This may be a source of problems for me, but I am not sure..."
Come on Trampas. CCS uses BYTE for variable declarations in most
of their driver files. If you change a basic data type into something else,
you're just begging for errors. It's like throwing a wrench into the
gears. Doing this invalidates any other testing that you're doing, IMO.

Trampas Guest
Posted: Tue Aug 10, 2004 6:20 am
I try not to use any of CCS's drivers, other than compiler-defined functions. I have enough problems with the compiler; I do not need to add debugging the drivers as well.
My question was more along the lines of how CCS handles stuff like printf() and other internal functions. That is, I assume the compiler has these functions defined, like most compilers do, in a library, so this BYTE thing would not be an issue. The same is true of the delay_us() function: if it is implemented properly, it does not care about the BYTE definition.
Sure, I could go through my years of code libraries and replace BYTE with some other definition, but that kind of defeats the point of having type definitions.
All I am trying to say is that the compiler is a great compiler, but I think it is getting about time for CCS to sit down and think for a minute about testing and so forth. That is, I do not want to be a compiler debugger, but CCS makes each customer a tester.
Trampas
Neutone
Joined: 08 Sep 2003 Posts: 839 Location: Houston
Posted: Tue Aug 10, 2004 7:36 am
It sounds like you should think about upgrading your years of code libraries and replacing BYTE with some other definition. The native format of the PIC is an 8-bit unsigned integer; that is what CCS calls a byte. If your code libraries use 8-bit signed values, they are less efficient than they would be using the native number format of the PIC.