How to inform compiler of true clock speed?

 
newguy
Joined: 24 Jun 2004
Posts: 1911

Posted: Fri Jun 14, 2019 4:48 pm

Background: I'm using a CCS DSP Analog dev kit which I had lying around to do a proof of concept. The kit has a 12MHz crystal, and quite some years ago I found that the processor wouldn't reliably start in all circumstances. The way around that is to boot with the 7.37MHz internal oscillator, then during initialization do a clock switch to the external crystal and PLL.

Long story short, I initially have:
Code:
#use delay(internal=7370000)


Then later I do the clock switch routine using an #asm routine I got straight from Microchip. At the end of that routine I throw in:

Code:
#use delay(clock=80000000)


Problem is, the built-in delay functions delay_ms() and delay_us() are *sometimes* off by a factor of 7.37/80. Weird thing is that sometimes they seem to be dead on.

What's the "proper" way to let the compiler know what clock speed the processor is running at?

Ttelmah
Joined: 11 Mar 2010
Posts: 19559

Posted: Fri Jun 14, 2019 11:12 pm

At heart, you can't. The processor has no idea what speed it is running at. However, the chip gives you lots of diagnostics to tell you which oscillator is actually running. It sounds as if the PLL is failing to start, so you need to test the diagnostic bit for this, and then either try starting it again, or switch the clock to match. Most likely this is an FSCM switch, so the interrupt bit for this will probably be set. You carefully don't tell us what chip is involved, so we can't tell you what diagnostics are actually available on your chip.
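
For example, something along these lines (an untested sketch: the bit positions assume the dsPIC33FJ family's OSCCON register, with COSC in bits 14:12 and the clock-fail CF flag in bit 3, so check the datasheet for your actual chip):

Code:

// Sketch only: read OSCCON to see which oscillator is really running.
// Bit positions assume the dsPIC33FJ family - verify against the datasheet.
#word OSCCON = getenv("SFR:OSCCON")

int1 running_on_pll(void)
{
   unsigned int8 cosc = (OSCCON >> 12) & 0x7;  // COSC = current oscillator source
   return (cosc == 3);                         // 0b011 = primary oscillator with PLL
}

int1 clock_fail_detected(void)
{
   return bit_test(OSCCON, 3);                 // CF is set after an FSCM clock switch
}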

Ttelmah
Joined: 11 Mar 2010
Posts: 19559

Posted: Sat Jun 15, 2019 12:43 am

A bit of checking shows this kit should have the dsPIC33FJ128GP706. If so, this chip does have issues with clock startup on some revisions. These threads:
<https://www.microchip.com/forums/m357107.aspx>
<https://www.microchip.com/forums/m845707.aspx>
and this CCS one:
<http://www.ccsinfo.com/forum/viewtopic.php?p=159723&highlight=crystal#159723>
may help. In particular, look at how newguy is waiting for both the clock itself, and then the PLL, to have actually started.

Ttelmah
Joined: 11 Mar 2010
Posts: 19559

Posted: Sat Jun 15, 2019 3:31 am

I also have to ask why you are not just letting the compiler do the switch? If you have a recent compiler, the control functions are very good, and in most cases will be more up to date than ASM routines you have found on the net. In the last six months or more, I have not seen a chip fail to give the frequency it is meant to using the standard CCS code. Simply:

#USE DELAY(INTERNAL=7.37MHz, CLOCK=80MHz)

It'll actually run at 79.84MHz, since this is the closest available from the
7.37MHz source. It'll start at 7.37MHz, and then program the PLL to give
the 80MHz. The CCS code does verify the PLL is running during the switch.
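
(For reference: assuming the dsPIC33F PLL formula FOSC = FIN x M / (N1 x N2), one combination that gives this figure is M = 65, N1 = 3, N2 = 2, i.e. 7.37MHz x 65 / 6 = 79.84MHz, which is 39.92 MIPS. The compiler's .lst output shows the values it actually programs.)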

Do you want to post the assembler you are using (or a link to it)? There may be an obvious issue with what it is doing. If I wanted to change after boot, I'd use something like:
Code:

   do {
      setup_oscillator(OSC_INTERNAL, 80000000, 7370000);
   } while (! pll_locked());
   #use delay(CLOCK=79.84MHz)
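
(The loop retries setup_oscillator() until pll_locked() reports the PLL has actually locked; the #use delay afterwards then tells the compiler the new speed for everything compiled from that point on.)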

temtronic
Joined: 01 Jul 2010
Posts: 9250
Location: Greensville, Ontario

Posted: Sat Jun 15, 2019 4:23 am

I wouldn't use the 'kit' as it has known 'issues'. Perhaps buy a current unit, or just make your own since it's a proof-of-concept test. I tend to use the 46K22 as a 'generic' PIC to try stuff, especially 'will this work?' kinds of ideas. It'd be nice to find a 'kit' or PCB that could be used as such, or in low quantities for small projects.

newguy
Joined: 24 Jun 2004
Posts: 1911

Posted: Sat Jun 15, 2019 9:20 am

That last post you linked was me, 7 years ago.

When I get something that works, I don't change it. Back when I wrote that post I believe that I was using 4.141 (or maybe even 4.099) and the CCS built-in functions just would not work with that dsPIC. Thus the #asm routine you linked to.

At any rate, my normal procedure for production code is to try and never "block" using the built-in CCS functions delay_ms() and delay_us(). For this proof of concept, however, they're good enough.

My issue is that the times that these functions actually delay for seem to be either "dead on" or off by a factor of (about) 10. For example, early on in my init routine, I reset an external module by setting its RESET line high for 20ms. I can see that 20ms pulse, pretty much bang on exactly 20ms, using a logic analyzer. That delay is okay.

Later, during the setup and configuration of that external module, I liberally use some extra delay_us() calls in the SPI interactions with the module. These are not the expected length, and in practice are about 10% of what they should be. Annoying, but not a game changer.

Still later, once the external module is configured, it must be told to perform a self calibration. The standard time this takes (plus safety margin) is 2.3 seconds (2300ms). delay_ms(2300) was actually taking approx 230ms, again, proven/observed using a logic analyzer.

At any rate, I know the chip is actually running at 80MHz, using the external crystal. I have a serial (UART) connection to it at 115,200 baud. This is trouble free.

My problem is not the processor, it's the compiler. I just need to know how the compiler knows how fast the processor is actually running so that "vanilla" functions such as the delays actually delay for the correct time.

Until I broke out the logic analyzer I had no idea that the delays were totally off (sometimes - see above). Once I changed the critical delay_ms(2300) to delay_ms(23000), everything worked. This is something I've been on-and-off pulling my hair out over for about a week. I'm using this kit because I have it on hand, and I just need to prove the concept before whipping out Altium and starting to design the (perhaps) finished product, confident that I can actually pull it off given that I've done a proof of concept.

Compiler version 5.085, if it matters.

Ttelmah
Joined: 11 Mar 2010
Posts: 19559

Posted: Sat Jun 15, 2019 12:12 pm

Yes, I saw they were you.

The point is the compiler _never_ knows how fast the chip is running until you tell it with a #use delay. If you change the clock speed, it is totally dependent on you then telling it the speed with such a statement. If the clock doesn't actually change, you need to test what speed is actually being generated (by testing the status bits, so pll_locked() and INT_FSCM), and put in the corresponding #use delay.

If the speed is right (verified with the UART), it means the #use delay is wrong. Remember a #use delay is _not_ a code line. It is a precompiler directive. Things like delay_ms() generate a _fixed_ delay dependent on the #use delay in effect at the point where they are compiled. The delay count won't change if you change the clock rate, so delays will be wrong if you then switch the clock rate. If you want delays to be right with different clock rates, you have to handle this yourself, by having the code branch to different delays, using your own variable that you change with the clock rate.
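
A minimal sketch of that last idea (the helper names and the flag here are invented for illustration, not a CCS API):

Code:

// Compile one delay routine under each #use delay setting, and branch on
// a flag that you update whenever you switch clocks.
int1 running_fast = FALSE;             // set TRUE once the 80MHz switch succeeds

#use delay(internal=7370000)
void delay_ms_slow(unsigned int16 ms)  // compiled against the 7.37MHz setting
{
   delay_ms(ms);
}

#use delay(clock=80000000)
void delay_ms_fast(unsigned int16 ms)  // compiled against the 80MHz setting
{
   delay_ms(ms);
}
// note: code after this point also compiles against the 80MHz setting

void delay_ms_any(unsigned int16 ms)   // safe to call at either speed
{
   if (running_fast)
      delay_ms_fast(ms);
   else
      delay_ms_slow(ms);
}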

PrinceNai
Joined: 31 Oct 2016
Posts: 480
Location: Montenegro

Posted: Sat Jun 15, 2019 3:28 pm

Hello All,
I'm following and reading almost all the threads here. A lot of things discussed here are way above my level, but as I said before, whatever the question, however stupid or trivial, it has always had at least one answer. In this thread, this is what caught my eye:

Quote:

It'd be nice to find a 'kit' or PCB that could be used as such or low qtys for small projects.


My thoughts exactly. Maybe it would be nice to create a "CCS forum development board".

I'll open a new thread for that instead of littering this one.

With kindest regards,
Samo

smee
Joined: 16 Jan 2014
Posts: 24

Posted: Sun Jun 16, 2019 2:38 am

I am currently using the dsPIC33FJ128GP802, which I clock at 80MHz using #use delay(internal=80MHz). I have Timer1 set up for a 1ms interrupt, which I have verified with a scope to be 1ms.

I run software timers off of the 1ms interrupt, yet my timers are off. This was puzzling me yesterday, as 60000 ticks was actually 30 seconds.

I have seen some very peculiar things with this chip, which seem to come and go. I also suspect the compiler, but I put it down to my compiler being old: 4.120.
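
For comparison, a bare-bones version of that arrangement (a sketch assuming FCY = 40MHz, i.e. FOSC/2 at 80MHz, and the usual PCD Timer1 calls; verify the period on a scope as above):

Code:

// Sketch: 1ms software-timer tick from Timer1 on a dsPIC33 at 80MHz.
volatile unsigned int32 ms_ticks = 0;

#INT_TIMER1
void timer1_isr(void)
{
   ms_ticks++;                         // one tick per millisecond
}

void tick_init(void)
{
   // 40,000,000 cycles/s / 8 (prescale) = 5,000,000/s -> 5000 counts = 1ms
   setup_timer1(TMR_INTERNAL | TMR_DIV_BY_8, 5000 - 1);
   enable_interrupts(INT_TIMER1);
   enable_interrupts(INTR_GLOBAL);     // PCD-style global interrupt enable
}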

temtronic
Joined: 01 Jul 2010
Posts: 9250
Location: Greensville, Ontario

Posted: Sun Jun 16, 2019 4:28 am

Without seeing the hardware: running ANY micro faster than, say, 4MHz requires good PCB layout and proper components. Running really fast, like 80MHz, DEMANDS great PCB layout and the best components. Power demands and noise go up the faster you go, so 'details' like caps and their position are critical. Also, having a rock-stable PSU is very important: if the average demand is, say, 1 amp, design/build a supply good for 5 amps. Also be sure to eliminate any EMI! Ground paths, caps (again), chokes, weight of copper, and ground planes are all areas that need attention.

Jay

jeremiah
Joined: 20 Jul 2010
Posts: 1358

Posted: Mon Jun 17, 2019 8:12 am

newguy wrote:

My issue is that the times that these functions actually delay for seem to be either "dead on" or off by a factor of (about) 10. For example, early on in my init routine, I reset an external module by setting its RESET line high for 20ms. I can see that 20ms pulse, pretty much bang on exactly 20ms, using a logic analyzer. That delay is okay.

Later, during the setup and configuration of that external module, I liberally use some extra delay_us() calls in the SPI interactions with the module. These are not the expected length, and in practice are about 10% of what they should be. Annoying, but not a game changer.


From my own experience, these issues most often arise when a programmer thinks that the #use delay() statement dynamically changes something at runtime. For example, in your post from that 7-year-old thread, you put the second #use delay() inside the function, which looks like you expected it to change the clock data at runtime. Consider:

Code:

#include <33FJ128GP706.h>

#FUSES NOWDT
#FUSES FRC
#FUSES HS
#FUSES NOJTAG
#FUSES NODEBUG

#use delay(internal=7370000)

void do_something(){
   //stuff
   delay_ms(1000);
}

void main(void) {

   do_something();  //first call...uses 7.37MHz

   #use delay(clock=80000000,crystal=12000000)

   do_something();  //second call...still uses 7.37MHz

   delay_ms(1000);  //uses 80MHz

   while (TRUE) {
      // your code here
   }
}


In this example, both calls to do_something() will use the 7.37MHz clock configuration for their delay even though the second call comes after the new #use delay(). However, the final delay_ms(1000) will use the 80MHz clock configuration instead.

The above code would be equivalent to:
Code:

#include <33FJ128GP706.h>

#FUSES NOWDT
#FUSES FRC
#FUSES HS
#FUSES NOJTAG
#FUSES NODEBUG

#use delay(internal=7370000)

void do_something(){
   //stuff
   delay_ms(1000);
}

#use delay(clock=80000000,crystal=12000000)

void main(void) {

   do_something();  //first call...uses 7.37MHz

   do_something();  //second call...still uses 7.37MHz

   delay_ms(1000);  //uses 80MHz

   while (TRUE) {
      // your code here
   }
}



Precompiler directives (things that start with #) are evaluated prior to compilation, so you have to look at where the delay_ms() sits in the file spatially, relative to any #use delay() directives.
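
A quick way to check which setting is in effect at any given spot is getenv("CLOCK"), which returns the compile-time clock value (a sketch; the printf is just for illustration):

Code:

// Prints the clock rate the compiler believes applies at this point in the file.
printf("Compiled for %lu Hz\r\n", getenv("CLOCK"));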

Ttelmah
Joined: 11 Mar 2010
Posts: 19559

Posted: Mon Jun 17, 2019 8:17 am

Exactly the point I was trying to make in:

Quote:

If the speed is right (verified with the UART), it means the #use delay is wrong. Remember a #use delay is _not_ a code line. It is a precompiler directive. Things like delay_ms() generate a _fixed_ delay dependent on the #use delay in effect at the point where they are compiled. The delay count won't change if you change the clock rate, so delays will be wrong if you then switch the clock rate. If you want delays to be right with different clock rates, you have to handle this yourself, by having the code branch to different delays, using your own variable that you change with the clock rate.


It is a fundamental point about #use delay that you must understand when clock switching.

You can't (for instance) branch 'back' to a routine written using one particular clock setting and expect it to work with the new setting. You have to have two versions of the code, each compiled with the required setting, and switch between these when you change speeds.