Ttelmah
Posted: Wed Apr 28, 2010 3:16 am
Multiple things here.
First, the point about ADC clocks. ADC_CLOCK_INTERNAL is _not_ recommended for operation with a processor clock faster than 1MHz, unless you put the processor to sleep for the conversion. Is your chip an F4550, or an LF4550? You show the 'F' in your code, but are talking about 20MHz operation at 3.3v. This is not 'legal' for the F4550, but is just OK for the LF4550. Assuming it is the LF4550, change your clock selection to ADC_CLOCK_DIV_32. Table 21-1 in the data sheet gives the selections that should be used. ADC_CLOCK_INTERNAL does give lower accuracy readings if used outside the specified range, which may well be your main problem.
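As a sketch of that change only (assuming the usual CCS setup calls, and AN0 as the channel - adjust to whatever your code actually uses):
Code: |
// Hypothetical fragment - the point is only the ADC clock selection
setup_adc_ports(AN0);          // whichever analogue inputs you already use
setup_adc(ADC_CLOCK_DIV_32);   // Fosc/32 instead of ADC_CLOCK_INTERNAL at 20MHz
set_adc_channel(0);
delay_us(10);                  // allow acquisition time before the first read
|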
Second, the point about the signal actually being stable. The ADC is potentially recording signals at 0.1% of full scale. On a typical scope, this is perhaps a quarter of the thickness of the trace itself. Your signal may well have 'real' noise at the level being recorded, but you are just not seeing it. Switch the scope to AC coupling and turn the gain up by 20x. If the signal now shows some noise, then this is what the ADC is recording...
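You can also see the spread from the chip's side rather than on the scope. A quick sanity check (a sketch only - printf over whatever serial setup you already have) is to record the minimum and maximum of a burst of readings:
Code: |
int16 val, min_val, max_val;
int16 i;
min_val = 1023;                // 10-bit ADC full scale
max_val = 0;
for (i = 0; i < 256; i++)
{
   val = read_adc();
   if (val < min_val) min_val = val;
   if (val > max_val) max_val = val;
   delay_us(20);
}
printf("min=%lu max=%lu spread=%lu\r\n", min_val, max_val, max_val - min_val);
|
If the spread is more than a count or two, the noise is real, whatever the scope appears to show.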
Then you have the question of averaging. The Olympic averaging system is ideal if there are real 'spike' signals, but you may just be seeing the effect of purely random noise (effectively 'hiss' on an old radio). The value read from the ADC will never be perfectly stable. If your signal is sitting between two counts of the ADC, and has some random noise present (there is some in the chip itself, and there will be some on the signal), you should receive equal numbers of the two counts. If it is sitting 80% of the way from one count to the other, you should receive the 'closer' count 80% of the time, and the other count 20% of the time. Averaging will allow you potentially to 'see' the long term trend of the basic data. There are a number of methods, but one that works well is 'sum, divide & subtract', which minimises the work on each loop, yet gives good long term averages. It is less effective at removing the 'odd' values than the Olympic sort (a sketch of the Olympic approach follows after the code below). Classic code for this simple approach is:
Code: |
int32 rolling_sum = 0;   // running total - must start at zero
int16 reading;
#define DIV_FACTOR (4)   // larger value = heavier averaging
//Then at each reading
rolling_sum += read_adc();
reading = rolling_sum / DIV_FACTOR;
rolling_sum -= reading;
//reading now contains the averaged value
|
Increasing the 'DIV_FACTOR' value gives more averaging. Keeping it a power of two (2, 4, 8, 16) makes the arithmetic fast/efficient.
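For comparison, here is a sketch of the Olympic approach mentioned above (assumptions: 8 samples per result, discard the single highest and lowest readings, average the rest - adjust the count and the delay to suit your signal):
Code: |
#define NUM_SAMPLES (8)
int16 olympic_average(void)
{
   int32 sum = 0;
   int16 val, lowest = 0xFFFF, highest = 0;
   int8 i;
   for (i = 0; i < NUM_SAMPLES; i++)
   {
      val = read_adc();
      sum += val;
      if (val < lowest)  lowest = val;
      if (val > highest) highest = val;
      delay_us(20);             // acquisition gap between conversions
   }
   sum -= lowest;               // drop one extreme at each end
   sum -= highest;
   return (int16)(sum / (NUM_SAMPLES - 2));
}
|
This removes genuine spikes well, but costs NUM_SAMPLES conversions per result, whereas the sum/divide/subtract loop above needs only one.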
Best Wishes