I2C Uneven SCL Pulses

 
KTrenholm

Joined: 19 Dec 2012
Posts: 43
Location: Connecticut, USA

I2C Uneven SCL Pulses
Posted: Tue Jul 01, 2014 8:09 am

Hi all,

I'm seeing some odd behavior and was hoping I could get a solution or even just an explanation.

I'm using a PIC24FV32KA304 with PCWHD v4.132.

I'm using I2C to communicate with an RS232-I2C transceiver. I'm noticing that, fairly often (though not always), the SCL signal is held high for an abnormal amount of time (see below).



At first I thought it was clock stretching of some sort, but if that were the case, shouldn't the line be held low by the slave device rather than high as I'm seeing? Furthermore, I have I2C set up for no stretching:

#USE I2C (MASTER, FORCE_HW,sda=PIN_B9, scl=PIN_B8, STREAM=MCTL, NO_STRETCH)

Another theory I had was that perhaps the i2c_write() or i2c_read() function was getting interrupted somewhere. But disabling interrupts for the duration of the I2C transaction didn't make any difference (and I shouldn't be spending that much time in those interrupts anyway; all they do is set a couple of flags and increment counters).
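
The test was along these lines; the slave address, register and data bytes below are placeholders rather than the real values:

Code:

// Disabling interrupts around one transaction (PCD compiler; GLOBAL on the 8-bit compilers).
// Slave address, register and data bytes are placeholders.
disable_interrupts(INTR_GLOBAL);

i2c_start();
i2c_write(0x58);     // placeholder slave write address
i2c_write(0x00);     // placeholder register
i2c_write(0x55);     // placeholder data
i2c_stop();

enable_interrupts(INTR_GLOBAL);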

Now, functionally this doesn't seem to affect anything other than how long a transaction takes. Data appears to come through and be read/written just fine. However, I'd really like to know why I'm seeing what I'm seeing.
Ttelmah

Joined: 11 Mar 2010
Posts: 19369

Posted: Tue Jul 01, 2014 9:28 am

Get rid of the NO_STRETCH specification.

Use:

#USE I2C (MASTER, FORCE_HW,I2C1, STREAM=MCTL)

I think I found before that specifying NO_STRETCH on a master results in software I2C being used.
Your behaviour looks as if this is happening.
KTrenholm

Joined: 19 Dec 2012
Posts: 43
Location: Connecticut, USA

Posted: Tue Jul 01, 2014 10:06 am

Ttelmah wrote:
Get rid of the NO_STRETCH specification.

Use:

#USE I2C (MASTER, FORCE_HW,I2C1, STREAM=MCTL)

I think I found before that specifying NO_STRETCH on a master results in software I2C being used.
Your behaviour looks as if this is happening.



Thanks for the reply.

I gave that a shot and it didn't seem to change much.
The screenshot I posted was the longest high pulse I had seen, but I'm still seeing it occasionally hold SCL high for up to 60uS. I will say that the big long pulses do appear to be less frequent now. Is some stretching of the clock in this fashion to be expected?

This is legacy code I'm maintaining, so I also have to wonder whether there's a reason the developer who wrote it wasn't using hardware I2C.
gpsmikey

Joined: 16 Nov 2010
Posts: 588
Location: Kirkland, WA

Posted: Tue Jul 01, 2014 11:32 am

Consider the possibility of another interrupt going off during the send and holding things up until it is serviced. Interrupts can solve lots of problems, but they can also cause things to happen at apparently random times when they do go off.

mikey
_________________
mikey
-- you can't have too many gadgets or too much disk space !
old engineering saying: 1+1 = 3 for sufficiently large values of 1 or small values of 3
Ttelmah

Joined: 11 Mar 2010
Posts: 19369

Posted: Tue Jul 01, 2014 11:32 am

With hardware I2C, no.

If running software I2C, yes, if an interrupt occurs.

The hardware generates the clock, and you should see an irregularity at the start of the transaction (start), but the eight pulses for each character should be like 'clockwork' with the hardware.

I'd guess your old compiler is not actually using the hardware I2C.

In that case, I suspect either an interrupt is delaying things longer than you think, or multiple interrupts occurred together.
PCM programmer

Joined: 06 Sep 2003
Posts: 21708

Posted: Tue Jul 01, 2014 11:49 am

Look at the .LST file to see if the compiler is using software i2c.
For 18F PICs, this thread shows what the ASM code will look like:
http://www.ccsinfo.com/forum/viewtopic.php?t=31547&highlight=software+i2c&start=12
Your PIC will have different ASM code, but it will be similar if software i2c is being used.

This thread gives an example of hardware i2c ASM code (again for 18F):
http://www.ccsinfo.com/forum/viewtopic.php?t=30080&highlight=hardware+i2c&start=1
KTrenholm

Joined: 19 Dec 2012
Posts: 43
Location: Connecticut, USA

Posted: Tue Jul 01, 2014 12:54 pm

PCM programmer wrote:
Look at the .LST file to see if the compiler is using software i2c.
For 18F PICs, this thread shows what the ASM code will look like:
http://www.ccsinfo.com/forum/viewtopic.php?t=31547&highlight=software+i2c&start=12
Your PIC will have different ASM code, but it will be similar if software i2c is being used.

This thread gives an example of hardware i2c ASM code (again for 18F):
http://www.ccsinfo.com/forum/viewtopic.php?t=30080&highlight=hardware+i2c&start=1


If I'm reading this correctly, the I2C write function looks like this in the list file:
Code:

....................          ack = i2c_write(packet.command);
0D32:  MOV.B   A79,W0L
0D34:  MOV.B   W0L,W1L
0D36:  CALL    CE6
0D3A:  BCLR.B  A7E.0
0D3C:  BTSC.B  0.0
0D3E:  BSET.B  A7E.0


Looks like the call is to this?

Code:

.................... #USE I2C (MASTER, FORCE_HW,I2C1, STREAM=MCTL)
*
0CE6:  MOV     #FFFF,W0
0CE8:  BTSS.B  208.3
0CEA:  BRA     CF6
0CEC:  BTSC.B  209.6
0CEE:  BRA     CEC
0CF0:  MOV     W1,202
0CF2:  BTSC.B  209.6
0CF4:  BRA     CF2
0CF6:  MOV     #0,W0
0CF8:  BTSC.B  209.7
0CFA:  INC     W0,W0
0CFC:  RETURN 
*


Unless I'm mistaken, it looks like it's using hardware I2C.
PCM programmer

Joined: 06 Sep 2003
Posts: 21708

Posted: Tue Jul 01, 2014 8:12 pm

Remove the i2c slave and run a simple test loop that does continuous i2c_write() operations, like this:
Code:

while(1)
  {
   i2c_write(0x55);
   delay_us(500);
  }
 

Don't enable any interrupts, then see if you still get the problem. If you still get it, then it's caused by something in the Master.

Also, make sure it's not some anomaly caused by your logic analyzer, or by pull-up values, bus capacitance, etc.
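
For completeness, a self-contained sketch of that test could look like the following; the clock setting is a placeholder for your project's real oscillator/fuse setup, and a start/stop is wrapped around the write so the hardware module actually clocks the byte out:

Code:

#include <24FV32KA304.h>
#use delay(clock=8MHz)            // placeholder - use the project's real clock/fuse setup
#USE I2C (MASTER, FORCE_HW, I2C1) // hardware I2C1 module (the B8/B9 pins from the original directive)

void main(void)
{
   // No interrupts are enabled anywhere in this test.
   while(TRUE)
   {
      i2c_start();
      i2c_write(0x55);            // arbitrary pattern, no slave attached
      i2c_stop();
      delay_us(500);
   }
}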
Ttelmah

Joined: 11 Mar 2010
Posts: 19369

Posted: Wed Jul 02, 2014 1:25 am

As some comments:

1) You are setting up a stream in the I2C setup, but not then using it. Stick with one approach:
a) use streams throughout, so the I2C write calls also use the stream name (a minimal sketch follows point 2), or
b) don't use streams at all.
It's just safer.
2) Does your slave only support 100kHz? 99% of I2C devices would now use 400kHz.
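
As an illustration of option (a), with the stream named on every call (the 0x58 slave address is a placeholder, not the real device's address):

Code:

#USE I2C (MASTER, FORCE_HW, I2C1, STREAM=MCTL)

// Option (a): the MCTL stream is passed to every I2C call.
int1 write_reg(int8 reg, int8 value)
{
   int1 nack;

   i2c_start(MCTL);
   i2c_write(MCTL, 0x58);          // placeholder slave write address
   i2c_write(MCTL, reg);
   nack = i2c_write(MCTL, value);  // returns 0 if the slave ACKed
   i2c_stop(MCTL);

   return nack;
}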

Stretching only affects the clock low time, so it shouldn't apply.

As far as I can see, the only ways for the 'high' clock time to change mid-byte are, first, if the CPU clock rate changes. You can make a long break appear in the high time if (for instance) you start sending a byte and then put the chip to sleep. The other way is if something tries to talk to the I2C peripheral from an interrupt while I2C is already running.

However, that being said, looking again, part of this may be an artefact of the logic analyser. The main byte transmissions are _nearly_ regular. If you look at the start of the display, there is an I2C start, followed by the transmission of a byte, with the clock showing one extra sample high after the fourth bit. It looks at this point as if the actual sample rate of the analyser is quite low, and this is an artefact of that. The ninth clock is then stretched low (this is being done by the _slave_). The byte sent is 0xB2. Then there is the mysterious extra clock 3.6 divisions across the screen. If SDA had risen and dropped, I'd assume this was a repeated start. I'm actually wondering if in fact it is, and the sample rate of the analyser is low enough that the transition on SDA is not recorded. Then there is the huge gap until the next byte, which again has the clock nearly regular, but this time with the extra sample time after the second bit.
As such, if one assumes the analyser is only taking perhaps 20 samples per main division across the screen, then part of the problem is that data is being missed, and the irregularity in the individual bytes is being created by the low sample rate. This would explain how the hardware I2C could appear to have an irregular clock.
The key, then, is the long pause between the repeated start and the next byte. Since this is between bytes, it can be the result of an interrupt occurring at that point. gpsmikey's comment then applies.
KTrenholm

Joined: 19 Dec 2012
Posts: 43
Location: Connecticut, USA

Posted: Wed Jul 02, 2014 7:22 am

Ttelmah wrote:
However, that being said, looking again, part of this may be an artefact of the logic analyser. [...] As such, if one assumes the analyser is only taking perhaps 20 samples per main division across the screen, then part of the problem is that data is being missed, and the irregularity in the individual bytes is being created by the low sample rate.


Thanks for the long and detailed explanation of your reasoning.

I took a look into my logic analyzer:
http://www.tech-tools.com/DV3100-logic-analyzer.htm

The sample rate should be at least 100MHz (10ns period), so I doubt it's slow enough to be causing any artifacts. I will probably hook up an oscilloscope to see whether the signals actually match what I'm seeing on the analyzer.

I did look into interrupts perhaps doing something unintended, but I saw the same behavior with interrupts disabled during I2C transactions.

The device I'm using is a MAX3107 I2C-RS232 UART. According to the datasheet, it supports 100kHz standard mode and 400kHz fast mode.
Ttelmah

Joined: 11 Mar 2010
Posts: 19369

Posted: Wed Jul 02, 2014 8:42 am

You are displaying 100uSec/division, with 10 divisions across the screen: 1mSec in total.
At 10nSec/sample, the analyser would need to store 100,000 samples just for the screen. Most don't have anywhere near this much storage. Though the analyser may well support 10nSec sampling, it will normally only do this at its shortest sample time; as you slow down the display, the sampling interval climbs. Even worse, most will store more than the displayed screen, so the resolution drops even further.
You might be surprised what you see, with the same settings on everything (zoom, sample interval, etc.), if you feed (say) a 100KHz rectangular waveform at perhaps 25% duty cycle from a waveform generator into the analyser. If I'm right, you are going to see that the displayed graph is not a nice regular 25% duty cycle. If so, that explains one small part of the oddity.
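
To put rough numbers on the storage argument (the 4K buffer depth below is purely a hypothetical example, not the DV3100's actual specification):

Code:

/* Host-side arithmetic only: 10 divisions at 100us/div = 1ms of screen.
   The buffer depth is a made-up example value. */
#include <stdio.h>

int main(void)
{
   double window = 10 * 100e-6;      /* displayed window: 1ms            */
   double need   = window / 10e-9;   /* samples needed at 10ns: 100,000  */
   double depth  = 4096.0;           /* hypothetical capture memory      */
   double period = window / depth;   /* ~244ns effective sample period   */

   printf("samples needed at 10ns: %.0f\n", need);
   printf("effective sample period: %.0f ns\n", period * 1e9);
   return 0;
}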

Since your chip supports 400kHz, use 400kHz (see the directive below). If the gaps remain the same, that proves they come from something else in the code.
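
A minimal way to ask for 400kHz in the directive, assuming the hardware module and stream from earlier in the thread:

Code:

#USE I2C (MASTER, FORCE_HW, I2C1, FAST=400000, STREAM=MCTL)   // 400kHz bus rate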