
Omnivision ov3640 i2c sccb

TASK : Write a device-driver for Omnivision ov3640 camera

Timeline : A.S.A.P (is there any other way? :P)

For the aptitude champs out there, here is a quick one:
Q. If you are sitting facing west and running I2C at 0.00000000055kbps and a bear appears in front of you, what color is the bear?
Not sure? Read on...

DAY 1 : Initial study

A bit about ov3640: The ov3640 (color) image sensor is a 1/4-inch 3.2-megapixel CMOS image sensor that is capable of QXGA(2048x1536)@15FPS using OmniPixel3™ technology in a small footprint package. It provides full-frame, sub-sampled, windowed or arbitrarily scaled 8-bit/10-bit images in various formats via the control of the Serial Camera Control Bus (SCCB) interface or MIPI interface. It supports both a digital video parallel port and a serial MIPI port.
Searching the "internets" turns up an old "v4l2-int" style driver for the ov3640 in the Linux kernel. This will have to do for now; the camera configuration register-settings can be scavenged from it.

The Omnivision ov3640 product-brief contains the following functional block-diagram of ov3640:


The camera is controlled using the SCCB bus. Again, back to the "internets". SCCB is an I2C-clone, i.e. a two-wire serial protocol with enough differences from I2C to merit its own specification:
  1. According to the spec, SCCB supports only up to 100kHz (not more).
  2. The I2C spec requires pullups with open-collector (open-drain) drivers everywhere. SCCB requires CMOS-like drivers which always drive the line to either +VDD or GND, i.e. no pullups.
  3. In I2C, after every 8 bits transferred, the 9th bit is designated ACK. The slave pulls SDA low to ack. SCCB designates the 9th bit "don't-care"; the SCCB spec states that the master continues regardless of ACK/NACK in the 9th bit.

DAY 2 : First attempt

Following the omnivision product-brief and the datasheet, the ov3640 camera-module is connected with the CPU as follows:

So far SCCB looked to be a simpler, less restrictive version of I2C. Having worked extensively on I2C previously, I was under the impression that setting up basic communication between the CPU and the ov3640 would be a walk in the park. Wrote a simple skeleton i2c-driver and registered it with the kernel. Scavenged the I2C read/write routines from the old ov3640 driver and booted-up the device...

...and the driver failed to load as the I2C-read failed. The ID register of the ov3640 did NOT match the expected ID. Inserting logs in the code showed that the I2C-read routine was failing: the CPU was NOT getting an ACK from the ov3640 sensor. A true WTF moment, as the same I2C routines had been tested and worked properly on earlier devices.

Oh well, maybe I really should not expect an ACK, what with the ov3640 being an SCCB device and not I2C. Started digging into the I2C core driver and found a provision to ignore this NACK (absence of an ACK from the slave). Updated the ov3640 driver to set the IGNORE_NACK flag and tried again. Now the I2C-read routine completed successfully despite there being no ACK from the slave. But the driver still failed to load. Turns out the contents of the ID register, read over I2C, did NOT match the expected value. The I2C-read routine was returning the contents of the ID register as "0". Further debugging showed that attempting to read any register of the ov3640 over I2C gave the same result - a nice big ZERO. It was evident now that something was terribly wrong.
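For reference, here is a minimal sketch of what that looks like with the Linux i2c framework. The I2C_M_IGNORE_NAK flag only takes effect if the bus adapter supports protocol mangling, and the helper name and 16-bit register addressing here are illustrative:

#include <linux/errno.h>
#include <linux/i2c.h>

/* Sketch: read one ov3640 register while ignoring the missing ACK.
 * 'client' is the i2c_client handed to the driver's probe(); the 16-bit
 * register address is sent MSB-first as per the datasheet. */
static int ov3640_read_reg(struct i2c_client *client, u16 reg, u8 *val)
{
    u8 addr_buf[2] = { reg >> 8, reg & 0xff };
    struct i2c_msg msgs[2] = {
        {
            .addr  = client->addr,
            .flags = I2C_M_IGNORE_NAK,            /* SCCB: 9th bit is "don't-care" */
            .len   = 2,
            .buf   = addr_buf,
        },
        {
            .addr  = client->addr,
            .flags = I2C_M_RD | I2C_M_IGNORE_NAK,
            .len   = 1,
            .buf   = val,
        },
    };

    return (i2c_transfer(client->adapter, msgs, 2) == 2) ? 0 : -EIO;
}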

DAY 3 : Challenge Accepted

Time to bring out the big guns. Switched to the multimeter and oscilloscope. Tested the lines from the CPU to the ov3640 connectors for proper continuity. Booted-up the device and probed the I2C lines. The master was sending the right values alright. But the ov3640 was simply not responding. Suspect no. 1? The I2C slave-id.

The ov3640 spec mentions that it responds to two I2C slave-IDs: 0x78 & 0x79. Had tried 0x78 so far. 0x79 also makes no difference - still no data from the ov3640. Further digging through the docs, I find one interesting line which mentions that the addresses are special in the sense that 0x78 is used to write and 0x79 to read the device. Hmmm... interesting. Looks like these are 8-bit addresses including the read/write bit of I2C. Which means the actual device slave-id is just the 7 MSBs (common to 0x78 & 0x79) i.e. 0x3C. Face-palm!
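In other words (a trivial illustration; the Linux i2c framework expects the 7-bit value and appends the R/W bit itself):

/* The "two" SCCB IDs are just one 7-bit address with the R/W bit appended. */
#define OV3640_SLAVE_ID    0x3C                          /* 7-bit address */
#define OV3640_WRITE_ADDR  ((OV3640_SLAVE_ID << 1) | 0)  /* = 0x78        */
#define OV3640_READ_ADDR   ((OV3640_SLAVE_ID << 1) | 1)  /* = 0x79        */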

Changed the slave-id in the ov3640 driver and booted-up the device, but still no dice. It would be easier to light a fire with two stones and a twig.

DAY 4 : ...and let there be light

Lost all hope of getting this to work. Swapped in other camera modules. Tried a couple of other boards. But the ov3640 just does not seem to respond to anything. It is as if the module is not even powered-on.

Maybe, mayyyybe that's what it IS!

Back to the schematics.
I2C-CLK? check.
I2C-DATA? check.
CAM_XCLK? check.
CAM-IO? check.
CAM-digital? check.
CAM-Analog? Do we really need to power the sensor array at this stage?

Well, what the heck, nothing else seems to work anyway. Might as well try this. So quickly pulled down a line from an existing 3.3V power-rail on the board, placed a diode along it to drop it down a bit, and powered-on the board.

And VOILA! It worked. The driver was able to read the ov3640 module properly.

The ov3640 even responded to the default settings (QXGA@15FPS). Pretty neat eh?

Oh well... sure makes me look foolish, now that it works. :-)
Ah well, there's always a first time for everything. ;-) ;-)

And now that it works, was able to summarise the following:

Hardware connections:

  • CAM-Analog (2.8V for powering-on the module)
  • CAM-IO (1.8V for I2C communication)
  • CAM-Digital (1.5V, generated by the module)
  • I2C_CLK (1.8V, 400kHz max, for I2C communication)
  • I2C_DATA (1.8V, bi-directional)
  • CAM_XCLK (24MHz, reqd. for the internal PLL)
  • CAM_PCLK (generated by the module)

Software configurations:

  • I2C slave-id 0x3C
  • ov3640 DOES provide an ACK (it's I2C, NOT SCCB)
  • Works on I2C @ 400kHz (a quick sanity-check sketch follows)
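Here is a minimal sanity-check sketch over the userspace i2c-dev interface using the settings above. The bus number is illustrative, and the chip-ID registers 0x300A/0x300B follow the usual Omnivision register map, so verify them against your datasheet:

#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

/* Read one 8-bit register via a 16-bit sub-address (write address, then read). */
static int read_reg(int fd, uint16_t reg, uint8_t *val)
{
    uint8_t addr[2] = { reg >> 8, reg & 0xff };

    if (write(fd, addr, 2) != 2 || read(fd, val, 1) != 1)
        return -1;
    return 0;
}

int main(void)
{
    int fd = open("/dev/i2c-2", O_RDWR);            /* bus number is board-specific */
    uint8_t id_hi = 0, id_lo = 0;

    if (fd < 0 || ioctl(fd, I2C_SLAVE, 0x3C) < 0)   /* 7-bit slave-id */
        return 1;

    read_reg(fd, 0x300A, &id_hi);                   /* chip-ID high/low registers */
    read_reg(fd, 0x300B, &id_lo);
    printf("ov3640 chip-id: 0x%02X%02X\n", id_hi, id_lo);

    close(fd);
    return 0;
}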

With the above specs, this surely begs the question: why does someone go all the way to define their own bus-spec when the hardware obviously works on I2C!!!! WHY Omnivision? WHY????

Designing your own serial-bus? == $1,000,000

Not using it in your own products? == $0

The smile on my face when i finally figure it out? PRICELESS
Some things money can't buy. For everything else, there is... Ah wait, I'm forgetting something now, right? Well, here goes...

Q. If you are sitting facing west and running I2C at 0.00000000055kbps and a bear appears in front of you, what color is the bear?

Ans. If it takes 4 days to transfer a byte, do you REALLY think I care!!

NOTE:

[i]. No bears were harmed in the development of this camera-module.

[ii]. The image captured above did NOT appear out of the blue with only the driver in place. Several days of tweaking the exposure/white-balance settings and an earthquake later, managed to get the Kernel-driver, Android camera-HAL and app to work together.

81 comments:

  1. Thanks for such a nice description

  2. Did you have to use the i/o pins to get the data back, or did you just get it through i2c?

    1. I2C is used for low bandwidth communication with ov3640. This includes stuff like configuring it by setting the appropriate registers. The camera capture-data is a bandwidth intensive process and hence i2c is NOT used. Capture data is available over the i/o pins(DATA7:0 in the diagram).

    2. Thanks! I'm gonna try that!

  3. Does the ov3640 require the i2c to send a restart condition?

    1. PWDN and RESET_B pins can be used to change operation state of ov3640.

    2. Could you please tell me what the exact purpose of PWDN and RESET_B is? Also, do you have any idea about FSIN in omnivision sensors?

  4. It's been quite a while since I last worked on one. AFAIR, PWDN is used to activate a low-power standby mode in which the sensor retains its settings but does NOT capture/send any data.

    I have not had the opportunity to use FSIN myself but a little bit of google-fu gave me the following short description:

    "FSIN is used to sync the source. Enabling sync requires connection of VSYNC from the "master" to FSIN on the slave, and XCLK input to master and slave must be shared. The slave will then reset its VSYNC when rising edge in FSIN is detected. There is still a small delay in the slave data output relative to master, but it is apparently only on the order of 6-7 pixel periods."

  5. Hello,

    I have 2 questions, please help me if you know the answers:

    1. Usually there is a limit for the pixel clock; what happens when it is exceeded? i.e. say the max pixel clock is 100 MHz, what will happen if I configure it to 120 MHz? Will it affect HSYNC or VREF in any way? I think it will change the speed, so maybe the receiver will not be able to receive this quickly.

    2. Also, is the pixel clock supposed to be constant or can it vary?

    Thank you :)

  6. 1. All the signals generated by the camera are with reference to the pixel-clock(PCLK). Exceeding the maximum limit specified could mean that the hardware cannot supply the data at the faster rate as expected at the higher PCLK, resulting in noise(incorrect-color/black pixels in captured image).

    2. As long as you do NOT modify the pixel clock in between active data frames, the camera and the SoC on the receiving end should happily be able to sync properly. Modifying the PCLK in the middle of a frame could lead to a SYNC loss as this is NOT an expected condition.

    Having said that, often it's just an input reference clock (XCLK, NOT PCLK) that you need to provide (within a specified range) to the camera ICs. The PCLK is generated by the camera-IC itself and fed to the receiver SoC along with the other image-data, HS and VS signals.

  7. Dear Sir, I want to know whether you have successfully run this driver or not. Actually I am working on the OV5640 and have some doubts. Can you resolve my doubts? Reply.

    1. Yes. But due to NDA i may not be able to share any code/documents. If you have any specific issues/doubts, i will be glad to share my ideas/views on the same.

  8. Dear Sir,
    We have implemented a simple C program in Linux user space with the SCCB protocol implemented. We are able to capture a somewhat distorted image but are unable to capture the true colours in the picture, as we are running the camera at the minimum clock speed (approx. 5 MHz).
    To sum up, "the real problem lies in the root cause of the problem. And the root cause for the distorted picture, we found, is that some of the pixels are getting missed within each frame of the total 640 frames (for VGA mode 640x480), and hence we are neither able to get the full picture size nor the true colours of the image".

    We have done some research, how we can deal with the missing pixels within the frames.
    So can you help me on this to get a good quality image.
    We are using GPIO's as a DVP port .Kindly help us.

    1. From the datasheet of OV5640, the range of frequencies supported for the input clock is 6~27MHz. I do remember observing random black pixel noise at the lower end of the range of frequencies.

      As noted in the article above, the final setup had the input clock(XCLK)to the camera-IC, set to 24Mhz. Note that this high frequency signal (usually provided by the SoC to the camera-IC), is used as the "master" clock to run various internal blocks(eg. ADC) in the camera-IC.

      The output clock rate (PixelCLK) is determined by the configured values of the camera frame-size and fps. The frequency of the output clock is set accordingly by the camera-IC. For example, if the camera is configured to output a 640x480 (16bpp) image at 30fps, the PCLK = 640*480*2*30 = 18432000Hz, i.e. ~18.4MHz.
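      Just to restate that arithmetic as a tiny helper (purely illustrative; blanking intervals are ignored, so the real PCLK is a little higher):

      /* Rough pixel-clock estimate for an uncompressed parallel (DVP) stream. */
      static unsigned long approx_pclk_hz(unsigned int width, unsigned int height,
                                          unsigned int bytes_per_pixel, unsigned int fps)
      {
          return (unsigned long)width * height * bytes_per_pixel * fps;
      }

      /* approx_pclk_hz(640, 480, 2, 30) == 18432000, i.e. ~18.4MHz */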

      I would suggest you check the following:
      1. Increase XCLK to a higher value.(24Mhz?)
      2. Ensure that the driver(using gpios for dvp) is capturing at PCLK(~18Mhz?)
      3. Also ensure that exclusive power rails feed into the CAM_ANALOG and CAM_VDD.

      The 3 steps above should have a remarkable effect on improving the image quality(less noise and more accurate colors).

      Let me know if you have any further issues...

  9. Thanks for the quick reply ...
    As per the check list mentioned by you -

    1) XCLK is already 24 MHz.

    2) We are unable to capture at 18 MHz, so to capture all the pixels we had to reduce the PCLK speed to 5 MHz.

    3) Yes, we are providing 2.8V to CAM_ANALOG and CAM_VDD.

    The problem is that we are unable to capture all the pixels ...
    Register settings are for VGA_YUV422 and 7.5 fps (we can also get an image at 5 fps, as tested with the provided OV5640 EVK).

    1. AFAIK, if the sensor is configured to send VGA data at 7.5fps, then it will be generating a PixelCLK of 640*480*2*7.5 ~ 4.6Mhz.

      Since you mention using GPIOs as the DVP, I assume you have some sort of ISR being invoked on PCLK, in which you read the data GPIOs to obtain the pixel value.

      This could be a timing issue i.e. the system is sometimes not able to service the ISR within time. At 4.6Mhz it would need to be serviced once every 0.21microseconds.

      As an experiment, deliberately drop alternate pixels (every 2 out of 4 bytes) to get a 320x480 image and see if the amount of random noise is less compared to the 640x480 images earlier. If so, then we need to simplify the logic in the ISR to be able to complete comfortably within 0.2 microseconds.

  10. Dear Sir, We have done as you told me..

    Since we are using the YUV422 format (in this protocol the pixel format is Y1 U Y2 V) and some pixels are getting missed (~1/4th of the H-size), sometimes it may be the Y1, Y2, U or V byte that is lost, hence the image is not created properly.
    If there is any other way to capture all the pixels by increasing the GPIO reading speed, kindly tell me.

    Here is the sample code to read pixels..



    /* Poll the DVP signals over GPIO and store one frame of YUV422 bytes.
       VSYNC, HREF and PIXEL are macros that read the corresponding lines;
       PIXEL reads the 8-bit DVP port. */
    while (!VSYNC)                       /* until the end-of-frame pulse */
    {
        while (HREF && row < MXROW)      /* HREF high => active line     */
        {
            hf = 0;
            while (hf < (MXCOL * 2))     /* 2 bytes per pixel in YUV422  */
            {
                *(STRING + LENGTH++) = PIXEL;
                ++hf;
            }
            row++;
            while (!HREF);               /* wait for the next line       */
        }
    }

    fwrite(STRING, LENGTH, 1, fp);       /* dump the raw frame to file   */

    1. Details prem details! :) Which MC/MP are you using? Which OS?

    2. LINUX KERNEL 2.6.35-8-ARCH+ and MC is : i.mx233 (freescale)

    3. If you are implementing the gpio-polling loop in a userspace program, then your program is NOT guaranteed to run on the CPU all the time, as the scheduler (kernel) can pre-empt the program anytime it needs to.

      So moving the code into the kernel into the interrupt context of a driver that is bit-banging the GPIOs should ideally eliminate the chances of missing any bytes.

      The iMX233 does seem capable of 5Mhz I/O on the GPIOs (with some tweaks as described here). Go ahead and follow the code posted in the above link and let me know how it works out for you.

      IMHO, using GPIOs to simulate a DVP is already pushing the iMX233(ARM9@450Mhz, running Linux) to its limits.

    4. Dear Sir thanks for the reply !!

      With our code we are able to capture the picture but the colours are missing due to the missing pixels in each frame as per our analysis.

      Is there any way to capture the pixels thrown out via DVP PORT of camera using DMA in userspace or any other way to capture all pixels in UserSpace.


      Currently We can not run this code into the KERNEL due to some KERNEL porting issue.

    5. Well, the crux of the matter is this: as long as we are running the gpio-polling in userspace, we can be pre-empted, i.e. miss random bytes.

      To implement some sort of DMA mechanism we would need a component in the kernel that would read data over the gpio lines and push it to memory (which is then read by the userspace app). This is necessary because the iMX233 does NOT seem to have a hardware block dedicated to image capture.

    6. OK fine.
      We want to implement a DMA mechanism in UserSpace. How can we proceed with this?
      As we know, DMA requires the following parameters:
      1) Source pointer (that is, the DVP port address from where bytes will be thrown out).
      2) Destination pointer (that is, the image buffer).
      3) Number of bytes to be copied.
      Hence there is no need of any hardware block dedicated to an image buffer in the MC.
      KERNEL has following files regarding DMA : dma.h, dma-coherent.h, dma-mapping-common.h, dma-mapping-broken.h.


    7. (I am fairly new to the iMX233, so correct me if i am wrong...)

      Since there is no dedicated DVP-IN port and corresponding hardware block on the iMX233, we most certainly need a software component (kernel driver) that reads the data from the external camera connected to the iMX233 over GPIOs.

      This kernel driver would have to register an ISR for the GPIO connected to PCLK, read the values of the GPIOs connected to the 8/10 data pins of the DVP (inside the ISR each time), and save the data in a linear contiguous buffer which can then be passed to userspace.

      Basically you could go ahead and implement your own V4L2 driver which does GPIO bit-banging to obtain the digital data over the GPIOs.
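      The capture path of such a driver could look roughly like this. It is only a sketch of the ISR plumbing (not a complete V4L2 driver); the GPIO numbers, frame size and buffer handling are placeholders:

      #include <linux/gpio.h>
      #include <linux/interrupt.h>

      #define DVP_WIDTH   8                 /* data pins D0..D7 wired to GPIOs */
      #define FRAME_BYTES (640 * 480 * 2)   /* one VGA YUV422 frame            */

      static int pclk_gpio;                 /* GPIO wired to the camera PCLK   */
      static int data_gpio[DVP_WIDTH];      /* GPIOs wired to the data pins    */
      static u8  frame_buf[FRAME_BYTES];
      static unsigned int frame_pos;

      /* Fires on every rising edge of PCLK and samples the 8 data GPIOs. */
      static irqreturn_t dvp_pclk_isr(int irq, void *dev_id)
      {
          u8 byte = 0;
          int i;

          for (i = 0; i < DVP_WIDTH; i++)
              byte |= gpio_get_value(data_gpio[i]) << i;

          if (frame_pos < FRAME_BYTES)
              frame_buf[frame_pos++] = byte;   /* later handed to userspace */

          return IRQ_HANDLED;
      }

      static int dvp_capture_start(void)
      {
          frame_pos = 0;
          /* One interrupt per PCLK edge - this is what limits the usable PCLK. */
          return request_irq(gpio_to_irq(pclk_gpio), dvp_pclk_isr,
                             IRQF_TRIGGER_RISING, "dvp-pclk", NULL);
      }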

  11. Thanks for your support ... by the way we are also trying to develop it into KERNEL SPACE..
    Ok Now we will concentrate on LINUX DRIVER DEVELOPMENT.

    Thanks again ....

  12. Hello Sir,

    Very nice forum... you are doing a great job..

    I have a question, i am using an omnivision camera, is there a way to check the size of image output from the camera as my code is not yet ready ??

    Also, say we are using 15fps with 480 lines. Should these 480 lines come uniformly distributed between 2 VSYNCs? i.e. if the time period between two VSYNCs is 10 msec, will these 480 lines cover the entire 10 msec, or can they come early and leave the rest of the time empty, i.e. arrive in 5 msec and leave the remaining 5 msec empty?

    1. The data on the camera interface will be transmitted at the rate of PCLK.

      If you are interested in probing the bus to validate the image data, use the PCLK line (driven by the camera-IC) as a reference clock. For example, YUV422 interleaved data on an 8-bit parallel camera-interface will carry one byte during each cycle of PCLK, i.e. in 4 cycles of PCLK the bus will transfer the Y1, U, Y2 and V bytes: 2 pixels' worth of data transferred in 4 clock cycles of PCLK.

      Now, coming to your specific question, we can see that the speed at which the image data arrives has no direct relation to the HSYNC and VSYNC signals. Rather, the image is transferred at the rate of PCLK, and until the next line/frame needs to be sent (based on the fps configured) the HSYNC/VSYNC are kept enabled during the interval to signal the receiver that there is NO active data, i.e. it's a "blanking" period.

      PS: Thank You :) I hope you enjoyed reading the article as much as you found it informative.

  13. Thanks for the article!

    I am new to SCCB. Do I have to use an OMAP? Is there a USB to SCCB board?

    Peace!

    1. The OMAP is NOT the only choice of a master controller for ov3640. Any device that supports I2C should be able to connect to the OV3640 over SCCB.

      In the context of ov3640, the SCCB bus is just a low-speed control bus used to read/write the internal registers to configure the settings of the camera module. The image data is obtained from it over the high-speed parallel(DVP) or serial(MIPI) interface.

      The SCCB (or I2C) bus is fairly simple to manage and can be implemented by bit-banging GPIOs as well. It can be operated at low frequencies in the kHz range, so this can be achieved using simple micro-controllers as well.

      You are far more likely to run across challenges interfacing with the DVP/MIPI bus to obtain the image data.
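      For instance, a bit-banged SCCB register write (slave-id + 16-bit sub-address + data, as used by these sensors) might look something like the sketch below. sccb_scl()/sccb_sda() and delay_us() are placeholders for whatever GPIO and delay primitives your micro-controller provides:

      /* Placeholders: sccb_scl(level)/sccb_sda(level) drive the lines,
         delay_us(n) busy-waits and thereby sets the bus speed (~100kHz here). */
      static void sccb_start(void)
      {
          sccb_sda(1); sccb_scl(1); delay_us(5);
          sccb_sda(0); delay_us(5);            /* SDA falls while SCL is high */
          sccb_scl(0);
      }

      static void sccb_stop(void)
      {
          sccb_sda(0); sccb_scl(1); delay_us(5);
          sccb_sda(1); delay_us(5);            /* SDA rises while SCL is high */
      }

      static void sccb_write_byte(unsigned char byte)
      {
          int i;

          for (i = 7; i >= 0; i--) {           /* MSB first */
              sccb_sda((byte >> i) & 1); delay_us(2);
              sccb_scl(1); delay_us(5);
              sccb_scl(0); delay_us(2);
          }
          /* 9th bit is "don't-care" on SCCB - release SDA, clock once, move on. */
          sccb_sda(1);
          sccb_scl(1); delay_us(5);
          sccb_scl(0);
      }

      /* 4-phase write: (slave-id + W), 16-bit sub-address, 8-bit data. */
      static void sccb_write_reg(unsigned char id_w, unsigned short sub, unsigned char val)
      {
          sccb_start();
          sccb_write_byte(id_w);               /* e.g. 0x78 for the ov3640 */
          sccb_write_byte(sub >> 8);
          sccb_write_byte(sub & 0xff);
          sccb_write_byte(val);
          sccb_stop();
      }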

  14. Hello Sir,
    I have a question. I am able to see the encoded image (the settings were passed to the sensor using i2c, and the image was encoded using an MJPEG encoder which uses quantization and huffman tables for compression), but a large part of the image is dark gray in colour. I don't know what to check. Do you have any idea what I should check? Is it the pixel clock (kept at the highest value), the encoder or the transmission that is leading to the problem?

    Thanks

    1. Actually, when the image size is smaller the image improves, but as it becomes larger the quality starts degrading.

    2. From your query it is really NOT clear what you are attempting.

      What is your hardware? How is it interconnected? What framework are you running? Can you upload samples of your captured images and share the links?

      StackOverflow.com is ideal for this. Can you post a question (with all the above details) and share the link to it here?...

  15. Hi Chinmay,

    We are using OV5640 for our application.

    We are trying to access the Camera Internal Registers using SCCB Interface. We have sent 0x79 (0x3C and Read bit).
    We are receiving the ACK bit(1’b0) and after that 8’h00 data from the Camera Slave.

    According to the SCCB Protocol Specification(http://www.ovt.com/download_document.php?type=document&DID=63) , we can have a maximum of 3 phases for Write to the Camera. But, in our case, the Register addresses are 16 bit, and the device ID is 8 bit(7 bit Slave ID + 1 bit for Write), and data is 1 byte, so we need 4 phases.
    How can we do any write transaction? Can you help us?

    Datasheet for OV5640 - http://wenku.baidu.com/view/7c2fe97fa417866fb84a8e08.html

    1. 4-byte (or "phase") transactions are supported. It's just that the SCCB spec does not provide a sample example.

      For a good example of preparing a 4-byte buffer (1-byte slave-id + 3-byte msg) for i2c transfer, you may want to refer to the ov5640 Linux kernel driver.
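      Something along these lines (a sketch only; 'client' is the i2c_client from the driver's probe, and the Linux i2c core supplies the slave-id phase itself, so the buffer only carries the 16-bit sub-address plus the data byte):

      #include <linux/errno.h>
      #include <linux/i2c.h>

      /* 4-phase SCCB write = slave-id + 16-bit register address + 8-bit value. */
      static int ov5640_write_reg(struct i2c_client *client, u16 reg, u8 val)
      {
          u8 buf[3] = { reg >> 8, reg & 0xff, val };

          return (i2c_master_send(client, buf, 3) == 3) ? 0 : -EIO;
      }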

  16. Dear Chinmay VS,

    I am considering to use this cam with my 10 inch telescope. In this application I need very long exposures of the order of several minutes for each frame. How will that be done? What is the longest exposure time the cam module is capable of? (The telescope is rotated on a polar axis in synch with the sky.)


    I believe I will have to go for 2X2 binning. I am trying to read through the control register descriptions but haven't figured out all the details.
    I am not handy with C/Linux etc at all. But can zip through assembly language for MCS51, AVR, and ARM but never learned C.
    I have not been able to figure out the pclk rate for a given xclk and resolution.
    10 bit data is preferable. I can do the jpeg compression later after I get the data out in an msd card.
    Can an 89C51ED2 handle the data rate?

    Thank you in advance for any help.


    Sincerely.

    Azzythehillbilly



    1. The requirement of extended exposure times does seem to be a problem. Maybe you can capture multiple frames at the maximum standard exposure supported by the camera and then later combine the information from the multiple frames into a single image.


      The 89C51ED2 seems slightly under-powered compared to what one would typically use. However, it should be possible to use it under a reduced set of requirements like yours (grayscale image at a low resolution at a low fps).

      To get started, you may want to checkout these projects:

      robozes.com/inaki/dproject
      OV6620 interfaced to an Atmel AVR ATmega16.

      ikalogic.com/image-processing-as-a-sensor
      TCM8230MD CMOS camera interfaced to an AtXmega 256 A3.

  17. I just realized that I am dealing with an OV5642 IC. So could you (by any luck) please confirm that an OV5642 (NOT 5640) also uses I2C instead of SCCB? I2C should be a breeze, for I have my own self-written I2C routines but never bothered with SCCB.
    Confirms my suspicion that most Chinese manuals are not to be trusted. I remember the trouble I had with some Chinese transducers. The manuals were pure crap.

    Thanks

    Azzythehillbilly

    1. The OV5642 DOES use SCCB (which conforms to I2C) as mentioned in its product-brief, available in English on ovt.com. You will also find some additional info on the same here.

      The Linux kernel driver for OV5642 will be extremely helpful to get you started with implementing the i2c read/write functions in C for OV5642.

  18. Thanks Chinmay for an amazingly fast response. Wish I could (work with Linux). Never learned C either.

    But once I have figured out all the control registers of the OV5642 writing Ass'y code will be a breeze. And the tight control should be pure joy.

    Do you know any site where the chip is discussed in detail minus of course any mention of C/linux etc?
    Guess I am a member of a dying breed.

    1. I too agree that as long as one is sticking to low power micro-controllers, using Linux doesn't make sense.

      However you might be surprised with the amount of control you have on the hardware with C programming (with inline assembly graciously sprinkled all over it :P).

      Anyways, the links to a couple of micro-controller based image-capture projects (in one of my replies to your comment above) should help you get started.

      PS: I am intrigued by your project. Anywhere (blog/forum/website) i can follow the progress or help? Please let me know.

  19. You are right about low power uControllers and Linux.
    I'm not, to be honest, entirely innocent of C/C++ and have often used C routines called from Assembly. In the eighties, when I first needed floating point but had not written my own hex/decimal converters and four-function routines, I was forced to learn just enough C to get along with the math headers. (I learned Folders methods later and later implemented it in hardware (CPLDs).) Although, strangely, some of my work is good, I have not an easy acquaintance with C and know nothing about Java/Perl/.NET or whatever they are.

    I have no web page/blog, being too old and tired (and incompetent) for that. Wish I had for I have many stories I could tell.

    The OV5642 is just a retired man's hobby due to my interest in Astronomy. No commercial interest. But too many other hobbies leave me little time for this one. If and when I make any progress with the camera I will write in to your blog. It might be of interest to some.

    Azzythehillbilly


    1. Hi Chinmay,

      This is Azzy again. Back from a visit to the moon.

      After I last wrote to you. I looked at the problem ( cam interface) anew. It did not look like my usual MCS-52/AVR would do. After a bit of looking around I found the ARM processors, which seem quite ubiquitous and with the low cost availability of boards it seemed like an ARM processor was the way to go.
      Unluckily no prior exposure to ARM. My search revealed the cancerous growth of Linux based software.
      Anyways I decided to take the plunge and ordered a Cubieboard, a Beaglebone and two bare systems, one a board with an STM stm32f103rbt6 and a few support chips the other with a stm32f103zet6.

      I have managed to get the two stmf32.. boards under full control and can make them do tricks like (that guy with the monkey and the goat) but the Cubie and Beagle are still beyond my reach. I do realize the power of the Linux kernels the SS shells and other stuff but bare metal is closer to my heart. And am too old to learn new tricks. Perhaps the BB and QB will retire/relax in the cabinet.

      I recently built a high speed frequency agile front end with ’52 type processor (IP core, at the limit of what I could get out of it) and an AD9858 so there is no way to stop me from redoing it with the STM chip first. After that I will get into the camera. (Barring other distractions).

      So as things happen I will post them on your blog if you think that is OK. Hopefully it should be interesting.

      Azzy

    2. Azzy,

      I'm hoping to attach an OmniVision camera module to a BeagleBone Black for a coin scanning application I'm developing.

      Have you had any luck getting the Cubie and Beagle to talk to each other?

      RayH

  20. Hi Chinmay,
    Brother, I need help regarding the ov3640. I2C is working fine; after configuring the registers I am getting vsync and href but the data is not coming?? Also pclk is coming, but I can see on the oscilloscope that its voltage amplitude is very low, i.e. 0.3V p-p, whereas for vsync and href the peak-to-peak voltage is 2.5V.

    So far I have configured only 4 registers:

    reg | value
    0x3400 | 0x02
    0x3400 | 0x80
    0x30B0 | 0xFF
    0x30B1 | 0x3F

    Waiting for your reply.

    1. First check that CAM_ANALOG and CAM_DIGITAL_IO are being fed the proper voltages. Ensure that the supply can source sufficient current and that the voltages do not drop under load.

      IIRC, the power-on sequence is important as well. Ensure the sequence of power-supplies, external-clock and reset is applied as described in the OV3640 app-note/dev-guide.

      Following this, upon turning on the image-preview of the ov3640, a proper PCLK (voltage=IO-voltage) must be available.

      Now, the rate of VSYNC pulses = FPS. If VSYNC is always high/low along with a proper active PCLK, it indicates an improper software configuration of the OV3640, in which case additional registers need to be initialised.

    2. ^^^^^ Sorry for the extra spacing

      Hi Chinmay, brother, thanks for your reply, you are doing great work.

      I have checked there is no loading effect. I am giving 2.5 volts to both CAM_ANALOG and CAM_DIGITAL as 2.5V falls within the range mentioned in the datasheet, i.e. (analog 2.5V ~ 3V and I/O 1.7V ~ 3V).

      For the power-on sequence, I am using the internal DVDD and there is no i2c access during the power-on period, so as mentioned in the datasheet there is only one restriction: the delay (T0) from VDD_IO stable to VDD_A stable should be greater than or equal to 0ms. What I understand from this is that I should only take care that VDD-A is not applied before VDD-IO. Applying them at the same time should not be a problem?

      I am keeping RESET at 1 all the time as it is active low, and PWDN at 0 all the time.
      I have recently used the ov7725 (VGA resolution) but it was quite simple compared to the ov3640; there was nothing like a power-on sequence, you just provide it with the external xclk and it outputs the frame in DVP format. In the case of the ov7725 there was even no need to configure any register, as the default values of the registers were sufficient.

      As I am new to image processing, can you please explain to me the difference between the sensor preview mode and capture mode, and which mode has to be used to get the frame out of the sensor?

      vsync is not high or low all the time: Vsync frequency = 15 Hz, Href freq = 24.5 KHz, and pclk freq is 57 MHz, which seems to be correct. But as mentioned before, the pclk voltage amplitude is very low, i.e. 0.3 V (p-p), which seems to be a problem??????? Also I can see on the chipscope (a debugging tool for FPGAs) that there is no data coming, whereas when I am configuring the register 0x30b0 I am setting the data pins' direction as output.

      Waiting for your reply, thanks.



    3. Hi chinmay

      The voltage on CAM_DIGITAL_IO is dropping under load now. First it was OK but now it is dropping?? Now I am using different supplies for CAM_analog and CAM_digital_io. Please help.

    4. Until the hardware is stable, it would be futile to attempt debugging the software (i.e. register settings). Ensure that both the independent power supplies are noiseless and capable of supporting a high current draw.

      The terminology of "preview" and "capture" modes is derived from the application use-cases. The preview mode is usually a lower resolution setting that can be run for extended periods. The capture mode is usually the highest resolution supported by the camera. However running the sensor in this mode for extended periods risks heating-up the sensor and damaging it.

  21. Thanks a lot bro for your help, now it's much better, just getting some black lines in the image :)

    1. The black lines are usually caused by :
      - Noisy power-supply.
      - Improper input XCLK.

      Ensure that both the CAM_ANALOG and CAM_DIGITAL_IO are from independent rails to eliminate any possibility of parasitic effects.

      Also provide a clean XCLK at the higher-end within the specified operating range of the camera module (~24Mhz in this case).

  22. Hello chinmay,
    sorry to bother you. Did you already manage to use the ov3640 camera sensor along with a current linux kernel like version 3.10? With the old driver which I got from e-con-systems it works fine for linux kernel 2.6.37. But now I want to use the implemented isp driver of the linux kernel. For that I found a driver here: http://git.linuxtv.org/pinchartl/media.git/blob/refs/heads/sensors/ov3640:/drivers/media/i2c/ov3640.c
    But with that driver I don't get right color and image size. For now it would be fine to know whether you used that sensor with a current kernel.
    Best Regards, Tom

    1. Hi Tom, It's been quite a while since I last played around with this. I haven't tested this on any recent kernel versions and I do not have the hardware with me now either. Sorry about that.

      Since you mention that you have got it working (albeit with improper color/resolution), one quick thing to do would be to compare the register settings of the e-con and linux-tv drivers. You can even simply try the e-con register setting arrays within the new style driver from linux-tv. Would it be possible for you to share a link to the e-con ov3640 driver that works for you?

      Also which dev-board/SoC are your trying this on?

    2. Hello Chinmay,
      thanks for your quick reply. I use a gumstix overo board. Here is a link for the old driver:
      http://www.roadnarrows.com/distro/e-con/e-CAM32_OMAP_Gstix/Software/e-CAM32_OMAP_GSTIX_ver_D6.371_A3.201/Driver/Source/e-CAM32_OMAP_GSTIX/
      The main thing I couldn't clearly find out is what the differences between the vsync modes of the camera sensor are. The old driver uses vsync mode 3 and the newer driver uses vsync mode 2. The datasheet doesn't describe the differences. So I tried to set the register as it was in the old driver, but it didn't work. I didn't even get image data. The isp registers should now also be similar to the register settings of the old driver. Can you tell me what the differences between these vsync modes are?
      Best Regards, Tom

    3. The "VSYNC_MODE" you mention is controlled by bits 1 and 7 of register 0x3600. As you too have found out, these are NOT described in a couple of versions of the ov3640 datasheets available online. The Omnivision FAE for your region should be able to help you in obtaining access to the latest version of the complete datasheet.

      From the inline comments in the linux-tv driver, we can understand that :

      - In mode 2, the VSYNC line is momentarily held active only at the start of each frame.

      - In mode 3, the VSYNC line is held active during the entire period the active frame data is being transferred on the data lines. It goes inactive during the vblank period between 2 consecutive frames.
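      If it helps, selecting a mode is just a read-modify-write of those two bits. The sketch below is illustrative only; ov3640_read_reg()/ov3640_write_reg() stand in for the driver's own register accessors:

      #define OV3640_REG_VSYNC_CTRL  0x3600
      #define VSYNC_MODE_BIT1        (1 << 1)
      #define VSYNC_MODE_BIT7        (1 << 7)

      static int ov3640_set_vsync_mode(struct i2c_client *client, bool bit1, bool bit7)
      {
          u8 val;
          int ret = ov3640_read_reg(client, OV3640_REG_VSYNC_CTRL, &val);

          if (ret)
              return ret;

          /* Clear both mode bits, then set the requested combination. */
          val &= ~(VSYNC_MODE_BIT1 | VSYNC_MODE_BIT7);
          val |= (bit1 ? VSYNC_MODE_BIT1 : 0) | (bit7 ? VSYNC_MODE_BIT7 : 0);

          return ov3640_write_reg(client, OV3640_REG_VSYNC_CTRL, val);
      }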

    4. Hello Chinmay,
      Again, many thanks for your reply. When I configure the isp_sysconfig (isp.c file; isp_core_init function) to nostandby I get images. Originally it was configured as smart or forcestandby; when that was configured I wasn't able to get an image. The output said that the ccdc won't become idle. Do you know why this happens?
      However, when I get an image it is still not OK in color and/or structure. I get the interrupts and it seems to work. I configured the ov3640 to give a standard test pattern. I configured the isp pipeline with the media-ctl tool and grabbed an image with the yavta tool.
      my command lines look like this:
      sudo ./media-ctl -v -r -l '"ov3640 3-003c":0->"OMAP3 ISP CCDC":0[1], "OMAP3 ISP CCDC":1->"OMAP3 ISP CCDC output":0[1]'
      sudo ./media-ctl -v -V '"ov3640 3-003c":0 [UYVY2X8 640x480], "OMAP3 ISP CCDC":1 [UYVY2X8 640x480]'
      sudo ./yavta -p -f UYVY -s 640x480 -n 4 --skip 3 --capture=13 --file=img#.raw /dev/video2

      the picture I got looks like this: http://s7.directupload.net/file/d/3435/2s5kuacl_png.htm
      do you have an idea what the problem might be?
      Do you know on which signal each of these 3 functions (ccdc_vd0_isr; ccdc_vd1_isr; ccdc_hs_vs_isr) is called? What signal has to be high?
      Many thanks for your help.
      Best regards, Tom

  23. Hello Chinmay

    I want to use the ov5640 in my streaming camera but I am not finding its datasheet very useful, as there are so many registers that need to be configured. Since I am a Verilog developer and don't know much about Linux, I am unable to find any useful stuff regarding register configuration. So could you please help me? I just want the sensor to output a continuous 720p stream at 60 or 30 fps in ycbcr422 data format through the DVP parallel interface. Any list of registers for my particular case??? Thanks in advance

    Best Regards: Lynch

    1. I2C is working fine; I just need the list of registers that have to be configured for my case.

      Regards: Lynch

    2. You can obtain a list of registers (and values) that need to be configured within existing drivers for ov5640 here and here

      (Note that both above links are from separate sources and may contain conflicting values/order. Feel free to experiment and choose what works best for you.)

      The arrays in the above drivers are address-value pairs. Entire arrays need to be written to the sensor module using i2c before turning on the sensor.

      The usual approach is to first write the init array, followed by the resolution, fps and img-format arrays. Next, (optionally) write any lens-correction, exposure and white-balance settings. Finally, turn on the streaming bit of the camera.
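      A typical way to push those arrays to the sensor is sketched below. The array contents are truncated placeholders (take the full tables from the drivers linked above), and ov5640_write_reg() is the 3-byte write helper sketched in an earlier reply:

      struct reg_value {
          u16 addr;
          u8  val;
      };

      /* Truncated placeholder - use the full arrays from the linked drivers. */
      static const struct reg_value ov5640_init_settings[] = {
          { 0x3103, 0x11 },
          { 0x3008, 0x82 },
          /* ... */
      };

      static int ov5640_load_regs(struct i2c_client *client,
                                  const struct reg_value *regs, size_t count)
      {
          size_t i;
          int ret;

          for (i = 0; i < count; i++) {
              ret = ov5640_write_reg(client, regs[i].addr, regs[i].val);
              if (ret)
                  return ret;
          }
          return 0;
      }

      /* Order: init array, then resolution/fps/format arrays, then the optional
         tuning arrays, and finally the streaming bit. */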

    3. Thanks for the timely reply and the above two links

      I studied the datasheet but am unable to understand how to set the desired fps. Is there any specific formula that tells how the 6 registers associated with the PLL (0x3034 ~ 0x3039) relate to the fps? I mean, if my input clock (XVCLK) to the PLL module is, let's say, 25 MHz, then to get 30 fps what should be written in those 6 registers? Any example you can give will be greatly appreciated.

      Regards: Lynch

  24. I am interfacing Omnivision's (ov5640) image sensor with an FPGA and was able to capture a raw frame in YUV422 format through the digital video port (DVP) parallel output interface. Now I am trying to capture the JPEG compressed output. The datasheet does not provide much detail on the JPEG mode, and since I have never studied JPEG mode I am unable to understand what the sensor will output in this mode (what the bytes mean; e.g. in raw YUV mode I knew the bytes coming are luma and chroma intensity values) and whether the sensor will output only the scan data or all the other necessary headers and info required to make a complete JPEG file. I will be thankful for any help.

    Regards
    Az Ahmad

    1. In JPEG mode, the compression-engine block of the Omnivision sensor can be configured to compress the captured YUV image into a JPEG buffer. This is written to the on-chip FIFO (next stage following the ISP pipeline).

      Now the buffer read from the Omnivision chip on to the host is a JPEG image buffer which can either
      - be saved to a file,
      OR
      - be decompressed using a jpeg-decoder library/routine and displayed.

      Experiment with the various JPEG configuration registers (Tables 7-17, 7-18 & 7-19) of the Omnivision OV5640 Datasheet.

  25. Hi, I am using the ov5640. For any resolution it works fine at lower frame rates, e.g. if my resolution is 720p (raw YUV) it works fine up to 25 fps, but when I try to increase my frame rate over 25 fps, the sensor stops giving pclk. For 25 fps, PCLK was coming in at around 75 MHz. For any resolution it works fine as long as the PCLK is in the 75-80 MHz range. Any suggestions? I am using the DVP interface.

    1. Not sure about this myself.

      However, table 8-5 of the Omnivision OV5640 datasheet containing the timing characteristics lists the maximum PCLK supported as 96MHz, with additional restrictions.

      As described in comment e alongside the table, it appears that high PCLK is supported for RAW@15fps. As we approach higher resolutions/speeds like 5M YUV@15fps (and above) the recommendation seems to be to go for MIPI interface and NOT use DVP.

  26. Hi, I am using the OV5640. The driver is configured to use the 10-bit DVP. When I tested, I found data is observed only on D9-D2, and not on D1-D0. Could you please tell me if it is a hardware issue or a wrongly-set register settings issue.

  27. Check OV5640's SC_PLL_CONTRL0 register (address 0x3034). The lower nibble should read 0xA for 10-bit mode.

    Another possibility is that the buses are NOT perfectly aligned between the host chip and OV5640. Which D2-D9 pins are you measuring; are they the ones on the host chip? I have had instances where D0-D9 from OV5640 were connected to D2-D11 on the host chip.
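    As a sanity check, reading that register back is straightforward; here is a sketch using the same register-read helper pattern as earlier (ov5640_read_reg() is a stand-in for your driver's accessor):

    /* SC_PLL_CONTRL0 (0x3034): lower nibble 0xA selects 10-bit output mode.
       Other values select other widths - refer to the datasheet. */
    static int ov5640_check_bit_mode(struct i2c_client *client)
    {
        u8 val;
        int ret = ov5640_read_reg(client, 0x3034, &val);

        if (ret)
            return ret;

        pr_info("OV5640 bit-mode nibble: 0x%X (0xA = 10-bit)\n", val & 0x0F);
        return 0;
    }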

    1. The datasheet describes OV5640's SC_PLL_CONTRL0 register (address 0x3034) as the MIPI bit mode. Does it relate to the DVP output?

      If it is in 8-bit mode, which data pins will have data (D9-D2 or D7-D0)?

    2. Hi Chinmay,
      We observed that in 8-bit mode the OV5640 will output data on D9-D2. On our board the hardware connection with the host was wrong (it should have been connected to D7-D0 but instead it was connected to D9-D2).

      The information regarding limitation in DVP interface with high PCLK from one of your replies to another query was helpful in debugging.

      Thanks

    3. aaah... :-) there you go. Glad to know you got it working.

  28. Register settings are the same as you mentioned above.

    I measured D0-D9 from the ov5640.

    I am able to read data but it is full of noise, and I am still not getting data on D0-D1.

  29. Hi, maybe one of you who is working with this could help. I'm using this camera with an FPGA and I've seen links to the drivers (and got information out of them) but sincerely, copying all those registers and changing the syntax is a bit hard (I'm programming in VHDL, not in C++ for Arduino) and above all I want to understand what I'm doing.

    My problem, as a think was mentioned above, is the setting of the registers: I'm using ov5642 and managed to access the bank of registers, changing the value of some of them:

    register value
    0x3017 0x7f
    0x3018 0xfc

    The thing is that with this I manage to get the VSYNC, HREF and PCLK signals, all of them for synchronizing, but on the other hand I get no response on the data pins (the signals are quiet, as if the pins weren't enabled as outputs).

    I'm sure I'm missing some important register, so I'd be grateful if you, Chinmay, or someone else who has had the same problem or knows the solution, would tell me what is left or what I'm doing wrong.

    Thanks,

    Fernando

    1. The camera, by the way, is the model ov5642. I'm using a clock of 25MHz as input to the XCLK pin. The rest of the registers remain at their default values.

  30. I have an ov3640 working with an STM32F429. It works well at a resolution of 640x480. When I try to change the resolution to 1600x1200 with

    {0x3362 ,(((V_PX+6) & 0xFF00)>>4) | (((H_PX+8) & 0xFF00)>>8)},
    {0x3363 ,((H_PX+8) & 0xFF)},
    {0x3364 ,((V_PX+6) & 0xFF)},
    {0x3403 ,0x42},
    {0x3088 ,(H_PX & 0xFF00)>>8},
    {0x3089 ,(H_PX & 0xFF)},
    {0x308a ,((V_PX+2)& 0xFF00)>>8},
    {0x308b ,((V_PX+2)& 0xFF)},
    {0x3f00 ,0x09},
    {0x3400 ,0x01},
    {0x3404 ,0x30},

    the vsync-signal gets lost. Can someone help me get a better resolution than VGA?

  31. now it is working with
    http://read.pudn.com/downloads189/sourcecode/embed/890065/TI_EVM_3530/SRC/CSP/OMAP/CAMERA/Sensor/OV3640/ov3640.h__.htm

  32. Hi, I am working with the OV5640 camera sensor. It is working properly at 720p with frame rates of 30fps and 20fps. But at 10fps there is a lot of noise in the video.
    It is similar to the fixed pattern noise explained in the following link-
    http://www.cambridgeincolour.com/tutorials/image-noise.htm

    Can you please suggest some appropriate register settings that could reduce this noise.

  33. Hello Chinmay V S
    I am using the OV5642 camera sensor with the IMX6 CSI0 interface and I am facing some issues:
    1. I2C is not getting detected
    2. The OV5642 camera sensor is not getting detected

    link for connection b/w imx6 and OV5642 camera, source(code):
    https://www.dropbox.com/sh/qooc04hca6tte3w/AAB01k0KmiagLkfLSwxz4QIXa?dl=0

  34. hello sir,
    I'm using the OV5642 camera module with an Arduino. Do you have any code related to it? Kindly post it; I have been searching for it for many days and didn't find anything related to it. I have directly connected the camera module to an Arduino Mega. I need code to take a picture.
    Thank you

    1. Can anyone kindly post the code to "capture the image using the OV5642 module" which is directly connected to an Arduino without any shield?

      Thank you.

  35. Why does someone go all the way to define their own bus-spec when the hardware obviously works on I2C???

    Obvious... to save millions in royalties they'd have to pay Philips.

    Sony did a very similar thing with their earlier Trinitron TVs so they weren't crippled with PAL royalties.

    Answer to all questions, follow the money trail.....

  36. Hello, rather late to your article, but thank you, very informative.
    If I may ask you, I don't know if you recall, what is the response time of these cameras to the register writes?
    I'm wondering if adjustments to the shutter and other parameters can be done per-frame. Two examples would be to increase dynamic range by combining frames with different exposures, or quickly windowing into image details at higher resolution.
    Thanks!
