Uni Xray blog


CCD vs CMOS – Which is better?




Many articles have been written about CMOS (complementary metal-oxide-semiconductor) and CCD (charge-coupled device) imagers, and the debate over which is better continues. To date there is no definitive answer, because the subject is complicated: both technologies evolve continuously with changing requirements, some applications work better with CMOS and others with CCD, and each has its own pros and cons depending on the application.


This article therefore aims to bring more clarity to the topic by walking through various scenarios, lesser-known trade-offs, a comparison between CMOS and CCD, and cost factors.


CCD vs CMOS – How do they work?


Imagine a group of buckets collecting rainwater. From the quantity of water in each bucket, you can infer the density and shape of the cloud overhead. CCD and CMOS sensors work the same way.



There are two types of image sensors: CMOS and CCD. Both use arrays of pixels (the buckets) to detect light and capture images digitally. When a photon strikes a pixel, it excites an electron to a higher energy level, freeing it to move through the material. This free electron is called a photoelectron (the raindrop).


In a CCD image sensor, each pixel's charge is transferred through a small number of output nodes, converted to voltage, buffered, and sent off the chip as an analog signal. Nearly the entire pixel can be devoted to light capture, and because all pixels share the same output circuitry, output uniformity is high, which enhances image quality.


In a CMOS sensor, every pixel has its own charge-to-voltage conversion, and the sensor typically also includes noise-correction, amplifier, and digitization circuits, so the chip outputs digital bits. This increases design complexity and reduces the area available for light capture. Uniformity is lower because each pixel performs its own conversion, but the parallel architecture allows higher bandwidth at high speed.


In a nutshell, a CCD moves photogenerated charge from pixel to pixel and converts it to voltage at an output node, whereas a CMOS sensor converts charge to voltage inside each pixel.




CCD imagers have a single readout node in one corner, while CMOS imagers have readout circuitry at each pixel.
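The architectural difference above can be sketched in code. The following toy simulation (the array size, conversion gain, and 1% gain spread are illustrative assumptions, not sensor specifications) reads out the same charge array two ways: serially through one shared amplifier, CCD-style, and in parallel with per-pixel converters, CMOS-style:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 4x4 pixel array of photogenerated charge (electrons).
charge = rng.integers(1000, 5000, size=(4, 4)).astype(float)

def ccd_readout(pixels, volts_per_electron=1e-6):
    """CCD-style readout: charge is shifted pixel by pixel to a single
    output node, where one shared amplifier converts it to voltage."""
    out = []
    for row in pixels:                # shift rows toward the serial register
        for q in row:                 # shift charge packets to the output node
            out.append(q * volts_per_electron)  # one shared conversion stage
    return np.array(out).reshape(pixels.shape)

def cmos_readout(pixels, volts_per_electron=1e-6, gain_spread=0.01):
    """CMOS-style readout: every pixel has its own converter, so conversion
    is parallel, but small per-pixel gain differences reduce uniformity."""
    gains = 1 + gain_spread * rng.standard_normal(pixels.shape)
    return pixels * volts_per_electron * gains  # per-pixel conversion

ccd_image = ccd_readout(charge)
cmos_image = cmos_readout(charge)

# The shared CCD amplifier gives uniform gain; the per-pixel CMOS
# converters introduce a small fixed-pattern spread.
print("CCD gain spread: ", np.std(ccd_image / charge))
print("CMOS gain spread:", np.std(cmos_image / charge))
```

In this model the CCD's single amplifier yields essentially zero gain variation, while the per-pixel CMOS converters show a small spread, mirroring the uniformity trade-off described above.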


History of CCD and CMOS


Although CMOS and CCD imagers were both invented by Dr. Savvas Chamberlain in the late 1960s and 1970s, CCD became the dominant technology, because it could deliver superior image quality with the fabrication technology available at the time, and at low power. CCDs saw steady increases in quantum efficiency and signal handling, along with reductions in pixel size, operating voltage, and dark current.


CMOS, by contrast, suffered from poorer uniformity. Advances in lithography in the 1990s, however, allowed designers to revisit CMOS, driven by renewed interest in lower power consumption, lower fabrication cost, greater integration of smaller parts, and camera-on-a-chip designs. Achieving those benefits while delivering top image quality took considerable time and money. CMOS first outperformed CCD in high-volume consumer imagers, such as mobile-phone cameras, and there the shift from CCD to CMOS was rapid and volatile.


Eventually, however, CMOS joined CCD as a major mature technology.


Let us next go through various scenarios to understand which is better: CCD or CMOS? 


Scenarios to decide which is better: CCD or CMOS? 


  1. Machine vision


When it comes to machine vision, noise and speed are the key parameters, and CCD and CMOS differ in how the signal travels from signal charge to an analog signal and then to a digital one. In a CMOS sensor, the front end of the data path is massively parallel, so each amplifier needs only low bandwidth. A CCD also has parallel, fast output channels, but far fewer of them, so each channel must sustain a much higher bandwidth, and higher bandwidth brings higher noise. As a result, CCDs generally show more read noise than CMOS at comparable speeds.
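To see why fewer output channels force higher bandwidth, the arithmetic can be sketched directly. The channel counts and the 400-megapixel-per-second target below are illustrative assumptions, not figures from any particular sensor:

```python
# Rough per-channel rate needed to read a sensor at a given total pixel rate.
# All numbers are illustrative, not datasheet values.

def per_channel_rate(total_pixel_rate_mps, channels):
    """Pixel rate each output channel must sustain (megapixels/s)."""
    return total_pixel_rate_mps / channels

total_rate = 400.0  # target: 400 megapixels/s off the sensor

# A CCD might funnel everything through a handful of fast output taps,
# while a CMOS sensor reads thousands of columns in parallel.
ccd_channels = 4
cmos_channels = 4000

print("CCD per-channel rate: ", per_channel_rate(total_rate, ccd_channels), "MP/s")   # 100.0
print("CMOS per-channel rate:", per_channel_rate(total_rate, cmos_channels), "MP/s")  # 0.1

# Amplifier noise grows with bandwidth (roughly with its square root, for
# white noise), so the fast CCD channels pay a noise penalty.
```

Each hypothetical CCD tap must run a thousand times faster than each CMOS column channel, which is the bandwidth-versus-noise trade-off described above.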


  2. NIR (near-infrared imagers)


Near-infrared imaging benefits from a thick epitaxial (epi) layer: a CCD's epi can be more than 100 microns thick, whereas a CMOS epi is typically only 5-10 microns. A thick epi layer is simpler to accommodate in a CCD's circuit, so CCDs are far more sensitive in the near-infrared than CMOS.


  3. TDI (time delay and integration imagers)


In TDI (time delay and integration) imagers, CCDs sum only charge signals, whereas CMOS can sum either charge or voltage signals. Charge summing is noiseless; voltage summing is not. Voltage-based CMOS TDI delivers cost-effective high performance, while charge-based CMOS TDI provides the highest performance.


Thanks to its higher sensitivity, CCD TDI excels at low light levels but runs into a speed limit, whereas CMOS TDI has the speed advantage.
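The charge-versus-voltage summing difference can be illustrated with a toy SNR model. The stage count, signal level, and read-noise figure below are assumed values chosen for illustration, not measurements:

```python
import math

# Toy SNR model for an N-stage TDI line-scan sensor.
# Assumptions (illustrative): each stage collects S signal electrons,
# and each readout adds read noise of sigma_r electrons RMS.

def tdi_snr_charge(stages, signal_per_stage, read_noise):
    """Charge-domain TDI (CCD-style): summing is noiseless, so read
    noise is paid only once, at the final output."""
    signal = stages * signal_per_stage
    shot = math.sqrt(signal)                 # photon shot noise
    return signal / math.sqrt(shot**2 + read_noise**2)

def tdi_snr_voltage(stages, signal_per_stage, read_noise):
    """Voltage-domain TDI (CMOS-style): each stage is read out, so read
    noise is added 'stages' times, combining in quadrature."""
    signal = stages * signal_per_stage
    shot = math.sqrt(signal)
    return signal / math.sqrt(shot**2 + stages * read_noise**2)

N, S, sigma = 64, 10.0, 5.0                  # 64 stages, dim signal
snr_charge = tdi_snr_charge(N, S, sigma)
snr_voltage = tdi_snr_voltage(N, S, sigma)
print(f"charge-sum SNR:  {snr_charge:.1f}")
print(f"voltage-sum SNR: {snr_voltage:.1f}")
```

With these assumed numbers, noiseless charge summing roughly doubles the SNR of the dim signal, which is why charge-domain TDI wins at low light levels.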


  4. Electronic multiplication


An electron-multiplying CCD is most beneficial when high-speed imaging is not required, because increasing its speed also raises its read noise. CMOS, by contrast, maintains very low read noise even at speed, although it lacks the TDI and near-infrared advantages of CCD.


  5. Other scenarios


If your customer needs to measure extremely faint light sources with an adequate signal-to-noise ratio and hour-long exposures, CCD imagers work well.


However, most non-scientific imaging applications also need short video exposures, and for those CMOS is superior in both performance and cost. Companies have therefore been switching to CMOS, and CCD manufacturing volumes have declined.


As a result, ON Semiconductor began discontinuing the former Kodak devices in 2019. That is not the end of CCD technology, though: specific Sony CCDs will remain available until 2026.


Companies will continue to manufacture CCDs for astronomical applications such as spectroscopy and photometry, and for life-science applications such as fluorescence and bioluminescence imaging.


Less demanding applications, as well as those requiring high-speed imaging, will switch to CMOS, and over the next five years CMOS will take over even more applications.


To meet requirements both today and tomorrow, many companies offer both modern CMOS and high-performance CCD sensors.


CMOS vs CCD: A comparative study


In the past few years, CMOS has been used more widely than CCD, but not in every respect. Let us compare CCD and CMOS imagers feature by feature:


| Features | CCD | CMOS | Which is better? |
| --- | --- | --- | --- |
| Availability | Most CCDs have been discontinued; only a few expensive CCDs are still made by top companies | Technology is upgrading rapidly and new CMOS imagers keep appearing; companies are investing more in CMOS | CMOS is dominant; CCD will remain only in specific niches such as scientific instruments |
| Cost | Large CCDs are costly, with a complex digital camera and external analog electronics | Large CMOS is also costly, with complex digital electronics, but the analog chain is eliminated | For simple cameras, CMOS is cost-effective; for cooled low-light cameras there is little difference |
| Speed | 1-40 megapixels per second | 100-400 megapixels per second | CMOS |
| Read noise per pixel at readout | 5-10 electrons; about 1 electron for an electron-multiplying CCD | 1-3 electrons | CMOS or electron-multiplying CCD |
| Cooling | Deep cooling is easily achieved | Cannot operate under deep cooling | CCD |
| Electronic shutter | Frame and interline transfer | Rolling and global shutter | No major advantage |
| Mechanical shutter | Used for image calibration | Used for image calibration | No major advantage |
| Pixel size | 3-25 microns | 2-9 microns | Larger pixels suit telescopes; although CMOS pixels are small, larger-pixel models have recently been introduced |
| Well depth (electrons held per pixel) | 40,000-200,000 | 30,000-75,000 | CCD, though the gap can be narrowed by stacking in CMOS |
| A/D converter bits | 16 bits | 12 bits | CCD |
| Binning (combining pixels for resolution or sensitivity) | Easily achieved | Comparatively limited | CCD |
| Light emission via LED | Easily reduced | Not so easy | CCD, though CMOS is improving |
| Infrared imaging | Possible | Currently not possible | CCD |
| Fixed-pattern noise | Occasional, can be reduced | Can be a problem | No major advantage, though CCD has the edge |
| Calibration (how accurate the final image is) | Effective and well established | Can be more complex | CCD |
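The speed row of the table translates directly into frame rates. Here is a quick back-of-the-envelope calculation; the 4-megapixel sensor size is a hypothetical example, and the rates used are the top of each range quoted above:

```python
# Back-of-the-envelope frame rates from the quoted pixel-rate ranges.
# The sensor resolution is an assumed example, not a datasheet figure.

def frames_per_second(megapixels, megapixels_per_second):
    """Maximum frame rate a readout rate can sustain for a given sensor."""
    return megapixels_per_second / megapixels

sensor_mp = 4.0  # hypothetical 4-megapixel sensor

ccd_fps = frames_per_second(sensor_mp, 40.0)    # top of the CCD range
cmos_fps = frames_per_second(sensor_mp, 400.0)  # top of the CMOS range

print(f"CCD:  {ccd_fps:.0f} fps")   # 40 MP/s / 4 MP  -> 10 fps
print(f"CMOS: {cmos_fps:.0f} fps")  # 400 MP/s / 4 MP -> 100 fps
```

Even at the top of its range, the CCD readout sustains an order of magnitude fewer frames per second than the CMOS readout for the same hypothetical sensor.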


How about the cost?


For numerous commercial decision-makers, it is the net value (performance achieved for the amount paid) that really matters. Let us go through the cost perspective before selecting either CCD or CMOS imager:


  1. Leverage: Whether CCD or CMOS, an off-the-shelf imager is less expensive than a fully customized one. For custom designs, a CMOS imager costs more to develop than a custom CCD, because it requires deeper, more expensive submicron masks and more on-chip circuitry to deliver its performance. The value proposition may therefore still favor a custom CCD.
  2. Volume: Although CMOS involves higher development costs, its unit cost is lower. At high volumes, a low unit cost matters more than a low development cost.
  3. Supply security: Even with a good value proposition, choose a supplier that can guarantee a secure, long-term supply of the CCD or CMOS sensor.
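The leverage-versus-volume trade-off in points 1 and 2 can be framed as a simple break-even calculation. All cost figures below are hypothetical, chosen only to make the shape of the trade-off concrete:

```python
# Break-even volume between a high-development-cost / low-unit-cost design
# (CMOS-like) and a low-development-cost / high-unit-cost design (CCD-like).
# All cost figures are hypothetical.

def total_cost(development_cost, unit_cost, volume):
    """Total program cost: one-time development plus per-unit cost."""
    return development_cost + unit_cost * volume

def break_even_volume(dev_a, unit_a, dev_b, unit_b):
    """Volume above which option A (higher development cost but cheaper
    units) becomes the cheaper program overall."""
    return (dev_a - dev_b) / (unit_b - unit_a)

cmos_dev, cmos_unit = 2_000_000.0, 20.0  # hypothetical CMOS: costly to develop
ccd_dev, ccd_unit = 500_000.0, 50.0      # hypothetical CCD: cheaper to develop

v = break_even_volume(cmos_dev, cmos_unit, ccd_dev, ccd_unit)
print(f"break-even at {v:,.0f} units")   # beyond this, CMOS is cheaper overall
```

With these made-up figures the crossover sits at 50,000 units, which is the sense in which, at high volume, a low unit cost matters more than a low development cost.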




So, the upshot? CMOS surpasses CCD in most mainstream imaging applications. For TDI at low light levels, CCD can still outperform CMOS, and in the NIR, CCD is the better option. For UV imaging, CMOS excels at extreme readout rates compared to CCD. Beyond performance, factors such as leverage, volume, and supply security decide the cost-performance trade-off between CMOS and CCD imagers.


Selecting the right imager for an application is no easy task. Every application has unique requirements, and those requirements drive both performance and cost. It is therefore impossible to declare either CMOS or CCD the best choice for every application.