ZWO ASI174 & ASI120 Digital Video Cameras- Hardware Settings & Differences
These are some notes originally prepared for personal use which others might find useful in getting the most out of their cameras. I started writing these pages to help me understand some of the hardware features and settings used for CMOS-based cameras used for digital video imaging of planets. Writing these notes helped me understand terms like unity gain, shot and read noise, binning and dynamic range, and helped me better understand how these terms relate to each other. I have latterly expanded the scope and extent of this document, but even so it is not meant to be a comprehensive set of tutorial notes for CMOS-based digital video camera imaging.
I have used two popular digital video cameras made by ZWO in China, the monochrome ASI120MM and ASI174MM, as vehicles to explore the concepts used in CMOS digital video imaging, although many of the ideas are common to CCD-based astronomical digital imaging too.
The information presented has been gleaned from answers to questions on the ASI Yahoo Group users forum by Sam Wen of ZWO and also from the ZWO ASI174 technical brochure – plus some background reading, especially of Camera Sensors and Control by Chris Wood, as well as a Point Grey white paper on ‘How to Evaluate Camera Sensitivity‘. These papers are so useful that here are the documents in pdf format to download in case the links don’t work;
If you don’t have time to read all these notes and just want the bottom line on the differences between the ZWO ASI120 and ASI174 cameras;
- both are excellent planetary cameras with similar levels of read noise and sensitivity but quite different sized pixels
- when used at the same image scale (changing magnification to account for differences in pixel size) and gain, the smaller pixel size of the ASI120 (3.75um) leads to significantly brighter images which are closer to saturation than the ASI174 (with 5.86um pixels)
- when used as above, the signal-to-noise ratio will be similar for each camera but the ASI174 has a higher dynamic range giving it the extra headroom to also successfully image brighter regions. This makes this camera an excellent lunar imaging camera where there is detail in bright areas and shadow
- the ASI174 has a larger chip with twice the number of pixels as the ASI120 allowing detailed imaging of larger areas
- the ASI174 has the potential to run much faster even running at full frame where the pixel count is twice that of the ASI120
- the ASI174 has a global shutter whereas the ASI120 has a rolling shutter. The global shutter leads to less ‘rubber-sheet’ type distortion when imaging large areas especially in conditions where the scope may be buffeted by wind
- Pixel Well depth and Quantum Efficiency
- System Gain
- Bit Count
- Unity Gain
- Analog and Digital Gain
- Well Depth and System Gain
- Output Table
- Analog and Digital Gain
Comparative Image Brightness
- At same optical amplification factor
- Same image scale in terms of arcsecs per camera pixel
- Pixel and Circuit Noise (Read Noise)
- Read Noise and Bit Depth
- Read Noise for ASI120
- Random and Non-Randomly Distributed Read Noise
- Shot Noise- Introduction
- Shot Noise and Gain
- Read Noise versus Shot Noise and Signal to Noise Ratio (SNR)
- Shot Noise- High gain and short exposures or low gain and long exposures?
- Read Noise- High gain and short exposures or low gain and long exposures?
- Noise Summary- High gain and short exposures or low gain and long exposures?
Other Gain/Exposure Considerations
ASI120 vs ASI174 which is better?
Camera Comparison Table
12-bit, reconstructed 16-bit, & 8-bit – It’s all very confusing!
- 8-bit imaging – leading to 16-bits on stacking
- 12-bit imaging
- Conversion from 12-bit to pseudo 16-bit
- Deep Sky Imaging and 16-bit signals
- 10-bit ADC mode
- Bit depth Comparison Table
Pixel Well Depth and Quantum Efficiency
When photons are absorbed by the pixels in a CMOS sensor chip, free electrons are liberated which increase the charge level in the pixel. The total pixel charge is simply the number of free electrons in this so-called pixel well. In the case of the ASI174 camera, the pixel well can hold up to 32,400 free electrons before it is full- the well depth is said to be 32,400 electrons. Further photons arriving at a full well do not further increase the charge on the pixel and the pixel well is said to be saturated.
To relate the change in electron count in the well to the number of photons collected, you need to know the quantum efficiency (QE) of the chip, which tells you what fraction of the arriving photons generate a free electron. Below you can see the QE vs. wavelength for the ASI174 and ASI120 cameras, which shows a peak QE close to 80%, meaning that 10 photons arriving at the pixel will generate close to 8 free electrons in the pixel well. Note that CMOS cameras have their peak efficiency much closer to the blue end of the spectrum than CCD chips, which are generally more red sensitive.
The charge level in electrons on each pixel at the end of each frame is read out and fed to an on-chip amplifier. This amplifier applies a user-settable analog gain, multiplying up the analog charge level before it is fed to the on-chip Analog to Digital Convertor (ADC). This, as the name suggests, converts the analog charge signal from the pixel amplifier into a digital signal which can then be sent to your computer.
A different sort of gain from that applied by the analog amplifier is the so-called ‘System Gain’ (also known as ‘Inverse System Gain’), which relates to the conversion between the charge fed into the ADC and the corresponding digital output it then gives. System Gain is just the ratio of the change in electrons into the ADC chip to the change in digital output. The unit of System Gain is e/ADU, ie electrons per Analog to Digital Unit (ADU).
System gain is a fixed conversion setting of the ADC and is a separate thing from the user settable analog gain applied to the amplifier or the digital gain which is applied after the ADC.
The System gain is set by the manufacturer so that when the pixel well is just completely full of electrons and the user-settable gain is 1x (ie has no effect) then the ADC is just at its max. count. For a 12-bit ADC this maximum count will be at 1111 1111 1111, which in decimal is 4095. If, for the ASI174, the full-well figure of 32,400 is divided by the maximum ADC count of 4095 we get a value of 7.91 and consequently a system gain of 7.9e-/ADU has been set by the manufacturer. This means that provided there is no amplification, for roughly every 8 electrons that are fed into the ADC the digital count increases by one and when the well has charge of 32,400e in it the ADC output will max out at a digital signal level of 1111 1111 1111.
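To make this arithmetic concrete, here is a minimal Python sketch (the function name `system_gain` is my own, not from any camera SDK) computing the manufacturer's e-/ADU figures from the well depths and a 12-bit ADC; the ASI120 well depth of 14,500 electrons is the figure given later in this document.

```python
def system_gain(full_well_e, adc_bits):
    """Electrons per ADU chosen so a just-full well hits the ADC max count."""
    max_count = 2 ** adc_bits - 1      # 12-bit -> 4095
    return full_well_e / max_count

asi174 = system_gain(32_400, 12)       # ~7.9 e-/ADU, as quoted above
asi120 = system_gain(14_500, 12)       # ~3.5 e-/ADU
print(round(asi174, 2), round(asi120, 2))
```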
The ASI174 (and the ASI120) has a 12-bit analog to digital convertor (ADC) on board, but planetary digital videoing is usually done in 8-bit only. Even more confusingly, Firecapture indicates a 16-bit option on top of the normal 8-bit output modes for both the ASI174 and ASI120 cameras.
The 8-bit signal is constructed from the 12-bit by chopping off the 4 least significant bits so, for example, the 12-bit signal 1101 0001 0111 becomes 1101 0001. If you want to use all the resolution of the 12-bit signal from the ADC then you need to use the 16-bit option in Firecapture, which turns the 12-bit signal into a more computer-friendly 16-bit signal (16-bit = 2 bytes) by taking the 12-bit signal and adding 0000 to the end, so 1101 0001 0111 becomes 1101 0001 0111 0000.
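The truncation and padding described here are just 4-bit shifts, as this small Python sketch (using the example value from the text) shows:

```python
raw12 = 0b1101_0001_0111    # 12-bit ADC sample from the example (3351 decimal)
as8 = raw12 >> 4            # chop the 4 least significant bits -> 1101 0001
as16 = raw12 << 4           # pad with 0000 on the end -> 1101 0001 0111 0000
print(bin(as8), bin(as16))
```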
To confuse matters further the ASI174 also has a High Speed 10-bit ADC mode which can run at higher speed than the 12-bit mode (and the related 8-bit mode truncated from it). Although capable of higher speeds, this mode is significantly noisier than the 12-bit mode and so is not recommended for planetary imaging.
The section at the end of this document entitled ‘12-bit, reconstructed 16-bit, 8bit. It’s all very confusing!‘ gives much more detail about all this and explains why and when you would want to use the different modes. It also gives a useful conversion table to convert between the 12-bit ADC output and the 8-bit and 16-bit values.
Unity Gain is the value of the user-settable gain applied to the amplifier between the CMOS sensor and the ADC at which each extra electron generated in the pixel well increases the digital count of the output by one unit. Unity gain is the minimum gain at which the output for every N electrons and every N+1 electrons is different and discernible. At a gain of less than unity, not every extra electron added to a well leads to an increase in the digital count, whereas at gains higher than unity some digital counts will get skipped over, possibly compromising the dynamic range as there are fewer steps in the signal available.
To help understand unity gain more let’s start by drawing up a table giving the ADC output signal for different electron numbers in the well, when the user settable analog and digital gain are set to 1 (ie nulled).
Looking at the table above and thinking about the user settable analog gain (which had been set to 1) which is applied to the signal going to the ADC;
- If the user-settable gain was increased from 1x to 8x then each electron in the pixel well would get multiplied up to 8 electrons before being fed into the ADC. If the system gain for this chip is 7.9e/ADU then each extra set of 8 electrons will give an increase in the digital output of the ADC of 1.
Thus by this logic, for 12-bit output, when the analog gain is 8x then 1 extra electron in the sensor well increases the ADC count by one, and a gain of 8x can be defined as unity gain for 12-bit output.
- If the user-settable gain was increased from 1x to 126x then each pixel electron would get multiplied to 126 electrons before being fed into the ADC. If the system gain for this chip is 7.9e/ADU then each extra set of 126 electrons will give an increase in the digital output of the ADC of 16. This is for a 12-bit output, which is equivalent to an increase of just 1 for an 8-bit output, as 0000 0001 0000 (=16) gets cut back to 0000 0001 (=1) if the last four digits are knocked off to turn 12-bit into 8-bit.
So for 8-bit output, with a gain of 126x then 1 extra electron in the sensor well increases the ADC count by one, and a gain of 126x can be defined as unity gain for 8-bit output.
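Putting the two results together, unity gain is just the system gain for 12-bit output, multiplied by 16 for 8-bit output, since 16 twelve-bit counts collapse into one 8-bit count. A short Python sketch (the function name is my own):

```python
def unity_gain(system_gain_e_per_adu, out_bits, adc_bits=12):
    """User gain at which one extra well electron raises the output by one."""
    return system_gain_e_per_adu * 2 ** (adc_bits - out_bits)

print(unity_gain(7.9, 12))    # ASI174, 12-bit output: ~8x
print(unity_gain(7.9, 8))     # ASI174, 8-bit output: ~126x
```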
Analog and Digital Gain
The ASI174 when run in Firecapture in 8-bit mode has gain settings which can be set in the range 0 to 400.
- Gain in the range 0-240 is analog gain (before the ADC)
- Gain in the range 240-400 is digital gain (after the ADC)
The gain setting in Firecapture for all Sony chip based cameras (eg ASI174, ASI224, ASI185) is defined as 10x the value of the gain in dB- for example a Firecapture gain setting of 320 relates to a dB gain of 32dB.
Gain in dB is defined as 20 x log [amplified electron count/original electron count], so we can say;
- Gain of 0-240 is 0-24dB which is a (analog only) gain of 1x to 15.8x
- Gain of 240-400 is 24dB to 40dB which is a (analog plus digital) gain of 15.8x to 100x
- When FC is set to 12-bit (=pseudo 16-bit) mode, Unity Gain, as we found out above, is 8x, which = 18.1dB or a gain setting of 181 in Firecapture
- When FC is set to 8-bit mode then Unity Gain, again as we found above, is 126x, which would equal 42.0dB or a gain setting of 420 in Firecapture. This is 24dB (16x) higher than in 12-bit/16-bit mode. When operating in Firecapture the gain, however, only goes up to 400 (=100x) so you are always operating below unity gain in 8-bit mode.
NB: It is useful to remember that a 6dB gain increase (a change of 60 in the Firecapture gain setting for the ASI174) corresponds to a doubling in the signal level.
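The conversions between Firecapture setting, dB and linear gain used above can be sketched in Python (function names are my own):

```python
import math

def setting_to_linear(fc_setting):
    """Firecapture setting = 10 x dB; linear gain = 10^(dB/20)."""
    return 10 ** (fc_setting / 10 / 20)

def linear_to_setting(linear_gain):
    return 10 * 20 * math.log10(linear_gain)

print(round(setting_to_linear(240), 1))   # top of analog range: ~15.8x
print(round(setting_to_linear(400), 1))   # maximum setting: 100x
print(round(linear_to_setting(8)))        # 12-bit unity gain (8x): setting ~181
print(round(linear_to_setting(126)))      # 8-bit unity gain (126x): setting ~420
```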
Well depth and System Gain
The ASI120 sensor has a well depth of 14,500 electrons, which is 2.23x smaller than that of the 32,400 electron deep ASI174. This is pretty much the ratio of their relative pixel areas (area ratio = 5.86²/3.75² = 2.44).
As we said before, for max. dynamic range (with analog gain at 1) the ADC output should be at its maximum value of 4095 (=1111 1111 1111) when the pixel well is full. 14,500 divided by 4095 is 3.5 and so the system gain is set to 3.5e/ADU (electrons per Analog to Digital Unit). As before, this value of system gain means that the well becomes just full as the ADC reaches its maximum output value.
Again let’s draw up a table of the ADC output signal for different electron numbers in the well when the user settable analog and digital gain are set to 1 (ie nulled).
Looking at the table above and thinking about the user settable analog gain (which had been set to 1) applied to the signal going to the ADC- using the same logic used for the ASI174 we can say that for the ASI120:
- For 12-bit output, when the analog gain is 3.5x then 1 extra electron in the sensor well increases the ADC count by one, and a gain of 3.5x can be defined as unity gain for 12-bit output.
- For 8-bit output, with a gain of 56x then 1 extra electron in the sensor well increases the ADC count by one, and a gain of 56x can be defined as unity gain for 8-bit output. Unlike the ASI174, 8-bit unity gain is actually within the capability of the ASI120 as its maximum gain is 64x.
Analog and Digital Gain
The ASI120 camera, when run in Firecapture, has a user settable gain that is stepwise as far as the analog gain goes and works like this:
- Gain range 0-16; analog gain 1x; digital gain 1-2x: 1-2x total gain
- Gain 16-32; analog gain 2x; digital gain 1-2x: 2x-4x total gain
- Gain 32-48; analog gain 4x; digital gain 1-2x: 4x-8x total gain
- Gain 48-64; analog gain 8x; digital gain 1-2x: 8x-16x total gain
- Gain 64-80; analog gain 8x; digital gain 2-4x: 16x-32x total gain
- Gain 80-100; analog gain 8x; digital gain 4-8x: 32-64x total gain
As we found above, unity gain, where one extra electron in the well increases the output count by one ADU, is;
- For pseudo 16-bit unity gain is 3.5x and lies in the 16-32 range
- For 8 bit unity gain is 56x and lies in the 80-100 range
Comparative Image Brightness
The ASI120 only requires 14,500 electrons to fill a well whereas the ASI174 requires 32,400 electrons – 2.23x more. The QE is similar for both but the photon collecting area of the 5.86um pixels in the ASI174 is 2.44x greater than the 3.75um pixels in the ASI120.
Let’s consider the two cameras arranged in two common ways on a telescope;
At the same optical amplification
- If the two cameras are used without changing the barlow, so maintaining the same optical amplification factor for both (eg both kept at an f-ratio of f22), the surface brightness on the chip in photons/um2 will be the same for both. The ASI174 pixels collect 2.44x the number of photons as the ASI120 (the ratio of their pixel areas), but its well is 2.23x deeper (the ratio of their well depths), so each collected electron contributes 2.23x less to the % fill of the well. The net result is that the ASI174 well fills up (as a % of full) slightly more quickly than the ASI120 at the same optical amplification factor, by a factor of 2.44/2.23 = 1.09x.
- As we have seen above, the individual System Gain is set for any camera so that the ADC output value reaches its maximum when the well is just at 100% full (at min. gain). What this means here, is that two cameras with wells filled to the same % level should, at the same user gain settings, give the same ADC output signal and hence image brightness.
- Thus at the same optical amplification factor, the ASI174 camera's image brightness will be slightly higher than the ASI120's at the same analog and digital gain settings, by the factor 2.44/2.23 = 1.09x.
Same image scale in terms of arcsecs per camera pixel
- If you wanted to swap the cameras over but keep the image scale the same in arcsecs/pixel, then the optical amplification would need to be changed by an amount equal to the relative change in pixel size. So, for example, if the ASI174 was being used at f22 you would need to drop down to f14.1 (=22 x 3.75um/5.86um) for the smaller pixelled ASI120.
- If you do this to maintain the same image scale, the surface brightness on the ASI120 chip will go up by 2.44x – the square of the ratio of the pixel dimensions. As a result the ASI120 image will be 2.44x brighter than before and now significantly brighter than the ASI174 at the same user-settable gain: 2.44/1.09 = 2.23x brighter.
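The bookkeeping in the two scenarios above can be checked with a few lines of Python (figures from the text):

```python
pixel_174, pixel_120 = 5.86, 3.75        # pixel sizes in um
well_174, well_120 = 32_400, 14_500      # well depths in e-

area_ratio = (pixel_174 / pixel_120) ** 2   # ~2.44x photon collection per pixel
well_ratio = well_174 / well_120            # ~2.23x deeper well

same_f = area_ratio / well_ratio            # same f-ratio: ASI174 ~1.09x brighter
same_scale = area_ratio / same_f            # same arcsec/pixel: ASI120 ~2.23x brighter
print(round(same_f, 2), round(same_scale, 2))
```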
Noise is always present in any camera signal and for planetary imaging cameras this is mainly made up of two components – Read Noise and Shot Noise.
Pixel and Circuit Noise (Read noise)
Read noise is noise which is added to every pixel in each frame when it is read out from the sensor. You get read noise for each pixel whatever the signal level is – even if the camera is capped and in total darkness.
Read noise is made up of two components, pixel noise and circuit noise. Although digital gain increases both noise components equally, any analog gain amplifies the pixel noise and not the circuit noise meaning that for a given exposure value a higher analog gain increases the signal to noise ratio (SNR). Note that any digital gain amplifies both the signal and the noise equally and does not change the signal to noise.
The plot below shows how the effective read noise reduces with gain for the ASI174 sensor. Obviously the real read noise always increases with gain applied so in this chart the noise axis is the final read noise divided by the gain.
Remember for the ASI174 that the gain in the range 0-240 is analog, then in the range 240-400 it is digital. What the chart above shows is that the read noise at a gain setting of 0 (real gain = 1x) starts at 6e-, decreasing as the analog gain increases to 240, then levels out from 240 to 400 where the gain increase is just due to increased digital gain. In the range 240-400 the actual read noise after the amplifier (noise from the graph x the real gain value) will be 3.6e- x 15.8 to 3.6e- x 100, ie 57e- to 360e-.
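The last step of that calculation, converting the flat effective read noise back to the actual read noise after the amplifier, is just a multiplication:

```python
eff_read_noise_e = 3.6            # effective read noise across the digital range
linear_gains = (15.8, 100)        # settings 240 and 400 as real gain values
actuals = [round(eff_read_noise_e * g) for g in linear_gains]
print(actuals)                    # actual read noise after the amplifier, in e-
```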
From the plot above and data presented earlier we can compile a table of read noise against gain for the ASI174, both for a constant exposure and for the situation where the exposure is increased as the gain is decreased, so maintaining the image brightness (constant histogram level). For simplicity I have picked an exposure where 10msec generates enough electrons to give a saturated signal at 100x gain. Shot noise, which we shall learn more about below, has been ignored in this table.
In the upper half of the table we see how for a fixed exposure time the signal to noise ratio (SNR) improves as the analog gain increases in the range 0 to 240 then levels off as just the digital gain increases in the range 240-400.
In the bottom half of the table we increase the gain and reduce the exposure to maintain a fixed image brightness (at max. of 100% in this example), if we do this then the signal to noise ratio gets much worse at high gain. This is because at high gain the photon signal is so much smaller than at low gain as the exposure has been shortened to keep the same image brightness. This difference has implications when deciding on the balance between gain and exposure as we will see later under the section ‘Read Noise- High gain and short exposures or low gain and long exposures?’
Don’t forget that in both these cases we are ignoring shot noise whose addition can only make the SNR worse.
Read Noise and Bit Depth
In the table above the values of the noise and signal are in 12-bit, where there are 4096 grey levels. When converting from 12-bit to 8-bit you would divide these values by 16 (the SNRs would be unchanged by this). Below is a table compiled from the one above showing the 8-bit values and the corresponding digital values.
Looking at this table you might think that when converting from 12-bit to 8-bit any read noise of less than 0.5 grey levels would magically be lost and the signal would ‘clean itself up’ because the noise was smaller than the individual steps in the signal. This though is not the case and low levels of noise can indeed show up in the stacked image.
To understand why levels of noise smaller than one unit of the 8-bit signal can show up later, you need to know that when hundreds of 8-bit images are stacked in AS!2 or Registax, the stacked output is no longer 8-bit but actually becomes a 16-bit image. Instead of having just 256 grey levels in the signal, the averaging that occurs in the stacking process converts the stacked output to 65,536 grey levels – putting an extra 256 grey levels between each previous single step! All this means that the low levels of read noise seen above in the 8-bit signal can manifest in the stacked 16-bit image, especially when the image is stretched in the processing stage.
The creation of smooth 16-bit images from separate 8-bit images can seem a bit like magic. If you stacked many identical 8-bit images, the stacked image would still only have 256 levels in it, however, the presence of a low but finite level of noise (mainly shot noise) in the relatively coarse 8-bit image allows the stacking process to create a smooth 16-bit image giving much better greyscale resolution to the image than that in the separate 8-bit images that went to make it up. It is this much finer grey level spacing in the stacked 16-bit image that allows the wavelet processing to be done without it breaking up. This processing which brings out the details in the image can also draw out the read noise if it is there.
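A toy Python sketch (not the actual algorithm used by AS!2 or Registax, and with an arbitrary illustrative brightness) shows the effect: averaging identical 8-bit frames stays stuck on a whole grey level, while a little noise lets the stack recover the in-between value.

```python
import random

random.seed(1)
true_level = 100.4      # a "real" brightness between two 8-bit grey levels

def noisy_8bit_sample():
    """One 8-bit reading of the pixel with ~1 grey level of random noise."""
    value = true_level + random.gauss(0, 1.0)
    return max(0, min(255, round(value)))

frames = 10_000
no_noise = sum(round(true_level) for _ in range(frames)) / frames   # stuck at 100.0
stacked = sum(noisy_8bit_sample() for _ in range(frames)) / frames  # recovers ~100.4
print(no_noise, round(stacked, 2))
```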
For more about this process of converting 8-bit images to 16-bit stacked images please read the later section entitled ‘12-bit, reconstructed 16-bit, 8bit. It’s all very confusing!‘ which has links to interesting related articles and discussions.
It is worth noting here that dark areas of the frame where the signal level is low will have low shot noise as the shot noise is just the square root of the signal (see more on shot noise below). Here the read noise is the main component of the noise. If read noise is low then noise levels may be so low in these dark areas that there is insufficient signal variation to allow the smooth conversion from 8-bit to 16-bit during stacking. As a result, these areas of the stacked image may still be quantised into the bottom level of the original 256 grey levels which will tend to clean the signal of any low level read noise and will just stay black with no detail present. To bring detail to these areas (which unfortunately then may start to show up the read noise) you would need to increase the exposure time to increase the shot noise here – no amount of frames stacked will help bring out detail here otherwise.
Read Noise for ASI120
For the ASI120 the read noise v. gain plot is much more complex than with the ASI174, as the analog gain changes in a stepwise fashion and can only be 1x, 2x, 4x or 8x, as we saw above. Thus the plot of effective read noise is like a staircase, dropping whenever the analog gain goes up to the next level;
Random and Non-Randomly Distributed Read Noise
Read noise is primarily randomly distributed over the image but it can also have a non-random element with a spatial distribution, such as a pattern of horizontal or vertical lines. The so-called fixed-pattern noise (FPN) which has traditionally afflicted CMOS chips is an example of such a spatially distributed component of read noise. The ASI174 has less fixed-pattern noise than the ASI120, and whereas for the ASI174 it is primarily found in the rows, with the ASI120 the pattern is primarily vertical (see examples below).
Adding frames together will average out the random element of the noise, which will reduce by the square root of the number of frames stacked, causing it to fade into the background. Having said this, the degree to which it fades with stacking depends on the severity of the noise. If you have lines of heavier noise in the frames against a less noisy background then those heavier lines may still show in the stacked image.
To some degree you can overcome the spatial pattern in the read noise by deep sky methods such as using dark frame subtraction or by moving the image around on the chip during an exposure. This gives a spatial averaging effect as at the align stage AS!2 or Registax will lock onto the planet and the background will move around relative to the planet.
Shot Noise- Introduction
Shot noise is generally the major source of noise in the signal for planetary imaging and is a fundamental limitation for low-light signals. Shot noise is the statistical variation in the number of electrons in a well and is independent of read noise. It comes about as a result of the statistical variation in the arrival of the incoming photons, with the variation (noise) being equal to the square root of the number of photons gathered.
Shot noise is a bit like rain falling on a dry pavement. To start with there is a lot of variation in wetness of the pavement-some areas are dry and some wet and the slab looks spotted. As the rain continues to fall and the number of raindrops landing on the slab increases, the local wetness of the paving slab evens itself out and there is much less variation in wetness across the slab.
As the noise is given by the square root of the number of photons/electrons, short exposures, which have a lower photon count, have less shot noise but this noise represents a much greater % of the signal. For example a well filled to 10,000 electrons will have a shot noise of the square root of 10,000 (=100) giving a signal to noise ratio (SNR) of 100:1, whereas a well only filled to 100 electrons will have a shot noise of 10 giving a much worse SNR of 10:1.
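In code, the SNR from shot noise alone is simply the square root of the electron count (the function name is my own):

```python
import math

def shot_snr(electrons):
    """Signal/shot-noise = N / sqrt(N) = sqrt(N)."""
    return electrons / math.sqrt(electrons)

print(shot_snr(10_000))   # 100.0, ie SNR 100:1
print(shot_snr(100))      # 10.0,  ie SNR 10:1
```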
Shot Noise and Gain
Beginners to digital video imaging often think that reducing the camera gain reduces the noise in the image and that extra gain ‘makes things noisy‘ but this is not really the case.
For shot noise, which for bright areas is the major source of noise in each frame, the signal to noise ratio (SNR) is actually independent of gain. Higher gain is often linked with a noisier signal, but this is only because, to maintain the image brightness at higher gain, you need to drop the exposure, and the lower signal has a greater % of shot noise. It is not the gain increase in itself, but the related decreased exposure, leading to fewer photons/electrons in the well, that leads to a noisier image. The shot noise actually decreases with decreased exposure, but the signal drops more, so although the increased gain brings the signal back up, the relative amount of shot noise increases.
For planetary imaging, reducing the relative amount of shot noise is the main reason for decreasing the gain. Don’t forget, however, that this reduced gain only gives benefits of reduced shot noise if after decreasing the gain you also increase the exposure to maintain the image brightness. If you just reduce the gain and don’t increase the exposure then shot noise will stay the same as the number of electrons generated will be unchanged. An example may help here;
- Imagine an ASI174 camera with a pixel well full of 32,400 electrons and the gain set to 1x so that the 8-bit output is 1111 1111 (the ADC signal maximum). In this case the shot noise is the root of 32,400, or 180 electrons. Here the signal to (shot) noise will be 180:1 (=32,400/180) and there will be 1.4 grey levels of shot noise for a 256 grey level (max.) signal (256/180=1.4). If the exposure is now decreased to 1/100th of the original value but the gain is increased to 100x then the output signal will still be at the same maximum 8-bit value of 1111 1111. Although the well is still effectively saturated, there are only 324 electrons in it (the well depth has effectively been reduced by a factor of 100x). The shot noise now will be the root of 324, or 18 electrons. The signal to (shot) noise now will be 10x worse at 18:1 (=324/18). For the 256-level signal the shot noise will now be 14 grey levels (14=256/18), ie 10x higher than before.
- Let’s do the same calculation for the ASI120 camera now (again for 1x gain). For a full well the shot noise will be the root of 14,500, or 120 electrons, and the signal to noise will then be 120:1, which gives 2.1 grey levels of noise for a full (256 level) signal. Decrease the exposure to 1/64th of the original value and increase the gain to the maximum value of 64x, and the effective well depth before it is full becomes 64x smaller. Now the well is full when there are only 226 electrons in it (226 x 64=14,500). The shot noise in this case will be the root of 226, or 15.0 electrons, and the signal to noise will be 15:1. This gives a noise of 17 grey levels for a 256 level signal (256/15) at maximum gain, making it slightly noisier than the ASI174 at maximum gain – even though the gain increase was less.
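The two worked examples above boil down to a one-line formula; a short Python sketch (the function name is mine) reproduces the grey-level figures:

```python
import math

def shot_noise_grey_levels(electrons_in_well, grey_levels=256):
    """Shot noise in output grey levels for a full-scale (saturated) signal."""
    snr = math.sqrt(electrons_in_well)    # signal/noise = N/sqrt(N) = sqrt(N)
    return grey_levels / snr

print(round(shot_noise_grey_levels(32_400), 1))  # ASI174 full well at 1x: ~1.4
print(round(shot_noise_grey_levels(324), 1))     # ASI174 at 100x gain: ~14.2
print(round(shot_noise_grey_levels(14_500), 1))  # ASI120 full well at 1x: ~2.1
print(round(shot_noise_grey_levels(226), 1))     # ASI120 at 64x gain: ~17.0
```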
Read Noise versus Shot Noise and Signal to Noise Ratio (SNR)
It is interesting to make a table of shot noise and read noise at various gains for the ASI174 to see which dominates. This is a key table, an extension of the ASI174 read noise table I showed above under ‘Pixel and Circuit Noise’, and so I will explain it carefully;
- As before the table is split into two halves. The top half is for constant exposure and varying gain whilst for the bottom half the exposure goes up as the gain drops – so that the image brightness is constant (see histogram column)
- A reasonably bright object like Jupiter has been chosen as the target so that 10ms exposure gathers 250 electrons per pixel
- Maximum (100x) and minimum (1x) gain are shown as well as the maximum analog gain (=15.8x)
- System gain (e/ADU) divided by user gain gives the effective conversion between electrons into the amplifier and counts out of the ADC (after the amplifier)
- Shot noise is the square root of the signal in electrons
- Total noise is given by the square root of the read noise squared added to the shot noise squared
- The middle columns of effective read noise and shot noise are before the amplifier
- Frame SNR is the signal to noise ratio for individual frames
- The stack size is just the number of frames gathered in 100secs presuming that the camera is constantly exposing with no dead time between frames (ie exp=1/fps)
- Stack SNR is the signal to noise of the stacked image. We presume that both read and shot noise are random and that the total noise reduces with the square root of the number of frames so for example stacking 1600 images will reduce the noise by 40x
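The per-row arithmetic of the table can be sketched like this (the 250 e- per 10ms signal and the 3.6 e- read noise are the example figures from the text; function names are my own):

```python
import math

def frame_snr(signal_e, read_noise_e):
    """Single-frame SNR: read and shot noise add in quadrature."""
    shot_noise = math.sqrt(signal_e)
    total_noise = math.sqrt(read_noise_e ** 2 + shot_noise ** 2)
    return signal_e / total_noise

def stack_snr(signal_e, read_noise_e, frames):
    """Stacking random noise improves SNR by sqrt(frames)."""
    return frame_snr(signal_e, read_noise_e) * math.sqrt(frames)

print(round(frame_snr(250, 3.6), 1))        # one 10ms frame
print(round(stack_snr(250, 3.6, 10_000)))   # 100s of 10ms frames (exp = 1/fps)
```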
Above you can see that for this relatively bright object the shot noise dominates over the read noise with dominance increasing the longer the exposure.
- As we found for the read noise earlier (see ‘Pixel and Circuit Noise- Read Noise’), for constant exposure the SNR of a single frame improves as the analog gain increases, but once the read noise levels out the SNR is constant.
- If we balance exposure and gain to maintain the image brightness then SNR of the individual frames drops dramatically with increased gain/reduced exposure.
- The SNR is dramatically improved by stacking, which reduces the noise in the image. At high gain there are far more frames available to stack, as exposures are shorter, and this helps to almost completely cancel the drop in SNR with gain.
- As long as we capture during the whole of the imaging period with no dead time between frames (eg you should have 30fps if imaging at 1/30th sec, so exp=1/fps), then after stacking even big differences in the SNR of the individual frames tend to almost completely even out.
Below I have compiled the same table for an object with a surface brightness 25x dimmer than previously, say for an object like Uranus. Exposure times have been kept the same as before and this makes the target signal 25x less in terms of electrons per pixel and the histogram levels consequently very low. In reality for dim objects you would reduce the magnification and increase the exposure to bring the number of electrons back up but let’s keep the settings the same as they were for Jupiter for illustrative purposes.
You can see that for this dimmer object the SNR values are much worse than before and the read noise and shot noise are now much closer in value- this is because a low signal gives a much lower absolute shot noise whilst the read noise is essentially unchanged.
The trends seen in the previous table for the stacked set are even more obvious in this table. We see now that in the upper half of the table the SNR improves whilst the analog gain is increased (and the read noise falls). In the lower half of the table higher gain and shorter exposures for the stacked image noticeably reduces the SNR. This might prompt you to keep the gain low and the exposure high for dim objects.
For interest, below I have added extra columns for the dim object for longer imaging periods, showing how the SNR increases with the square root of the imaging period. So if we increase the imaging period by 4x, the number of frames increases 4x and the SNR increases 2x. Increasing the number of frames like this is a massively powerful way of increasing SNR on all objects, but it is especially useful for low brightness objects, which often start with quite a low SNR.
Shot Noise- High gain and short exposures or low gain and long exposures?
For imaging it is important to bring out the finest detail in the stacked image, and minimising noise is an important part of this. Let’s look at the shot noise and see which settings minimise it- lots of short exposures at high gain, or fewer long exposures at low gain?
Let us run our table again but setting the read noise to zero so we just look at the impact of changes on the shot noise;
You can see looking at the frame SNR that the SNR of the individual long exposure/low gain frames is much better (higher) than that of the short exposure/high gain frames. At high gain and short exposure, however, you can potentially run at a much faster frame rate and have far more frames to stack together than when running at low gain and long exposure. This helps to reduce noise and reverse the drop in SNR. In fact you can see that after stacking, the SNR of the stacked image is identical across the range of settings. Thus from the shot noise perspective it doesn’t matter what exposure/gain pairing you use- the SNR is the same after stacking. This is the case as long as you gather light for the whole of the 100 seconds with no dead time between frames, which implies that the camera and computer must not be limited by data transfer speed during recording. Unfortunately this can be quite a demanding requirement for short exposures, as the fps may exceed the camera’s maximum speed or the USB transfer rate may be too high for the laptop being used.
Although this may suggest that from the shot noise perspective you can run at any exposure/gain combination, do remember that low-gain, long-exposure frames are much more likely to suffer badly from movement and seeing issues.
As a follow-on from this, it is interesting to see what happens if you don’t alter the gain at all and run the same table above for constant gain but just altering the exposure and allowing the corresponding frame count to increase or decrease accordingly.
Amazingly the SNR values are exactly the same as in the earlier table in this section. This highlights an important fact that for shot noise, which is the dominant source of noise in imaging of bright objects, the SNR of the stacked image is independent of the actual gain used and the actual exposure used! As long as you gather frames for the whole of the imaging period what you lose in the individual short exposures you make up for in correspondingly more of those shorter exposures.
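This invariance is easy to check numerically. The sketch below considers shot noise only, with an assumed, purely illustrative photon rate; the stacked SNR comes out identical for three very different exposure settings:

```python
import math

total_time = 100.0      # seconds of recording with no dead time (exp = 1/fps)
photon_rate = 10_000    # e- per pixel per second- an illustrative figure only

stack_snrs = []
for exposure in (1/10, 1/40, 1/160):           # long frames down to short frames
    signal = photon_rate * exposure            # e- collected per frame
    frame_snr = math.sqrt(signal)              # shot-noise-only SNR = sqrt(signal)
    n_frames = total_time / exposure
    stack_snrs.append(frame_snr * math.sqrt(n_frames))

# Frame SNR varies 4x across these settings, yet the stack SNR is constant
print([round(s) for s in stack_snrs])          # -> [1000, 1000, 1000]
```

The constant result falls straight out of the algebra: stack SNR = √(signal × n_frames) = √(photon_rate × total_time), with exposure cancelling out entirely.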
Of course this simplistic conclusion ignores the small amount of read noise – which as you will see below becomes increasingly important for dimmer objects and which is not independent of gain or exposure.
Read Noise- High gain and short exposures or low gain and long exposures?
Let us ask the same question we asked above but this time ask it for read noise- which is best; lots of short exposure and high gain or fewer long exposure and low gain?
Let us run our table again but set the shot noise to zero so we just look at the impact of changing read noise on the SNR;
We now get a very different answer from the one we got for the shot noise. For both the individual frames and the stacked image, higher gain massively worsens the SNR.
Is this trend of decreasing SNR as we move down the table due to increasing gain or to reducing exposure time? It’s hard to tell as both are changing, so to answer this look at the two tables below;
This is for increasing gain and constant exposure;
Whereas this is for constant gain and decreasing exposure;
- As far as the gain goes, the SNR actually improves with gain for both the individual frames and the stack as long as the analog gain is increasing. When the analog gain runs out at 15.8x the SNR levels out
- As far as exposure goes, however, reducing the exposure reduces the signal in proportion but does not affect the read noise; this means the SNR drops for both the frame and the stack as the exposure falls. The frame SNR drops by 100x when the signal drops by 100x. The stack SNR, however, drops only by a factor of 10; this is because a 100x shorter exposure allows 100x as many frames, and the noise reduces with the square root of the number of frames.
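A quick Python check of this read-noise-only arithmetic, using assumed illustrative figures (1000e- collected in a 0.1s frame, 3.6e- read noise):

```python
import math

total_time = 100.0
read_noise = 3.6        # e- per frame, unchanged by exposure length
base_exposure, base_signal = 0.1, 1000.0   # assumed: 1000e- in a 0.1s frame

def stack_snr(shorten_by):
    """Stacked SNR when the exposure is cut by some factor (read noise only)."""
    signal = base_signal / shorten_by          # signal scales down with exposure...
    frame_snr = signal / read_noise            # ...but the read noise does not
    n_frames = total_time / (base_exposure / shorten_by)
    return frame_snr * math.sqrt(n_frames)

# Cutting the exposure 100x cuts the frame SNR 100x but the stack SNR only 10x
print(round(stack_snr(1) / stack_snr(100)))    # -> 10
```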
Noise Summary- High gain and short exposures or low gain and long exposures?
- If the image is bright then shot noise dominates over read noise. The exact combination of gain and exposure makes little difference to the SNR of the stack, provided you use the same proportion of frames from each recording and you image efficiently with no wasted time between frames (ie exposure = 1/fps). See top half of table below for JUPITER.
- If the image is dim, the signal level drops and the shot noise drops too, but not as much (shot noise is the square root of the signal, remember); this makes the SNR worse in itself for a dimmer image. Even so, for just the shot noise element the same conclusion as above applies: it doesn’t matter where you operate as long as you image efficiently (ie exposure = 1/fps).
- Unlike shot noise, the read noise is independent of the signal size and depends only on the gain level, and then not strongly. With a dim image the signal drops, and the shot noise drops too, so the read noise starts to become more dominant. Changes in the amount of read noise, both with gain setting and with signal level (via exposure), then start to influence which combination to go for. High gain and short exposures then start to look significantly less attractive from the noise perspective alone. Although higher analog gain in itself helps improve SNR, any corresponding reduction in exposure time (if you are maintaining image brightness) more than wipes out the benefit, and the extra frames in the stack that the shorter exposure brings do not do enough to make up for this.
- Do remember a fundamental difference between shot noise and read noise that is important here. With shorter exposures your signal drops, but the combination of being able to gather more frames and the fact that shot noise drops with signal exactly balances this out. Read noise, however, does not reduce with signal, so dropping the exposure drops just the signal and not the read noise. It does allow more frames to be gathered, but stacking these extra frames does not make up for the reduction in SNR. Quartering the exposure time reduces the frame SNR by 4x; it allows 4x the number of frames, but these extra frames only improve the noise (and SNR) by 2x.
- Thus for a dim image, where the read noise comes into importance, longer exposures and lower gain help the SNR of the stack. For a bright image, however, the exact combination of gain and exposure doesn’t matter that much as long as you fill the recording period efficiently with minimal dead time between frames.
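This summary can be condensed into one numerical sketch combining both noise sources. All figures here are illustrative, not measured camera values:

```python
import math

def stacked_snr(signal_e, read_noise_e, n_frames):
    """SNR of a stack: shot and read noise add in quadrature per frame,
    and stacking improves SNR by the square root of the frame count."""
    frame_noise = math.sqrt(signal_e + read_noise_e ** 2)   # shot noise^2 = signal
    return (signal_e / frame_noise) * math.sqrt(n_frames)

read_noise = 3.6    # e-, taken as roughly constant for simplicity

# Bright target: long frames (1000e-, 1000 of them) vs 10x shorter frames
bright_long  = stacked_snr(1000, read_noise, 1_000)
bright_short = stacked_snr(100,  read_noise, 10_000)

# The same target 25x dimmer, same two exposure choices
dim_long  = stacked_snr(40, read_noise, 1_000)
dim_short = stacked_snr(4,  read_noise, 10_000)

print(f"bright: {bright_long:.0f} vs {bright_short:.0f}")   # nearly equal
print(f"dim:    {dim_long:.0f} vs {dim_short:.0f}")         # long exposures win
```

For the bright target the two pairings land within a few percent of each other; for the dim target the long-exposure stack is clearly ahead, exactly as the bullets above argue.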
Other Gain/Exposure Considerations
With all this talk of noise, it is easy to think that low noise and high SNR are all that matters with planetary imaging and that your gain and exposure settings should purely be dictated by noise considerations. Although important, good SNR is just one of many factors to consider in successful imaging. You also need to think about other factors when deciding on gain/exposure settings as these have an impact on other aspects which can affect one’s ability to produce good images;
- Shorter exposures ‘freeze’ the seeing better especially if the scope is being buffeted at all by breezes which can lead to smearing during the frame exposures. It is hard to get good hi-res images with exposures above ~50 msec, unless the seeing is excellent.
- This would push you towards the short exposure/high gain regime unless the object is dim, where the read noise starts to dominate (and higher analog gain improves SNR). Dim objects which need long exposures, even at high gain, benefit from stacking over much longer imaging periods to improve the SNR.
- Shorter exposures often mean faster frame rates and download speeds which can be problematic especially for older computers. If you can’t arrange it so that fps= 1/exposure time you are losing imaging time between frames and this might push you to increase the exposure to bring the frame rate down.
- For high brightness subjects you want some noise to avoid quantisation errors as discussed later in the section under Extras ‘8-bit imaging – leading to 16-bits on stacking‘. This may influence gain/exposure choice and push you to shorter exposures on high brightness objects to introduce some noise. Alternatively image in 12-bit/16-bit mode which needs 16x less noise to avoid quantisation errors (see ‘12-bit, reconstructed 16-bit, 8bit. It’s all very confusing!’ under Extras later on).
- In darker areas of the image where shot noise is very low, no matter how many frames you stack you may get stuck in an 8-bit quantisation trap and these areas will always stay black. Sometimes you just need more exposure to add signal, and the increased shot noise it brings, to these areas to be able to lift them out of the blackness when processing the 16-bit stacked image.
- A practical consideration when imaging is that the image needs to be bright enough on the screen to be able to focus properly prior to hitting the record button but without being saturated. Typically a gain/exposure setting will be chosen so that image brightness is 40%-80% of maximum.
Dynamic Range
Dynamic range is a measure of the total number of discrete levels ‘observable’ in a real signal and is generally taken as the well depth in electrons divided by the read noise in electrons- shot noise is ignored as that is essentially independent of the camera.
The full well represents the maximum number of electrons in a signal and the read noise represents the minimum incremental step size in the signal (in terms of electrons). Steps smaller than the read noise are swamped by the read noise superimposed on the signal and so are regarded as not detectable. For the ASI174, for example, the full well is 32,400e- and the read noise is 6e-. Therefore signal increments of less than 6e- are not useful and the smallest useful signal step size is 6e-. Dividing 32,400e- by 6e- gives a dynamic range of 75dB (=20log(32,400/6)).
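As a quick check of this arithmetic, a small helper reproduces the 75dB figure:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Dynamic range in dB- the well depth to read noise ratio on a 20*log10 scale."""
    return 20 * math.log10(full_well_e / read_noise_e)

# ASI174 figures quoted above: 32,400e- full well, 6e- read noise
print(round(dynamic_range_db(32_400, 6)))   # -> 75
```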
If you increase the gain, then the total read noise increases (although as we have seen the read noise divided by the gain actually decreases as the read noise plots showed) but the maximum signal is still 1111 1111 1111 from the ADC. The higher noise level at higher gain reduces the size of the useful incremental step so the dynamic range decreases as the gain increases.
Another more simplistic way of looking at this is that as the gain increases it takes fewer electrons in the signal before the ADC output is maxed out- so increasing the gain can be regarded as decreasing the effective well depth whilst the noise stays the same (actually it effectively decreases a bit with gain).
Hopefully the table below explains things a little more clearly, again using the ASI174 as an example. Here the read noise decreases to 3.6e at max. analog gain and the effective well depth decreases from 32,400 at 1x to 2,038e at 15.8x and only 324e at 100x gain;
ASI120 vs ASI174 which is better?
It is now time to make a summary comparison between the two cameras as far as suitability for planetary imaging is concerned. There are obvious differences between the two such as chip size, pixel count, speed and type of shutter and these are shown in a Comparison Table below but most readers will be interested in things which are harder to ascertain right away such as noise and sensitivity. The task of comparing the two is made harder due to differences in gain set-up and the split between analog and digital gain so as a compromise I will compare sensitivity and gain for two mono cameras with the following example set up;
- Both with same shutter speed (eg 1/30sec)
- Both with an overall gain of 16x which corresponds to;
-ASI120 gain setting of 64 (64% of max.) which gives 8x analog gain and 2x digital gain
-ASI174 gain setting of 241 (60% of max. = 24.1dB) which gives 15.8x analog and 1.01x digital gain
- Both with the same image scale in terms of arc secs/pixel (need to reduce magnification by 1.56x for the ASI120 camera)
- Both imaging an object which gives a brightness of 80% on the histogram for the ASI120
ASI120 80% on histogram (we just defined this above)
ASI174 35.8% on histogram (see section above on comparative image brightness which is calculated to be 2.23x lower than the ASI120)
The number of electrons in the signal = degree to which the well is filled (ie histogram level) x full well depth / gain.
ASI120; 80% x 14,500/16 = 725e
ASI174; 35.8% x 32,400/16 = 725e
Thus the signal size in terms of electrons is the same for both
The shot noise is the square root of the number of electrons per pixel. As the signal in electrons is the same for both, the shot noise in electrons will be identical for both at ~27e.
Looking at the gain versus read noise plots for Firecapture when the overall gain is 16x we have gain settings of 64 for the ASI120 and 240 for the ASI174. For both these cameras at these settings the read noise is the same at ~3.7e. The relative split between random read noise which will reduce on stacking and non-random read noise which won’t, however, is not specified for the two different cameras.
Note: Be aware that the grey levels that this 3.7e of noise represents will be 2.23x more for the ASI120 than the ASI174 due to the differences in system gain (7.91/3.5, which equals 2.23x). This makes the signal 2.23x brighter too, as we saw under image brightness.
Signal to Noise Ratio
The signal to noise will be the same for both as noise and signal are the same in electrons.
SNR = Signal ÷ √(shot noise² + read noise²) = 725 ÷ √(27² + 3.7²) = 26.6:1
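The arithmetic of this comparison is easy to reproduce- the sketch below uses only the figures quoted in this section:

```python
import math

def signal_electrons(histogram_fraction, full_well_e, overall_gain):
    """e- per pixel = fraction of well filled (histogram level) x well depth / gain."""
    return histogram_fraction * full_well_e / overall_gain

asi120 = signal_electrons(0.80,  14_500, 16)   # 725e-
asi174 = signal_electrons(0.358, 32_400, 16)   # ~725e-

shot_noise, read_noise = 27.0, 3.7             # e-, the figures quoted above
snr = asi120 / math.sqrt(shot_noise ** 2 + read_noise ** 2)

print(round(asi120), round(asi174))   # -> 725 725
print(round(snr, 1))                  # -> 26.6
```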
The dynamic range is the max. number of well electrons (at that gain) to saturate the signal divided by the effective read noise at that gain.
ASI120. If 80% of signal saturation is 725e then full saturation would occur at 906e. Thus dynamic range at 16x is 906/3.7 = 245x = 48dB
ASI174. If 35.8% of signal saturation is 725e then full saturation would occur at 2025e. Thus dynamic range at 16x is 2025/3.7 = 547x = 55dB
The difference in dynamic range is 7dB which is our old friend 2.23x.
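The dynamic range figures check out the same way:

```python
import math

def db(ratio):
    """Express a ratio in decibels on the 20*log10 scale used above."""
    return 20 * math.log10(ratio)

read_noise = 3.7                 # e- for both cameras at these settings
asi120_well = 725 / 0.80         # 80% of histogram = 725e-  ->  ~906e- full scale
asi174_well = 725 / 0.358        # 35.8% of histogram = 725e- -> ~2025e- full scale

print(round(db(asi120_well / read_noise)))    # -> 48  (dB, ASI120)
print(round(db(asi174_well / read_noise)))    # -> 55  (dB, ASI174)
print(round(db(asi174_well / asi120_well)))   # -> 7   (dB, the 2.23x factor)
```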
The higher dynamic range comes essentially from the fact that although you can scale your image with the pixel size to maintain the number of photons per pixel, you cannot scale the molecular structure of the silicon: a smaller pixel will always have a smaller well holding fewer electrons, leading to a reduced dynamic range.
If you operated both cameras with the same image scale and had the same exposure and absolute gain the image in the ASI120 would be 2.23x brighter than the ASI174. Despite the difference in brightness of the image the signal to noise ratio would be the same for both cameras for both shot noise and read noise.
One difference between the two cameras is the dynamic range, with the ASI174 having 2.23x (7dB) the dynamic range of the ASI120 at the same gain. This gives it an edge on high contrast objects like the Moon, as it can successfully image dimmer areas without too much noise but also has the extra headroom to image brighter areas without saturating.
For planetary imaging both cameras will perform well as they both have high sensitivity and low read noise. This is true provided your telescope focal ratio is adjusted to compensate for differences in pixel size to maintain the same image scale in arcsecs/pixel.
As far as solar and lunar imaging goes there are a number of factors apart from the better dynamic range which make the ASI174 camera more suitable.
- A pixel count twice that of the ASI120 allowing high resolution imaging of large areas
- Higher maximum speed even at full frame which can be realised with bright objects like the moon and sun where fps will not be limited by long exposure times
- Global shutter rather than rolling shutter reducing ‘rubber sheet’ distortion across the frame during poor seeing or scope movement due to breezy conditions.
Camera Comparison Table
The ZWO cameras have a gamma adjustment which by default is set to 50 (off). Enabling the gamma function by ticking the box in Firecapture allows you to improve the contrast for dim objects (gamma <50) or for bright objects (gamma >50). Gamma alteration improves contrast by compressing one end of the histogram whilst stretching the other, so for example if you improve the contrast at the dim end this will be at the expense of reduced contrast at the bright end.
Be aware that setting the gamma adjustment to anything other than 50 will lose you data. Gaps will be produced in the histogram where the contrast is reduced, whilst at the other end data values are removed to make room to squash the histogram up and increase contrast. You can see this in the Firecapture histograms of Jupiter below.
If you own a colour camera one of the settings you may play around with is the colour balance. There is an adjustment slider for Blue and also for Red and these are typically set at default values of something like 90 for Blue and 50 for Red. Green is kept at 50 and is non-adjustable. Understand that setting the slider to a value of greater than 50 adds digital gain to the signal for that colour which is on top of the gain already set for the camera under the main gain control.
Setting the slider at less than 50 is not recommended as the signal could then be saturated but not look at full brightness on the preview screen. A look at the histogram will reveal issues with clipping for the red at just the 60% level (see example below).
A popular mode of operation for CCD camera chips is pixel binning. This is where adjacent pixels are grouped together in 2×2 arrays or 3×3 arrays before being read off the sensor. Binning in CCDs makes larger effective pixels and reduces read noise as the read noise is applied once to the group of pixels rather than to each of the pixels in the group separately. As well as reducing read noise binning reduces shot noise as the number of electrons captured per pixel is increased. With 2x binning, for example, 4 adjacent pixels are grouped together increasing the number of electrons by a factor of 4 and so halving the shot noise (reduced by root of 4). Binning also dramatically increases speed for CCD chips as the data to read off the chip is reduced by a factor of 4 (or 9 for 3x binning).
In ZWO CMOS cameras you can tick the binning option box in Firecapture to enable real-time 2x binning. However for CMOS chips, binning on the chip itself is not possible and so there is no benefit in reduced read noise like there is for CCDs. Because binning is done after readout from the chip there is also no camera speed benefit as the maximum camera speed is pretty much dictated by the speed of reading the data off the sensor. Having said that the data rate from the camera will be 4x less and this should help improve rate of data transfer down the USB cable with slower computers.
With 2x binning used on CMOS cameras and CCD cameras alike, each effective pixel covers twice the sky, so the sampling in arcsecs per (effective) camera pixel is coarsened by a factor of 2. The shot noise is reduced by a factor of 2 as the number of electrons per effective pixel is increased 4x; however, this same benefit in shot noise could be had post-capture in processing.
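Software binning of this kind is simple to sketch. The snippet below sums each 2×2 block post-readout, as described above for CMOS cameras- the pixel values are illustrative:

```python
def bin2x2(img):
    """Sum each 2x2 block of pixels into one 'super-pixel' (software binning)."""
    h, w = len(img), len(img[0])
    return [[img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

frame = [[100, 102],
         [ 98, 101]]        # electrons per pixel (illustrative values)
print(bin2x2(frame))        # -> [[401]]
# 4x the electrons per effective pixel, so half the relative shot noise
```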
- For the ASI174 camera the 2x binning is done in the camera, off the chip, by hardware and software. The max. speed without binning is given as 128fps for the full resolution of 1936×1216 and the same 128fps applies for 2×2 binning, where the resolution drops to 968×608. This compares to about 250fps for unbinned 968×608
- For the ASI120 camera the 2x binning is done in the camera with software only. For this camera the max. speed is given as 60fps for the full resolution of 1280×960, but for 2×2 binning, where the resolution drops to 640×480, the speed drops to 45fps. This compares to 133fps for unbinned 640×480
Real-time binning with CMOS chips has few real advantages over binning post processing and certainly does not have the considerable advantages of binning with CCDs which includes improved speed and reduced read noise. Having said this it can be useful when you want to reduce the storage space of files, where you might be USB data transfer speed limited, or where you find you are oversampling for the prevailing seeing conditions and want a quick way of reducing the sampling rate. It also allows real-time reduction in shot noise which might be useful for on-screen meteor observing.
Overall though, if you can, it is much better to reduce your image scale by a factor of 2x by reducing optical amplification rather than enabling 2x binning. This will give you the same 4x increase in the number of electrons per pixel, halving the shot noise like binning does, but if you use ROI to cover the same part of the sky your speed will be much faster. Even if you do not use ROI there is the advantage that the area of sky covered is increased 4x.
12-bit, reconstructed 16-bit & 8bit. It’s all very confusing!
The ASI174 and the ASI120 cameras both have an ADC (analog to digital convertor), sited after the amplifier, which has a 12-bit output. For most purposes only the most significant 8 bits of this 12-bit signal are used and this is entirely sufficient for imaging needs. There are occasions, however, when it is better to use the full 12 bits of the ADC signal. This section gives more details of these two modes of operation as well as other related information.
8-bit imaging – leading to 16-bits on stacking
For the needs of most digital video planetary imaging an 8-bit depth in the video file is sufficient to produce great finished images. The 8-bit signal is generated by taking the 12-bit ADC output and removing the 4 least significant bits. For example the 12-bit signal 1101 0001 0111 becomes 1101 0001.
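In code this truncation is just a 4-bit right shift:

```python
adc_12bit = 0b1101_0001_0111      # 12-bit ADC sample (decimal 3351)
eight_bit = adc_12bit >> 4        # drop the 4 least significant bits
print(f"{eight_bit:08b}")         # -> 11010001
```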
In modern planetary imaging many hundreds of 8-bit images are stacked and the output of the stacking stage of programs like AS!2 or Registax is actually a 16-bit stacked image (usually in Tiff format) which has 65,536 grey levels in total compared to just 256 for the original 8-bit signal. This means the stacked image has 255 extra grey levels between each pair of adjacent 8-bit grey levels found in the frames that went to create it. The 16-bit image can be stretched or wavelet processed without the image breaking up and showing non-smooth changes in greyscale similar to posterisation, which would happen if the stacked image was still 8-bit.
To enable this 8-bit to 16-bit conversion to happen and not to end up with quantisation errors where the grey levels do not change smoothly you need a certain amount of random noise in the individual frames that are stacked. If there was no noise all frames would be identical and the resulting 16-bit image would look just like an 8-bit image, breaking with digitisation/quantisation errors on wavelet processing.
For more on this subject, see this interesting article by Joseph M Zawodny, which explains how a noise level of ±0.5 of a digitisation unit in the individual frames is optimum for producing a stacked image without posterisation/digitisation effects. An excellent discussion initiated by experiments by Emil Kraaikamp (of AS!2) can also be read here, which shows that for 8-bit imaging with sufficient stacked images you need at least 0.5 grey levels of random image noise to avoid quantisation errors; for 12-bit imaging the noise can be 16x smaller if you want to avoid these quantisation errors.
When imaging high surface brightness objects like the Moon, Sun, Venus and possibly Mars, high photon counts can be achieved with relatively short shutter speeds and low gain. With high photon counts like this the shot noise can be quite low and for 8-bit imaging can actually be insufficient to give the random variation in the signal required to give a high quality 16-bit stacked image which does not show quantisation errors. In this case the grey levels of the stacked image when stretched in Registax will not change smoothly and the image will look posterised with quantisation errors (see an example of this in the Jupiter pair above). 12-bit imaging could be used in this case, where there are an extra 16 grey levels between each grey level of the 8-bit case (4096 grey levels versus 256). 12-bit mode can cope with noise levels 16x lower than 8-bit without giving quantisation errors in the stacked image. As an alternative to going to 12-bit you can of course reduce the exposure to increase the shot noise and gather more frames.
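The dithering effect of noise can be demonstrated numerically. This sketch uses assumed values (a true level of 100.3 grey levels and 20,000 stacked frames): each frame is quantised to whole 8-bit levels and the stack is then averaged.

```python
import random

TRUE_LEVEL = 100.3      # true brightness in 8-bit grey levels- sub-level detail
N_FRAMES = 20_000

def stack_mean(noise_sigma, seed=1):
    """Average of many frames, each quantised to whole 8-bit grey levels."""
    random.seed(seed)
    total = sum(round(TRUE_LEVEL + random.gauss(0, noise_sigma))
                for _ in range(N_FRAMES))
    return total / N_FRAMES

print(stack_mean(0.0))            # -> 100.0 : no noise, the 0.3 is lost forever
print(round(stack_mean(0.6), 1))  # -> 100.3 : ~0.5+ levels of noise dithers it out
```

With no noise every frame quantises identically and the sub-level detail is gone no matter how many frames are stacked; with roughly half a grey level of random noise the stack recovers it.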
Quantisation errors can also creep in when stacking too few frames to fully flesh out the 16-bit stacked image. When stacking only a few hundred frames or fewer, 12-bit may again be advantageous.
12-bit imaging has no advantage where there is already significantly more than 0.5 grey levels of random noise, usually shot noise (referenced against the 256 levels of an 8-bit image), and it can impact download speed as well as increasing storage requirements.
Conversion from 12-bit to pseudo 16-bit
In Firecapture there is no 12-bit option only an 8-bit (default) or 16-bit. This 16-bit signal is created from the 12-bit signal by adding 0000 to the end of the 12-bit signal so for example 1101 0001 0111 becomes 1101 0001 0111 0000. This conversion is done in Firecapture because 16-bit is more computer friendly than 12-bit, being a full 2 bytes long. Be aware that this is not a real 16-bit signal it is a 12-bit signal masquerading as a 16-bit as it only has 4096 grey levels not 65,536 as a true 16-bit signal would have (see bit depth table below).
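The padding is just a 4-bit left shift, ie a 16x scaling:

```python
adc_12bit = 0b1101_0001_0111        # 12-bit ADC sample
pseudo_16bit = adc_12bit << 4       # pad with four trailing zero bits
print(f"{pseudo_16bit:016b}")       # -> 1101000101110000
# Still only 4096 distinct levels- each one is simply 16x larger in value
```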
Deep Sky Imaging and 16-bit signals
A true 16-bit signal is preferred by those wishing to use imaging cameras for deep sky objects where the higher bit count (8-bit increasing to 16-bit on stacking) has not come from stacking many separate frames- as it does with planetary imaging. Traditionally DSO imagers use long exposures and multiple calibrations (dark frames, flat frames etc) to reduce noise and need the fine gradation in grey scale that 16-bit imaging gives to allow extreme stretching, especially at the bottom end where the faint detail is, without the images breaking down and showing quantisation effects.
DSO images often require a high dynamic range as detail needs to be revealed in the faint stuff without being overblown in the bright stuff (stars). Deep pixel wells from large pixels help here, together with low gain, as we found out previously.
Recent experimentation by Emil Kraaikamp and others has started to use planetary imaging camera techniques for DSO work, using short exposures and stacking multiple short exposures to create 16-bit stacked images from 8-bit frames, allowing stretching to bring out details. The new generation of low read noise CMOS planetary imaging cameras makes this method full of potential and the technique is appealing from several angles;
- no need for cooled cameras as short exposures do not suffer unduly from bright pixels which come from long exposures with warm cameras
- tracking requirements and demands for expensive, sophisticated and well aligned mounts are massively reduced
- potential for a much greater dynamic range as brighter stars are less burnt out in individual frames compared to long exposure shots
- lucky imaging methods start to become a possibility to improve resolution on high brightness objects
- less troubled by satellites, planes etc as bad frames can be disposed of
10-bit ADC mode
The ASI174 also has a 10-bit mode which can run at higher speed than the 12-bit mode (and the related 8-bit mode, which is the truncated 12-bit output). Although capable of higher speeds, this mode results in significantly higher read noise than the 12-bit mode and consequently is not recommended for planetary imaging.
Bit Depth Comparison Table
I have constructed the table below showing the translation from 12-bit to 8-bit and pseudo 16-bit, which you may find useful. The 12-bit output from the ADC is in the middle and this can be converted to pseudo 16-bit or 8-bit. You can see how the 12-bit signal is a much finer graduated version of the 8-bit signal, and the pseudo 16-bit is essentially the 12-bit signal with each grey level value 16x larger.
It is worth underlining the fact that although 12-bit signals give higher numbers than the 8-bit signal, the two modes cover the same range of incoming signal brightnesses and gains. The 256 grey levels of 8-bit are not the first or last 256 of the 4096 levels in the 12-bit scale.
A big thanks for comments and advice from Grant Blair, Chris Garry, Simon Kidd and Emil Kraaikamp in writing these pages