Ji,
Sorry, but the reason the black dog looks gray has nothing to do with the amount of light; it is simply a result of the reflected-light meter that resides in all modern cameras [and the resultant exposure]. You can achieve a 'correct' exposure at any light level with the right technique and equipment. That doesn't mean you can shoot pin-sharp action images in the dead of night - that isn't practical - but the theory still stands.
The reflected-light meter is designed to create an average exposure [of a 'hypothetical' average scene] and render this as a mid-tone 18% gray.
If you aim your camera at a quality gray card and shoot, you should get an accurate gray image. Aim your camera at a white wall and, bingo, you still get a gray wall - the camera assumes it is an 'average' scene and underexposes the image to produce an 18% gray. Shoot a black wall and, you guessed it, yep, another gray wall - this time the camera has overexposed the image to change the black wall [or dog] to an average 18% gray.
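If it helps to see the numbers, here is a rough sketch [in Python, with illustrative reflectance figures I have assumed, not measured values] of how the mid-gray assumption misreads non-average scenes:

```python
import math

MID_GRAY = 0.18  # the meter's assumed average scene reflectance

def metering_error_stops(actual_reflectance):
    """Stops of exposure error when the meter renders this subject
    as 18% gray. Positive = overexposed, negative = underexposed."""
    return math.log2(MID_GRAY / actual_reflectance)

# Assumed reflectances for illustration only:
print(round(metering_error_stops(0.90), 1))   # white wall: -2.3 stops (dragged down to gray)
print(round(metering_error_stops(0.18), 1))   # gray card: 0.0 stops (spot on)
print(round(metering_error_stops(0.045), 1))  # black dog: 2.0 stops (pushed up to gray)
```

Those error figures are exactly what the exposure compensation dial is there to cancel out.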
This is why any quality camera has an exposure compensation feature to allow the user to override the meter when shooting non-average scenes.
That is also why the well-informed photographer will use a handheld 'incident' light meter [which reads the light falling on the subject rather than the light reflected from it], and/or their experience, to override the limitations of the in-camera reflective meter.
Your golden premise that 'the more light the less colour… etc' simply isn't correct. Shoot an image at high noon on a sunny day and you get great colour saturation, but you will also exceed the contrast range of the chip [or film]. Shoot on a cloudy day with diffused sunlight and you get a flat, low-contrast image. Neither is wrong, but a well-exposed low-contrast image can be post-processed much more easily than an image with lots of deep, underexposed shadow areas that contain minimal data or detail.
It is much easier to add contrast and a bit of midtone saturation [as all low/mid price digital chips lack midtone saturation] to make a low-contrast capture 'POP' than it is to try to add detail into underexposed shadow areas. That is why we use reflectors, fill-in flash and multiple-exposure HDR techniques in high-contrast situations - to add detail [and therefore reduce the contrast] in the black or dense shadow areas. This is also the reason that we shoot in studios [where applicable]: to control the quality, intensity and contrast of the light used in the shoot.
Another point to realise is that ALL low/mid price digital cameras use a chip with a 'linear array'. The average quality mid-price chip can record an image contrast range of approx 6 stops - at noon on a sunny day the contrast range can easily exceed 10 stops. So you have to make a decision about which tones to clip. It all depends on the scene in front of the camera [this is where the operator comes in, as the camera has no idea what it is looking at no matter how sophisticated the metering algorithms are] as to whether you clip the highlights or shadows - but with no other intervention, 4 stops of data will have to be lost. So what does the linear array have to do with this?
As I said, the average chip can record a scene with an approx 6-stop range. If your image contrast range is 6 stops or less then the chip can record the scene without clipping. If it is beyond this 6-stop range then you have to decide [by using the exposure compensation] which tones to clip.
A polar bear in a snow storm - you would probably clip the shadows [because they don't represent many tones in the scene]. A coal miner in the pit - then you would probably expose for the shadows and sacrifice the highlights. The term 'linear array' refers to the principle that in this 6-stop range, not all stops are created equal. In an average chip with 4096 levels, the first stop of the chip [at the highlight end] will have 2048 levels and the next stop will have 1024 levels, all the way down to the last stop [the blacks or dense shadows] where it will only have 64 levels.
This is why it is vital to correctly expose the highlights with a digital camera. If you are one stop underexposed you have thrown away 2048 levels of tone, or half the data the chip is capable of capturing - that is why the exposure compensation feature is your best friend [or worst enemy if you choose not to use it].
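A quick sketch [Python again, assuming a 12-bit chip and the 6-stop range discussed above] of how the levels halve with each stop down from the highlights:

```python
BIT_DEPTH = 12  # assumed 12-bit linear capture
STOPS = 6       # assumed usable contrast range

total_levels = 2 ** BIT_DEPTH  # 4096

# Each stop down from the highlight end gets half the levels of the stop above it.
levels_per_stop = [total_levels >> (i + 1) for i in range(STOPS)]
print(levels_per_stop)  # [2048, 1024, 512, 256, 128, 64]

# Underexpose by one stop and the brightest stop [2048 levels] sits unused,
# so the scene is squeezed into the remaining levels:
print(sum(levels_per_stop[1:]))  # 1984 - less than half of the 4032 on offer
```

That bottom stop with only 64 levels is exactly where the shadow noise and posterisation show up in post-processing.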
Try to post-process an underexposed image and you will soon see the tell-tale traces of 'digital noise' in the shadow areas, which have the lowest ability to capture variations in tonality.
How you use this information depends on what kind of photography you are doing. If you are happy to point-and-shoot and just make a record of the moment, then do just that. If you wish to manipulate, or further process, the image capture then you need to embrace these fundamental principles to ensure that you capture all the possible data to allow for effective post-processing.
If you are aiming to go down this path then I know you won't have your camera set to process the images as JPEG files; instead you will save them as RAW images. But that is for another time…
Also, as I [am glad to] see that TonyT is back on OZVMX, I look forward to his insights on this subject.
VMX42