Silk Road forums
Discussion => Newbie discussion => Topic started by: forrotser on May 17, 2013, 04:09 am
-
guys,
this is a duplicate from the Onion Forum. But my first question is: how many posts does it take here to see the 'new topic' button appear in the regular forums? thanks.
On to the actual question: per the title, I'm trying to find out where to start in order to understand where a camera lens captures light and where, *exactly*, that light gets interpreted as color in the image. I know color filters are involved in producing the RGB primaries, and the sensor is too, but beyond generalities like those I'm not sure where to begin. The end goal is to take a capture that contains more than one color (say, a person's shirt) and have the image come out of the camera as just one big blob of a single color. So in essence, the image is being deliberately distorted.
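To make that end goal concrete: once the sensor data has been demosaiced, an image is just an H×W×3 array of RGB values, and the "one big blob" effect amounts to replacing every pixel with the region's average color. A rough numpy sketch of the effect I'm after (the image here is synthetic, just standing in for a real capture):

```python
import numpy as np

# A toy "captured frame": a 4x4 RGB image with two distinct colors,
# standing in for a real sensor readout after demosaicing.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:, :2] = [200, 30, 30]   # left half: reddish part of the "shirt"
frame[:, 2:] = [30, 30, 200]   # right half: bluish part

def blobify(img):
    """Collapse every pixel to the image's mean color."""
    mean_color = img.reshape(-1, 3).mean(axis=0).astype(np.uint8)
    out = np.empty_like(img)
    out[:] = mean_color  # broadcast the single color over all pixels
    return out

blob = blobify(frame)
# the two-color frame is now one uniform blob of the averaged color
print(blob[0, 0])  # -> [115  30 115]
```

Doing this in software after capture is easy; the open question is where (filters, sensor, or the camera's processing pipeline) you could intervene so the camera itself produces this.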
I'm just not sure how you would go about this without directly manipulating the filters or the sensor inside the camera. The real question is: where do you guys think I should start? Above-ground resources on the web are plentiful, but most of them say pretty much the same thing and don't get into the level of detail I'm looking for here.
thanks for tolerating a cross-post. There are only 2 of these.