Author Topic: [ARTICLE] Noise vs. Pixels II - The Return  (Read 48093 times)

Leo Terra

Reply #30 on: 24 October 2004, 01:27:42
Hey, here's the article.
I had to search for it again :)
I don't know how much of it holds up; on the forum where I got the link I saw a lot of comments about it.
Sense & Sensors in Digital Photography
I haven't had time to read it in detail yet.
Tell me what you think of it later :) I'm even going to spend a little time on it tonight. :) I'd like to ask you, friends, to look for inconsistencies; I'll look for them too. For now, here is the article in English; once we have done a good review of the data it contains, I'll post a Portuguese version incorporating that review :).
Sense & Sensors in Digital Photography
by Charles Maurer

In another incarnation I was a commercial photographer. At the end of that life I sold all of my studio equipment and all of my cameras save one, a Horseman 985, a contraption with a black bellows that resembles the Speed Graphic press cameras you see in pre-war movies. It uses roll film and allows the front and back of the camera to be twisted in every direction when it's parked on a tripod. You can also hold it in your hands and pretend you're acting in "Front Page." Never have I found a camera so useful. Nowadays, however, digital sensors are pushing the optical limits of lenses and software has become more pliable than leather bellows, not just for adjusting colour but for optical manipulations as well. This year a modestly priced (as such things go) digital SLR supplanted my Horseman. I can no longer see owning a camera that uses film.

In this article I am going to examine the technology of digital cameras, but in an unconventional way. I am going to approach it from basic principles. This approach may seem abstract and theoretical at first, but it won't be for long. You will see that if you understand the scientific principles, you can ignore a lot of marketing hype and save significant sums of money.

Photocells -- Imagine a small windowpane with bits of a special metal embedded in the glass and a wire touching those bits. Photons of light bang against the glass. The impact unsettles electrons in the metal. They bang into electrons within the wire, which bump into electrons further down the wire, which bump into still more electrons, so that a wave of moving electrons passes along the wire - an electrical current. The more photons that bang into the pane, the more electricity flows.

This is a photocell, a sensor that is sensitive to the intensity of light. Now imagine millions of cells like this assembled into a checkerboard and shrunk to the size of a postage stamp. Put this stamp-sized collection of photocells inside a camera where the film usually goes. The lens projects an image onto it. Each cell receives a tiny portion of the image and converts that portion into an electrical charge proportionate to the amount of light forming that portion of the picture. Now we have a photosensor.

The complete matrix of charges on this photosensor forms an electrical equivalent of the complete image - but only of the intensity of the image. Since the eye interprets the intensity of light as brightness, brightness devoid of colour, this photosensor provides the information of a colourless photograph, of a black-and-white photograph. If we feed the output of the photosensor to the input of a printer, and if we let the printer spray ink on paper in inverse proportion to the voltage (lower voltage, more ink), then we will see a black-and-white photograph appear. The output of the photosensor can be connected directly to the printer through an amplifier, or it can be converted into digital numbers and the digital numbers can be sent to the printer. The first approach is analog, the second is digital. The greater the range of digital numbers, the finer the steps from black to white. If there are enough steps, the printout will look like a continuous-tone photograph.
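The analog-to-digital step described above is easy to make concrete. Here is a toy Python model (no camera's converter actually works this simply): an analog voltage between 0 and 1 is mapped onto one of 2^n digital levels, so more bits mean finer steps from black to white.

```python
def quantize(voltage, bits):
    """Map an analog voltage in [0.0, 1.0] to one of 2**bits digital levels."""
    levels = 2 ** bits - 1
    return round(voltage * levels)

# The same mid-grey voltage lands on a coarser or finer step
# depending on the bit depth of the converter.
v = 0.5004
print(quantize(v, 8))   # an 8-bit converter has 256 steps from black to white
print(quantize(v, 12))  # a 12-bit converter has 4,096 finer steps
```

With enough levels, the staircase from black to white becomes too fine for the eye to see, which is the article's condition for a printout that looks continuous-tone.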

To make a photosensor record colour, we need to make it sensitive to wavelengths of light as the eye is sensitive to them. We see long wavelengths weakly as reds, short wavelengths very weakly as blues, and medium wavelengths strongly as greens. The easiest way to make a black-and-white photosensor record colour is to put filters over the cells so that alternate cells respond to short wavelengths, medium ones and long ones. Since the eye is most sensitive to medium wavelengths, it is practical to use twice as many of these as the others: one blue, one red, two greens. Such a set of filtered cells - red, green, blue, green - forms the Bayer photosensor (named after its inventor) that is used in nearly every digital camera.

Now consider what happens when a spot of light is smaller than a group of four cells, when it is small enough to strike only a single cell. Assume the spot to be white light, which includes every wavelength. If the white spot falls on a blue-filtered cell, then the picture will show the spot to be blue. If the white spot falls on a red-filtered cell, the picture will show the spot to be red. If it falls on a green-filtered cell, the spot will look green. This can cause so many errors in the image that manufacturers try to prevent it from happening by blurring the image, by putting a diffusing filter in front of the sensor to smear small spots of light over more than one cell.

Note that in a sensor like this, four cells form the smallest unit that can capture full information about some part of a picture. That is, four cells form the basic element of a picture, the basic "picture element" or "pixel". Unfortunately, to make their products sound more impressive, manufacturers count cells as pixels. That's like saying a piano has 234 notes, not 88, because it is built with 234 strings. Since the sensors function differently at the level of the cell and the level of the pixel, it is important to ignore the advertising and to discriminate appropriately between pixel and cell. I shall do that in this article.

A simpler approach would be to design a sensor in which every cell is sensitive to every wavelength. Such a sensor was patented by Foveon, Inc., in 2002, and is currently in its second commercial generation. Foveon's sensor uses no coloured filters but instead embeds photo-sensitive materials within the silicon at three depths. The longer the wavelength of the light, the farther it penetrates the semi-transparent silicon and the deeper the photo-sensitive material it stimulates. With a Foveon sensor, every cell records a complete pixel with all wavelengths. (Note, however, that Foveon have taken to multiplying the number of pixels by three, to sound competitive in their ads.)

How many pixels do you need? The smallest detail usable in a print is defined by the finest lines that a person can see. At a close reading distance (about 10 inches, or 25 cm), somebody with perfect vision can resolve lines slightly finer than those on the 20/20 (6/6) line of the eye chart, lines of about 8 line-pairs per millimetre (l-p/mm), which is the unit of optical resolution.

However, those are black-and-white lines. No ordinary photograph contains black-and-white lines so thin because no camera can produce them on photographic (as distinct from lithographic) film. No lens can create such fine lines without beginning to blur the blacks and whites into grey. Dark-grey-and-light-grey lines need to be thicker than black-and-white lines to be seen. In the perception of fine lines, a halving or a doubling of thickness is usually the smallest difference of any practical significance, so this pronouncement of Schneider-Kreuznach sounds perfectly reasonable to me: "A picture can be regarded as impeccably sharp if, when viewed from a distance of 25 cm, it has a resolution of about 4 l-p/mm." On an 8" x 12" photo, this is 1,600 by 2,400 pixels, or 3.8 megapixels. (8" x 12" is about the size of A4 paper. It isn't quite a standard size of a photo but will prove more convenient for discussion than 8" x 10".)

In short, 4 million pixels carry all of the useful information that you can put into an 8" x 12" photograph. Finer detail than this will matter to technical aficionados making magnified comparisons, and it may matter for scientific or forensic tasks, but it will not matter for ordinary purposes. The same holds for larger prints because we don't normally view larger photographs from only 10 inches away. It holds even for the gigantic images in first-run movie theatres. The digital processing used routinely for editing and special effects generates movies with no more than 2,048 pixels of information from left to right, no matter how wide the screen. The vertical dimension differs among cinematic formats but is typically around 1,500 pixels.

This, of course, presents quite a paradox: a frame of a Cinemascope print obviously contains a lot more than 4 million pixels. Even an 8" x 12" print from a 300-dpi printer contains 2,400 pixels by 3,600 pixels, or 8.6 million pixels. Large prints need those additional pixels to prevent our seeing jagged edges on diagonal lines, because the eye will see discontinuities in lines that are finer than the lines themselves.

Since no photograph of any size can contain more than 3 to 4 million elements of information, even when made from film, any substantial enlargement needs to be composed primarily of pixels that do not exist in the original. These pixels need to be interpolated: interpolated through continuous optical integration (film), interpolated mechanically (high-resolution scanner), or interpolated logically by software (digital photography). This need for interpolation in enlargements makes interpolating algorithms fundamentally important to digital photography. For most enlargements, the quality of the interpolating algorithm matters more than the resolution of the sensor or the quality of the lens. We shall come back to this.

For the moment - indeed, forevermore - it is essential to keep straight the distinction between (1) the information that is contained within an image and (2) the presentation of this information. Both are often measured by pixels but they are orthogonal dimensions. The information within a picture can be described by a certain number of pixels. That information may be interpolated into any number of additional pixels but doing so adds nothing to the information, it merely presents the information in smaller pieces.

To illustrate this, here are some examples:

A good 8" x 12" photograph and the same photo run full-page in a tabloid newspaper both contain about 1 megapixel of information.

A slightly better photograph and the same photo run full-page in a glossy magazine and a broadsheet newspaper all contain about 1.9 megapixels of information.

A slightly better photograph still - the best possible - and the same photo spread over two pages in a glossy magazine both contain about 3.8 megapixels of information.

If you have an 8" x 10" photo printer, you can compare those levels of information by printing out a set of pictures (linked below, about 30 MB) that I took at approximately those resolutions, keeping everything else the same. (The test pictures were shot at 3.4, 1.5 and 0.86 megapixels: I used a Foveon sensor and, to generate the lower resolutions, used its built-in facility to average cells electronically in pairs or in groups of four.) I enlarged the pictures using the best interpolator I could find to 3,140 by 2,093 pixels.

<http://www.tidbits.com/resources/751/HighMedLowResolution.zip>

The photos are JPEG 2000 files, saved in GraphicConverter at 100 percent quality using QuickTime lossless compression. To prepare them I adjusted the levels, cleaned up some dirt in the sky, then enlarged them in PhotoZoom Pro using the default settings for "Photo - Regular." Those settings include a modest and appropriate amount of sharpening.

What you will see, if you print them, is surprisingly small differences from one level of resolution to the next. Each of these photos looks sharp on its own, and at arm's length they all look the same. You can see a difference only if you compare them up close. That, of course, is because the only information that's missing from the lower-resolution pictures is information that is close to the limit of the eye's acuity and thus is difficult to see.

Bayer vs. Foveon in Theory -- Cameras today fall into two categories, those with a Bayer sensor and those with a Foveon sensor; at this writing the latter category includes only two, a theoretical Polaroid x530 and a very real Sigma SD-10.

<http://www.pdcameras.com/usa/catalog.php?itemname=x530>
<http://www.foveon.com/SD10_info.html>

In a Bayer sensor, a single cell records a single colour, but a pixel in the print can be any colour. Carl Zeiss explain this: "Each pixel of the CCD has exactly one filter color patch in front of it. It can sense the intensity for this color only. But how can the two remaining color intensities be sensed at the very location of this pixel? They cannot. They have to be generated instead through interpolation (averaging) by monitoring the signals from the surrounding pixels which have filters of these other two colors in front of them."

Since the cells provide a lot of partial information, the interpolation can be accurate, but it can be inaccurate as well. Patterns of coloured light can interact with the checkerboard pattern of filters over the cells to generate grotesque moire patterns. To avoid these, Bayer sensors are covered with a filter that blurs every spot of light over more than one cell. The net result proves to be interpolated resolution that varies with colour and peaks with black-and-white at about 50 percent more line-pairs/millimetre than the intrinsic resolution of the sensor. This sounds like a lot but cannot be seen unless you look closely.
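The interpolation Zeiss describe can be sketched with a toy example: estimating the missing green value at a red or blue cell by averaging its green neighbours. This is only the simplest possible scheme, not any manufacturer's actual demosaicing algorithm, which is far more elaborate.

```python
def demosaic_green(raw, row, col):
    """Estimate the missing green value at a non-green cell of an RGGB
    mosaic by averaging its green neighbours (a toy nearest-neighbour
    scheme; real demosaicing algorithms are far more sophisticated)."""
    h, w = len(raw), len(raw[0])
    neighbours = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    vals = [raw[r][c] for r, c in neighbours if 0 <= r < h and 0 <= c < w]
    return sum(vals) / len(vals)

# A red cell at (1, 1) surrounded by green readings of 80, 90, 100, 110;
# its green value is guessed as their average.
raw = [
    [0,  80,   0],
    [90,  0, 110],
    [0, 100,   0],
]
print(demosaic_green(raw, 1, 1))
```

When the true scene varies smoothly, this guess is accurate; when fine patterns fall across the mosaic, it is exactly this averaging that can go wrong and produce moire.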

More problematic is the fact that this filter does not merely prevent moire patterns, it also blurs edges. With a Bayer sensor, every edge of every line is blurred. You can see the interpolated resolution and the blurring in the magnified tests in the picture linked below. There I have compared cameras with a Foveon and a Bayer sensor containing the same number of pixels - pixels, not cells. Both have 3.4 million pixels (although the Bayer has 13.8 million cells).

<http://www.tidbits.com/resources/751/Resolution.jpg>

People make a big deal about resolution because it sounds important and is easy to test, but aside from special cases like astronomical observation, fine resolution actually matters little. By definition, at the limits of resolution, we can only just make out detail. Anything that is barely visible will not obtrude itself upon our attention or be badly missed if it is not there. What we see easily is what matters to us, what determines our impression of sharpness. Our impression of sharpness is determined by the abruptness and contrast at the edges of lines that are broad enough to be easily made out. You can see this with the two tortoises in this picture linked below. The sharper tortoise has less resolution but its edges are more clearly defined.

<http://www.tidbits.com/resources/751/Sharpness.jpg>

The Bayer sensor resolves finer black-and-white lines but a Bayer sensor will not reproduce any line so sharply as the Foveon. As a result, when comparing two top-quality images, I would expect the Bayer's image to look slightly more impressive when large blow-ups are examined up close, but I would expect the Foveon's to look slightly clearer when held a little farther away. Moreover, when detail is too fine for the sensor to resolve, the Bayer looks ugly or blank but the Foveon interpolates pseudo-detail. This means that in some areas, large enlargements examined closely might actually look better with the Foveon. In sum, I would expect the 3.4 megapixel Foveon and what is marketed as a 13.8-megapixel Bayer to be in the same league. I would expect photographs from them to be different but comparable overall, if they are enlarged with an appropriate algorithm.

Bayer vs. Foveon in Practice -- "If they are enlarged with an appropriate algorithm..." - that statement is critical to a sensible comparison. Usually, if you magnify an object a little, it won't change its appearance much. If you simply interpolate according to some kind of running average, you can increase its size to a certain extent and it will still look reasonable. This is how most enlargements are made. It is the basis of the bicubic algorithm used in most photo editors, including Photoshop and, apparently, Sigma's PhotoPro. It is also the basis of most comparisons between Bayer and Foveon. However, a running average will widen transitions at the edges of lines, and it will destroy the Foveon's sharp edges, softening them into the edges of a Bayer. A better class of algorithm will stop averaging at lines. Any form of averaging, though, tends to distort small regularities (wavelets) that occur in similar forms at different scales. Best of all are algorithms that look for wavelets, too. The only Macintosh application I know of in that class is PhotoZoom Pro. PhotoZoom Pro has a limited set of features and some annoying bugs - version 1.095 for the Mac feels like a beta release - but it creates superb enlargements.

<http://www.trulyphotomagic.com/>

An appropriate comparison of the Bayer and Foveon sensors would see how much information these sensors capture overall. (How much spatial information, that is: comparing colour would be comparing amoebas, as I explained in "Colour & Computers" in TidBITS-749.) To do this, I tested an SD-10 against an SLR that was based on a larger Bayer sensor, a sensor 70 percent larger than the Foveon that contained 13.8 million cells. Kodak were most helpful in supplying this camera once they heard Doctors Without Borders (Médecins Sans Frontières) was to benefit (see the PayBITS block at the bottom of this article to make a donation if you've found this article helpful). Also, Sigma sent me a matched pair of 50-mm macro lenses to use with the cameras.

<http://db.tidbits.com/getbits.acgi?tbart=07840>

I copied an oil painting with a wide variety of colours and a lot of fine textural detail. With each camera I photographed a large chunk of the painting, cropped out a small section from the centre, blew up that section to the same size as the original using PhotoZoom Pro (the defaults for "Photo - Regular"), and compared that blow-up to a gold standard, a close-up that had not seen any enlargement, interpolation, or blurring filter in front of the sensor. Before blowing them up I balanced all three photos to be as similar as I could, then, to prevent unavoidable differences in colour from confounding the spatial information, I converted all three images to black-and-white. I did this in ImageJ. First I split each image into its three channels, then I equalized the contrast of each channel across the histogram, then I combined the channels back into a colour picture, converted the new colour picture to 8-bit, and equalized the contrast of the 8-bit file. (See the second link below for an explanation of contrast-equalization.) I chose a painting in which most of the coloured brush strokes were outlined with black brush strokes, so that adjacent colours would not merge after conversion into similar shades of grey. With my 314-dpi printer, the two enlargements are the equivalent of chunks from a 14" x 21" print.

<http://rsb.info.nih.gov/ij/>
<http://homepages.inf.ed.ac.uk/rbf/HIPR2/histeq.htm#1>

The difference between the photos from the Bayer and Foveon is very slight. The two pictures are indistinguishable unless you compare them closely. Fine, contrasty lines on the standard are finer on the Bayer, more contrasty on the Foveon. The one that looks more like the standard depends upon the distance from the eye and the lighting but the differences are trivial. The two images do contain slightly different information, but they contain comparable amounts overall.

On the other hand, for efficiency of storage and speed of processing, the Foveon wins hands down. This is how two identical pictures compared:

                Foveon       Bayer
RAW             7.8 MB       14.7 MB
8-bit TIFF      9.8 MB       38.7 MB

If you would like to print out my test pictures, you can download them. However, for the comparison to be meaningful, you must specify a number of dots per inch for the pictures that your printer can resolve in both directions. I know that an Olympus P-440 can resolve 314 dpi, with no more than occasional one-pixel errors in one colour's registration. I have not found any resolution that an Epson 9600 can handle cleanly in both directions, although I have not been able to test it exhaustively. Other printers I know nothing about. You will have to experiment with the test patterns in the Printer Sharpness Test file linked below. For this purpose, only the black-and-white stripes matter.

<http://www.tidbits.com/resources/748/PrinterSharpnessTest.zip>

Each picture in the 5.8 MB file below is 1512 pixels by approximately 2270. If a picture has been printed correctly, the width in inches will be 1512 divided by the number of dots per inch. Print them from Photoshop or GraphicConverter; Preview will scale them to fit the paper.

<http://www.tidbits.com/resources/751/Bayer_vs_Foveon.zip>

Remember that the question to ask is not which picture looks better or which picture shows more detail but which picture looks more like the gold standard overall. I suggest that you compare the pictures upside down. Remember, too, that these are small sections from big enlargements that you would normally view framed and hanging on a wall. Also, although the contrast is equalized overall, the original colours were not quite identical and the equalization of contrast amplified some tonal differences. If you perceive the Bayer or Foveon to be better in one or another area, make sure that in this area the tonality is similar. If the tonality is different, the difference there is probably an artifact. An example of this is the shadow beneath the tape on the left side.

I have not been able to test this but I suspect that the most important optical difference between Bayer and Foveon sensors may be how clearly they reveal deficiencies in lenses. Since the Foveon sensor is sharper, I would expect blur and colour fringing to show up more clearly on a Foveon sensor than a Bayer.

Megapixels, Meganonsense -- Megapixels sell cameras as horsepower sells cars and just as foolishly. To fit more cells in a sensor, the cells need to be smaller. It is possible to make cells smaller than a lens can resolve. Even if the lens can resolve the detail more finely, doubling the number of cells makes a difference that is only just noticeable in a direct comparison.

On the other hand, small pixels create problems. Electronic sensors pick up random fluctuations in light that we cannot see. These show up on enlargements like grain in film. Larger cells smooth out the fluctuations better than smaller cells. Also, larger cells can handle more light before they top out at their maximum voltage, so they can operate farther above the residual noise. For both reasons, images taken with larger cells are cleaner. Enlargements from my pocket-sized Minolta Xt begin to fall apart from too much noise, not from too few pixels.

In contrast, enlargements from my Sigma SD-10 have so little noise that they can be enormous. A 30" x 44" test print looked as though it came from my 2-1/4" x 3-1/4" Horseman. The Sigma has less resolution than the Horseman - it's probably less than can be extracted from scanning the finest 35-mm film - but its noise level can be reduced to something approaching 4" x 5" sheet film. Such a low level of noise leaves the detail that it contains, which is substantial, very clean. In perception, above a low threshold, the proportion of noise to signal matters far more to the brain than the absolute amount of signal. Indeed, if I look through a box of my old 11" x 14" enlargements, the only way I can distinguish the 35-mm photos from the 2-1/4 x 3-1/4" is to examine smooth tones for noise. I cannot tell them apart by looking at areas with detail.

In sum, with the range of sensors used in cameras today, there is no point to worrying about a few megapixels more or less. Shrinking cells to fit more of them in the sensor can lose more information than it gains. The size of the cells is likely to be more important than their number. For the same money, I would rather buy a larger sensor with fewer pixels than a smaller sensor with more pixels. If nothing else, the larger sensor is likely to be sharper because it will be less sensitive to movement of the camera. For a realistic comparison of sensors as they are marketed see this chart:

<http://www.tidbits.com/resources/751/SensorChart.png>

Tripod vs. Lens -- Most people believe that the quality of the lens is of primary importance in digital photography. If you have stayed with me so far, you may not be surprised to hear me calculate otherwise. With 35mm cameras, an old rule of thumb holds that the slowest shutter speed that a competent, sober photographer can use without a tripod and still stand a good chance of having the picture look sharp is 1 divided by the focal length of the lens: 1/50" for a 50-mm lens, 1/100" for a 100-mm lens, etc. At these settings there will always be some slight blur but it will usually be too little to be noticed. This blur will mask any difference in sharpness between lenses. To see differences in sharpness requires speeds several times faster.

With digital cameras that use 35-mm-sized sensors, the same rule of thumb holds, but most digital cameras use smaller sensors. With smaller sensors, the same amount of movement will blur more of the picture. If you work out the trigonometry, you'll find that you need shutter speeds roughly twice as fast for 4/3" sensors and four times faster for 2/3" and 1/1.8" sensors. (Digital sensors come in sizes like 4/3", 2/3" and 1/1.8". Those numbers are meaningless relics from the days of vacuum tubes; they are now just arbitrary numbers equivalent to dress sizes.) That means minimal speeds of 1/100" and 1/200" for a normal lens. Differences in sharpness among lenses would not be apparent until shutter speeds are several times higher again. Because of this, it strikes me that the weight of lenses matters more to image quality than the optics. The heavier a camera bag becomes, the more likely the tripod will be left at home.
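The rule of thumb and its crop-factor scaling can be put into a small Python helper. This assumes (my reading of the article's figures) that you apply the 1/focal-length rule to the 35-mm-equivalent focal length, i.e. multiply by the sensor's crop factor:

```python
def min_handheld_speed(focal_length_mm, crop_factor=1.0):
    """Slowest hand-held shutter speed, returned as the denominator x
    in 1/x second, by the 1/focal-length rule of thumb scaled by the
    sensor's crop factor."""
    return focal_length_mm * crop_factor

# A normal (50-mm-equivalent) lens: full-frame vs. a small 2/3" sensor,
# which the article says needs speeds roughly four times faster.
print(f"1/{min_handheld_speed(50):.0f} s on a 35-mm-sized sensor")
print(f"1/{min_handheld_speed(50, 4):.0f} s on a 2/3\" sensor")
```

These are minima for a picture that merely looks sharp; as the article notes, seeing differences between lenses requires speeds several times faster still.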

(Note that this does not mean that 35-mm-sized sensors are best. Other optical problems increase with the size of the sensor. As an overall compromise, the industry is beginning to adopt a new standard, the 4/3", or four-thirds, which is approximately one-half the diameter of 35-mm. This is not unreasonable.)

Frankly, I should be astonished to find any lens manufactured today that does not have sufficient contrast and resolution to produce an impressive image in the hands of a competent photographer. I know that close comparisons of photos shot on a tripod will show differences from one lens to another, and I know that some lenses have weaknesses, but very few people will decorate a living room with test pictures. In the real world, nobody is likely to notice any optical deficiency unless the problem is movement of the camera, bad focus, distortion or colour fringing. It is certainly true that distortion and colour fringing can be objectionable but, although enough money and experimentation might find some lenses that evince less of these problems than others, as a practical matter, especially with zoom lenses, they seem to be inescapable. Fortunately, these can usually be corrected or hidden by software.

Indeed, even a certain amount of blur can be removed with software. Let's say that half of the light that ought to fall on one pixel is spread over surrounding pixels. Knowing this, it is possible to move that much light back to the central pixel from the surrounding ones. That seems to be what Focus Magic does (see the discussion of Focus Magic in "Editing Photographs for the Perfectionist" in TidBITS-748).
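The idea of moving leaked light back can be illustrated with a toy 1-D sketch. To be clear, this is my own crude illustration of the principle, not Focus Magic's actual algorithm, and the assumed "leak" model (a fraction of each pixel's light spread equally to its two neighbours) is invented for the example:

```python
def reverse_blur(row, spread=0.5):
    """Toy 1-D illustration: if a fraction `spread` of each pixel's
    light leaked equally to its two neighbours, pull that light back
    by boosting the centre and subtracting the neighbours' share
    (a crude sharpening, not a real deconvolution)."""
    n = len(row)
    out = []
    for i in range(n):
        left = row[i - 1] if i > 0 else row[i]
        right = row[i + 1] if i < n - 1 else row[i]
        out.append((1 + spread) * row[i] - spread * (left + right) / 2)
    return out

# A soft edge becomes more abrupt after the correction.
print(reverse_blur([10, 10, 30, 50, 50]))
```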

<http://www.focusmagic.com/>
<http://db.tidbits.com/getbits.acgi?tbart=07832>

One More Myth -- Finally, I would like to end this article by debunking a common myth. I have often read that Bayer sensors work well because half of their cells are green and the wavelengths that induce green provide most of the information used by the eye for visual acuity. This made no sense to me but I am not an expert on the eye so I asked an expert - three experts in fact, scientists known internationally for their work in visual perception. I happened to be having dinner with them. It made no sense to them, either, although I took care to ask them before they had much wine. Later I pestered one of them about it so much that eventually she got out of bed (this was my wife Daphne) and threw an old textbook at me, Human Color Vision by Robert Boynton. In it I found this explanation:

"To investigate 'color,'" an experimenter puts a filter in front of a projector that is projecting an eye chart. "An observer, who formerly could read the 20/20 line, now finds that he or she can recognize only those letters corresponding to 20/60 acuity or worse. What can be legitimately concluded from this experiment? The answer is, nothing at all," because the filter reduced the amount of light. "A control experiment is needed, where the same reduction in luminance is achieved using a neutral filter.... When such controls are used, it is typically found that varying spectral distribution has remarkably little effect upon visual acuity."

In short, each cell in a Bayer sensor provides similar information about resolution. It is true that green light will provide a Bayer sensor with more information than red and blue light but that is only because the sensor has more green cells.

If you want to shop for a digital camera, this article will help you make the most important decision, what kind and size of sensor to buy, with how many pixels. Once you have decided that, a host of smaller decisions await you. My next article will walk you through these. It is also going to incorporate a review of the Sigma SD-10 and will appear shortly after one more lens arrives from Japan.

PayBITS: If Charles's explanation of resolution and debunking of
the megapixel myth were useful, please support Doctors Without
Borders: <http://www.doctorswithoutborders-usa.org/donate/>
Read more about PayBITS: <http://www.tidbits.com/paybits/>

 
« Last edited: 24 October 2004, 01:30:15 by mahain »

Ivan de Almeida

Reply #31 on: 24 October 2004, 12:39:25
Leonardo:

I have not yet read your new post, but your previous one already carried a weighty argument. One of the things I most appreciate is sharp argumentation that goes straight to the point under discussion, and your texture argument is unassailable. Clearly, if the noise is distributed across the image and the texture is too, the noise will affect the texture's definition. There is nothing to dispute.

Even so, I stand by my view, namely that a digital image involves several factors, and that weakening one can be compensated by strengthening another; naturally, as you demonstrated so well, that changes what the image can be used for. This is probably what happens with the much-debated SuperCCD sensors, in which certain aspects of definition are improved and others worsened, leading to the controversy we always see about their performance.

It seems to me there is a word that needs better definition, not in the sense of finding a single meaning for it, but of delimiting its field of meaning and the differences among the meanings that field includes, and that word is RESOLUTION.

Right away in this debate we find at least four meanings used interchangeably:

1) Resolution as the sensor's pixel count
2) Resolution as the count of usable pixels after deducting the noisy ones (which in turn requires yet another definition of what a noisy pixel is, since no pixel carries a perfectly clean signal)
3) Resolution as the ability to separate details, in the sense of a resolution chart.
4) Resolution as the ability to transcribe patterns/textures/surface relief.

Item 3 is linear; item 4 is areal. Item 3 benefits from item 1. Item 4 benefits from item 2.
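The linear-vs-areal distinction can be put in numbers. A small sketch (the function name is invented for the example): chart-style linear resolution scales with the square root of the pixel count, so doubling the megapixels buys only about 41% more linear resolution.

```python
import math

# Linear resolution (sense 3: line pairs on a chart) grows only with the
# square root of the pixel count (sense 1), while areal detail grows
# in proportion to the pixel count itself.
def linear_gain(old_mp, new_mp):
    """Factor by which chart-style linear resolution improves."""
    return math.sqrt(new_mp / old_mp)

# Doubling the megapixels gives only ~41% more linear resolution:
print(round(linear_gain(4, 8), 2))   # 1.41
# Quadrupling them is needed to double it:
print(round(linear_gain(4, 16), 2))  # 2.0
```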

Deciding which of these items is the most important is not really the point; what matters, it seems to me, is deciding which weighting of these factors will produce the best overall result.

I will read your new post calmly and comment afterwards, especially if I find in it some hook for an argument (laughs).

Best regards,
Ivan


Ivan de Almeida

Reply #32 on: 24 October 2004, 15:40:57
I have read the article and will read it again (it is vast and extremely interesting), but regarding its first half, where it argues that 4 MP are what is needed to define an image in perceptual terms, keep in mind that it is talking about 4 MP Foveon, not 4 MP Bayer. It makes this more or less clear when it says that 13 MP Bayer is equivalent to 3.5 MP Foveon.

The second thing to discuss is the relation between sharpness and shutter speed. The whole line of reasoning seems sound, but it applies specifically to DSLRs, not to the other cameras, whether you call them compacts, prosumers or point-and-shoots (or however much they differ). In a DSLR the focal length is identical to that of an SLR, the reduction factor being merely a crop of the lens's image caused by the size of the sensor, while in the others the focal length drops dramatically, and with it the trigonometry presented by the author changes. So much so that in the latter the problem is exactly the opposite: an excess of sharpness and of depth of field. There is much confusion here, because focal length gets mixed up with angle of capture. In DSLRs, because the smaller sensor sits in the film's position, it is the angle of view that changes, not the focal length.
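That trigonometry can be made concrete. A hedged sketch (rectilinear-lens approximation; the sensor widths are nominal examples): the angle of view follows from sensor width and focal length, so shrinking the sensor narrows the angle while the focal length stays put.

```python
import math

# On a DSLR the focal length is unchanged; the smaller sensor only crops
# the image circle, which narrows the angle of view.
def angle_of_view_deg(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view of a rectilinear lens, in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# The same 50 mm lens on full frame (36 mm wide) vs an APS-C crop (~22.5 mm):
print(round(angle_of_view_deg(36.0, 50.0), 1))  # 39.6
print(round(angle_of_view_deg(22.5, 50.0), 1))  # 25.4
```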

A third question concerns something purely cultural, not technical. The 4 MP figure arrived at (or assumed) by the author, 4 MP Foveon, as defining a photograph, including the assumption that it suffices for any enlargement since the viewing distance would change accordingly, takes as a fixed factor, that is, "paribus", one particular way of looking at photographs. Yet anyone who studies art history knows that the way a representation is observed is historical, not biological. If digital photos are viewed according to the photographic tradition, the author is probably right, but nothing prevents generating an image of enormous size and excellent definition and proposing to the observer an active, immersive viewing in which, instead of contemplating the whole frame, as the author assumes to be the norm, this active viewing turns to the details, even if that implies an immersion in the photo that makes apprehending the whole impossible. Put better: there is a way of observing the whole and a way of scrutinizing details, and although the photographic tradition has so far leaned more on the former, partly because film grain rules out larger enlargements, there is demand for the scrutiny of details, and my debate with Ricardo shows exactly that, with him saying he "wanted to wander through the photo at 100%". He is not wrong. It is simply a different demand from the classical one, and if that demand is feasible, the proposed 4 MP paradigm ceases to hold. Nothing stops a photographer from exploiting the immersiveness gained through increased resolution and making a detail the "punctum" of the photograph, compelling an immersive scrutiny.

All these questions are open, and only time can answer them, since they concern not just technical aspects but how photographic language will evolve. The language of the significant whole is a language that grew out of 35 mm film, which, having less capacity to capture detail than medium or large format, necessarily emphasizes the ensemble. That language has dominated the last 70 years of photography, but it is not the only one possible. Every change of format generates a new language, and we are living through just such a fascinating moment, which gives us the chance to take part in that generation.



 


Ivan de Almeida

Reply #33 on: 24 October 2004, 19:47:19
One more observation on the fascinating article transcribed above.

The author's assumption that each "true" pixel of a Bayer-matrix sensor corresponds to 4 physical pixels (2 green, one blue and one red) is not entirely true, for two reasons:

The first is that although four pixels are needed to define a pixel, this definition is not made by reducing, say, 8 physical MP to 2 effective MP, because it is carried out at every physical pixel, and each one has a different set of four neighbours. This means that as many colour interpolations are performed as there are physical pixels, and each of them takes one physical pixel as its position and three neighbours as weighted information; on moving to the next pixel another interpolation is made, and only some of its pixels will have taken part in the previous one. The proposed reduction would only occur if the number of interpolations decreased, although, of course, the 4-pixel unit does define a possible level of detail (or better, a space of uncertainty). What counts is the number of interpolations performed, and it is that number of interpolations that equals the number of equivalent pixels. It comes to more or less the same thing the article says, but conceptually it is different. The article implies it is merely a division by four.
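The per-photosite interpolation can be sketched as follows. This is a minimal neighbourhood-averaging demosaic, not any real camera's algorithm, and the helper names are invented for the example; it simply shows that the output carries one interpolated RGB triple per physical photosite rather than a 4:1 reduction.

```python
import numpy as np

def box3_sum(a):
    """Sum over each pixel's 3x3 neighbourhood, zero-padded at the edges."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3))

def demosaic_rggb(mosaic):
    """mosaic: 2-D array sampled through an RGGB filter; returns HxWx3 RGB."""
    h, w = mosaic.shape
    ys, xs = np.mgrid[0:h, 0:w]
    masks = [
        (ys % 2 == 0) & (xs % 2 == 0),  # red sites
        (ys % 2) != (xs % 2),           # green sites
        (ys % 2 == 1) & (xs % 2 == 1),  # blue sites
    ]
    rgb = np.zeros((h, w, 3))
    for c, mask in enumerate(masks):
        samples = np.where(mask, mosaic, 0.0)
        # each channel is estimated at EVERY site from nearby samples
        # of that colour, so one interpolation is done per photosite
        rgb[:, :, c] = box3_sum(samples) / np.maximum(
            box3_sum(mask.astype(float)), 1e-9)
    return rgb

mosaic = np.arange(36.0).reshape(6, 6)
rgb = demosaic_rggb(mosaic)
print(rgb.shape)  # (6, 6, 3): one interpolation per photosite, no 4:1 reduction
```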

The second point is that the Bayer interpolation is responsible for generating the coloured pixels, but not necessarily the luminance, which comes from the green pixels. A large part of the detail is obtained through the luminance, as anyone who has edited an image in LAB mode knows. Panasonic, for example, implemented a routine that recovers luminance signals from all the pixels rather than only the green ones, and this gives it an advantage in image detail. Taking that into account, one should speak not of dividing by four but of dividing by two.
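As an aside on why luminance leans on green: with the standard ITU-R BT.601 luma weights (one common convention; actual camera pipelines may weight differently), green alone accounts for nearly 60% of the luminance signal. A minimal sketch:

```python
# BT.601 luma weighting: green dominates the luminance signal.
def luma_601(r, g, b):
    """Luminance (luma) of an RGB triple under BT.601 weighting."""
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luma_601(0, 1, 0))            # 0.587 -> green alone is ~59% of white's luma
print(round(luma_601(1, 0, 1), 3))  # 0.413 -> red and blue together carry the rest
```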

This luminance question is still not clear to me, since nowhere have I managed to read anything of real depth about the Venus Engine that equips the Panasonic cameras, nor about how luminance is normally handled, but there seems to be fire beneath the smoke, because the Leica Digilux 2 has a distinctive black-and-white mode, with performance said to be superior to the black-and-white mode of ordinary cameras (channel desaturation), probably built on this technology. Looking at Panasonic photos shows a great deal of fine detail in grass, foliage and so on, critical parts of the image.

Not by accident, the Sigma SD-10 is compared not with the 11 MP Bayer-sensor cameras but with the 6 MP DSLRs (which have 3 MP of green), and its ability to define an image turns out to be quite similar to theirs, at least judging by what dpreview shows.

Ivan



 


Leo Terra

Reply #34 on: 24 October 2004, 20:56:41
As I said, I have not yet had time to read this article. A 4 MP Foveon resolution would be roughly equivalent to 7 MP Bayer; counting the interpolations, the Foveon's quality factor is about 75% more performance than a Bayer CCD of the same resolution. But I will read the whole article later; I have been rather short of time and it is fairly long.
As soon as I have read it properly I will also say what I think. :)

We could think of a way to weigh both characteristics effectively.
« Last modified: 31 October 2004, 16:28:12 by mahain »


Rodrigo

  • Trade Count: (3)
  • Getting Acquainted
  • *
  • Posts: 182
  • Sex: Male
Reply #35 on: 27 October 2004, 10:51:25
Following on the subject, I read an interesting article about the megapixel race.

The address:

Mais e mais megapixels

It cites precisely the G3 vs G5 case, with which I fully agree... as the owner of a G3 (sadly not for much longer), I consider it better than the G5, which has only 1 more megapixel, and more noise to go with it.

Regards,

- Rodrigo -
« Last modified: 27 October 2004, 13:42:53 by Rodrigo »
- Rodrigo -


Leo Terra

Reply #36 on: 27 October 2004, 17:43:46
I liked the article. It is very interesting to note the Olympus C-2100UZ, with 2 MP and a 1/2" sensor. I owned one of those cameras and it had incredible optics, as well as a sensor that was very good in terms of noise; to give you an idea, I prefer the quality of the C-2100 to that of the S602. But Olympus did not continue that line (which was so good), choosing instead to continue the C-7XX series, which, although very good by today's standards, was well below the C-2100 in quality. I imagine that if Olympus had continued that series, the history of the ultrazooms would be quite different today.
« Last modified: 27 October 2004, 17:49:00 by mahain »


Daniel Magalhães

  • Trade Count: (0)
  • Getting Acquainted
  • *
  • Posts: 53
    • http://
Reply #37 on: 01 November 2004, 13:28:51
Dear all,

so that I can grasp this better, I would like you to tell me where the sensor of the Canon 20D stands. I am not sure which of its specifications I should analyse in order to place it in this Noise x Pixels relation.

Thank you.

 
Daniel Magalhães
             engdom@gmail.com


bruno_kiau

  • Trade Count: (0)
  • Getting Acquainted
  • *
  • Posts: 28
Reply #38 on: 01 November 2004, 13:52:46
How does the noise-reduction system that Sony is using in its latest models work?

Where does the W1 fit in the table?
Effective pixels: 5.0 million
Sensor photo detectors: 5.2 million
CCD Size (inches): 1/1.8"  
« Last modified: 01 November 2004, 13:54:47 by bruno_kiau »
Bruno Araujo


Leo Terra

Reply #39 on: 02 November 2004, 03:57:33
Well, Daniel, the 20D sits close to the optimum point, so its real resolution is genuinely higher; however, Canon is using a very strong noise-reduction system in it, which spoils the image for professional use, so with the 20D shoot preferably in RAW, because in JPG the loss is very large.

Bruno, the W1 is above the optimum point, so it has lower real resolution than, for example, a Canon A80, which has a 4 MP 1/1.8" sensor (close to the ideal).

A noise-reduction system is nothing more than software inside the camera that filters out noise. The problem is that along with the noise it takes away the textures, and therefore the real resolution. As a rule only Minolta and Fuji treat noise less aggressively, making it possible to use JPG and TIFF directly in more sophisticated workflows; with the other brands it is highly advisable to shoot in RAW, given the aggressive processing they implement, especially in cameras near, or beyond, the point of maximum real resolution (where noise becomes very pronounced).
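The trade-off described here, that filtering out noise also filters out texture, can be sketched with a toy 1-D example (invented helpers, not any camera's actual routine): a smoothing filter attenuates fine texture and noise alike, because statistically it cannot tell them apart.

```python
import random

random.seed(0)

def box_blur(signal, radius=1):
    """1-D box blur, clamped at the edges."""
    n = len(signal)
    return [sum(signal[max(0, i - radius):min(n, i + radius + 1)])
            / (min(n, i + radius + 1) - max(0, i - radius)) for i in range(n)]

def stdev(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

texture = [1.0 if i % 2 else -1.0 for i in range(200)]  # fine alternating texture
noisy = [t + random.gauss(0, 0.3) for t in texture]     # texture + sensor noise

# The blur shrinks the noise, but it shrinks the texture just as readily:
print(stdev(box_blur(noisy)) < stdev(noisy))      # True: overall contrast reduced
print(stdev(box_blur(texture)) < stdev(texture))  # True: pure texture is lost too
```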
« Last modified: 04 December 2004, 22:09:17 by Leo Terra »


Matheus

  • Trade Count: (0)
  • Contributor
  • ****
  • Posts: 2,381
  • Sex: Male
    • http://www.mundofotografico.com.br
Reply #40 on: 03 December 2004, 08:32:47
...


Marcelo.S.

  • Trade Count: (2)
  • Contributor
  • ****
  • Posts: 2,279
  • Sex: Male
    • http://www.flickr.com/photos/msafioti/sets/72157618134645270/detail/
Reply #41 on: 10 December 2004, 13:28:50
A question:
In the case of a Sony V3, for example, with 7.2 MP and a 1/1.8" sensor:

Is it better for me to shoot at 4 MP resolution? Would there be less noise?

Thanks for the help!
Marcelo
Deutschland

Flickr


Leo Terra

Reply #42 on: 10 December 2004, 17:53:48
In that case it is better not to buy it.
If you do buy it, always shoot at maximum resolution so as not to increase the losses even further.
The ideal is always to buy the camera that is as close to the optimum point as possible.
In fact, what this article says is that the V1 has higher real resolution than the V3, nothing more :)

It does not matter whether you reduce the resolution of the V3's photo, because the losses will already have occurred at capture, on the sensor; when you shoot at a lower resolution it will still suffer the loss from noise (since its photodiodes remain small), and on top of that you will throw good pixels away in the downsizing, worsening the camera's performance even further. :)
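One way to see why the loss is locked in at capture (a sketch under the standard photon shot-noise model, not a claim about any specific camera): photon arrivals are Poisson, so a photosite collecting N photons has a signal-to-noise ratio of sqrt(N). A smaller photodiode collects fewer photons and is therefore noisier, regardless of the resolution the file is later saved at.

```python
import math

# Photon shot noise is Poisson: for N collected photons the noise is
# sqrt(N), so SNR = N / sqrt(N) = sqrt(N). Small photodiode -> small N
# -> low SNR, fixed at the moment of capture.
def shot_noise_snr(photons):
    """Signal-to-noise ratio of a Poisson photon count N: N / sqrt(N)."""
    return photons / math.sqrt(photons)

# Four times the photodiode area -> four times the photons -> twice the SNR:
print(shot_noise_snr(40000) / shot_noise_snr(10000))  # 2.0
```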


André Duarte

  • Trade Count: (0)
  • Getting Acquainted
  • *
  • Posts: 88
    • http://
Reply #43 on: 16 December 2004, 13:05:29
I have a Canon EOS 300D; the manual says the sensor format is 3:2. What size would it fall under in your table? And what can you tell me about the pros and cons of this camera's sensor?


wilsons

  • Trade Count: (0)
  • Getting Acquainted
  • *
  • Posts: 42
Reply #44 on: 16 December 2004, 14:22:37
Leo, in other posts you "massacre" the Finepix S7000... (laughs) But I have just bought one on the majority's advice, better than the S5000 and the S5100, and I really liked its 1 cm Macro mode, the build quality is impressive, and so on. In the opening post, in the Noise x Pixels table, you note that the S7000 has the ratio 1/1.7 = 3.39 ideal; does that mean that at its 6.3 MP resolution the S7000 produces an image with a greater amount of noise? Do you have any experience with it to tell us about? Many thanks!
Finepix S7000, Super CCD HR 6Mp, Zoom 6x, Macro 1cm, Hot Shoe + CF 512

------------------------------------------[]O]-----------------------------------------
See my photos:
http://www.usefilm.com/photographer/74431.html
http://www.superzoom.com.br/will