3DO specs
Moderators: 3DO Experience, Devin, Bas, 3DOKid
- Virtual Audio QSound
- 3DO ZERO USER
- Posts: 61
- Joined: Tue Mar 15, 2011 10:01 pm
- Location: U.S.A.
The NeoGeo was able to display 4096 colors at once, if I recall
correctly - and the Saturn...? Hm, I don't know.
But is there really a difference between 16-bit color and 24-bit?
I mean, can one really notice it?
By the way: the 3DO's native resolution is 320x240 pixels.
That means 76,800 pixels on the screen.
So how could more than 76,800 colors at the same time be
possible, given that one pixel can have one color?
- 3DO Experience
- 3DO ZONE ADMIN
- Posts: 3686
- Joined: Sun Jun 24, 2007 8:47 am
- Location: U.S.A.
Jones wrote:But, is there really a difference between 16bit color and 24bit?
I mean, can one really notice it?
Are you serious? OK, I know most people shouldn't be able to tell the difference, but does anyone who would actually care really believe that? It's kind of like the difference between 24fps and 30fps.
Anyway...
3DO can produce 16,777,216 colors (all of them simultaneously)
NEO-GEO can produce 65,536 colors (4,096 simultaneously)
Saturn can produce 16,777,216 colors (all of them simultaneously)
Jones wrote:By the way: the 3DOs native resolution is 320x240 pixel.
That means 76.800 pixels on the screen.
So how could more than 76.800 colors at the same time be
possible, given that one pixel can have one color?
It can produce an image up to 640x480.
That means 307,200 pixels on the screen.
Your question is a good one for anyone who's given it a thought. The answer is actually quite simple: just because you don't see something doesn't mean it's not rendered. That's why even a console like the Dreamcast gives a picture with more colors in a single frame than its predecessor the Saturn, even though both can produce 16.7 million colors simultaneously. The more pixels you render, the more of those colors you can actually see.
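The point above comes down to simple arithmetic: the number of distinct colors visible in one frame is capped by the pixel count, however large the palette is. A minimal sketch (the function name is made up for illustration):

```python
# The number of distinct colors visible in one frame can never exceed
# the number of pixels, regardless of palette size.
def max_visible_colors(width, height, palette_size):
    pixels = width * height
    return min(pixels, palette_size)

# 3DO at its native 320x240: 76,800 pixels cap the on-screen colors,
# even though the 24-bit palette spans 16,777,216 values.
print(max_visible_colors(320, 240, 2**24))  # 76800
# At 640x480 the cap rises to 307,200.
print(max_visible_colors(640, 480, 2**24))  # 307200
```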
I'm coming down off a migraine so I'm sorry if some of this doesn't make sense.
"Wait. You don't have a bag of charcoal in your gaming room???"
- Martin III
- 3DO ZERO USER
- Posts: 1005
- Joined: Fri Feb 11, 2011 12:32 am
- Location: United States of America
oldskool wrote:How many polygons can the 3DO display per second compared to the Saturn? It seems to me that the 3DO has smoother graphics and less jaggies.
From what I'm told, the 3DO could display 50,000 polygons per second, while the Saturn could display a whopping 200,000 texture-mapped polygons or 500,000 flat-shaded polygons per second.
- Austin
- Master Poster & Pricing Expert
- Posts: 1839
- Joined: Sun Dec 20, 2009 11:30 am
- Location: Fairfax, VA
- Contact:
oldskool wrote:How many polygons can the 3DO display per second compared to the Saturn? It seems to me that the 3DO has smoother graphics and less jaggies.
From what I can tell, polygon-based 3DO titles tend to look uglier around the edges more often than Saturn titles do. There are exceptions, of course (for instance, Bladeforce looks pretty sharp around the edges). The texture quality between games on both systems seems pretty close, though.
- ewhac
- 3DO ZERO USER
- Posts: 48
- Joined: Mon Aug 16, 2010 4:31 am
- Location: San Francisco Peninsula
- Contact:
The framebuffer depth was 16 bits per pixel, arranged in an XRGB1555 layout. However, each 5-bit color component indexed into a table (CLUT -- Color LookUp Table) which yielded an 8-bit output. That's where the 16M colors figure came from. The CLUT could be reloaded at any time, which gave you a very inexpensive fade to black. You could also use the CLUT for very simplistic gamma correction.
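That CLUT path can be sketched in Python. This is a hypothetical illustration only: the linear-ramp table contents and function names are made up, not actual 3DO tables.

```python
# Sketch of the XRGB1555 framebuffer format with per-component CLUTs.

def decode_xrgb1555(pixel):
    """Split a 16-bit XRGB1555 pixel into 5-bit R, G, B components."""
    r = (pixel >> 10) & 0x1F
    g = (pixel >> 5) & 0x1F
    b = pixel & 0x1F
    return r, g, b

# A 32-entry CLUT per component, each entry an 8-bit output value.
# Here: a simple linear ramp mapping 5-bit 0..31 to 8-bit 0..255.
clut = [round(i * 255 / 31) for i in range(32)]

def to_rgb888(pixel, clut_r=clut, clut_g=clut, clut_b=clut):
    r5, g5, b5 = decode_xrgb1555(pixel)
    return clut_r[r5], clut_g[g5], clut_b[b5]

# White: all fifteen color bits set.
print(to_rgb888(0x7FFF))  # (255, 255, 255)
```

Because each CLUT entry is 8 bits, the output space is 2^24 = 16,777,216 colors, even though any one frame can only select 32 values per channel through the tables. The cheap fade-to-black is just rescaling the three tables toward zero each frame.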
I also dimly recall that the high-order bit could be used to select an alternate CLUT. A couple of titles used this as the sixth bit of green, yielding RGB565.
The display resolution was 320 pixels across, period. There was a mode where you could apply horizontal and vertical box filters: adjacent pixels would be averaged together, so you'd see pixel(N), (pixel(N) + pixel(N+1)) / 2, pixel(N+1), etc. Since there were arguably 640 discrete transitions on each line, marketing decided to say the machine had 640 x 480 resolution, and inflated all their quoted fill rates by a factor of four. When developers learned the truth, they weren't happy. There was a feature whereby you could apply per-pixel weighting to get a good approximation of a true 640x480 image, but no one I knew ever used it because it was so goofy.
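The interpolation pattern described above can be sketched as a one-line filter (a hypothetical illustration, not hardware-accurate code):

```python
# Sketch of the horizontal box filter: between each pair of adjacent
# framebuffer pixels the hardware emits their average, so an N-pixel
# line yields 2N-1 output samples -- the "640 transitions" per
# 320-pixel line that marketing called 640-wide resolution.

def box_filter_line(line):
    out = []
    for i in range(len(line) - 1):
        out.append(line[i])
        out.append((line[i] + line[i + 1]) // 2)
    out.append(line[-1])
    return out

print(box_filter_line([0, 10, 20]))  # [0, 5, 10, 15, 20]
```

Note that no new information is created: the in-between samples are wholly determined by the 320 real pixels, which is why developers objected to the quadrupled fill-rate figures.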
The cel hardware always output 16 bits per pixel to the framebuffer. Cel input was a variety of bit depths (1, 2, 3, 4, 5, and 16 bits per pixel). The decoded value was used to index into a pixel lookup table (PLUT, not to be confused with the CLUT) to yield a 16-bit output. If you used run-length encoding, you could also use declared transparency. So if, for example, you had a 4-bit image, you could have transparency *in addition* to the available 16 colors (in other words, you didn't have to spend a color on the transparent parts of your image).
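A sketch of that 4-bit-cel-through-PLUT path, with declared transparency from the run-length encoding. All names here are made up for illustration; the run encoding is simplified, not the actual cel format.

```python
# Sketch of 4-bit cel decoding through a PLUT (pixel lookup table).
# TRANSPARENT is a made-up sentinel: on the real hardware transparency
# came from the run-length encoding itself, so none of the 16 palette
# entries had to be spent on it.

TRANSPARENT = None

def decode_cel_row(runs, plut):
    """Each run is (count, code): code is a 4-bit PLUT index, or
    TRANSPARENT for a declared-transparent run."""
    out = []
    for count, code in runs:
        if code is TRANSPARENT:
            out.extend([TRANSPARENT] * count)
        else:
            out.extend([plut[code]] * count)  # 4-bit index -> 16-bit pixel
    return out

# A 16-entry PLUT of made-up 16-bit colors.
plut = [i * 0x1111 for i in range(16)]
row = decode_cel_row([(2, 0xF), (3, TRANSPARENT), (1, 0x1)], plut)
print(row)  # [65535, 65535, None, None, None, 4369]
```

The key point from the post is visible here: the row uses transparency plus two of the sixteen opaque colors, and the transparent runs cost no palette slot.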
Cel rendering was source-based -- the entire source image was sampled no matter how small you scaled it on screen. Smart developers (unlike me) did MIP-map management by hand, ensuring that the scaling never fell much below 1.0 and thereby wringing optimum performance from the chip.
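The by-hand MIP-map trick above amounts to picking a pre-shrunk copy of the source so the on-screen scale factor stays at or above 1.0. A sketch under that assumption (function name and halving scheme are hypothetical):

```python
# Sketch of by-hand MIP selection: keep halving the source width until
# it no longer exceeds the on-screen width, so the cel engine never
# samples far more source pixels than it actually draws.

def pick_mip_level(source_width, target_width):
    """Return (level, width): how many halvings were applied and the
    resulting source width, chosen so target_width / width >= 1.0."""
    level = 0
    while source_width > target_width:
        source_width //= 2
        level += 1
    return level, source_width

# A 256-wide cel drawn 40 pixels wide: use the 32-wide level
# (scale 1.25) instead of sampling all 256 source pixels.
print(pick_mip_level(256, 40))  # (3, 32)
```

Since cel rendering samples the entire source regardless of output size, drawing the full-resolution image at a tiny scale wastes bandwidth on pixels that never reach the screen; the pre-shrunk levels avoid that.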