Forum: The Classroom
Topic: GeForce2 256 and Voodoo 5 5500, can you help me?
started by: aventari

Posted by aventari on May 23 2000,16:44
y0, fellow hax0rs. i've been thinking about upgrading my 'puter (currently a 300a@450 and TNT-1).

(background)
first i will upgrade the video card to either a voodoo5 5500 (gotta love the price) or a geforce2 256. soon afterwards i will have a 550e or 600e on my BX6-2. i will be overclocking the cpu like crazy, so the AGP bus speeds will be 90+ mhz.

(now for the question)
does anyone know how well the voodoo5 5500 tolerates out of spec agp speeds?
how about the geforce2 gts?
or even the geforce1 DDR?

any help or comments on my upgrade path are appreciated!

------------------
aventari
"witty one-liner"


Posted by simulacrum on May 23 2000,16:46
I haven't heard anything on the 5500 or the GF2, but supposedly the GF1 DDR does pretty well. I have a friend running the Guillemot DDR at 87MHz just fine.
Posted by Lizzy on May 23 2000,16:47
I'm too stupid to even know how to overclock my computer, let alone know how something will stand up against it.
Posted by aventari on May 23 2000,16:57
okay who's the arsehole that stole Lizzy's password??

on the topic. will a new 7200 rpm hard drive up my performance noticeably? i have a maxtor 5400 8.4 gig. I'm thinking a new seagate 10.2 gig 7200 would be a good upgrade too

------------------
aventari
"witty one-liner"


Posted by cr0bar on May 23 2000,22:51
I highly recommend the RAID mod: < http://www.detonate.net/raid/ >

I've got two IBM 9GB/7200RPM/2MB Buffer/8.5ms seek drives running RAID 0 and it is SWEET.

------------------
"Everyone's favorite implement for any task"
------------------


Posted by Lizzy on May 23 2000,22:52
I'm just not a l33t h4x0r like some of you!
Posted by aventari on May 24 2000,01:37
Do the Promise Ultra66 cards that they sell in the stores now work? did they fix that little bug yet?(not a feature)

i think i'll do that, then save up for a gf2 gts or a voodoo5 5500

------------------
aventari
"witty one-liner"


Posted by simulacrum on May 24 2000,01:40
It's not a bug; the Promise controller is the same as their RAID controller, with one pinout difference. Same exact card otherwise. They did try to change the BIOS, but there are ways around it if you search around a bit.
Posted by simulacrum on May 24 2000,05:01
Definitely. If it's an older 5400, you'll see a difference in seek times as well. Windows will load up faster, and everything will just be overall more responsive.
Posted by Hellraiser on May 24 2000,07:26
Geforce 2 GTS all the way, baby! It's the sweetest card I ever owned, and it runs quite cool, 93 degrees according to the monitoring program that came with it. It is stable all the way up to the max you can overclock it, and it is awesome to play Quake III Arena at 1600x1200x32bpp and still get between 30 and 60 frames per second (What my old Fury used to get at 640x480). DVD is quite good as well, although not as nice as ATI or a standalone decoder card, but I can live with that. I am overclocking my system (89MHz AGP bus speed) and the Geforce takes it just fine.
Posted by Kuros on May 24 2000,09:42
No way, the V5 is the best. Just tried 1600x1200x32 on my system and managed a minimum of 64 frames in UT. Hardware assist means no decoder board - but always 30 fps???

Posted by Hellraiser on May 24 2000,10:29
Since I don't have UT, why don't you try Q3 and make it a fair test? I like to play with all the features on high and everything enabled, like ejecting brass, etc. When I play with a few more of the options disabled, I get 61 FPS in Q3 Timedemo1 at 1600x1200x32bpp. Beat that if you can.

------------------
Just your generic meaningless signature. Mix with 2 quarts water and stir till evenly coated.


Posted by Hellraiser on May 24 2000,10:31
Oh, and that's on a PIII 500 by the way; CPUs do make a difference. If you have an Athlon or Coppermine, you're likely to do better, even though the Geforce 2 GTS is better than the V5.

------------------
Just your generic meaningless signature. Mix with 2 quarts water and stir till evenly coated.


Posted by incubus on May 24 2000,14:13
I know that CPU speed made a bigger difference between the GeForce1 and Voodoo3, cos the GeForce has some sort of coprocessor to take the load off the CPU, I think. I don't know if that's changed for these next-gen cards, though.

------------------
-- incubus
As I chase the leaves like the words I never find ...


Posted by simulacrum on May 24 2000,14:39
It's called a GPU. I know the GF1 and GF2 have it, but not sure about the V4/5.
Posted by aventari on May 24 2000,16:26
hmm since i have a slow CPU (celery 450) i think the geforce 2 would take more of a load off the CPU because of its GPU. but does the V5 do the same?

Does anyone have a link to the Promise ata66 modification for the new BIOS?

------------------
aventari
"witty one-liner"


Posted by simulacrum on May 24 2000,16:53
There were some comments about it over at OCP, but I can't seem to find anything at the moment.
Posted by eng_man on May 24 2000,16:55
quote:
Originally posted by aventari:
does anyone know how well the voodoo5 5500 tolerates out of spec agp speeds?
how about the geforce2 gts?
or even the geforce1 DDR?

No one can predict how well a video card will work when it's out of spec. It may work, it may not. Just because one GF2 or Voodoo5 went to some ass high bus speed doesn't mean the one you get will. The best bet is to get whichever card you like the best, if it doesn't o/c well, return it. If you can't return it at least you have a card you will like.

------------------
< www.slapmahfro.net >
ya know ya wanna slap it ...


Posted by Anztac on May 24 2000,20:17
I would suggest the Geforce 2 GTS all the way. The V5 looks to suck comparatively. The V5 does not have a GPU (thus does not assist with the CPU processes). The Geforce also has more/faster memory, with better everything. I myself though would wait for ATI's Radeon. It's fast, supports all the bump mapping modes, and has a great texturing pipeline. That is the shortest way I could say it.

------------------
~Anztac - The guy who had the really long sig (formerly Kriegman)

"I am easily driven into a flying rage by blithering idiots"
-cr0bar [The god of this domain]

[This message has been edited by Anztac (edited May 24, 2000).]


Posted by cr0bar on May 25 2000,00:20
The "GPU" features only really apply to the onboard T&L, which no games currently fully utilize. Also, with CPUs hitting progressively higher speeds, there's not a whole lot of merit in the GPU argument. I know I ranted about the V5 6000, but I would still want one. The visual fx on the card far outpaces anything the GeForce 2 GTS can do.

------------------
"Everyone's favorite implement for any task"
------------------


Posted by Anztac on May 25 2000,00:52
what visual effects? t-buffer? HA! Who wants a blurry far ground, or a blurry, confusing trail? The only features that the T-buffer brings that matter are soft reflections and FSAA. FSAA works on the GF2 GTS. Soft reflections are easily done. The speed/price of the V5 sucks ass.

------------------
~Anztac - The guy who had the really long sig (formerly Kriegman)

"I am easily driven into a flying rage by blithering idiots"
-cr0bar [The god of this domain]


Posted by cr0bar on May 25 2000,00:54
The blurring imitates what you see in real life and movies, etc. The point of 3D rendering is to try to match real life as closely as possible.

------------------
"Everyone's favorite implement for any task"
------------------


Posted by Lizzy on May 25 2000,00:57
Anz why aren't you on ICQ?! I need someone to talk to!
Posted by Anztac on May 25 2000,01:17
I am now

------------------
~Anztac - The guy who had the really long sig (formerly Kriegman)

"I am easily driven into a flying rage by blithering idiots"
-cr0bar [The god of this domain]


Posted by aventari on May 25 2000,03:11
[about not knowing whether geforce will pull 89 mhz bus]

yes you can. you can say 'on average'. on average the celeron 300a will hit 450 mhz. on average a TNT2 can pull 89 mhz agp bus speeds. thats all im asking for..
i know how overclocking works, dude!

------------------
aventari
"witty one-liner"


Posted by firelance on May 25 2000,04:57
several mentioned hard drives. if you don't want to hack up your promise card, the new KA7-100 boards have on-board ata100 + raid support. pretty spiffy if you're looking to upgrade your motherboard/cpu too. though i suppose this doesn't matter since you're going the celeron route.

geforce 2 gts all the way (until may 26, with promotional offers from the buy.com sites)


Posted by Kuros on May 25 2000,09:08
okay, Hellraiser gets me this time, 54 fps on q3, but I am running UT without CD

Posted by Hellraiser on May 25 2000,09:33
I tried UT on a friend's machine, but didn't like it enough to buy the game. If I can find it cheap enough, I might get it, but the best price I can manage around here is more than I want to pay. And I figured out that DVD is actually better with my Geforce than my old Fury; I just hadn't enabled DMA on my DVD drive again after installing the Geforce and VGart drivers. And a few DVDs that had troubles with ATI work properly now. All in all, I would definitely recommend the Geforce 2 GTS.
Posted by Sithiee on May 25 2000,15:47
im not sure if you all realize this, but q3 and UT are optimized differently. UT puts most of the burden on the processor, and q3 puts more on the graphics card, and can take advantage of the gpu more. thusly, on two different machines, depending on the processor/graphics pairing, you cant make good comparisons. Case in point, my friend has a dual p3 500, tnt2 system, and it runs UT at high resolutions with high framerates, but in q3, i can get better with my p2 450, geforce sdr(not overclocked) because it takes advantage of the gpu. to settle the debate, someone should buy one of both and stick in their machine and try it then. Im not gonna do it, cause im fucking poor...anyway, remember this when comparing stuff with q3 and ut.
Posted by Lizzy on May 25 2000,15:54
Q3A seems to run faster for me. I have a 300mhz Cyrix with a V3 2000, so what you're saying sounds pretty accurate.
Posted by aventari on May 25 2000,16:06
i'm surprised Q3a runs faster on your system Lizzy. UT has Glide support which is a dope ass API when you want pure speed. w/ yer voodoo3 i figured UT would rox0r

so it's an ultraata66 raid hack w/ 2 seagate 7200 10 gig'rs for me i think. Will i still be able to run my 8.4 gig, and 2 cdroms at the same time? (a total of 5 IDE devices)

------------------
aventari
"witty one-liner"


Posted by SimplyModest on May 25 2000,16:19
wow, i must be the biggest loser reading this, because im running 4 video cards... (4 machines.) a voodoo 2 (in a 500 AMD k6-2), an ATI Rage Pro (PIII 450), a diamond stealth (pII 450), and a trident blade 3d (hey, it was cheap and i was ready to use my new athlon 700)....

so, anyone think i need a new video card? (and what should i get? nothing too expensive, preferably)

------------------
No signature required.


Posted by Sithiee on May 25 2000,19:11
but the glasses should work with things besides d3d, or at least with openGL, considering openGL looks so much better...and why is it that asus is the only one who makes any geforce cards with tv out?
Posted by simulacrum on May 25 2000,19:27
As I understand it, the GeForces have the ability to attach a daughter card for TV out. At least I know most of the GF2s do. I think the GF1s do.
Posted by Sithiee on May 25 2000,20:26
the geforce is just a chip, and its only as powerful as the card its on, so it just depends on whether the vendor wants to do it or not...if i were to make a graphics card, it would have connectivity up the ass....tv in, out, tuner, dvi...not to mention at least 256MB of DDRRAM...i would have so much fun making hardware...itd prolly be far too expensive though...oh well...thats what wet dreams are for....i mean, thats what...oh crap, ill stop now
Posted by TeKniC on May 25 2000,22:16
the glasses work with openGL too, it looks really good, especially on shrooms... then again, most things do... The V6800 DDR deluxe has TV in and out built in, and rca out too. I love my video card...

[This message has been edited by TeKniC (edited May 25, 2000).]


Posted by Kuros on May 26 2000,04:14
Having just tried UT and Q3a on a friend's machine (P2 400) with both his geforce and my V5 (what can I say, nothing better to do?), Sithiee does seem right.
Posted by Lordbrandon on May 26 2000,04:51
i have a prototype of an nv25 (gf3) oc'd to 736 mhz on my 1.5 ghz athlon. I have installed peltiers on all major chips in my box, then submerged it in non-conductive fluid. the fluid is pumped thru an old buick radiator that i welded to a 3x3x4 swamp cooler. the pump and the swamp cooler are powered by a volkswagen bug engine. its kinda loud, but i get 950 fps on UT and Q3 at the same time. i had two monitors, but last week the volks engine threw a rod and broke the tube - it was my diamond 20u20u. i still have my sony cpd f-500, but the anti-glare coating keeps collecting oil and exhaust from the engine, and the carbon dioxide would sometimes knock me out, so i had to use my scuba tanks, that is until i got my rebreather. i know id totally own, but its about 185 degrees in the room and the fan blade has cut my right arm in six places. can any one tell me how to oc the refresh on my monitor so i can see all the frames my fatty rig allows me?
Posted by Lizzy on May 26 2000,05:31
Well, UT is a much larger game. It takes like 4x longer to start the game then enter a level with UT than Q3A (I have a whole 32mb of RAM - bleh).
Posted by TeKniC on May 26 2000,05:56
that sucks, i got 256.. and Simplymodest, if you want a new video card, I recommend the ASUS V6800 GeForce 256 DDR Deluxe with 3D glasses, its pretty sweet...
Posted by TiresiasX on May 26 2000,14:26
Hmmm...as for the GPU, I believe that any OpenGL games will take advantage of it. OpenGL makes T&L transparent to the developer, so OpenGL apps shouldn't even have to enable it to take advantage of it.

Direct3D is different; the T&L funcs are there, but the developer has to actually put in the T&L func calls himself. Since most cards did not have T&L before the GeForce, many D3D apps elect to do T&L in software without even checking for hardware T&L support.

And another issue that I have yet to hear a conclusion on...what if your CPU is fast enough that software T&L actually gets better FPS than hardware T&L? A good OpenGL driver would hopefully be able to predict which will be faster upon driver initialization...there are probably several other ways of doing this.
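
For anyone wondering what "software T&L" actually means in practice, it's basically a per-vertex loop like this running on the CPU (just an illustrative C sketch, not any driver's real code):

code:
#include <stddef.h>

typedef struct { float x, y, z, w; } Vec4;

/* Transform each object-space vertex by a combined modelview-projection
 * matrix (column-major, OpenGL style), then do the perspective divide.
 * This is the per-vertex work a hardware T&L unit takes off the CPU. */
static void transform_vertices(const float m[16],
                               const Vec4 *in, Vec4 *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        float x = in[i].x, y = in[i].y, z = in[i].z;
        out[i].x = m[0]*x + m[4]*y + m[8]*z  + m[12];
        out[i].y = m[1]*x + m[5]*y + m[9]*z  + m[13];
        out[i].z = m[2]*x + m[6]*y + m[10]*z + m[14];
        out[i].w = m[3]*x + m[7]*y + m[11]*z + m[15];
        if (out[i].w != 0.0f) {            /* perspective divide */
            out[i].x /= out[i].w;
            out[i].y /= out[i].w;
            out[i].z /= out[i].w;
        }
    }
}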

As for GeForce2 vs. Voodoo5, I'd go with GeForce2. I've lost my respect for 3Dfx, and I've seen their Voodoo5 fall behind the GeForce DDR in a lot of benchmarks.

As for Unreal and UT, I trust neither as a benchmark. Both are far too Glide-biased, and there's a lot of sloppy technique that nobody's worked out of the engine yet. I'm betting ten to one that OpenGL and D3D support in Unreal is actually just a layer that Unreal sticks in to translate standard Glide calls to the respective chosen API, much like a D3D wrapper. Yech.

As for IDE RAID, I heard a while back that IBM is making a 10,000RPM ATA-66 drive...wonder how that performs in a RAID?

And as for RAID 0 (i.e. just AID), I might warn some of you that the chances of losing all your data to a disk crash increase by a factor of the number of drives in your RAID. So unless you do regular full backups, my advice is not to choose RAID 0!
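
The quick math behind that warning, with a made-up per-drive failure rate just to show the trend:

code:
#include <math.h>
#include <stdio.h>

/* RAID 0 has no redundancy: the array survives only if every drive
 * survives, so P(data loss) = 1 - (1 - p)^n, roughly n*p for small p. */
int main(void)
{
    double p = 0.03;              /* assumed per-drive annual failure rate */
    for (int n = 1; n <= 4; n++)
        printf("%d drive(s): %.1f%% chance of losing the whole array\n",
               n, 100.0 * (1.0 - pow(1.0 - p, n)));
    return 0;
}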


Posted by aventari on May 26 2000,15:14
hey sithee, do you have any pics of that? I've seen one at hardOCP, but i can't find it now. I wanna show the disbelievers here at work
They already think i'm crazy cuz i watercooled my celeron a year ago

------------------
aventari
"witty one-liner"


Posted by DuSTman on May 26 2000,16:39
TiresiasX, as i have looked at the OpenGL API (albeit not in much detail), there are two methods for describing the positions of vertices in the 3d world: Perspective projection (where the software describes the vertices in the frustrum shaped visible volume) and Orthographic projection (where the positions of vertices are described relative to arbitrary, mutually perpendicular X, Y, and Z axes).

Now in order to draw the image the vertex data has to first be transformed into the frustrum shaped jobby.

Because full openGL compliance is still a bit sketchy on most drivers, and whether the software openGL implementations included a transform engine for fitting orthogonally projected scenes to the frustrum was not certain, most games, wisely, chose to transform the vertices to the perspective projection themselves, using software.

This means that the GPUs T&L engine doesn't get to flex its muscles to transform the vertex positions.

Only games which have the capability to describe positions in orthographic projection will be able to take advantage of onboard T&L on graphics cards.

DirectX, i believe, has only added support for orthographic projection at version 7.

At least, this is what i have gleaned from a quick look at the OpenGL specs.
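
For reference, this is roughly how the two projections get set up in plain OpenGL 1.x (illustrative values only, and it assumes a rendering context is already current):

code:
#include <GL/gl.h>

/* Perspective projection: vertices end up inside a frustum-shaped
 * visible volume, with foreshortening. */
void use_perspective(void)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-1.0, 1.0, -0.75, 0.75, 1.0, 1000.0);
    glMatrixMode(GL_MODELVIEW);
}

/* Orthographic projection: vertices are described against plain
 * X/Y/Z axes with no perspective at all. */
void use_orthographic(void)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-1.0, 1.0, -0.75, 0.75, 1.0, 1000.0);
    glMatrixMode(GL_MODELVIEW);
}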

DuSTman

[This message has been edited by DuSTman (edited May 26, 2000).]


Posted by Lizzy on May 26 2000,16:45
Uhh... holy crap?
Posted by DuSTman on May 26 2000,16:47
What, liz?
Posted by Lizzy on May 26 2000,16:48
Your post! I have no idea what the hell you said in it!
Posted by DuSTman on May 26 2000,16:52
Oh dear, i havn't got the wrong end of the stick, have I?
Posted by Lizzy on May 26 2000,16:57
I'm sure whatever you said was right, but I don't grasp programming very well.
Posted by Sithiee on May 26 2000,17:29
on a more serious note, i know someone whos entire computer sits in a vat type thing of mineral oil, and the mineral oil is cooled by an air conditioner....i bet thats better than vapor phase...
Posted by DuSTman on May 26 2000,18:03
OpenGL can take full advantage of the Geforce, but developers can generally not trust drivers enough to risk it.

Maybe there's a way of enquiring whether there is hardware T&L on an OpenGL implementation, in which case developers could take advantage of it, but they'd also have to duplicate this functionality in software to cater for drivers without transform capability..
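
The closest thing I know of is the driver identification strings - there's no standard "have you got hardware T&L?" query, so at best a game can guess from these (sketch only, error handling omitted):

code:
#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

/* Must be called with a current GL context. */
void report_driver(void)
{
    const char *vendor   = (const char *)glGetString(GL_VENDOR);
    const char *renderer = (const char *)glGetString(GL_RENDERER);

    printf("vendor: %s\nrenderer: %s\n", vendor, renderer);

    /* Heuristic only: a GeForce renderer string implies a T&L unit,
     * but OpenGL itself never exposes that fact directly. */
    if (renderer && strstr(renderer, "GeForce"))
        printf("probably has hardware T&L\n");
}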


Posted by TiresiasX on May 26 2000,21:41
I agree, the GF2's per-pixel dynamic light calculation is going to be a big help. Still, I'm not sure the GF2 has enough power to avoid a performance hit (three multiplies and two additions per texel, plus miscellaneous calculations to normalize/denormalize the vectors, etc).
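
(The "three multiplies and two additions" is just the N dot L term of diffuse lighting - per texel it works out to something like this, shown as a plain software sketch:)

code:
typedef struct { float x, y, z; } Vec3;

/* Diffuse intensity at one texel: dot(normal, light_dir), clamped at
 * zero.  Assumes both vectors are already normalized; the normalize
 * steps are the "miscellaneous calculations". */
static float diffuse(Vec3 n, Vec3 l)
{
    float d = n.x*l.x + n.y*l.y + n.z*l.z;   /* 3 muls, 2 adds */
    return d > 0.0f ? d : 0.0f;
}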

And while I don't program in Direct3D (I'm a *nix guy), I remember hearing about hardware T&L functions before DX7, and maybe even before 6. It hasn't been around long in D3D, though, which is why OpenGL is such a popular CAD API.


Posted by Anztac on May 26 2000,21:52
Thanks Dust. I knew T&L was good, I also kinda knew what you said, but that was very enlightening. thx!

------------------
~Anztac - The guy who had the really long sig (formerly Kriegman)

"I am easily driven into a flying rage by blithering idiots"
-cr0bar [The god of this domain]


Posted by Sithiee on May 26 2000,23:40
you should buy a card with an nvidia quatro...or whatever its called...it could make the geforce 2 wet its pants....or the graphics chip eqivelant....but it also costs a lotta money....hmm...i dont think that made much sense...
Posted by DuSTman on May 27 2000,05:11
It's a similar story to this for lighting.

OpenGL specs include the ability to specify the position of "lights" that the polygons get lit by, but most gaming cards don't support this.

Calculating the lighting at each pixel in real time would have been computationally infeasible, so games had to resort to multitexturing to do lighting, calculating bitmaps representing the light intensity at each pixel on the polygons beforehand, and then blending these lightmaps with the texture of that polygon to get the resulting pixel colour values. This works fine for light that stays the same, but for dynamic lighting it is bad - the fixed lights in quake 1 and 2 appeared OK, but the light from the rocket was just added to any texel within range of that rocket. (remember quake 1 and 2 where you could see the light from explosions against the other side of the wall?)
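
In software the blend step itself is just a per-channel multiply, roughly like this (a sketch - real cards do the modulate in their texture combiners):

code:
typedef struct { unsigned char r, g, b; } RGB;

/* Blend one texel with the precomputed lightmap sample covering it:
 * out = texture * light / 255 per channel. */
static RGB apply_lightmap(RGB texel, RGB light)
{
    RGB out;
    out.r = (unsigned char)((texel.r * light.r) / 255);
    out.g = (unsigned char)((texel.g * light.g) / 255);
    out.b = (unsigned char)((texel.b * light.b) / 255);
    return out;
}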

Real-time calculation of the light maps would look more realistic, and that is what the lighting section of nVidia's T&L is all about.

I don't know how sophisticated nVidia's GPU lighting model is, but from a programmer's view, this is a step in the right direction.


Posted by Sithiee on May 27 2000,05:41
I'd have to ask the guy if he would, but sure, ill see what i can do...i sorta pissed him off a few years ago with some false accusations that i dont feel like apologizing for...but yeah....

Lizzy, Dustman says in laymans terms "OpenGL doesnt get to take full advantage of GeForce" and thats really all you need to know...

does anyone find it a bit ironic that the l33t hax0r has to ask the lam0r to explain stuff?


Posted by TiresiasX on May 27 2000,12:48
LoL I might just do that (buy an nVidia Quadro). I've got all sorts of ridiculous hardware in here anyways, and a professional OpenGL board that does all the GeForce2 can do would rox0r. I've held off getting a professional OpenGL board for the reason that most of them couldn't stack up to gaming cards in fill rate.

Still, though, nVidia would probably ace the Quadro with a low-cost (comparatively speaking) gaming card in six months or a year. The main reason I buy truly serious hardware is to make upgrading fairly pointless for the next few years (although playing with the big irons is nice too).

Oh, and btw, I bet it wouldn't take much pressure to get nVidia to post a database of games and the extent of their GeForce2 support. I remember 3Dfx had that a while back (and may still have it), and nVidia's already got an article touting how over 100 games take advantage of the GPU...


Posted by Sithiee on May 27 2000,18:07
i know what you mean, after i got my GeForce card, i was just like "hell yeah bitches", but then of course like a week after i got my card they whipped out the DDR, and my penis shrank 3 inches =(...it would work better if new technology could only be released at one time of the year, and then people could have large penises all year long....but then that would be a perfect world...and no one would accept it...we would keep trying to wake up from that reality....ok, ill shut up now
Posted by SimplyModest on May 29 2000,11:20
any chance you could post a picture of that? eh?
Posted by SimplyModest on May 29 2000,11:27
uh.. oops.. that was meant for lordbrand0n. (about that fatty rig)
Posted by TiresiasX on May 29 2000,14:05
LoL Sithiee, I had kinda the same problem when I got my GeForce DDR and then nVidia whipped out the GF2. It wasn't so bad though...I was running on a TNT1 and waiting for the "nv15" (GeForce), then decided to wait for the GeForce DDR. When I get my next card, my GeForce will probably go into my Linux box.

what's funny, though...I know some people who, about half a year ago, were still waiting for BitBoys' Glaze3D! How far behind is its release by now...a decade? heh

I saw a satirical preview on the Glaze3D on somethingawful.com somewhere..."More powerful than Jesus Christ!" Yet still no more than a diagram of the chip's internals. Funny as hell.


Posted by DuSTman on May 29 2000,14:33
I thought the geforce 2 was the NV15, and the geforce 1 was the NV10?
Posted by aventari on May 29 2000,15:52
thats what i think too

Did you hear that the voodoo5 5500 was delayed another week? jesus, this card must have bugs out the ass it's being rushed so much. I'm less and less inclined to go with it now. the gf2 is much more likely to have fewer bugs being based on a tried and true design

------------------
aventari
"witty one-liner"


Posted by simulacrum on May 29 2000,18:17
Also the drivers for the GF2 are much more developed than the 5500's. From what I understand, the drivers for the GF1 and GF2 are basically the same, and some of the later drivers for the GF1 are the same as the early drivers for the GF2.
Posted by Hellraiser on May 29 2000,19:10
Had the card for about a week now, and the GF2 has no bugs that I am aware of. Definitely a great card.

------------------
Just your generic meaningless signature. Mix with 2 quarts water and stir till evenly coated.


Posted by Sithiee on May 29 2000,21:26
Well...i dont have a picture of it, but for the right price, anything can be arranged
Posted by Lordbrandon on May 29 2000,22:49
I had some pictures but the "MAN" classified
all proof of the uberrig as it was a threat to national security.
Posted by Lordbrandon on May 29 2000,23:01
i had to look up frustrum, or frustum:

The part of a solid, such as a cone or pyramid, between two parallel planes cutting the solid,
especially the section between the base and a plane parallel to the base.

frustum \Frus"tum\, n.; pl. L. Frusta, E. Frustums. [L. fruslum piece, bit.] 1. (Geom.) The part of a
solid next the base, formed by cutting off the, top; or the part of any solid, as of a cone, pyramid, etc.,
between two planes, which may be either parallel or inclined to each other.


Posted by xaustinx on May 30 2000,03:25
amidst all this highly technical discussion about the GPU and the rest of the nVidia marketing engine, and 3dfx and their overgrown child.. i would just like to remind everyone that as long as it keeps 60 frames per second flashing in front of your face and doesnt drop from there.. it doesnt really matter how much faster beyond that it goes.... cause for the humans among us.. we cant see any more frames than 60 per second. all i need is a card that can do 1600x1200x32bpp at 60fps at all times, with FSAA 4x, and a little T&L love. a complete OpenGL ICD would be nice as well cause no one likes to program for D3D
Posted by Hellraiser on May 30 2000,07:33
Dream on, Dream on, Dream yourself a dream come true!

You'll probably get a card that can do that by the end of this year or the beginning of next year. By then, you'll want a card that can do 2048x1536x32bpp with a solid 100 frames per second for your 22 inch monitor.

------------------
Just your generic meaningless signature. Mix with 2 quarts water and stir till evenly coated.


Posted by Kuros on May 30 2000,08:10
and do you know of any game working at that res or expected to !!
Posted by jim on May 30 2000,11:01
I thought this was rather funny!!!!!

quote:
nVidia Makes Bold Claim, Gets Ass Handed Back

In a recent press release on the GeForce2, nVidia claimed it was capable of real-time Pixar-level animation. A Pixar employee responded by calling the GeForce2 a "toy".

In a press release posted on Silicon Investor, Jen-Hsun Huang, President of nVidia, claimed, "Achieving Pixar-level animation in real-time has been an industry dream for years. With twice the performance of the GeForce 256 and per-pixel shading technology, the GeForce2 GTS is a major step toward achieving that goal."

In response, Tom Duff from Pixar responded, "Do you really believe that their toy is a million times faster than one of the CPUs on our UltraSparc server?"

Duff then goes on about the throughput of the AGP bus in a typical PC, by asking, "Don't forget that the scene descriptions of (Toy Story2) frames average between 500MB and 1GB. The data rate required to read the data in real time is at least 96Gb/sec. Think your AGP port can do that?"

The Pixar employee then points out how far off the nVidia marketing team's ludicrous claims are, saying that in 10 to 15 years nVidia may have Pixar-level animation in real-time, but not in this day and age.

While this doesn't mean nVidia engineers are ignorant of what goes on in the rendering world, it does point fingers at the marketing teams of the graphics world and how ignorant they must think users really are. This smells like Apple marketing schemes.


------------------
jim
< Brews and Cues >


Posted by jim on May 30 2000,11:05
Here is the full reply from the Pixar employee.... Funny stuff.
< http://www.siliconinvestor.com/stocktalk/msg.gsp?msgid=13781290 >

------------------
jim
< Brews and Cues >


Posted by Hellraiser on May 30 2000,11:54
Q3A and several others have the option to play at that resolution, Kuros. I can't try it because my monitor doesn't support that resolution, but it's still something that would be nice to be able to play at.

------------------
Just your generic meaningless signature. Mix with 2 quarts water and stir till evenly coated.


Posted by simulacrum on May 30 2000,12:26
My monitor can support it, I just get about 30 min/frame. Yes, you read that right.
Posted by aventari on May 30 2000,16:11
that pixar story is hilarious!

------------------
aventari
"witty one-liner"


Posted by ss59 on May 30 2000,16:18
generally 3Dfx cards handle high AGP bus speeds better than nvidia's. Take a quick scan through the cpu database at < Overclockers.com > and read how many people have difficulty with their GF1 and GF2 cards topping out around 89MHz.
As for the V5 5500 vs. GF2 debate, it all depends on what you wanna do. With current drivers the GF2 is dominating 16bit and resolutions lower than 1024X768 at 32bit. Above that the V5 reigns supreme in 32bit. Also, if you want FSAA, then the V5 is the way to go, as it is supported in hardware, unlike the GF2 where it is done primarily in software, hence the major performance hit. A good place to compare benchmarks: < Anandtech >. That and the V5 is fairly cheaper, and has 64MB of ram, where the GF2 has 32. Even though it is DDR, the GF2 and V5 have the same amount of memory bandwidth, so don't let anyone tell you crap about that. As for T-Buffer, the t&l engine, and the other fancy crap, nothing supports it, so there's no reason for it. And t&l will only help on cpus slower than 600MHz, which are too slow to power a V5 or GF2 anyway, so there isn't much point.

------------------
s | s | 5 | 9

[This message has been edited by ss59 (edited May 30, 2000).]


Posted by Kuros on May 31 2000,12:03
Never knew that, Hellraiser. I might try it on a new monitor from my college - can projectors display that high a res?
Posted by Hellraiser on May 31 2000,21:32
Just got some new drivers for my Geforce 2 GTS, and like I thought, there's room for improvement. I just ran the Q3 timedemo 1 and got 63.3FPS @ 1600x1200, up from 61 with the old drivers. It's a great card, each day I play with it I like it more.

------------------
Just your generic meaningless signature. Mix with 2 quarts water and stir till evenly coated.


Posted by xaustinx on Jun. 01 2000,01:33
you sick bastard, you play with your video card? arent u worried about what may happen if something gets caught in the fan.. u know that stuff is only slippery for a little while...

quote:
Originally posted by Hellraiser:
Just got some new drivers for my Geforce 2 GTS, and like I thought, there's room for improvement. I just ran the Q3 timedemo 1 and got 63.3FPS @ 1600x1200, up from 61 with the old drivers. It's a great card, each day I play with it I like it more.



Posted by Hellraiser on Jun. 01 2000,05:35
Most projectors are locked at a resolution between 640x480 and 1024x768, although there could be some exceptions to this. But they're wicked expensive. A lot of 21 inch monitors support those high resolutions, but not all. The one I'm getting does.

------------------
Just your generic meaningless signature. Mix with 2 quarts water and stir till evenly coated.


Posted by Hellraiser on Jun. 01 2000,07:53
Shut the fuck up, you dipshit! Go play with your little sister.

------------------
Just your generic meaningless signature. Mix with 2 quarts water and stir till evenly coated.


Posted by xaustinx on Jun. 01 2000,20:09
sorry dude.. couldnt resist.. it was past midnight.. i was on my tenth can of pepsi.. things got a little weird.. i'll restrain myself from now on.
Posted by TiresiasX on Jun. 01 2000,20:57
Hoo, it's been a few days...this thread has grown.

Anyways, on my earlier post...yes, I had a brain fart of some kind. I noticed the posts about "nv15?" and remembered a little late. The GeForce is the nv10, sorry

Anyways...I have to comment on 3Dfx's FSAA approach. IMHO, it's just brute-force applied to age-old techniques, which is why the performance hit is so severe. The GF2 will (or does now, if its driver has already been refined enough) do FSAA in a different manner...one that actually takes little or no performance hit! I hear part of it is due to a dedicated FSAA chip (on same die as GPU?), a solution 3Dfx's FSAA technique doesn't have room for. Score another one for the GF2...

Hah...and the GF2 marketing blunder was funny as hell Speaking of marketing blunders...

On 3Dfx's Q&A about the V5 6000, one question is like, "Is the 128MB memory used to hold replicated data for all four processors?" 3Dfx's answer (and I almost quote): "No, only the texture memory is replicated between the processors."

ONLY the texture memory? I laughed my ass off. I like the way they said, "Only." "Only" 90% of all the memory needed anyways has to be wasted duplicating data for all four processors! What a crock.

------------------
"Welcome to my Neon Dream."


Posted by xaustinx on Jun. 02 2000,00:24
does anyone know when the next nVidia card is coming out.. i don't know if i should wait until the end of the year to upgrade my video card or the end of the summer.

Speaking of the 90% of wasted memory.. couldn't they gain a large amount of performance on all their multi-processor cards by simply rewriting the drivers so that instead of replicating duplicate graphics data it does it in a more efficient manner? could that be a marketing strategy.. advertise this card that's subpar with a lower price.. and know that it's only going out to 3dfx loyalists.. then perform a MAJOR driver upgrade giving a HUGE boost in performance? They did something similar with the Voodoo3 and graphics quality.. i wouldnt put it past them to try something similar with performance.

[This message has been edited by xaustinx (edited June 01, 2000).]


Posted by Hellraiser on Jun. 02 2000,00:35
After seeing the ATI Rage MAXX disaster, I think I'd wait a year or so for multiprocessor graphics technology to stabilize instead of wasting up to $600 on something that may or may not work as well as it's cracked up to be. My GeForce has done everything it was supposed to be able to do, so I'm quite happy now.

------------------
Just your generic meaningless signature. Mix with 2 quarts water and stir till evenly coated.


Posted by xaustinx on Jun. 02 2000,01:59
< http://www.us.st.com/stonline/prodpres/graphic/kyro/presenta.htm >

found that presentation somewhat interesting.. mebbe i'll wait for that.. doesnt the DreamCast use PowerVR for its graphics?

[This message has been edited by xaustinx (edited June 02, 2000).]


Posted by xaustinx on Jun. 02 2000,21:21
Full Dreamcast Specs

---SNiP--

Main Specs
CPU SH4 - RISC CPU with 128-bit graphics engine (200MHz, 360 MIPS/1.4 GFLOPS)


Graphics Engine - Power VR Second-Generation (CG performance of over 3 million polygons per second)


Posted by TiresiasX on Jun. 03 2000,05:21
PowerVR? I think that chipset is too old and outmoded to be used in a nextgen gaming console. The original model's probably EOL by now, which means it actually costs less for the designer (NEC i think) to make something better. Maybe the PowerVR2 or some later generation of that chipset will be used. I kept seeing references to PowerVR2 in game HCLs a while back but never actually saw that chipset released...

------------------
"Welcome to my Neon Dream."


Posted by TiresiasX on Jun. 03 2000,16:53
Haha only 3 million polygons/sec? That's about what my Voodoo2 board boasted when I got it, and that was over two years ago. The only reason to buy or program for a gaming console IMHO is for gaming performance superior to the PC. And maybe, just maybe, as a price point.

If the Playstation 2 ever gets to the Western Hemisphere, it's going to outright humiliate the Dreamcast. Hell, half of us here probably already have PC gaming rigs capable of that.

------------------
"Welcome to my Neon Dream."


Posted by Sithiee on Jun. 03 2000,23:28
first off, nVidia releases stuff about every 6 months, so you can probably expect the next chip out 6 months after they released the geforce 2....i dunno when that was, but its a set pattern with nvidia...always about 6 months....

the thing about gaming platforms like dreamcast and ps2 is not always about superior graphics and stuff (which truthfully shouldnt have much to do with why you like a game anyway), its about simplicity. With ps2 or dreamcast, when a developer makes a game, they know it will work on all nondefective units. If someone calls up and has a problem, they dont have to figure out if someone needs more ram or HD space or whatever, because its all the same. This is the same reason i have a psx and stuff...i mean yeah, its fun to boast about how my computer has more gonads than my friends, but my computer is a high maintenance machine; with the psx i plug stuff in and turn it on, no thinking required. You'll note that those of lesser intelligence often flock to the video game console and declare it king, that or theyre smart and poor, and so theyre forced into the position. Either way, its a set pattern. Its about 12:30, im really tired, if this makes no sense, im sorry, bye


Posted by Avenger on Jun. 08 2000,22:34
quote:
Originally posted by Anztac:
I would suggest the Geforce 2 GTS all the way. The V5 looks to suck comparatively. The V5 does not have a GPU (thus does not assist with the CPU processes). The Geforce also has more/faster memory, with better everything. I myself though would wait for ATI's Radeon. It's fast, supports all the bump mapping modes, and has a great texturing pipeline. That is the shortest way I could say it.


What are you smoking? The GPU, or hardware transform and lighting, only works in _some_ games and only provides a benefit in _some_ circumstances and _none_ at all on very high end processors. The Geforce2 only has one processor and as such _less_ memory bandwidth than a V5 5500. You spend more b/c nVidia went w/ DDR RAM and not 2 processors. The ATI Radeon will suck b/c it has only one pipeline and ATI always has terrible drivers, at least as far as gaming.


Posted by Avenger on Jun. 08 2000,22:41
quote:
Originally posted by Anztac:
what visual effects? t-buffer? HA! Who wants a blurry far ground, or a blurry, confusing trail? The only features that the T-buffer brings that matter are soft reflections and FSAA. FSAA works on the GF2 GTS. Soft reflections are easily done. The speed/price of the V5 sucks ass.


Again, what are you smoking? While depth of field and motion blur might be undesirable in first person shooters, I'm willing to wait and see them myself. FSAA on the V5 uses rotated grid supersampling and as such is faster, looks better, and supports a lot more games. If you don't believe me, check out any major hardware review site. Soft shadows and reflections may be doable in software, but they will certainly be slower. If anything, the V5 5500 is cheaper than a GF2.


Posted by Avenger on Jun. 08 2000,22:53
quote:
Originally posted by aventari:
thats what i think too

Did you hear that the voodoo5 5500 was delayed another week? jesus, this card must have bugs out the ass it's being rushed so much. I'm less and less inclined to go with it now. the gf2 is much more likely to have fewer bugs being based on a tried and true design


The VSA-100 is still based on the Voodoo architecture, which has been around for an entire generation more than nVidia's first entrant into the 3d card realm.


Posted by Avenger on Jun. 08 2000,22:55
quote:
Originally posted by simulacrum:
Also the drivers for the GF2 are much more developed than the 5500's. From what I understand, the drivers for the GF1 and GF2 are basically the same, and some of the later drivers for the GF1 are the same as the early drivers for the GF2.

Even though 3dfx's drivers are only in the 3rd or so release the V5 5500 and the GF2 are neck and neck in the benchmarks. With such new drivers, 3dfx has more room to improve than nVidia.


Posted by Avenger on Jun. 08 2000,23:07
quote:
Originally posted by TiresiasX:

Anyways...I have to comment on 3Dfx's FSAA approach. IMHO, it's just brute-force applied to age-old techniques, which is why the performance hit is so severe. The GF2 will (or does now, if its driver has already been refined enough) do FSAA in a different manner...one that actually takes little or no performance hit! I hear part of it is due to a dedicated FSAA chip (on same die as GPU?), a solution 3Dfx's FSAA technique doesn't have room for. Score another one for the GF2...

You're totally misinformed. 3dfx's RGSS method is _new_! That's why it's actually faster than a GF2. The performance hit is _less_! The GF2 has nothing on hardware that supports FSAA, while the V5 does. nVidia never intended to have FSAA until 3dfx announced it. They just added it to their drivers as an option. They use an older method (OGSS), that doesn't eliminate pixel popping and requires a higher fill rate than RGSS. That's why it only works with certain APIs and certain games.
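
Roughly, the difference is just where the sub-samples sit inside each pixel - illustrative offsets below, not 3dfx's exact pattern:

code:
typedef struct { float x, y; } Sample;

/* 4x ordered grid: samples form a regular 2x2 lattice, so pairs of
 * samples share the same x or y coordinate. */
static const Sample ogss4[4] = {
    {0.25f, 0.25f}, {0.75f, 0.25f},
    {0.25f, 0.75f}, {0.75f, 0.75f},
};

/* 4x rotated grid: same sample count, but the lattice is tilted so
 * every sample has a distinct x and a distinct y - near-horizontal
 * and near-vertical edges get smoother steps for the same cost. */
static const Sample rgss4[4] = {
    {0.375f, 0.125f}, {0.875f, 0.375f},
    {0.625f, 0.875f}, {0.125f, 0.625f},
};

/* Either way, the final pixel is the average of the four sub-samples. */
static float resolve(const float s[4])
{
    return (s[0] + s[1] + s[2] + s[3]) * 0.25f;
}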


Posted by Sithiee on Jun. 09 2000,18:45
holy shit man. post it all into one fucking post. and what are you? 3Dfx's poster boy?...just because a company was good, doesnt mean it cant get bad. ie, lucasarts used to be the unchallenged leader in most computer game types, of late though, the games are starting to suck. allow me to prove you wrong in each of your posts....

1: about having more or less processors, and types of ram...
just because theres only one processor, doesnt mean its inferior, if i have a p3 1Ghz, it will always win over an 8086..same thing, just a more exaggerated difference. about DDR, DDR is not that much more expensive than SDR, and nVidia didnt go with DDR, the companies did because its faster. i have an SDR geForce, and it runs fucking fast anyway.

2:about tbuffer and v5 being cheaper...yes, the tbuffer is nice, but at what expense?, the v5 6000 is the closest performing card to the gf2, and its 600 dollars, so how is that less expensive, when most geforces are around 350 at the most?...i mean sure im in high math classes so i might think im better than most at math, but most people would probably agree that 600 is more than 350...

3:about still being based on the voodoo architecture, just because its been around longer, does not make it better. 64 bit processors will be faster than 32 bit, and they wont be based on the x86 architecture, and thats good, so your argument is null

4:about being neck and neck in benchmarks...i dont know what benchmarks you see, but every comparison ive seen to date of geforce2 and v5 cards have the geforce2 beating the v5 marginally, it only gets close in the super high resolutions.

5: about FSAA, if you would care to note, the person said "IMHO" meaning an opinion, and besides that, it is supported via hardware in geforce 2, just not too well. and just because nVidia tried to do it based on the fact 3Dfx did, doesnt make that bad, its called competition, and thats what the whole MS thing is all about.

basically, you needa do two things, support yourself with research and examples, and you needa combine things, note i only have 1 post to your 5


Posted by Hellraiser on Jun. 09 2000,20:52
You go Sithie!

------------------
Just your generic meaningless signature. Mix with 2 quarts water and stir till evenly coated.


Posted by Avenger on Jun. 10 2000,16:14
quote:
Originally posted by Sithiee:
holy shit man. post it all into one fucking post. and what are you? 3Dfx's poster boy?...just because a company was good, doesnt mean it cant get bad. ie, lucasarts used to be the unchallenged leader in most computer game types, of late though, the games are starting to suck. allow me to prove you wrong in each of your posts....

From years of experience in newsgroups, I've learned that ad hominem attacks are usually only employed by people who have lost an argument.

quote:

1: about having more or less processors, and types of ram...
just because theres only one processor, doesnt mean its inferior, if i have a p3 1Ghz, it will always win over an 8086..same thing, just a more exaggerated difference. about DDR, DDR is not that much more expensive than SDR, and nVidia didnt go with DDR, the companies did because its faster. i have an SDR geForce, and it runs fucking fast anyway.

I'm not arguing that the GF2 is inferior b/c it only has one processor. I'm arguing that b/c the V5 5500 has 2 processors, it has double the memory bandwidth it would have had with a single processor and doesn't need DDR. That gives it greater memory bandwidth at a lower price. The companies didn't have any other choice but to go with DDR. If they had used SDR, the GF2 wouldn't have been any faster than a GF1. Try running your SDR GF1 above 1024x768x32. The bandwidth crunch will hit and framerates will collapse.

quote:

2:about tbuffer and v5 being cheaper...yes, the tbuffer is nice, but at what expense?, the v5 6000 is the closest performing card to the gf2, and its 600 dollars, so how is that less expensive, when most geforces are around 350 at the most?...i mean sure im in high math classes so i might think im better than most at math, but most people would probably agree that 600 is more than 350...

The V5 5500 is the closest performing card to the GF2 and it costs slightly less on average. nVidia's closest card to the V5 6000, the 128MB GF2, will cost considerably more.

quote:

3:about still being based on the voodoo architecture, just because its been around longer, does not make it better. 64 bit processors will be faster than 32 bit, and they wont be based on the x86 architecture, and thats good, so your argument is null

I wasn't arguing that the V5 is better b/c the architecture has been around longer. On the contrary, someone I replied to argued that the GF2 is better b/c its architecture has been around longer. I was correcting him.
Yes, most 64 bit processors are faster than 32 bit processors, except for Intel's Itanium flop.

quote:

4:about being neck and neck in benchmarks...i dont know what benchmarks you see, but every comparison ive seen to date of geforce2 and v5 cards have the geforce2 beating the v5 marginally, it only gets close in the super high resolutions.

The vast majority of GF2 and V5 reviews use only Q3, which is one of the very few games w/ hardware T&L support (and the V5 is still usually higher in high resolutions), and synthetic benchmarks that have little or no correlation with real world performance.

quote:

5: about FSAA, if you would care to note, the person said "IMHO" meaning an opinion, and besides that, it is supported via hardware in geforce 2, just not too well. and just because nVidia tried to do it based on the fact 3Dfx did, doesnt make that bad, its called competition, and thats what the whole MS thing is all about.

However, the fact that nVidia lies that the GF2 supports FSAA in every app is bad.


"Full scene anti-aliasing is completely transparent to applications; end users can simply enable it with the NVIDIA control panel and experience FSAA in any title."

nVidia's FSAA is still a poor implementation and is done mostly in software.


"Suffice to say that, on average, a 4X Ordered Grid anti-aliasing should be very identical to a 2X Rotated Grid anti-aliasing in terms of quality. A 4X Rotated Grid anti-aliasing would be even better."

According to Riva3D, an nVidia fan site,

"OGSS also is accomplished largely through software, which is why just about any 3D card can utilize this method if it is powerful enough to do so."


quote:

basically, you needa do two things, support yourself with research and examples, and you needa combine things, note i only have 1 post to your 5

Why don't you support your arguments w/ research and examples? What's wrong with replying to each person individually? It's a lot nicer than a post beginning with "You're all wrong" or something along those lines.


Posted by TiresiasX on Jun. 10 2000,20:18
Concerning memory bandwidth:

Two processors accessing shared memory does not equate to twice the sustained bandwidth. Memory access between requesting processors is often mutually exclusive, especially when memory writes are involved. Having multi-port memory chips makes it a little easier, but certain situations (especially two chips trying to write to the same memory block) still have to be done on separate memory clock cycles so that one chip doesn't fux0r the memory circuits or the other chip's work.

Even having simultaneous memory reads is difficult, as it requires an extra management layer around the memory ICs and may even require higher memory latency assumptions to allow for this management to take place.

And as for 3Dfx's anti-aliasing approach, their approach plus their aging architecture requires implementing more than one processor unit, if you read 3Dfx's own white paper on it. 3Dfx considers it a reasonable assumption that it can be done on one chip in the future (still with a severe performance hit), but they either haven't developed their drivers that far or do not have the capability in hardware. This is why the single-chip Voodoo4 has FSAA disabled.

------------------
"Welcome to my Neon Dream."


Posted by Sithiee on Jun. 11 2000,00:06
first id like to point out i did research this, and i did use examples, so shut your fucking mouth. second id like to point out that nowhere in my reply did i say "youre all wrong", so again, shut your fucking mouth...on the topic of ad hominem arguments, wasnt that what yours was?..and besides, what do you want? a pre-emptive response to what you havent already said?...gimme a break, im only human.

i do consistently run my geforce SDR above 1024x768x32, and i rarely have framerate drops, mind you this is on a p2 450, so you dont know what youre talking about. maybe you should start running tests before you start stating things about my computer.

how do you figure the closest will be the 128 board?...considering the memory on the v5 6000 has to be duplicated per processor, that means that the 128 must be split down to 32...wait...isnt that whats on the v5 6000?...hmmm....

q3 has some T&L support, but it also is one of the most known games, and most graphics intensive. if people used something like SS2 for benchmarks, it would be dumb, because its not a limits pusher, and it wouldnt give anyone a very good idea of how the card performs. and besides, if 3Dfx cant see that T&L is the future, then their benchmarks should be hampered.

about nVidia lying, sure that may not be the greatest thing, but can you seriously tell me that you would expect them to say "we have FSAA, but it doesnt work at all, and will not work with anything"?..no, so they say it does work, and hope that someone will make it work


Posted by Avenger on Jun. 12 2000,03:20
quote:
Originally posted by TiresiasX:
Concerning memory bandwidth:

Two processors accessing shared memory does not equate to twice the sustained bandwidth. Memory access between requesting


That may be true, but the GF2 and V5 5500 have about the same theoretical memory bandwidth, 5.2 GB/s. However, in high resolutions, when memory bandwidth is crucial, the V5 usually wins out.
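
Rough arithmetic behind that figure, assuming 166MHz memory clocks and 128-bit buses on both boards (check your own card's actual clocks):

code:
#include <stdio.h>

/* Peak theoretical bandwidth = clock * (bus width in bytes) * transfers per clock. */
static double gbytes_per_sec(double mhz, int bus_bits, int transfers_per_clock)
{
    return mhz * 1e6 * (bus_bits / 8.0) * transfers_per_clock / 1e9;
}

int main(void)
{
    /* GF2 GTS: one 128-bit DDR bus.  V5 5500: two chips, each with its
     * own 128-bit SDR bus.  Both land in the same ballpark. */
    printf("GF2 GTS : %.1f GB/s\n", gbytes_per_sec(166.0, 128, 2));
    printf("V5 5500 : %.1f GB/s\n", 2.0 * gbytes_per_sec(166.0, 128, 1));
    return 0;
}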

quote:

And as for 3Dfx's anti-aliasing approach, their approach plus their aging architecture requires implementing more than one processor unit, if you read 3Dfx's own white paper on it. 3Dfx considers it a reasonable assumption that it can be done on one chip in the future (still with a severe performance hit), but they either haven't developed their drivers that far or do not have the capability in hardware. This is why the single-chip Voodoo4 has FSAA disabled.

The fact that the V5 requires 2 processors for FSAA is irrelevant to the end user. The V4 is a low end card, designed largely for OEMs and meant to compete with cards that don't offer FSAA. 3dfx's architecture may be old, but look at what Intel has done with the 5 year old ppro core. AFAIK, they still hold over 70% of the x86 market.


Thanks for the reasonable and rational post.


Posted by Avenger on Jun. 12 2000,03:48
quote:
Originally posted by Sithiee:
first id like to point out i did research this, and i did use examples, so shut your fucking mouth. second id like to point out that nowhere in my reply did i say "youre all wrong", so again, shut your fucking mouth...on the topic of ad hominem arguments, wasnt that what yours was?..and besides, what do you want? a pre-emptive response to what you havent already said?...gimme a break, im only human.

You didn't have a single reference in your post. It was simply a mixture of common knowledge and opinionated hearsay. I never accused you of saying "you're all wrong". I was pointing out that individual responses from _me_ were better than _me_ posting a message attacking the whole board. I attacked arguments, not the people who originated them. Why are you being so hypocritical? What makes you think that I asked for a response to something I have yet to say?

quote:

i do consistently run my geforce SDR above 1024x768x32, and i rarely have framerate drops, mind you this is on a p2 450, so you dont know what your talking about. maybe you shoudl start running tests before you start stating things about my computer.

At those resolutions it's still a lot slower than a DDR Geforce.

quote:

how do you figure the closest will be the 128 board?...considering the memory on the v5 6000 has to be duplicated per processor, that means that the 128 must be split down to 32...wait...isnt thats whats on the v5 6000?...hmmm....

Even the 128mb GF2 won't approach the V5 6000 and it still costs more. Only the textures need to be duplicated for each processor. As a result, 128mb is plenty of memory, especially with the texture compression employed by 3dfx, except for insanely high resolutions such as 2048x1536x32 and nothing until the NV20 or Rampage has the fillrate to handle that.

quote:

q3 has some T&L support, but it also is one of the most known games, and most graphics intensive. if people used something like SS2 for benchmarks, it would be dumb, because its not a limits pusher, and it wouldnt give anyone a very good idea of how the card performs. and besides, if 3Dfx cant see that T&L is teh future, then their benchmarks should be hampered.

There are dozens of other system intensive games that can be tested. IMHO, tile based rendering is the future.

quote:

about nVidia lying, sure that may not be the greatest thing, but can you seriously tell me that you would expect them to say "we have FSAA, but it doesnt work at all, and will not work with anything"?..no, so they say it does work, and hope that someone will make it work

I think they should be honest and admit that they added FSAA as an afterthought and it only works on certain games. They designed the GF2 and are responsible for making it work as designed; no one else is.


Posted by TiresiasX on Jun. 12 2000,06:44
well Justice, you could get yourself an SGI IRIX workstation...if you happen to feel like selling your house for it, LoL

and an interesting thing...if you check at Tom's Hardware, when he's not (rightfully) knocking RDRAM, he's got a pretty decent review of the GF2 that actually puts it side-by-side with the Quadro, nVidia's professional openGL chipset...

The Quadro consistently performs slightly worse than the standard GF2!

I suppose it's not so surprising...the Quadro's probably optimized for T&L and had some of its fill rate capabilities scrapped to cut cost. Makes sense, since most 3D designers (me included) prefer to test-render in wireframe and do the final render completely in software. Real shame though...looked like the Quadro might have been the perfect chipset for professional OpenGL and gaming combined.

------------------
"Welcome to my Neon Dream."


Posted by TiresiasX on Jun. 12 2000,06:58
Oh, btw, SGI's been marketing Wintel/Linux boxes for a while (with a seriously marked up price, heh). They used to sell very nice quad Xeon workstations with their own proprietary peripheral bus, which is superior to PCI. Back then, they were using a Number Nine vid accelerator...now they've got single-CPU boxes with the VPro graphics chipset.

Anyone ever heard of VPro? I haven't...a proprietary SGI design, perhaps? And what happened to Number Nine? They had some seriously kickass products a year ago, and now they've "ceased operation." It's like they were just completely forgotten.

------------------
"Welcome to my Neon Dream."


Posted by JusticeDenied on Jun. 12 2000,17:32
I gotta agree with avenger on these.... I think for the most part he has it nailed.

Drivers are already being patched, see < www.voodooextreme.com > - the LOD patch is sweet


I wont be happy till I have a card that can do 1600x1200x32 true fsaa and all the visual eyecandy bells and whistles at 90 frames a sec... now who is gonna build it for me ?

