Viewing Thread:
"NV20 exposed!! NVidia come up trumps again!"

The "Freeola Customer Forum" forum, which includes Retro Game Reviews, has been archived and is now read-only. You cannot post here or create a new thread or review on this forum.

Sun 21/01/01 at 01:21
Regular
Posts: 787
A guy was asking about HSR. Since this is quite a big post, I reckoned it deserved a new topic. If you go to http://www.digit-life.com/articles/nv20/index.html, there is an article that gives a detailed preview of Nvidia's new NV20 chipset. This will officially be revealed on 27th February to some Intel developer dudes, and so could likely be nestling within our machines in the summer. The article gives a breakdown of the specs, with no real gameplay stats, as it uses only a mock-up reference card. Still, you can get a rough idea of what is to be expected. It looks like Nvidia have decided that, rather than powering up their already capable beast, they will expand its capabilities. This will include full support for DX8.0, which apparently goes beyond the cards out today. Another thing is the inclusion of HSR, albeit via a modified use of z-buffer algorithms.
The article also highlights the possible plateau in cards that we may see soon. The limiting factor in cards is really not the memory size or the number of transistors in the chip, but memory bandwidth. This dictates the rate at which images are composed and rendered, and ultimately the FPS. I don't know what it is currently, but the NV20 will have a memory clock of up to 250MHz, giving 8Gbytes/sec of bandwidth. Another thing, which in my book is good, is the focus on efficient use of this bandwidth, by employing compression techniques and rejigging the old architecture.
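To put rough numbers on that bandwidth figure, here's my own back-of-envelope sketch (the 128-bit bus width and DDR assumption are mine, not quoted from the article):

```python
# Back-of-envelope memory bandwidth: a DDR bus transfers twice per clock,
# and a 128-bit bus moves 16 bytes per transfer.
def memory_bandwidth_gb_s(clock_mhz, bus_bits=128, ddr=True):
    bytes_per_clock = (bus_bits // 8) * (2 if ddr else 1)
    return clock_mhz * 1e6 * bytes_per_clock / 1e9

# 250MHz DDR on an assumed 128-bit bus works out to the quoted 8 Gbytes/sec:
print(memory_bandwidth_gb_s(250))  # 8.0
```

So the 8Gbytes/sec figure is consistent with a 250MHz DDR memory clock, if the bus is 128 bits wide.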
Other ramped-up bits include the increase in maximum texture size to 4096 x 4096, twice that of the GeForce Ultra 64MB cards. This is the biggest texture that DX8 can use in a texture pipeline at one time. The pipelines are what render the pixels; on the NV20 there will be 4, the same as the GeForce stuff. Essentially this allows developers to apply 4 textures per pixel, as on the GTS, to make 3D more realistic. However, there are only 2 texture blocks per pipeline. Some games, such as Quake 3, can use 3 or 4 textures per pixel, and so the implication is that with only 2 texture blocks the workload is doubled, and so the achievable fill rate drops. The important thing here is that there would be no gain from having 4 texture blocks, because the memory wouldn't be able to pump the data through fast enough due to the bandwidth.
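The texture-block arithmetic can be sketched like this (the 200MHz core clock is a made-up illustrative figure, not a quoted NV20 spec):

```python
import math

# With only 2 texture blocks per pipeline, a pass needing 3 or 4 textures
# per pixel takes 2 clocks instead of 1, halving the effective fill rate.
def effective_fill_rate_mpix(core_mhz, pipelines, tex_blocks, textures_per_pixel):
    clocks_per_pixel = math.ceil(textures_per_pixel / tex_blocks)
    return core_mhz * pipelines / clocks_per_pixel

# 4 pipelines at a hypothetical 200MHz core:
print(effective_fill_rate_mpix(200, 4, 2, 2))  # 800.0 Mpixels/sec
print(effective_fill_rate_mpix(200, 4, 2, 4))  # 400.0 -- workload doubled
```

Which is the Quake 3 situation described above: ask for 3 or 4 textures per pixel and the fill rate halves.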
Another included feature, which does not appear on the GeForce or GeForce 2 cards, is volume textures. This seems to be in response to their inclusion on the ATI Radeon cards. This was something that really made the Radeon stand out, as well as true 32bit colour, which is also included on the NV20. The Radeon, though, supports compressed volume textures; perhaps that was thought unnecessary on the NV20, due to its increased performance.
Additionally, the hardware T&L and FSAA have both been considerably beefed up, giving better performance and making your games run much faster. The problem with the first set of GeForce cards, and to some extent the GTS, PRO and Ultras, was that these effects had to be balanced against the fill rate and resolution used. The result was a big drop in FPS if FSAA was fully turned on. So Quake 3 looked pretty but didn't whizz past. This was why many gamers were sceptical about FSAA. Essentially all this was due to limited bandwidth. But better hardware T&L and FSAA, increased bandwidth and more efficient data compression sort this out.
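A quick sketch of why FSAA was such a fill-rate hog on those earlier cards (this assumes simple supersampling, where each output pixel is built from N rendered samples):

```python
# Supersampled FSAA renders N samples per output pixel, so the fill work
# (and the bandwidth to move it) scales directly with the sample count.
def fsaa_pixel_cost(width, height, samples):
    return width * height * samples

base = fsaa_pixel_cost(1024, 768, 1)   # no antialiasing
aa4x = fsaa_pixel_cost(1024, 768, 4)   # 4x FSAA
print(aa4x / base)  # 4.0 -- four times the fill work per frame
```

Four times the work per frame is exactly the kind of load that flattened the FPS on the first GeForce cards.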
The article also goes into some detail about the NV2A, the chipset for the XBox, and blows away some of those astounding figures we've been hearing. Part of the reason that DX8.0 is so well supported by the NV20 is the close working relationship that NVidia and Microsoft have developed because of XBox demands.
It can get rather dry, especially towards the end. Most of the useful data (at least for most gamers) is at the beginning. It goes on to compare performance and abilities with the GeForce Ultra and ATI Radeon cards: such things as buffer enhancements, and pixel and vertex shader details. I'm sure some of you will appreciate these things.
So, whilst you power-hungry fiends out there may not be overly impressed by the lack of extra grunt the NV20 offers, I'm suitably impressed. NVidia have worked hard to make things look better. We have reached the plateau in terms of the resolution and full-on fill rate needed. The next step is to continue to make things look more realistic and visually stunning. Further exploiting the power of DX8.0 should make things easier for future game designers, who can design for DX8.0 rather than specifically for the cards out there. Expect to be seeing 64MB cards soon, followed by some mother-humping 128MB cards shortly after. Some guys might be thinking that's unnecessary, but the article highlights that, whilst bandwidth is the cap for most things, extra memory still helps push things along quicker. I'm off to start saving my money now (I can see my bank manager's face now! :) ).
Mon 22/01/01 at 20:38
Regular
Posts: 25
I have heard of these drivers too, though I can't recall the site that usually offers them. As far as I've found, what is in the pre-release drivers goes into the release version. Mostly these drivers are stable, but every now and then they can be a bit unpredictable. I guess you take the risk when using drivers like these.
Mon 22/01/01 at 18:16
Posts: 0
I have heard that there are leaked Nvidia drivers, version 7 (I think), that make use of HSR on the GeForce cards. I know these drivers are probably in the alpha stage at best, but does this mean that HSR will be in the next driver release?
