I, for one, am entirely too pleased about this: http://www.research.ibm.com/resources/news/20010503_display.shtml
I also heard news not long ago about a similar breakthrough in light-emitting-polymer (LEP) based display manufacturing from the Epson corporation, who can now produce print heads custom made for layering various LEP substances onto a fine wire mesh, making the economical manufacture of flat panel screens built from these materials practical. (This technology is not solely controlled by Epson, as it follows a similar development from a local company.)
What concerns me is the timescale on which these technologies are likely to be implemented. It's been a few months since the announcement that they had the LEP manufacturing technology, and I would have thought that display manufacturers would be desperate to snap it up... however, such screens have yet to appear.
Also, the limitations of fixed-pixel-size displays may turn out to be a significant disadvantage until panel resolutions are extreme. A CRT monitor has a fixed number of holes in its grille/mesh and can never be said to provide a perfectly sharp image, since the timing of the video card's RAMDAC output never lines up exactly with those holes (on top of the sloppy transition from one beam intensity to the next caused by the reactance of the RAMDAC/cable/gun system), but this fuzziness increases in a fairly steady way as you increase the resolution. The primary benefit of current TFT screens is their sharpness at their native resolution, but they then have severe difficulties when it comes to displaying a screen at any other resolution. The choices are either to display the image smaller (but sharp) or to expand it to fill the screen using either a nearest-neighbour scheme or a bilinear filtering algorithm. Bilinear filtering is obviously superior to nearest-neighbour, and probably also better than just leaving the image small, because as resolutions climb thanks to these manufacturing advances, the unscaled image would become a titchy little blob in the middle of the screen.
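To make the trade-off concrete, here's a rough sketch of the two upscaling approaches (just illustrative Python on a greyscale image stored as a 2D list, not anything a real scaler chip actually does). Nearest-neighbour picks the closest source pixel, so you get blocky duplication; bilinear blends the four surrounding source pixels by distance, so edges go soft rather than chunky:

    # Illustrative sketch only: upscaling a low-res frame to a fixed-pixel
    # panel's native resolution, nearest-neighbour vs bilinear interpolation.
    # The "image" is a 2D list of greyscale values in 0..255.

    def nearest_neighbour(src, out_w, out_h):
        src_h, src_w = len(src), len(src[0])
        out = []
        for y in range(out_h):
            sy = min(int(y * src_h / out_h), src_h - 1)
            row = []
            for x in range(out_w):
                sx = min(int(x * src_w / out_w), src_w - 1)
                row.append(src[sy][sx])   # copy the nearest source pixel
            out.append(row)
        return out

    def bilinear(src, out_w, out_h):
        src_h, src_w = len(src), len(src[0])
        out = []
        for y in range(out_h):
            fy = y * (src_h - 1) / (out_h - 1) if out_h > 1 else 0
            y0 = int(fy)
            y1 = min(y0 + 1, src_h - 1)
            wy = fy - y0
            row = []
            for x in range(out_w):
                fx = x * (src_w - 1) / (out_w - 1) if out_w > 1 else 0
                x0 = int(fx)
                x1 = min(x0 + 1, src_w - 1)
                wx = fx - x0
                # blend the four surrounding source pixels by distance
                top = src[y0][x0] * (1 - wx) + src[y0][x1] * wx
                bot = src[y1][x0] * (1 - wx) + src[y1][x1] * wx
                row.append(top * (1 - wy) + bot * wy)
            out.append(row)
        return out

    if __name__ == "__main__":
        tiny = [[0, 255], [255, 0]]           # 2x2 checker pattern
        print(nearest_neighbour(tiny, 4, 4))  # blocky: solid 2x2 blocks of 0/255
        print(bilinear(tiny, 4, 4))           # smoothed: intermediate greys appear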
But trying to display a bitmap with a higher resolution than the screen's native one is a worse problem, and is similar to the problem of texture minification in 3D graphics. There aren't enough pixels, so displaying it at full size is out of the question. Randomly culling the occasional pixel (colouring each physical pixel with the colour of the nearest "virtual pixel" of the screen you're trying to represent) is still a bad solution, as some pixels simply wouldn't be represented. Bilinear filtering would look a bit better than nearest-neighbour, but could still ultimately neglect pixels. The only real way to do it would be to take a weighted average of the "virtual pixels" each physical pixel covers (weighted according to the distance of each virtual pixel from the actual pixel's centre). The problem is that at large resolutions this would require the image scaler to keep, for each screen pixel, the virtual pixels under it, their relevance factors, and possibly a running total (as it would probably have to be done as an iterative process), which will be a very computationally intensive process when it's expected to update the physical frame buffer at 200fps for some dude running Quake 8 on the new nVidia Melvinator 3.
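For what it's worth, here's a toy version of that weighted-average idea (again just illustrative Python, and weighting by overlap area rather than by distance from the pixel centre, but the principle is the same): every virtual pixel that overlaps a physical pixel contributes in proportion to the overlap, so nothing gets silently dropped. Even this naive version makes the per-output-pixel cost obvious compared with nearest-neighbour:

    # Illustrative sketch only: shrinking a higher-resolution "virtual" frame
    # onto a lower-resolution physical panel with an area (box) filter, so
    # every virtual pixel contributes in proportion to how much of it falls
    # under each physical pixel.

    def area_downscale(src, out_w, out_h):
        src_h, src_w = len(src), len(src[0])
        out = []
        for y in range(out_h):
            # vertical span of virtual pixels covered by this physical row
            y_start = y * src_h / out_h
            y_end = (y + 1) * src_h / out_h
            row = []
            for x in range(out_w):
                x_start = x * src_w / out_w
                x_end = (x + 1) * src_w / out_w
                total = weight = 0.0
                for sy in range(int(y_start), min(int(y_end) + 1, src_h)):
                    wy = min(sy + 1, y_end) - max(sy, y_start)  # vertical overlap
                    if wy <= 0:
                        continue
                    for sx in range(int(x_start), min(int(x_end) + 1, src_w)):
                        wx = min(sx + 1, x_end) - max(sx, x_start)  # horizontal overlap
                        if wx <= 0:
                            continue
                        total += src[sy][sx] * wy * wx
                        weight += wy * wx
                row.append(total / weight)
            out.append(row)
        return out

    if __name__ == "__main__":
        big = [[(x + y) % 2 * 255 for x in range(4)] for y in range(4)]
        print(area_downscale(big, 2, 2))  # each 2x2 block averages to 127.5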
There is no really "good" way to scale down an image for fixed-pixel-size displays that satisfies all the criteria (prettiness, practicality).
So, I think we're going to need ungodly resolutions on the actual screen panels while not actually going all that high with the display resolution we run them at.
What do you think?