Gormless wrote:
"Scott Glasgow" wrote in message
Gormless wrote:
"Walterius" wrote in message
RAM and 1024 GHz processors.
1024 GHz is something we won’t see in my lifetime or our grandchildren’s. I think we’re up to around 3.4 GHz at present.
Helen
Hmm, think again. Just 23 years ago, in 1981, the original IBM PC debuted with a 4.77 MHz processor. Less than a full generation later (a generation generally being taken to mean 30 years) we're roughly 700 times as fast at 3.4 GHz. I would venture that we will probably see at least comparable growth continue, yielding systems running comfortably beyond 1024 GHz by the time your children are only preparing to have your grandchildren, much less by the end of your grandchildren's lifetimes.
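If you want to check the arithmetic, here's a quick back-of-the-envelope sketch in Python. The 1981 and 2004 figures are the ones quoted above; the compound-growth extrapolation is mine, and it's a naive one (it assumes clock speed keeps growing at its historical rate, which is exactly the assumption under debate):

    import math

    # Figures quoted above: 4.77 MHz in 1981, 3.4 GHz (3400 MHz) today.
    f_1981 = 4.77           # MHz, original IBM PC
    f_2004 = 3400.0         # MHz, current top-end clock
    years = 2004 - 1981     # 23 years

    growth = (f_2004 / f_1981) ** (1.0 / years)   # compound annual growth factor
    print("annual growth: ~%.0f%%" % ((growth - 1) * 100))   # ~33% per year

    # At that historical rate, how long until 1024 GHz (1,024,000 MHz)?
    target = 1024 * 1000.0
    t = math.log(target / f_2004) / math.log(growth)
    print("years to 1024 GHz: ~%.0f" % t)          # ~20 years

At the historical rate, 1024 GHz is only about two decades out; even if the growth rate halved, it would still arrive well inside one lifetime.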
And that clock speed difference doesn't even reveal the true speed delta between then and now. Those early PCs had an ISA bus with 8-bit and 16-bit peripheral expansion cards, and a slow memory interface: a 20-bit address bus (IIRC) feeding an 8-bit data path. The PC-AT introduced the "blazingly fast" 16-bit ISA bus at 8 MHz, with the 80286 processor running at 6 to 8 MHz (12 MHz in later clones). Furthermore, the processors had no memory management, no floating-point unit, no first- or second-level cache, and internal architectures that did not support pipelining, prefetch queues, or speculative execution.

Now we have systems with all of these features: built-in multimedia extensions, a 64/128-bit memory bus running on screaming-fast dual-channel double data rate (DDR) DRAM at 533 MHz, independent graphics processing provided by video subsystems that are all but computers in themselves, and chipsets that provide standardized built-in networking, including wireless 802.11a/b/g (as in my Thinkpad R50p with its Pentium M), plus 5.1 surround sound support. And all this for 1/8 the price of the original PC-XT, and that's in today's dollars!
Rest assured, Helen. You ain’t seen nuthin’ yet! 😉
Cheers,
Scott
Thanks for this, Scott. I don't profess to understand an awful lot about computers. The leap from 3.4 GHz to 1024 GHz in, what, say 60 years may or may not be possible, and a graph of recent trends might well suggest it's likely. However, I thought things were slowing down nowadays, something to do with their having got the little wires and things on chips as close together as they can be. Another issue at play, as I understood it, is heat dissipation: I've heard that if chips get much more powerful, getting rid of the heat they produce will become very difficult.
Helen
Just a photographer, not a computer scientist.
Well, you see, the thing is that it's not just a matter of feature density (so many transistors per square millimetre). A great many of the improvements in processor performance have come from other factors: architecture changes, microcode redesign, multiple operations per cycle (superscalar execution), pipelining and prefetch, and so on. Yes, these have been accompanied by increases in transistor count, but the performance improvements are far out of proportion to the growth in transistor count.
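To put rough numbers on that, model throughput as clock speed times instructions per cycle (IPC). The IPC figures below are illustrative guesses of mine, not measurements, but they show the shape of the thing:

    # Crude model: throughput ~ clock (MHz) x instructions per cycle (IPC).
    # The IPC values are illustrative guesses, not measurements.
    chips = {
        "8088 (1981)":      {"mhz": 4.77,   "ipc": 0.1},   # many cycles per instruction
        "Pentium 4 (2004)": {"mhz": 3400.0, "ipc": 1.5},   # superscalar, pipelined
    }

    base = chips["8088 (1981)"]
    base_mips = base["mhz"] * base["ipc"]
    for name, c in chips.items():
        mips = c["mhz"] * c["ipc"]             # millions of instructions per second
        print("%s: ~%.1f MIPS (%.0fx the 8088)" % (name, mips, mips / base_mips))

The clock alone grew roughly 700x, but on these (made-up) IPC numbers the instruction throughput grew more like 10,000x.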
Even there, predictions of limits seem to be valid for about as long as it takes to get them published. I recall, back around the 486 period when the Pentium was introduced, talk of the .90 micron barrier (1 micron = 1 millionth of a meter). Now Intel has .18 micron and .13 micron processes in production. There are also interesting developments underway which may eventually make feature size irrelevant, notably quantum computing. Current processes basically depend upon storing and manipulating two-state data; proposed quantum computers have the potential to store and manipulate many states simultaneously, which exponentially changes the equations describing what is possible with a given amount of hardware. Current experiments in building long single-walled carbon nanotube strands ("bucky strands") for use in semiconductors and interconnects open the possibility of new physical approaches to computing as well.
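The "many states simultaneously" part is easier to see with a little arithmetic: a classical register of n bits holds exactly one of 2^n values at any instant, while a register of n qubits is described by 2^n complex amplitudes at once. Here's a toy sketch (a standard textbook construction, nothing to do with any real quantum hardware):

    import numpy as np

    # A classical n-bit register holds ONE of 2**n values at a time.
    # An n-qubit register is described by 2**n amplitudes simultaneously,
    # which is why simulating one classically takes exponential memory.
    n = 3
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # one-qubit Hadamard gate

    U = H
    for _ in range(n - 1):
        U = np.kron(U, H)      # Hadamard on every qubit: a 2**n x 2**n matrix

    state = np.zeros(2**n)
    state[0] = 1.0             # start in the all-zeros basis state |000>
    state = U @ state          # equal superposition over all 2**n basis states
    print(len(state), state)   # 8 amplitudes, each 1/sqrt(8) ~ 0.354

Three qubits already need 8 amplitudes; 50 would need about 10^15. That's the exponential change in the equations.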
Other factors include those mentioned above: improvements in bus speeds, separation of bus functions so that particular data types and data flows can be optimized, increases in memory speed and capacity, and operating systems improved to take advantage of the new hardware. All of these incremental improvements combine to make our subjective computing experience faster and faster.
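One way to see why all those pieces have to improve together is Amdahl's law: speeding up only one component is limited by everything else. A toy model, with completely made-up time fractions:

    # Amdahl's law: whole-system speedup when `fraction` of the time
    # gets `component_speedup`. The fractions below are made up.
    def overall_speedup(fraction, component_speedup):
        return 1.0 / ((1.0 - fraction) + fraction / component_speedup)

    # Suppose 60% of the time is CPU-bound, 25% memory-bound, 15% I/O-bound.
    print(overall_speedup(0.60, 10.0))   # CPU alone 10x faster -> only ~2.2x overall
    combined = 1.0 / (0.60 / 10.0 + 0.25 / 4.0 + 0.15 / 2.0)
    print(combined)                      # CPU 10x, memory 4x, I/O 2x -> ~5x overall

Making only the CPU ten times faster barely doubles the whole system; improve the buses, memory, and I/O too, and the gains multiply up.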
Basically, what it all comes down to, when one examines _all_ of the advances along the current frontiers of technological development, is that any flat statement as to the limitations of speed and power is likely to err seriously on the conservative side. If there was a Vegas line, I’d be betting on the upside myself. 😉
Cheers,
Scott