Photoshop CS3 and OS X 10.6 Snow Leopard

AB
Alan Browne
Sep 2, 2009
Views: 1871 · Replies: 47 · Status: Closed
Apparently there are some minor bugs with PS-CS3 under Snow Leopard. David Pogue (NYT) claims crashes every 10 minutes or so, but most people just experience some very minor display bugs with easy workarounds, if any are required at all (no effect on actual work).

Adobe refuse to "officially" test CS3 under Snow Leopard. This is a PO for me – I don’t consider s/w I paid over $600 for less than 2 yrs ago to be legacy. Many feel the same – it has been a contentious issue on various blogs, including the John Nack blog hosted at Adobe.


D
DrJohnRuss
Sep 2, 2009
On Sep 2, 4:06 pm, Alan Browne
wrote:
Apparently there are some minor bugs with PS-CS3 under Snow Leopard. David Pogue (NYT) claims crashes every 10 minutes or so, but most people just experience some very minor display bugs with easy workarounds if (at all) required.  (no effect on actual work).

Adobe refuse to "officially" test CS3 under snow leopard.  This is a PO for me – I don’t consider s/w I paid over $600 for less than 2 yrs ago to be legacy.  Many feel the same – has been a contentious issue in various blogs, including the John Nack blog hosted at Adobe.

I am running CS2, CS3 and CS4 on a MacBook Pro that has been upgraded to Snow Leopard. CS3 and CS4 run flawlessly. CS2 has some "graphic anomalies" and I have seen several unexplained crashes that seem to have to do with dialogs opening up. But I am quite pleased with the startup times for Photoshop with 10.6. I have a solid state drive, and Photoshop boots (and loads an enormous number – more than 200 – plug-ins) in less than 2 seconds.
ER
Elliott Roper
Sep 2, 2009
In article , Alan Browne
wrote:

Apparently there are some minor bugs with PS-CS3 under Snow Leopard. David Pogue (NYT) claims crashes every 10 minutes or so, but most people just experience some very minor display bugs with easy workarounds if (at all) required. (no effect on actual work).

Adobe refuse to "officially" test CS3 under snow leopard. This is a PO for me – I don’t consider s/w I paid over $600 for less than 2 yrs ago to be legacy. Many feel the same – has been a contentious issue in various blogs, including the John Nack blog hosted at Adobe.

Adobe are ruining their already bad name.
They can hardly expect people to upgrade to CS4 just for Snow Leopard when CS4 has no Grand Central, no OpenCL, and is not 64-bit.

They’d better be nice to everyone with CS(mumble) and hope those people still feel like ever doing business with Adobe again, while it races to bring out a CS-something that supports all of 10.6’s scary technology.

Snow Leopard should not cause Adobe too much heartache in looking after CS3. When they try to scare me into buying CS4 just so I can say "supported", they are sure tempting me to upgrade to "stolen".

What I *will* do is run CS3 till it breaks and Adobe does not fix it. If CS5 with all the OS toys is out before CS3 breaks, Adobe has a chance of getting some money off me, even if I have to go back to Australia again to get it at 60% off treasure-island price. (Treasure Island is American software company speak for the UK)


To de-mung my e-mail address:- fsnospam$elliott$$
PGP Fingerprint: 1A96 3CF7 637F 896B C810 E199 7E5C A9E4 8E59 E248
AB
Alan Browne
Sep 2, 2009
John Russ wrote:
On Sep 2, 4:06 pm, Alan Browne
wrote:
Apparently there are some minor bugs with PS-CS3 under Snow Leopard. David Pogue (NYT) claims crashes every 10 minutes or so, but most people just experience some very minor display bugs with easy workarounds if (at all) required. (no effect on actual work).

Adobe refuse to "officially" test CS3 under snow leopard. This is a PO for me – I don’t consider s/w I paid over $600 for less than 2 yrs ago to be legacy. Many feel the same – has been a contentious issue in various blogs, including the John Nack blog hosted at Adobe.

I am running CS2, CS3 and CS4 on a MacBook Pro that has been upgraded to Snow Leopard. CS3 and CS4 run flawlessly. CS2 has some "graphic anomalies" and I have seen several unexplained crashes that seem to have to do with dialogs opening up. But I am quite pleased with the startup times for Photoshop with 10.6. I have a solid state drive, and Photoshop boots (and loads an enormous number – more than 200 – plug-ins) in less than 2 seconds.

Glad to hear it.
S
Savageduck
Sep 2, 2009
On 2009-09-02 13:06:26 -0700, Alan Browne
said:

Apparently there are some minor bugs with PS-CS3 under Snow Leopard. David Pogue (NYT) claims crashes every 10 minutes or so, but most people just experience some very minor display bugs with easy workarounds if (at all) required. (no effect on actual work).
Adobe refuse to "officially" test CS3 under snow leopard. This is a PO for me – I don’t consider s/w I paid over $600 for less than 2 yrs ago to be legacy. Many feel the same – has been a contentious issue in various blogs, including the John Nack blog hosted at Adobe.

This was available last week:
http://www.macworld.com/article/142449/2009/08/nack.html

Regards,

Savageduck
ER
Elliott Roper
Sep 2, 2009
In article , Savageduck
<savageduck@{REMOVESPAM}me.com> wrote:

On 2009-09-02 13:06:26 -0700, Alan Browne
said:

Apparently there are some minor bugs with PS-CS3 under Snow Leopard. David Pogue (NYT) claims crashes every 10 minutes or so, but most people just experience some very minor display bugs with easy workarounds if (at all) required. (no effect on actual work).
Adobe refuse to "officially" test CS3 under snow leopard. This is a PO for me – I don’t consider s/w I paid over $600 for less than 2 yrs ago to be legacy. Many feel the same – has been a contentious issue in various blogs, including the John Nack blog hosted at Adobe.

This was available last week:
http://www.macworld.com/article/142449/2009/08/nack.html

I might have jumped too soon on Adobe.

http://blogs.adobe.com/jnack/2009/09/a_few_problems_found_with_ps_sl.html#comments

Yesterday’s Nack blog:
"We’re continuing to work with Apple to diagnose & troubleshoot issues that customers report when running Photoshop CS3 and CS4 on Snow Leopard. At the moment we’re aware of a couple of problems:…"

So they have not left CS3 completely behind.


R
rfischer
Sep 3, 2009
Elliott Roper wrote:
In article , Alan Browne
wrote:

Apparently there are some minor bugs with PS-CS3 under Snow Leopard. David Pogue (NYT) claims crashes every 10 minutes or so, but most people just experience some very minor display bugs with easy workarounds if (at all) required. (no effect on actual work).

Adobe refuse to "officially" test CS3 under snow leopard. This is a PO for me – I don’t consider s/w I paid over $600 for less than 2 yrs ago to be legacy. Many feel the same – has been a contentious issue in various blogs, including the John Nack blog hosted at Adobe.

Adobe are ruining their already bad name.
They can hardly expect people to upgrade to CS4 just for Snow Leopard when CS4 has no Grand Central, no OpenCL, and is not 64-bit.

You don’t know as much as you think you know.

They’d better be nice to everyone with CS(mumble) and hope those people still feel like ever doing business with Adobe again, while it races to bring out a CS-something that supports all of 10.6’s scary technology.

Obviously they should have planned for Snow Leopard several years ago when they were doing CS2 development.


Ray Fischer
AB
Alan Browne
Sep 3, 2009
Savageduck wrote:
On 2009-09-02 13:06:26 -0700, Alan Browne
said:

Apparently there are some minor bugs with PS-CS3 under Snow Leopard. David Pogue (NYT) claims crashes every 10 minutes or so, but most people just experience some very minor display bugs with easy workarounds if (at all) required. (no effect on actual work).
Adobe refuse to "officially" test CS3 under snow leopard. This is a PO for me – I don’t consider s/w I paid over $600 for less than 2 yrs ago to be legacy. Many feel the same – has been a contentious issue in various blogs, including the John Nack blog hosted at Adobe.

This was available last week:
http://www.macworld.com/article/142449/2009/08/nack.html

Better that you read the actual Nack blog – see a large number of people not happy with Adobe.
N
nospam
Sep 3, 2009
In article , Alan Browne
wrote:

Better that you read the actual Nack blog – see a large number of people not happy with Adobe.

and a lot of people are happy with adobe. people who aren’t are usually the ones who post.
N
nospam
Sep 3, 2009
In article , Alan Browne
wrote:

… well … except for that "official" statement where they will not update CS3 because of Snow Leopard.

cs3 is no longer for sale. why would they update it?
AB
Alan Browne
Sep 3, 2009
nospam wrote:
In article , Alan Browne
wrote:

Better that you read the actual Nack blog – see a large number of people not happy with Adobe.

and a lot of people are happy with adobe. people who aren’t are usually the ones who post.

I’m quite happy with Adobe. Doesn’t mean I won’t mention what doesn’t please me.
AB
Alan Browne
Sep 3, 2009
nospam wrote:
In article , Alan Browne
wrote:

… well … except for that "official" statement where they will not update CS3 because of Snow Leopard.

cs3 is no longer for sale. why would they update it?

Not asking for feature updates, but maintenance due to Snow Leopard.
ER
Elliott Roper
Sep 3, 2009
In article , Alan Browne
wrote:

nospam wrote:
In article , Alan Browne
wrote:

… well … except for that "official" statement where they will not update CS3 because of Snow Leopard.

cs3 is no longer for sale. why would they update it?

Not asking for feature updates, but maintenance due to Snow Leopard.

I see it as a commercial decision. If there are many people choosing to skip CS4 because of its lack of Snow Leopard magic, Adobe can either keep them sweet till CS5 or hack them off mightily to the point where they’ll never buy anything of Adobe’s ever again. If they see CS4’s sole supported status as an attempt to milk them twice in 12 months, they well might.
On the other hand, if keeping CS3 alive severely impacts Adobe’s ability to deliver CS5 before hell freezes over, you would have to cut them a tiny bit of slack.
My uninformed guess is that CS3 and 4 are pretty similar in the way they interact with the OS, so that fixing both is not much more hassle for Adobe than fixing one when either mis-performs under Snow Leopard.


DM
Doug McDonald
Sep 4, 2009
Ray Fischer wrote:

Obviously they should have planned for Snow Leopard several years ago when they were doing CS2 development.

Exactly. And that is the big point.

I don’t really know about the Mac and its failure to pick … TWICE … the winning chip for its CPU, but I do know Windows and its progression through new generations of chips and various versions of Windows.

I have written lots and lots of programs for Windows. Each and every one, from 16-bit Windows to now, still runs perfectly, on each version of Windows, with one partial exception: one written for 16-bit Windows that explicitly wrote directly to registers on a graphics card … and, of course, that one didn’t even run on the original Windows version if it had the wrong card, so it was known to be of limited lifetime.

Anybody else could do the same thing if they wished. Just write to the rules and don’t cheat.

Doug McDonald
R
rfischer
Sep 4, 2009
Doug McDonald wrote:
Ray Fischer wrote:
Obviously they should have planned for Snow Leopard several years ago when they were doing CS2 development.

Exactly. And that is the big point.

I was being sarcastic. Expecting the developers at Adobe to know what is going to happen in the computer business 8 years in advance is idiocy.

I don’t really know about the Mac and its failure to pick … TWICE … the winning chip for its CPU, but I do know Windows and its progression through new generations of chips and various versions of Windows.

Intel didn’t "win" because of the technical excellence of their chips. The instruction set is crap.

I have written lots and lots of programs for Windows. Each and every one, from 16 bit Windows to now, still runs perfectly,

BFD. That only means that you write trivial programs that don’t make use of any advanced features.


Ray Fischer
N
nospam
Sep 4, 2009
In article <h7ps2i$5vr$>, Doug McDonald
wrote:

Obviously they should have planned for Snow Leopard several years ago when they were doing CS2 development.

Exactly. And that is the big point.

how does one plan for a future operating system whose specs had not even been thought of, let alone finalized?

cs2 came out 4 years ago and work began roughly 6 years ago, so you’d have needed a crystal ball that could see at least 5-6 years into the future. are you planning for whatever windows will be like in 2015??

I don’t really know about the Mac

yet you comment on it.

and its failure to pick … TWICE …
the winning chip for its CPU,

68k and powerpc were *much* better chips than x86 in many ways. unfortunately, better does not always mean commercial success.

but I do know Windows and its progression
though new generations chips and various versions of Windows.

which has nothing to do with the mac.

I have written lots and lots of programs for Windows.

nothing the size or complexity of photoshop.

Each and every one,
from 16 bit Windows to now, still runs perfectly, on each version of Windows, with one partial exception: one written for 16 bit Windows that explicitly wrote directly to registers on a graphics card … and, of course, that one didn’t even run on the original Windows version if it had the wrong card, so it was know to be of
limited lifetime.

which has absolutely nothing to do with photoshop and snow leopard on a mac.

Anybody else could do the same thing if they wished. Just write to the rules and don’t cheat.

what makes you think adobe didn’t write to the rules? have you examined their source code? maybe the issue is apple. maybe it’s adobe. maybe it’s a combination of both.
DM
Doug McDonald
Sep 4, 2009
nospam wrote:

and its failure to pick … TWICE …
the winning chip for its CPU,

68k and powerpc were *much* better chips than x86 in many ways. unfortunately, better does not always mean commercial success.

the powerpc was a good chip. The 68000 was a major disaster because of pipeline bottlenecks. The major bottlenecks in the Intel chips are mostly but not entirely in obscure corners and are mostly deprecated (though they still work.) The 68000 was filled everywhere with problems.

Doug McDonald
MB
Miles Bader
Sep 4, 2009
Doug McDonald writes:
I don’t really know about the Mac and its failure to pick … TWICE … the winning chip for its CPU, but I do know Windows and its progression through new generations of chips and various versions of Windows.

Er, well hindsight is everything. Remember, the original Mac was released in 1984, when it was far from obvious that Intel and Intel-Compatible CPUs would dominate so much (and of course it was _designed_ earlier than that). At that time, there was _far_ more variation in computer designs than there is now.

When Apple switched to the PPC, things had consolidated a bit around the "PC compatible", but Intel’s CPUs were still pretty poor, and the general wisdom at the time was that better CPU architectures would increasingly dominate in the future.

Microsoft doesn’t build hardware so they just kind of go with whatever the market throws at them (I guess they’ve tried to nudge things occasionally — e.g. porting NT to the Alpha and Mips processors — but never very hard).

-Miles

p.s. I use neither Macs nor Windows, so I’m not prejudiced either way 🙂


Cat, n. A soft, indestructible automaton provided by nature to be kicked when things go wrong in the domestic circle.
N
nospam
Sep 4, 2009
In article <h7pvpd$7jk$>, Doug McDonald
wrote:

and its failure to pick … TWICE …
the winning chip for its CPU,

68k and powerpc were *much* better chips than x86 in many ways. unfortunately, better does not always mean commercial success.
the powerpc was a good chip. The 68000 was a major disaster because of pipeline bottlenecks. The major bottlenecks in the Intel chips are mostly but not entirely in obscure corners and are mostly deprecated (though they still work.) The 68000 was filled everywhere with problems.

the 68k was much better than the 8086 back in the 1980s when it was chosen for the mac. for one thing, it was a 32 bit cpu, something that came much later to x86, which meant memory access was flat and none of that segmented memory shit. that made apps like photoshop much easier to write and debug.
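For anyone who never suffered it, a sketch of what that segmented memory looked like in practice (an editor's illustration, not part of the original post): an 8086 real-mode address was formed as segment * 16 + offset, so the same physical byte could be named by thousands of different segment:offset pairs, while the 68k just gave you one flat address.

```python
def real_mode_physical(segment: int, offset: int) -> int:
    """8086 real-mode translation: (segment << 4) + offset, a 20-bit result."""
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return ((segment << 4) + offset) & 0xFFFFF  # wraps at the 1 MiB boundary

# The same physical byte under two different segment:offset names:
print(hex(real_mode_physical(0x1234, 0x0010)))  # 0x12350
print(hex(real_mode_physical(0x1235, 0x0000)))  # 0x12350
```

Pointer comparisons and arithmetic across those aliases were a classic source of bugs, which is the "easier to write and debug" point above.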
MB
Miles Bader
Sep 4, 2009
Doug McDonald writes:
the powerpc was a good chip. The 68000 was a major disaster because of pipeline bottlenecks.

In 1984 (well, even earlier actually, as they had to design it before selling it)?! Most microprocessors were not pipelined at all then.
[Were _any_? Mainframes were, but micros?]

At that time (early-mid 80s), the 68000 was considered a _much_ better processor. It had far more registers, wider registers, a more regular design (so easier for compiler writers), a flat memory model, and was in many ways far more forward-thinking than the rather wacky and clunky 8086 architecture.

Intel’s main advantages were considered to be cost and a vague "compatibility" (heh) with earlier 8-bit intel cpus.

[Anyway, what is it about the 68k that you consider less pipeline-friendly? The 8086 with its dearth of registers and excessive use of dedicated registers seems far worse in that respect.]

-Miles


Non-combatant, n. A dead Quaker.
R
rfischer
Sep 4, 2009
Miles Bader wrote:
Doug McDonald writes:
the powerpc was a good chip. The 68000 was a major disaster because of pipeline bottlenecks.

In 1984 (well, even earlier actually, as they had to design it before selling it)?! Most microprocessors were not pipelined at all then.
[Were _any_? Mainframes were, but micros?]

At that time (early-mid 80s), the 68000 was considered a _much_ better processor. It had far more registers, wider registers, a more regular design (so easier for compiler writers), a flat memory model, and was in many ways far more forward-thinking than the rather wacky and clunky 8086 architecture.

The current x86 instruction set is basically an extension of the 8-bit 8080 CPU’s instruction set.


Ray Fischer
JU
jclarke.usenet
Sep 4, 2009
Miles Bader wrote:
Doug McDonald writes:
I don’t really know about the Mac and its failure to pick … TWICE … the winning chip for its CPU, but I do know Windows and its progression through new generations of chips and various versions of Windows.

Er, well hindsight is everything. Remember, the original Mac was released in 1984, when it was far from obvious that Intel and Intel-Compatible CPUs would dominate so much (and of course it was _designed_ earlier than that). At that time, there was _far_ more variation in computer designs than there is now.

When Apple switched to the PPC, things had consolidated a bit around the "PC compatible", but Intel’s CPUs were still pretty poor, and the general wisdom at the time was that better CPU architectures would increasingly dominate in the future.

Microsoft doesn’t build hardware so they just kind of go with whatever the market throws at them (I guess they’ve tried to nudge things occasionally — e.g. porting NT to the Alpha and Mips processors — but never very hard).

Actually NT was designed from the ground up to be portable–they saw it as a competitor to Unix. However there wasn’t really much interest in the non-Intel versions and finally Microsoft decided that supporting them was a losing proposition. Windows 2000 and Server 2K3 and 2K5 were ported to the Itanic without much fuss.
JU
jclarke.usenet
Sep 4, 2009
Miles Bader wrote:
Doug McDonald writes:
the powerpc was a good chip. The 68000 was a major disaster because of pipeline bottlenecks.

In 1984 (well, even earlier actually, as they had to design it before selling it)?! Most microprocessors were not pipelined at all then.
[Were _any_? Mainframes were, but micros?]

At that time (early-mid 80s), the 68000 was considered a _much_ better processor. It had far more registers, wider registers, a more regular design (so easier for compiler writers), a flat memory model, and was in many ways far more forward-thinking than the rather wacky and clunky 8086 architecture.

Intel’s main advantages were considered to be cost and a vague "compatibility" (heh) with earlier 8-bit intel cpus.

Nothing vague about it–all of my CP/M applications ran fine on my first PC. Wouldn’t be surprised if they run on my current 64-bit quad-core machine.

[Anyway, what is it about the 68k that you consider less pipeline-friendly? The 8086 with its dearth of registers and excessive use of dedicated registers seems far worse in that respect.]
-Miles
DM
Doug McDonald
Sep 4, 2009
Miles Bader wrote:
Doug McDonald writes:
the powerpc was a good chip. The 68000 was a major disaster because of pipeline bottlenecks.

In 1984 (well, even earlier actually, as they had to design it before selling it)?! Most microprocessors were not pipelined at all then.
[Were _any_? Mainframes were, but micros?]

At that time (early-mid 80s), the 68000 was considered a _much_ better processor.

By Mac heads.

It had far more registers, wider registers, a more regular design (so easier for compiler writers), a flat memory model, and was in many ways far more forward-thinking than the rather wacky and clunky 8086 architecture.

You have the key point exactly wrong: it was NOT forward looking. Forward looking would have meant looking for best design in a ruthlessly pipelined chip. Neither chip, of course, actually did such looking. The Intel chip just, by accident, was better. Motorola probably looked much too hard at the PDP-11.

Intel’s main advantages were considered to be cost and a vague "compatibility" (heh) with earlier 8-bit intel cpus.
[Anyway, what is it about the 68k that you consider less pipeline-friendly? The 8086 with its dearth of registers and excessive use of dedicated registers seems far worse in that respect.]

It appears to be in the details rather than the overall idea. The "dearth of registers and excessive use of dedicated registers" is of course completely harmless. The Intel use of a stack floating point unit was not good for programming, but apparently is harmless for modern chip design.

The real point is that "neatness" of instruction set is completely unimportant. The 68000 had a neat set. What matters is how well it works pipelined, which is understood only by specialists.

Doug McDonald
ER
Elliott Roper
Sep 4, 2009
In article , J. Clarke
wrote:

Actually NT was designed from the ground up to be portable–they saw it as a competitor to Unix. However there wasn’t really much interest in the non-Intel versions and finally Microsoft decided that supporting them was a losing proposition. Windows 2000 and Server 2K3 and 2K5 were ported to the Itanic without much fuss.

Good grief!
Has this group mutated into alt.folklore.computers?

Back on topic, let’s hope that Adobe don’t deliberately break CS3 on Snow Leopard. The recent signs are they are stepping back from the brink of being complete ars

NO CARRIER


MB
Miles Bader
Sep 4, 2009
"mcdonaldREMOVE TO ACTUALLY REACH ME"@scs.uiuc.edu writes:
At that time (early-mid 80s), the 68000 was considered a _much_ better processor.

By Mac heads.

No. Generally. Before the mac came out. By designers, not users.

The 68k and derivatives were very popular amongst workstation makers in the early 80s (e.g., apollo, later sun, …), and it was generally thought of as a good and modern architecture — unlike the 8086.

It had far more registers, wider registers, a more regular design (so easier for compiler writers), a flat memory model, and was in many ways far more forward-thinking than the rather wacky and clunky 8086 architecture.

You have the key point exactly wrong: it was NOT forward looking. Forward looking would have meant looking for best design in a ruthlessly pipelined chip.

The 68k _did_ have many attributes that were forward thinking — larger registers, more registers, fewer dedicated registers, etc. (all of which are good for making hardware fast, and easily adapting to larger problems). It was also overly complicated in terms of things like addressing modes etc — but then, so was the 8086.

Intel’s main advantages were considered to be cost and a vague "compatibility" (heh) with earlier 8-bit intel cpus.
[Anyway, what is it about the 68k that you consider less pipeline-friendly? The 8086 with its dearth of registers and excessive use of dedicated registers seems far worse in that respect.]

It appears to be in the details rather than the overall idea. The "dearth of registers and excessive use of dedicated registers" is of course completely harmless. The Intel use of a stack floating point unit was not good for programming, but apparently is harmless for modern chip design.

The real point is that "neatness" of instruction set is completely unimportant. The 68000 had a neat set. What matters is how well it works pipelined, which is understood only by specialists.

Sure, pipelining is important — but what is it about the 68k that makes it worse for pipelining than the 8086?

The points I mentioned _do_ actually make a difference, and it’s not in the 8086’s favor: to take advantage of pipelining you want to avoid artificial (unneeded) dependencies — and limited registers and dedicated registers _create_ artificial dependencies.
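A toy model of that point (my own sketch, not from the thread, and deliberately simplified: any two instructions touching the same register are forced to run in order, lumping together RAW/WAR/WAW hazards): two independent sums funneled through a single accumulator serialize completely, while the same work with a register to spare can overlap.

```python
def critical_path(instrs):
    """Toy pipeline: unlimited issue width, one cycle per instruction, but
    instructions touching a common register must execute in program order."""
    last_touch = {}  # register -> cycle when its last toucher completed
    depth = 0
    for dest, srcs in instrs:
        start = max(last_touch.get(r, 0) for r in srcs + [dest])
        done = start + 1
        for r in srcs + [dest]:
            last_touch[r] = done
        depth = max(depth, done)
    return depth

# x = a + b and y = c + d, forced through one accumulator:
one_reg = [("acc", ["a", "b"]), ("x", ["acc"]),
           ("acc", ["c", "d"]), ("y", ["acc"])]
# Same work with two registers: the sums are independent.
two_regs = [("r1", ["a", "b"]), ("x", ["r1"]),
            ("r2", ["c", "d"]), ("y", ["r2"])]

print(critical_path(one_reg))   # 4 cycles: everything serializes on acc
print(critical_path(two_regs))  # 2 cycles: the two sums can overlap
```

Modern x86 parts hide much of this with register renaming in hardware, but the dependencies the compiler is forced to write down still come from the architectural register count.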

The 68k obviously had bad points too, and modern RISC designs are better in many ways; however, the 68k was at the least, better than the 8086 (which had most of the same problems, and many more).

-Miles


DM
Doug McDonald
Sep 4, 2009
Miles Bader wrote:

The 68k obviously had bad points too,

such as addressing modes that were, in detail, bad for pipelining.

The 68000 was influenced by the PDP-11, which was a "clean" "symmetric" design which had fatal flaws in its overcomplicated addressing. The RISC designs used much cleaner addressing, but required more instructions.

But time has told the tale, and the Intel designs won, big time.

Doug McDonald
CH
Chris Hills
Sep 4, 2009
In message
writes
"mcdonaldREMOVE TO ACTUALLY REACH ME"@scs.uiuc.edu writes:
At that time (early-mid 80s), the 68000 was considered a _much_ better processor.

By Mac heads.

No. Generally. Before the mac came out. By designers, not users.

Very true.

The 68k and derivatives were very popular amongst workstation makers in the early 80s (e.g., apollo, later sun, …), and was generally thought of as being a good and modern architecture — unlike the 8086.

I agree. I loved programming the 68K; it had a lovely architecture. Actually, the 68K is still in use in some missile systems. The 68K became ColdFire, and the ghost of the 68K lives on in ARM, but I doubt they will admit it.

The x86 was a pig and not really liked much. The problem is, like VHS, it took over the world. Or rather, the x86 did not, but the PC did.

It had far more registers, wider registers, a more regular design (so easier for compiler writers), a flat memory model, and was in many ways far more forward-thinking than the rather wacky and clunky 8086 architecture.

You have the key point exactly wrong: it was NOT forward looking. Forward looking would have meant looking for best design in a ruthlessly pipelined chip.

The 68k _did_ have many attributes that were forward thinking — larger registers, more registers, fewer dedicated registers, etc. (all of which are good for making hardware fast, and easily adapting to larger problems). It was also overly complicated in terms of things like addressing modes etc — but then, so was the 8086.

As I said, the ghost of the 68K lives on in the ARM cores. The memory map in the 68K was much better than the x86’s, as was the memory handling.

Intel’s main advantages were considered to be cost and a vague "compatibility" (heh) with earlier 8-bit intel cpus.
[Anyway, what is it about the 68k that you consider less pipeline-friendly? The 8086 with its dearth of registers and excessive use of dedicated registers seems far worse in that respect.]

It appears to be in the details rather than the overall idea. The "dearth of registers and excessive use of dedicated registers" is of course completely harmless. The Intel use of a stack floating point unit was not good for programming, but
apparently is harmless for modern chip design.

The real point is that "neatness" of instruction set is completely unimportant. The 68000 had a neat set. What matters is how well it works pipelined, which is understood only by specialists.

Sure, pipelining is important — but what is it about the 68k that makes it worse for pipelining than the 8086?

Nothing.

The points I mentioned _do_ actually make a difference, and it’s not in the 8086’s favor: to take advantage of pipelining you want to avoid artificial (unneeded) dependencies — and limited registers and dedicated registers _create_ artificial dependencies.

The 68k obviously had bad points too, and modern RISC designs are better in many ways; however, the 68k was at the least, better than the 8086 (which had most of the same problems, and many more).

True. BTW the reasons why Apple switched to Intel had nothing to do with Intel having a better architecture as such. PPC was heading off, or rather not turning to, the market Intel was in that Apple wanted to take the Macs to.

The PPC is used in telecoms and other related high-reliability applications, applications that don’t need multiple cores, etc. So for Freescale, Apple are a small, if significant, customer, but not one worth developing new multi-core parts for.


\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
CH
Chris Hills
Sep 4, 2009
In message <h7rf7r$rs8$>, "mcdonaldREMOVE TO ACTUALLY REACH ME"@scs.uiuc.edu writes
Miles Bader wrote:

The 68k obviously had bad points too,

such as addressing modes that were, in detail, bad for pipelining.

The 68000 was influenced by the PDP-11, which was a "clean" "symmetric" design which had fatal flaws in its overcomplicated addressing. The RISC designs used much cleaner addressing, but required more instructions.

But time has told the tale, and the Intel designs won, big time.

And VHS won out over Beta…. though the professionals continued to use Beta until DVD and other technologies made both VHS and Beta obsolete.

The fact that Intel parts are used in desktop PC’s has no bearing on their technical prowess. It is purely commercial.


N
nospam
Sep 4, 2009
In article <h7rf7r$rs8$>, < ME"@scs.uiuc.edu> wrote:

But time has told the tale, and the Intel designs won, big time.

it only won big time because of windows. it has nothing to do with which chip is better.
N
nospam
Sep 4, 2009
In article <h7r2pf$ma6$>, < ME"@scs.uiuc.edu> wrote:

At that time (early-mid 80s), the 68000 was considered a _much_ better processor.

By Mac heads.

it was considered a better processor *before* there was a mac, and why apple chose it *for* the mac. they originally wanted to use a different processor.

It had far more registers, wider registers, a more regular design (so easier for compiler writers), a flat memory model, and was in many ways far more forward-thinking than the rather wacky and clunky 8086 architecture.

You have the key point exactly wrong: it was NOT forward-looking. Forward-looking would have meant looking for the best design in a ruthlessly pipelined chip. Neither chip, of course, actually did such looking. The Intel chip just, by accident, was better. Motorola probably looked much too hard at the PDP-11.

no, the intel chip, just by accident, benefitted from microsoft’s dominance. it has nothing to do with which is better. had the original pc used a different chip, intel wouldn’t be where they are today.
N
nospam
Sep 4, 2009
In article , Chris H
wrote:

True. BTW the reasons why Apple switched to Intel had nothing to do with Intel having a better architecture as such. PPC was heading off, or rather not turning to, the market Intel was in that Apple wanted to take the Macs to.

yep. os x is platform agnostic. it doesn’t matter what the cpu is. freescale/ibm dropped the ball and had little planned for low power chips (i.e., laptops) where the market is headed. intel did. so apple switched. in fact, the first intel macs were faster in some ways and slower in others.

The PPC is used in telecoms and other related high-reliability applications — applications that don't need multiple cores, etc. So for Freescale, Apple are a small, if significant, customer, but not one worth developing new multi-core parts for.

there was a *very* nice dual core g4 that apple had planned to use in a future powerbook before they committed to switching to intel.
MB
Miles Bader
Sep 4, 2009
nospam writes:
But time has told the tale, and the Intel designs won, big time.

it only won big time because of windows. it has nothing to do with which chip is better.

Indeed.

Basically: Intel (like MS) got very, very lucky, and managed not to drop the ball (at least too often).

These days they go to _heroic_ lengths to make the x86 architecture work in a modern context, but the original design sure doesn’t help.

-Miles


Infancy, n. The period of our lives when, according to Wordsworth, ‘Heaven lies about us.’ The world begins lying about us pretty soon afterward.
AB
Alan Browne
Sep 4, 2009
Miles Bader wrote:
Doug McDonald writes:
the powerpc was a good chip. The 68000 was a major disaster because of pipeline bottlenecks.

In 1984 (well, even earlier actually, as they had to design it before selling it)?! Most microprocessors were not pipelined at all then.
[Were _any_? Mainframes were, but micros?]

At that time (early-mid 80s), the 68000 was considered a _much_ better processor. It had far more registers, wider registers, a more regular design (so easier for compiler writers), a flat memory model, and was in many ways far more forward-thinking than the rather wacky and clunky 8086 architecture.

We reviewed both in the late '80s/early '90s and found nothing particularly advantageous with the 68000, and went with the 8086 (esp. 80186) for real-time navigation systems. Another division went with the 68000.

In the end no big deal. I coded mostly in assembler and Pascal for 8086 (some C too) and found it to be a good machine for real time where we tended to keep modules small, compact and fast. More GP registers would have been nicer, but it was really no big deal.
AB
Alan Browne
Sep 5, 2009
Chris H wrote:

True. BTW the reasons why Apple switched to Intel had nothing to do with Intel having a better architecture as such. PPC was heading off, or rather not turning to, the market Intel was in that Apple wanted to take the Macs to.

The PPC is used in telecoms and other related high-reliability applications. Applications that don't need multiple cores, etc.

Who says they don’t? Telecoms are just as interested in lowering their space and power/carbon footprint as any other large server operator. There is no "reliability" difference per se.

So for Freescale, Apple are a small, if significant, customer but not one worth developing new multi-core parts for.

The real reason is that the PPC consumed more power than the x86 architecture for equivalent work. A big deal in laptops which is what pushed the change.

Apple really don't care how many cores there are as long as processing capability grows. On silicon this appears to be thermally limited to approx. 3 GHz (4 GHz or so if aggressive (liquid) cooling is used). So it is much cheaper to lay down more cores than to push clock ticks.
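[The cores-vs-clock trade-off above can be sketched with the classic CMOS dynamic-power model (P ∝ C·V²·f, with supply voltage rising roughly in step with frequency). The numbers below are illustrative assumptions for a back-of-envelope comparison, not measured chip data:]

```python
# Back-of-envelope sketch of cores vs. clock using the standard CMOS
# dynamic-power model: P ~ C * V^2 * f.  Assuming the supply voltage V
# must scale roughly linearly with frequency f, per-core power grows
# as ~f^3, while adding cores only grows power linearly.

def relative_power(freq_ratio, cores=1):
    """Dynamic power relative to a one-core baseline at freq_ratio=1.0."""
    return cores * freq_ratio ** 3

# Two ways to get ~2x the baseline throughput (ignoring parallel overhead):
single_fast = relative_power(2.0, cores=1)  # one core at double the clock
dual_slow   = relative_power(1.0, cores=2)  # two cores at the original clock

print(single_fast, dual_slow)  # 8.0 2.0  -- more cores win on power
```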

The spinback is that Apple have taken advantage of multi-core/multi-CPU architectures and re-engineered the OS with GCD for Mac OS X 10.6 and beyond. This would not have evolved this way w/o the multi-core direction the industry has taken for small computers and servers.
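[GCD itself is an Apple C/Objective-C API (libdispatch); as a rough analogue, the model it introduced — submit work items to a queue drained by a runtime-managed pool of worker threads, rather than spawning a thread per task — can be sketched with Python's standard `concurrent.futures`:]

```python
# A sketch of the Grand Central Dispatch *model* (task queue + managed
# worker pool), using Python's concurrent.futures rather than Apple's
# actual libdispatch API.
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # Stand-in work item; GCD would call this a "block".
    return n * n

with ThreadPoolExecutor() as pool:  # pool size chosen by the runtime
    results = list(pool.map(work, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The point of the model is that the runtime, not the application, decides how many workers match the available cores — which is exactly what made the OS-level multi-core re-engineering worthwhile.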
N
nospam
Sep 5, 2009
In article , Alan Browne
wrote:

So for Freescale, Apple are a small, if significant, customer but not one worth developing new multi-core parts for.

The real reason is that the PPC consumed more power than the x86 architecture for equivalent work. A big deal in laptops which is what pushed the change.

powerpc consumed *less* power than x86. when apple released the first macbook pro, the power adapter went from 65w to 85w and the battery itself was also bigger, with roughly the same run time.

macbooks and macbook pros run much hotter than a powerbook g4, although part of that heat is from the gpu, not the intel or powerpc chip. as a result, the minimum fan speed is about 2000 rpm in order to dissipate the heat, whereas a powerbook g4 could stop the fan entirely and be perfectly silent.

the early g5 chips consumed a lot of power, however, freescale/ibm dropped the ball with reducing that. they started sampling a low power g5 chip, but like the dual core g4 (also low power, ~15w if i recall, quite low), it was too little too late.
AB
Alan Browne
Sep 6, 2009
nospam wrote:
In article , Alan Browne
wrote:

So for Freescale, Apple are a small, if significant, customer but not one worth developing new multi-core parts for.
The real reason is that the PPC consumed more power than the x86 architecture for equivalent work. A big deal in laptops which is what pushed the change.

powerpc consumed *less* power than x86. when apple released the first macbook pro, the power adapter went from 65w to 85w and the battery itself was also bigger, with roughly the same run time.
macbooks and macbook pros run much hotter than a powerbook g4, although part of that heat is from the gpu, not the intel or powerpc chip. as a result, the minimum fan speed is about 2000 rpm in order to dissipate the heat, whereas a powerbook g4 could stop the fan entirely and be perfectly silent.

the early g5 chips consumed a lot of power, however, freescale/ibm dropped the ball with reducing that. they started sampling a low power g5 chip, but like the dual core g4 (also low power, ~15w if i recall, quite low), it was too little too late.

Nice twist, but evades the facts.
AB
Alan Browne
Sep 6, 2009
nospam wrote:
In article , Alan Browne
wrote:

So for Freescale, Apple are a small, if significant, customer but not one worth developing new multi-core parts for.
The real reason is that the PPC consumed more power than the x86 architecture for equivalent work. A big deal in laptops which is what pushed the change.

powerpc consumed *less* power than x86. when apple released the first macbook pro, the power adapter went from 65w to 85w and the battery itself was also bigger, with roughly the same run time.
macbooks and macbook pros run much hotter than a powerbook g4, although part of that heat is from the gpu, not the intel or powerpc chip. as a result, the minimum fan speed is about 2000 rpm in order to dissipate the heat, whereas a powerbook g4 could stop the fan entirely and be perfectly silent.

the early g5 chips consumed a lot of power, however, freescale/ibm dropped the ball with reducing that. they started sampling a low power g5 chip, but like the dual core g4 (also low power, ~15w if i recall, quite low), it was too little too late.

"Reasons

Steve Jobs stated that Apple’s primary motivation for the transition was their disappointment with the progress of IBM’s development of PowerPC technology, and their greater faith in Intel to meet Apple’s needs. In particular, he cited the performance per watt (that is, the speed per unit of electrical power) projections in the roadmap provided by Intel. This is an especially important consideration in laptop design, which affects the hours of use per battery charge."

http://en.wikipedia.org/wiki/Apple%E2%80%93Intel_transition
N
nospam
Sep 6, 2009
In article , Alan Browne
wrote:

The real reason is that the PPC consumed more power than the x86 architecture for equivalent work. A big deal in laptops which is what pushed the change.

powerpc consumed *less* power than x86. when apple released the first macbook pro, the power adapter went from 65w to 85w and the battery itself was also bigger, with roughly the same run time.
macbooks and macbook pros run much hotter than a powerbook g4, although part of that heat is from the gpu, not the intel or powerpc chip. as a result, the minimum fan speed is about 2000 rpm in order to dissipate the heat, whereas a powerbook g4 could stop the fan entirely and be perfectly silent.

the early g5 chips consumed a lot of power, however, freescale/ibm dropped the ball with reducing that. they started sampling a low power g5 chip, but like the dual core g4 (also low power, ~15w if i recall, quite low), it was too little too late.

Nice twist, but evades the facts.

no evasion. those *are* the facts.
N
nospam
Sep 6, 2009
In article , Alan Browne
wrote:

Steve Jobs stated that Apple’s primary motivation for the transition was their disappointment with the progress of IBM’s development of PowerPC technology, and their greater faith in Intel to meet Apple’s needs. In particular, he cited the performance per watt (that is, the speed per unit of electrical power) projections in the roadmap provided by Intel. This is an especially important consideration in laptop design, which affects the hours of use per battery charge."

http://en.wikipedia.org/wiki/Apple%E2%80%93Intel_transition

well if it’s on wikipedia it must be true.

steve said that intel’s road map better aligned with what apple wanted to do. they also get to run windows at native speeds making the transition for switchers that much easier, something that worked out very well.

he did mention mips per watt but that's a silly metric. the g4 consumed *less* power than comparable intel chips at the time. it was the g5 that was a power hog, but ibm was sampling a low power g5 (around 20-25w as i recall, typical for a laptop part). ibm also dropped the ball on the g5 itself. steve said '3ghz in a year' and that did not happen in a year.

the move to intel was planned from the beginning.
AB
Alan Browne
Sep 6, 2009
nospam wrote:
In article , Alan Browne
wrote:

Steve Jobs stated that Apple’s primary motivation for the transition was their disappointment with the progress of IBM’s development of PowerPC technology, and their greater faith in Intel to meet Apple’s needs. In particular, he cited the performance per watt (that is, the speed per unit of electrical power) projections in the roadmap provided by Intel. This is an especially important consideration in laptop design, which affects the hours of use per battery charge."

http://en.wikipedia.org/wiki/Apple%E2%80%93Intel_transition

well if it’s on wikipedia it must be true.

steve said that intel’s road map better aligned with what apple wanted to do. they also get to run windows at native speeds making the transition for switchers that much easier, something that worked out very well.

he did mention mips per watt but that’s a silly metric.

He said "performance" per watt, not MIPS. I'd have thought you knew the difference. (Did you use lower case to camouflage that?)

That is _the_ metric for the laptop market and at the time an increasingly important metric for servers.

Keep spinning – the reason Apple switched was power consumption. I would add that seasoning in the sauce (intel’s broader selection of peripheral chips for a broad range of end-use products) also pulled Apple that way.
AB
Alan Browne
Sep 6, 2009
nospam wrote:
In article , Alan Browne
wrote:

The real reason is that the PPC consumed more power than the x86 architecture for equivalent work. A big deal in laptops which is what pushed the change.
powerpc consumed *less* power than x86. when apple released the first macbook pro, the power adapter went from 65w to 85w and the battery itself was also bigger, with roughly the same run time.
macbooks and macbook pros run much hotter than a powerbook g4, although part of that heat is from the gpu, not the intel or powerpc chip. as a result, the minimum fan speed is about 2000 rpm in order to dissipate the heat, whereas a powerbook g4 could stop the fan entirely and be perfectly silent.

the early g5 chips consumed a lot of power, however, freescale/ibm dropped the ball with reducing that. they started sampling a low power g5 chip, but like the dual core g4 (also low power, ~15w if i recall, quite low), it was too little too late.
Nice twist, but evades the facts.

no evasion. those *are* the facts.

You’re evading the business reason at the time: performance/watt.
JA
Josh Askew
Jul 20, 2011
In article <4aa076aa$0$1651$>
(Ray Fischer) wrote:
Doug McDonald wrote:
Ray Fischer wrote:
Obviously they should have planned for Snow Leopard several years ago when they were doing CS2 development.

Exactly. And that is the big point.

I was being sarcastic. Expecting the developers at Adobe to know what is going to happen in the computer business 8 years in advance is idiocy.

I don't really know about the Mac and its failure to pick … TWICE … the winning chip for its CPU, but I do know Windows and its progression through new generations of chips and various versions of Windows.

Intel didn't "win" because of the technical excellence of their chips. The instruction set is crap.

Not that you’re capable of using any of them anyway.

I have written lots and lots of programs for Windows. Each and every one, from 16 bit Windows to now, still runs perfectly,

BFD. That only means that you write trivial programs that don’t make use of any advanced features.

What an obnoxious asshole.


Ray Fischer
AB
Alan Browne
Jul 20, 2011
On 2011-07-20 09:59 , Josh Askew wrote:
In article<4aa076aa$0$1651$>
(Ray Fischer) wrote:
Doug McDonald wrote:
Ray Fischer wrote:
Obviously they should have planned for Snow Leopard several years ago when they were doing CS2 development.

Exactly. And that is the big point.

I was being sarcastic. Expecting the developers at Adobe to know what is going to happen in the computer business 8 years in advance is idiocy.

I don't really know about the Mac and its failure to pick … TWICE … the winning chip for its CPU, but I do know Windows and its progression through new generations of chips and various versions of Windows.

Intel didn't "win" because of the technical excellence of their chips. The instruction set is crap.

The instruction set is unusual but, esp. in later chips, extremely flexible. To the point where notions of RISC advantages have all but disappeared.

I wrote lots of assembler on the 8086/186/286/386 and had no issues writing compact, efficient code.

I’ve also written assembler on some processors that have extremely bizarre register structures forcing the programmer to write a lot of lines of code to save a given program state and set up to do a particular set of instructions. The COSMAC and the TI TMS320 signal processors come to mind. Or even older minis like the HP 2100 which had no stack – that challenged structured coding somewhat and made re-entrancy very inefficient.
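[The re-entrancy problem on stackless machines like the HP 2100 is easy to demonstrate: a routine that parks its state in fixed (static) storage — the only option without a hardware stack — is corrupted as soon as it is entered again before the first activation finishes. A small sketch of the difference, using factorial as a hypothetical example:]

```python
# Why re-entrancy wants a stack: on a stackless machine, a routine's
# saved state lives in a fixed storage slot shared by every activation,
# so a nested (re-entrant) call clobbers the caller's state.

_saved_n = 0  # fixed storage slot, shared by all activations

def fact_static(n):
    """Non-reentrant: models saving the argument in fixed storage."""
    global _saved_n
    _saved_n = n
    if _saved_n <= 1:
        return 1
    sub = fact_static(_saved_n - 1)  # the inner call overwrites _saved_n
    return _saved_n * sub            # ...so this reads a clobbered value

def fact_stack(n):
    """Reentrant: n lives in a per-activation frame (the call stack)."""
    if n <= 1:
        return 1
    return n * fact_stack(n - 1)

print(fact_static(4), fact_stack(4))  # 1 24 -- the static version is wrong
```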

As most s/w, including OS’, is now written in HOL, esp. C and variants, the instruction set is of little (and usually no) importance to the programmer.

Various programs on the Mac have been shown, for a given machine, to run more quickly on intel machines than on the PowerPC. (Not all s/w as there are areas of strength for each processor type).


gmail originated posts filtered due to spam.
S
Savageduck
Jul 20, 2011
On 2011-07-20 07:25:43 -0700, Alan Browne
said:

On 2011-07-20 09:59 , Josh Askew wrote:
In article<4aa076aa$0$1651$>
(Ray Fischer) wrote:
Doug McDonald wrote:
Ray Fischer wrote:
Obviously they should have planned for Snow Leopard several years ago when they were doing CS2 development.

Exactly. And that is the big point.

I was being sarcastic. Expecting the developers at Adobe to know what is going to happen in the computer business 8 years in advance is idiocy.

I don't really know about the Mac and its failure to pick … TWICE … the winning chip for its CPU, but I do know Windows and its progression through new generations of chips and various versions of Windows.

Intel didn't "win" because of the technical excellence of their chips. The instruction set is crap.

The instruction set is unusual but, esp. in later chips, extremely flexible. To the point where notions of RISC advantages have all but disappeared.

I wrote lots of assembler on the 8086/186/286/386 and had no issues writing compact, efficient code.

I’ve also written assembler on some processors that have extremely bizarre register structures forcing the programmer to write a lot of lines of code to save a given program state and set up to do a particular set of instructions. The COSMAC and the TI TMS320 signal processors come to mind. Or even older minis like the HP 2100 which had no stack – that challenged structured coding somewhat and made re-entrancy very inefficient.

As most s/w, including OS’, is now written in HOL, esp. C and variants, the instruction set is of little (and usually no) importance to the programmer.

Various programs on the Mac have been shown, for a given machine, to run more quickly on intel machines than on the PowerPC. (Not all s/w as there are areas of strength for each processor type).

Damn! You guys resurrected a two year old thread which nobody has touched since 9/6/2009.

Couldn’t you have waited until the "Lion"-CS5 & plug-in conflicts start showing up later today?


Regards,

Savageduck
R
rfischer
Jul 21, 2011
Alan Browne wrote:
(Ray Fischer) wrote:

Intel didn't "win" because of the technical excellence of their chips. The instruction set is crap.

The instruction set is unusual but, esp. in later chips, extremely flexible. To the point where notions of RISC advantages have all but disappeared.

According to Intel, at any rate. They waste a lot of silicon trying to make that instruction set perform.

I wrote lots of assembler on the 8086/186/286/386 and had no issues writing compact, efficient code.

The issue isn’t writing the code. The issue is executing it quickly. It is not easy to decode the instructions and process them quickly.


Ray Fischer | Mendocracy (n.) government by lying | The new GOP ideal
