CS4 large file very slow load

RK
Posted By
Raymond_Keller
Mar 12, 2009
Views
4948
Replies
60
Status
Closed
Hi, I’m working with a friend who’s got perhaps unusual Photoshop needs. He’s working on a 10000×20000 pixel 16-bit image with maybe 6 image layers and 12 total layers. Loading the file takes about 20 minutes.

The system is a 2.6 GHz quad-core i7 with 12 GB RAM, a 32 GB SSD as primary scratch, 200 GB of secondary scratch, and a 1 GB Radeon HD 4870, running CS4 64-bit on Vista 64.

What seems odd to me is that memory usage climbs steeply for the first perhaps 4 GB, looking like the load might complete in a minute or two, then the CPU kicks in for the long, long remainder of the load. The file on disk is roughly 5 GB.

We’ve got issues aside from this, but I’ll detail them in another thread.

Any ideas on how to improve load performance here? Thanks.


RK
Rob_Keijzer
Mar 12, 2009
Raymond,

Did you configure a separate physical HDD as a scratch disc for Photoshop?

You are not creating a large poster at 600 ppi, are you? You wouldn’t be the first to go down that road, only to return a few days later having found out it’s a dead end. :)

Rob
RK
Raymond_Keller
Mar 12, 2009
The first scratch disk is a separate physical 32 GB solid state drive, dedicated to being Photoshop scratch (all space on it is available just for that).

The second scratch disk is a separate physical 500 GB regular spinning disc drive with 200 GB free, and it sees no other activity aside from Photoshop use.

When the file loads, even before the primary scratch fills up, the CPU kicks in and the process lags hard. Does anyone know why?

You could say we’re working on something like a high-resolution poster. It’s more like we’re working on a museum gallery print. I’m not the artist, but if I understand correctly, 200 ppi might be enough.

He’s already done 3.5′ x 6′ prints. What’s "dead end" about high-resolution large-scale printing?
CF
chris_farrell
Mar 12, 2009
Have you tried the scratch disks the other way round – your SSD might be bottlenecking the data. Which one do you have? Some cheaper SSDs use the JMicron controller, which is slooooooooow.

Does it have to be 16-bit? I produce very large fine art prints (the current one is 10 m at 8-bit, and load times are around 1:30 with 3 layers, on 16 GB DDR2-800) and they’ve always been 8-bit.

How long does it take to load?

More RAM may be needed if you work with multi-layered 16-bit images.
RK
Raymond_Keller
Mar 12, 2009
I’m not sure how any SSD is going to slow down a file load relative to a spinning disk. I guess the VelociRaptors are pretty darn fast, but the secondary scratch is not a VelociRaptor. The SSD is the Intel X25-M, which, if I understand correctly, has tested well in third-party reviews.

What’s this weird stuff going on during load? Is Photoshop somehow deciding to do an unreasonable amount of tiny writes to scratch?

About 16 bits/channel: the artist does not seem to want to budge here. I explained human eye sensitivity versus the ~16.7 million colors available from 8 bits per channel, but he has a few reasons he’s resistant. The first is he wants to maximize fidelity (color range, resolution) through as much of the process as possible before collapsing to the range available at the print shop. The Hasselblad provides data at 50 megapixels and 16 bits, and he wants to carry those as far as he can.

If he can solve the problem with throwing more hardware at it, he’d rather do that than chance losing fidelity earlier in the process.

Rather than try to persuade the artist to collapse fidelity earlier, I would prefer to focus on what is going on technically. It’s not clear to me that throwing hardware at the problem will expand how far we can carry the fidelity. And I don’t know how to fit more than 12 GB of RAM into a system.

My primary concern at this point is figuring out just how the system is constraining the work. What’s generating the lag?

Relatedly, how is one supposed to do a large resolution image in Photoshop?
RK
Raymond_Keller
Mar 12, 2009
Wait a second… your images are how large? 32 feet long at 300 ppi? Is that 118000 pixels long?

So, except for our being in 16-bit, your images are far larger? Our objective is to-scale prints of our subject matter. Right now we’ve captured images that represent 2 × 3 meters printed at good resolution. We’re aiming ultimately to capture images at the size you appear to be working at.

What kind of hardware are you using?

Could the problem be that Photoshop does a poor job of handling 16 bit images?
CC
Chris_Cox
Mar 12, 2009
16 bit has nothing to do with it. The image is much larger than RAM, it’s going to use up RAM then hit the scratch disk really, really hard for a while (which is what you describe).

A 64 bit OS and more RAM might help you.
RK
Raymond_Keller
Mar 12, 2009
The software is CS4 64 bit. The operating system is Vista 64 bit. The system RAM is 12 GB.

With 12 GB of RAM and a 32 GB fast dedicated scratch disk I’d expect to be able to handle large images pretty well. Is this setup really not adequate for a 10000×20000 pixel image? If not, what specs would you say are adequate for work in this realm?

Is a 20 minute load time to be expected even when one is hitting scratch during load?
CC
Chris_Cox
Mar 12, 2009
That depends on the layers and contents in those layers. Yes, if you’re waiting on the scratch disk – then that load time sounds about right.

Also remember that the size on disk is compressed – in memory it will be larger. With the right layer contents, it could easily exceed the 32 Gig primary scratch size. When you have the document open, look at the document size that Photoshop reports and see how large it is in memory. (I’m betting it’s over 30 Gig)
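For a rough sense of scale, a back-of-envelope estimate (a sketch only: it assumes uncompressed 16-bit RGB and counts just the full image layers, ignoring the composite, masks, adjustment layers, and history states, which can multiply the scratch footprint several times over):

```python
# Minimal sizing sketch for the file described in this thread.
# Real Photoshop memory use adds composites, masks, history/undo
# buffers, and per-tile overhead on top of this.
width, height = 10_000, 20_000      # pixels
channels = 3                        # RGB
bytes_per_channel = 2               # 16 bits per channel
image_layers = 6                    # full-content layers

per_layer = width * height * channels * bytes_per_channel
total = per_layer * image_layers

print(f"per layer: {per_layer / 2**30:.2f} GiB")  # ~1.12 GiB
print(f"6 layers:  {total / 2**30:.2f} GiB")      # ~6.7 GiB before overhead
```

Even this floor is more than half the machine’s 12 GB of RAM, so once the composite and history states are added it’s easy to see the document spilling onto scratch.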

How to improve load performance: work on a smaller file, allocate a larger percentage of RAM to Photoshop, get more RAM, get a faster scratch disk (including a RAID array).
CF
chris_farrell
Mar 12, 2009
My current image size is 117420×31890 pixels, and with 3 layers it’s 17 GB (working file size). Currently the image is not too complex, as it’s drawn within Photoshop and I haven’t introduced any scanned elements yet – those will increase the file size dramatically. If I use more than 3 layers I notice a performance hit.

I’m using an Asus P5Q Deluxe motherboard, 16 GB RAM, 1 system drive (74 GB Raptor), 1 file drive (7200 rpm), 1 dedicated scratch disk (150 GB Raptor), and 1 dedicated large-format image drive for recent projects (750 GB, 7200 rpm).

I try to keep the main file reading/writing on separate drives so that the disk heads are not fighting each other for access.

I hope this helps

chris
RK
Rob_Keijzer
Mar 13, 2009
This is all Stephen Johnson’s fault! :)

Rob
RK
Raymond_Keller
Mar 13, 2009
Yes, if you’re waiting on the scratch disk – then that load time sounds about right.

The scratch disk is rated at 227 MB/s for write speed. Does Photoshop have a problem with SSD scratch disks?

I mean, in 20 minutes’ time I should be able to write 266 GB. And by the time the file finishes loading, the full 32 GB aren’t used up, so nowhere near the drive’s max speed is being achieved. This is the Intel X25-E — I think it’s the gold standard right now as far as SSD performance. Maybe Photoshop is writing scratch using really small chunk sizes?
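The sequential arithmetic checks out, which is why I suspect small transfers (a sketch; the 4 KiB chunk size and the IOPS figure are assumptions for illustration, not measurements of the drive):

```python
# Sequential ceiling vs. a hypothetical small-write pattern.
seq_rate_mb_s = 227                 # vendor sequential write rating
seconds = 20 * 60                   # the 20-minute load
print(seq_rate_mb_s * seconds / 1024, "GB writable in 20 min")  # ~266 GB

# If scratch traffic were scattered small writes instead, throughput is
# governed by operations per second, not the sequential rating:
chunk_kib = 4                       # assumed chunk size
iops = 5_000                        # assumed ops/sec, illustration only
print(chunk_kib * iops / 1024, "MB/s effective")                # ~19.5 MB/s
```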

Has anyone used Photoshop with the X25-E?
DE
David_E_Crawford
Mar 13, 2009
Maybe Vista is indexing, running System Restore, or defragmenting the hard drive, or Windows Defender or your antivirus is scanning the file while you’re trying to open it. I’m not saying this is all happening at the same time, but even one of these will slow things down. Something to look into, anyway.
CC
Chris_Cox
Mar 13, 2009
Raymond – no problems with any of the SSDs we tested. But we can’t guarantee everybody’s driver. And Windows drivers will get less performance than the drive is capable of.

That’s 227 if you had a simple, single stream going — and scratch disk access isn’t that simple.

Using the Bigger Tiles plugin might help your performance – give it a try.
F
Freeagent
Mar 13, 2009
Vista indexing is my bet, now that David pointed it out. It’s on by default.
DE
David_E_Crawford
Mar 13, 2009
"When the index is running, it generally won’t affect your computer’s performance. When you make changes to files, however, the index quickly updates those changes, momentarily putting a small additional load on your computer’s resources"

I copied this from the Windows help site. According to the list of what gets indexed, pictures are included.
DE
David_E_Crawford
Mar 13, 2009
I have limited indexing to just the Start menu.
RP
Russell_Proulx
Mar 14, 2009
Sorry if this is a dumb question… but you are loading the image from the hard drive and not optical media (DL-DVD or BluRay)?

I’ll do a test when I get to my office and see how long a similar file takes to load on my Vista64 system.
RP
Russell_Proulx
Mar 14, 2009
I just fabricated a 6-layer 10000x20000px 16-bit image similar to your specs (without the adjustment layers), and when I went to save it to disk (300 GB VelociRaptors) in PSB format it was not fast. I gave up waiting after a few minutes, figuring that your 20-minute experience might well be accurate. We’re talking a pretty HUGE image, and I assume it requires a lot of number crunching as well as simply reading from the HD to get it in and out of Photoshop.

I once saw a technique demonstrated where an image was converted to a Smart Object and its size was reduced (i.e., from 10000x20000px to 2000x4000px). Retouching was done on the smaller version. Then, as a final step, the size was reset to its original pixel dimensions and magically the image was rendered back to the larger size without any quality loss. That seemed a bit far-fetched, though the idea of working with smaller ‘proxy’ images has been around for a while (Macromedia xRes) and I’ve been meaning to try it on huge images like hi-res collages. Anyone know what’s up with this?
F
Freeagent
Mar 14, 2009
Saved here (as .psb) in about 4 minutes, opens in less than one.

12000 x 18000 px, 16 bit, 5 adjustment layers plus one normal layer. This seems awfully fast compared to your numbers – did I miss something?

This is a fairly modest system by current standards: Vista64, 8 GB, C2D E6750, separate scratch disk (7200 rpm) but no RAID. CPU usage was around 35–45% per core throughout. Total RAM usage reported by Task Manager: 6.53 GB.

Maximize compatibility off, Vista indexing off.
RP
Russell_Proulx
Mar 14, 2009
12000 x 18000 px, 16 bit, 5 adjustment layers plus one normal layer.

No, it’s 6 different images (all 16bit) on 6 layers PLUS the adjustment layers totalling 12.

Specs are: "10000×20000 pixel 16 bit image with maybe 6 image layers and 12 total layers."

Opening a ‘one layer image’ with a few adjustment layers takes me only 15 seconds to load after a cold boot. Opening the image right after saving it is a bit of a cheat as much of it will load from cache.
F
Freeagent
Mar 14, 2009
OK, that makes sense. 5 additional full layers is probably another story.

It was just an image that I happened to be working on, upsampled.
RP
Russell_Proulx
Mar 14, 2009
I just added 6 adjustment layers to a single image file and it took 3 minutes to save. The image is 1.94 GB on disk. It’s the 6 different images on 6 layers that really makes this thing huge and 20 minutes sounds believable.
CC
Chris_Cox
Mar 14, 2009
Don’t forget that layers with real contents in them take more memory, time, and disk space than empty (or solid color) layers.
CF
chris_farrell
Mar 14, 2009
Do you have to have all the images in one file? Why not separate each image layer + adjustments into individual files?
LH
Lawrence_Hudetz
Mar 15, 2009
Don’t bank on SSDs as they exist on the market today really being faster than even a 7200 rpm HD. When I evaluated CS4 on the Intel Smackover board (i7 920, overclocked a bit), I used an SSD for scratch, and to compare I also used the C drive as scratch. The SSD was very close in speed to the C-drive scratch, off by about 2 or 3 milliseconds in an overall timing of 14 seconds for a Smart Sharpen on a 250 MB file, 16-bit, no layers.
RK
Raymond_Keller
Mar 15, 2009
The artist decided to RMA the X25-E and build a RAID 0 of 4 Patriot SSDs. This’ll run a couple hundred dollars more, but by my calculations it should actually deliver better performance per dollar (twice the read performance, in fact). I just hope the problem really is scratch-device speed. That’s what I’ve been trying to clarify.

I couldn’t get him to test the SSD’s performance, or to tell me whether the bog-down corresponded with the moment the system overran RAM and went to scratch. Anyway, if he has the money, the RAID of SSDs is not a bad thing to have. It’s 4x the space, too, and it’s expandable (both performance- and space-wise), which is nice.

Lawrence: It seems everyone recommends taking manufacturer speed ratings with a grain of salt. Thankfully there are a number of third-party benchmarking reviews. Did your filter operation even touch scratch?

Chris Farrell: The idea is to create a composite, so having all the images in a single file seems critical. The later we can put off merging layers, the better off the artist is in being able to do his work. I’m hoping that ultimately we can avoid doing any merging that’s inconvenient for him, but we’ll see what necessity dictates. Thanks very much for sharing your setup info. It gives me hope.

Our current image, relative to yours, has about 1/10 the pixels, 2x the color depth, and 3x the image layers. A simple (simplistic, probably) calculation says that your working file size (in-memory size) might be 1.6x what we should be getting. I haven’t been able to get the artist to report the working size. Is this the value reported in the CS4 status bar, or is there somewhere else I should ask him to look? (I think the 20-minute load time depresses him, so if I can ask him very succinctly, the request stands a better chance of happening.)

Chris Cox: I expected that Photoshop doesn’t do simple streaming writes to scratch, but knowing the exact nature of its scratch interaction would really help me provision a device to handle it. If you have any idea what kind of operations Photoshop performs on scratch, that would be really helpful: specifically, the size and frequency of reads and writes. I imagine comparing that information to a graph like this:
<http://metku.net/reviews/patriot-warp-v2/atto.png>

Freeagent, Russell: Thanks very much for your testing. The information makes me hopeful that the problem really is about scratch space usage and not something weird like Vista swapping Photoshop without Photoshop’s knowledge.

If it is scratch performance, there’s the possibility that Vista has difficulty with SSDs. There seems to be a lot of discussion out there about tuning Vista to work with SSDs. I’m hoping that the RAID might insulate Vista from the SSDs enough that no tuning will be needed, but I’m reading up on tuning anyway (and trying to disentangle what’s relevant for a dedicated scratch drive as opposed to an OS-containing SSD).
RP
Russell_Proulx
Mar 15, 2009
IMO, Photoshop performance depends on all the links in the chain: the processor speed, the video card, the motherboard and its various bus speeds, etc.

There’s a point where optimizing one component further won’t increase overall efficiency much, because the cause of any slowdown probably lies elsewhere.
DE
David_E_Crawford
Mar 15, 2009
RAID 0 is faster. However, since your data will be spread across 4 drives, the failure of any one drive will cost you everything.
BL
Bob Levine
Mar 15, 2009
I’ve always found that to be a totally lame argument. It only takes one hard drive failing to lose everything anyway.

Always have a backup!

Bob
P
pfigen
Mar 15, 2009
What type of layer and background compression are you using? If it’s a TIFF file with ZIP compression on everything, that will drastically increase save and open times.
RK
Raymond_Keller
Mar 15, 2009
Bob: David’s point is actually a real phenomenon. A striped (RAID 0) array is as many times more vulnerable to failure as its number of drives. For example, if you have a drive with a 1 in 10 chance of failing in 5 years, an array striped across half a dozen such drives has nearly a 50% chance of failing in the same period. But, yeah, as you say, backups are good. Especially if you’re shortening the lifespan of your volume. (For dedicated scratch disks, backups are unnecessary.)
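The arithmetic, for the curious (a sketch that assumes independent drive failures, which real-world studies suggest is optimistic):

```python
# RAID 0 fails if at least one member drive fails.
# Assumes independent failures, which is the textbook simplification.
p_drive = 0.10                        # chance one drive fails within 5 years
for n in (1, 2, 4, 6):
    p_array = 1 - (1 - p_drive) ** n  # chance at least one of n drives fails
    print(f"{n} drive(s): {p_array:.1%}")
# 1 drive(s): 10.0%  ...  6 drive(s): 46.9%
```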

David: RAID 0 is faster than what?

Russell: It’s hard to tell where the system is currently being constrained. I’m hoping that if the RAID of SSDs doesn’t solve the speed issue that the artist will give me some time to diagnose.
LH
Lawrence_Hudetz
Mar 15, 2009
Raymond, the SSD certainly did. That’s why I used that particular image. It was created as a stitched, tiled image consisting of about ten separate images, at 16-bit. The Smart Sharpen was set to minimize sharpening in the highlights, which slows it way down.

SSDs are still at a rather primitive stage. Potentially there is great promise for speed increases, but SSDs at best will not match the number of read/write cycles that magnetic materials can handle. It also depends on whether the unit is single-level cell (SLC) or multi-level cell (MLC). MLC is slower and less reliable but cheaper, as it stores more data per cell.

Google SSD and you will find a plethora of information.

The unit I checked costs about $800.
BL
Bob Levine
Mar 15, 2009
Bob: David’s point is actually a real phenomenon

No, it’s not… it’s a theory. I have two machines here with RAID(0). One is a bit over a year old, the other about three and a half years old. Neither has been a problem.

Over that time, I’ve read countless stories of crashed hard drives in non-RAID setups. I stand by my statement. Go for the increase in performance and have a good backup, because it doesn’t really matter what happens when one drive containing valuable data crashes if you don’t have it backed up.

Bob
RP
Russell_Proulx
Mar 15, 2009
It’s hard to tell where the system is currently being constrained. I’m hoping that if the RAID of SSDs doesn’t solve the speed issue that the artist will give me some time to diagnose.

My point was that you will at some point hit the maximum performance that current desktop computers offer, and until there are improvements in all parts of the chain, there’s nothing much you can do to make it go faster. I suspect you’re spending a lot of effort to get a very small % change over what you’ve experienced to date. I’ll be watching the thread to see what you come up with. I hope your friend will be able to accept that at some point it’s as good as it’s going to get.

As Scotty was fond of saying "Ya can’t change the laws of physics!"

:)
RK
Raymond_Keller
Mar 15, 2009
Bob: Please pardon me for appearing contrary. As I understand it, the relative vulnerability of a RAID 0 array v. individual drives is a well-established fact.

<http://en.wikipedia.org/wiki/Raid_0#RAID_0_failure_rate>

I appreciate that your personal experience may compellingly conflict with this idea.

(An important footnote: a RAID 0 of SSDs has a different mean-time-between-failures calculation than a RAID 0 of spinning disks. Spinning disks degrade continuously during power-on time, while the platters spin, but SSDs degrade with use. So, since a RAID 0 array divides writes across its drives, it actually _improves_ the longevity of a group of SSDs. Anyway, _that_ is a theory.)

Lawrence: The SSD certainly did what? I think maybe you’re saying that the Photoshop operation you performed certainly made use of scratch. Did you verify this by some means, such as watching the scratch file grow?

For general performance, some SSDs have been measured to do very well. See the results in this article:

<http://techreport.com/articles.x/15931/2>

The Intel X25-E SSD leads a pack of 7200 and 10K RPM drives.

The question remains of whether Photoshop’s scratch access profile aggravates specific shortcomings of SSDs (small chunk problems). And whether there’s anything that can be done about that.

As for endurance, “million cycle flash” has been around for almost a decade and is now commonplace. In terms of rewrite cycles, that gives modern SSDs a continuous-use lifetime of half a century.

<http://www.storagesearch.com/ssdmyths-endurance.html>
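The endurance arithmetic behind that claim goes roughly like this (illustrative figures in the style of the linked article; real write amplification and wear-leveling efficiency will move the result):

```python
# Rough flash-endurance lifetime under continuous writing.
# Assumes perfect wear leveling; numbers are illustrative.
capacity_bytes = 64 * 10**9     # 64 GB drive
erase_cycles = 2 * 10**6        # "2 million cycle" flash
write_rate = 80 * 10**6         # bytes/sec of nonstop writing

total_writable = capacity_bytes * erase_cycles
seconds = total_writable / write_rate
print(seconds / (365 * 24 * 3600), "years")   # ~50 years
```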

(Though there is some question about performance degrading over time, caused by shuffling blocks around, and there’s some talk of “refreshing” drives using a kind of low-level erase: <http://www.pcper.com/article.php?aid=669>)
BL
Bob Levine
Mar 15, 2009
You’re discussing statistics and I’m discussing real life. Statistically, you have five times the chance of winning the lottery by buying 5 tickets instead of 1.

In reality, the chances are so slim that 1 ticket has just as much chance as 5.

Same for RAID(0). Again… what’s the difference between having one drive with data on it crash on you and having one drive in a RAID crash?

The answer is zero, and the benefits of RAID(0) with large files are well worth it.

Bob
LH
Lawrence_Hudetz
Mar 15, 2009
The scratch disk did grow.

As I said, the expectation of better performance is there, and it’s valid. I ran my tests late last year, so they’re probably outdated. But still, even $400 is too much for a scratch disk, imo.

The low-level erase uses the same technology as rewritable CDs. That holds the greatest promise, because (MLC) flash has to erase to 0s before rewriting, which means the data has to be copied and then written back.

So far as statistics, well, one event doesn’t make or break the numbers.

You mean your real life, right Bob?
DE
David_E_Crawford
Mar 16, 2009
RAID 1 will help protect against data loss, as it mirrors the same image on all drives. If one fails, you can still run on the others. The biggest drawback is that you lose about 50 percent of total storage space. Too much waste for me. I never went RAID.

Raymond: Last year I was reading up on RAID for making Photoshop and 3ds Max run faster. But as Bob pointed out, and I agree with him, a backup hard drive is much better. Safer, anyway. I just use two 300 GB VelociRaptors as separate drives.

As mentioned above, there is a lot more involved, like the rest of the hardware, the motherboard, and such.
DM
dave_milbut
Mar 16, 2009
because it doesn’t really matter what happens when one drive containing valuable data crashes if you don’t have it backed up.

not really. if you lose a drive out of a raid array you have slim to no chance of recovering it. if a single drive crashes, chances are fair to very good you can get back most or all of what was on the drive at some point, depending on your determination (or what’s in your wallet).
RK
Raymond_Keller
Mar 16, 2009
Chris Cox: I’ve added to my laundry list of questions. I hope this isn’t too much of a bother.

1. From earlier: The nature of Photoshop’s scratch activity would really help me to provision a device to handle it. If you have any idea of what kind of operations Photoshop does with scratch, that would be really helpful. Specifically, the size and frequency of reads and writes. I imagine comparing the information to a graph like this:
<http://metku.net/reviews/patriot-warp-v2/atto.png>

2. Is there an optimal NTFS cluster size for scratch? Is there an optimal RAID 0 stripe size for scratch?

3. Could Windows be swapping out Photoshop unnecessarily (i.e., causing slowdown)? Might turning off the pagefile help performance?

4. Are you familiar with SteadyState? Are you aware of that helping Photoshop performance with spinning disks or SSDs?

5. Is there any way to disable PSD/PSB compression?

6. What is "DisableScratchCompress", and do you think it might help? And what about "ForceVMBuffering"?

Meanwhile, I’ll give the “Bigger Tiles” plugin you suggested a try. Thanks very much for that.
BL
Bob Levine
Mar 16, 2009
Again, only in theory, and for a much greater cost than a backup plan. I’ve heard it all before and I’m not buying any of it. Anyone without a backup is playing Russian roulette with their data.

Bob
DM
dave_milbut
Mar 16, 2009
still there are some really good questions there. esp (imo) number 2…
LH
Lawrence_Hudetz
Mar 16, 2009
There is a difference between what one buys for themselves and shooting down the entire concept for others. Options need to be left open as there is always more than one way to do a job. Separating the ego from the technology is important, imo.
CC
Chris_Cox
Mar 16, 2009

1) The exact size of the transfers depends on a number of system characteristics and which optional plugins you have installed. In general, it should be equal to or less than the tile size (which varies due to the above factors).
The pattern of reads and writes depends a lot on what you’re doing, the size of the document in RAM, the available RAM, etc.

2) Not that we’ve been able to determine. The performance has more to do with the RAID controller once you have 3 or 4 fast disks. (2 disks help, but they’re still the bottleneck.)

3) No, turning off the pagefile is always a mistake.

4) Not really, but it doesn’t sound like it would help anything…

5) No.

6) Try them. They may or may not help depending on the characteristics of your system. (That’s why they’re optional)
RK
Raymond_Keller
Mar 16, 2009

1. I guess you’re saying that scratch performance varies so widely there’s no good description of how it generally behaves. Nothing like "lots of small writes (around 4K) while loading, complete mixed bag while editing"? Do you (or anyone) know of any utilities that would help me to see r/w sizes and frequencies?

4. Here’s where the idea came from: <http://www.pcper.com/article.php?aid=669&type=expert&pid=7>:

SteadyState reroutes all disk writes, regardless of their randomness, to a contiguous ‘change’ file. This brings small write performance much closer to the ‘sequential write’ speed of a given drive. In the case of the X25-M, it will significantly reduce the internal fragmentation that occurs as a result of random writes.

(A toy sketch of this redirect-on-write idea follows at the end of this post.)

5. :( Any plans for uncompressed PSDs? Or maybe multi-threaded file loading?

6. I’ll give them a try.

Thanks very much for your answers, Chris.
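As promised, here’s a toy sketch of the redirect-on-write idea from the quote above (hypothetical code, not how SteadyState is actually implemented): scattered logical writes become sequential appends to a single change file, with an index kept for reads.

```python
# Toy redirect-on-write layer: random block writes all land as
# sequential appends in one contiguous "change" log, which is the
# trick the quoted article credits for better small-write speed.
import io

BLOCK = 4096

class RedirectOnWrite:
    def __init__(self, base: bytes):
        self.base = base              # original volume contents
        self.log = io.BytesIO()       # the contiguous change file
        self.index = {}               # block number -> offset in the log

    def write_block(self, block_no: int, data: bytes) -> None:
        assert len(data) == BLOCK
        self.index[block_no] = self.log.tell()
        self.log.write(data)          # always a sequential append

    def read_block(self, block_no: int) -> bytes:
        if block_no in self.index:    # redirected: serve from the log
            self.log.seek(self.index[block_no])
            return self.log.read(BLOCK)
        start = block_no * BLOCK      # untouched: serve from the base
        return self.base[start:start + BLOCK]

vol = RedirectOnWrite(bytes(BLOCK * 8))
vol.write_block(5, b"x" * BLOCK)      # random logical addresses...
vol.write_block(1, b"y" * BLOCK)      # ...written back-to-back in the log
print(vol.read_block(1)[:4], vol.read_block(5)[:4])
```

The obvious catch for a scratch volume is that the change file grows with every write, and reads of redirected blocks become scattered.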
DM
dave_milbut
Mar 17, 2009
Thanks very much for your answers, Chris.

as always. thanks chris!
LH
Lawrence_Hudetz
Mar 17, 2009
Happy to see your continued presence here, Chris.
CC
Chris_Cox
Mar 17, 2009

1) Because of compression, different tile sizes, etc., I can’t tell you anything more specific about it.

4) Hmm, interesting side effect – but likely to cause problems with a scratch file and heavy IO.

5) Nope. Threading won’t help a single file load – the best threading could do is load in the background while you work on another document (slowing both down quite a bit). And we have a lot of work to do before we could load or save in the background.
RK
Raymond_Keller
Mar 17, 2009
Ah, I was thinking that threading might help with spreading the decompress load across processors. Maybe I’ve misunderstood something here.
CC
Chris_Cox
Mar 17, 2009
Decompressing takes very little time (5%, maybe); most of the time is spent waiting on disk IO.
RK
Raymond_Keller
Mar 17, 2009
Hm. You must be referring specifically to scratch IO. We’re now looking at setting up a RAID 0 of six Barracudas for scratch. These have a max sustained transfer of 107.8 MB/s read and 109.9 MB/s write, and they’re inexpensive ($65 shipped). I’m cringing, though, anticipating continued slow load performance due to lots of small writes.
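On paper the stripe looks ample; a naive estimate (it ignores controller overhead and assumes purely sequential access, so treat it as an upper bound):

```python
# Naive aggregate throughput for a 6-drive RAID 0 of these Barracudas,
# and the time to stream a ~30 GB working set at that rate.
drives = 6
write_mb_s = 109.9                    # per-drive max sustained write

agg_write = drives * write_mb_s       # ~659 MB/s theoretical ceiling
working_set_gb = 30                   # roughly the in-memory document size
print(f"aggregate write: {agg_write:.0f} MB/s")
print(f"{working_set_gb} GB in ~{working_set_gb * 1024 / agg_write:.0f} s")
```

If small scattered writes dominate instead, per-drive seek time, not the stripe’s sequential ceiling, sets the pace.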

It’ll be a few days before we have the new RAID set up, but I’ll try to get some performance numbers (benchmarks of the array and of Photoshop’s use of it) to share with everyone here. I might also try SteadyState to see if it has any impact.

Cheers.
F
Freeagent
Mar 22, 2009
A striped (0) RAID is as many times more vulnerable to failure as its number of drives

I’m resurrecting this because it just occurred to me why that argument is meaningless. It’s Schroedinger’s cat.

You cannot use statistics to predict the behaviour of individual cases. It’s a different paradigm.

OK, next ;)
DM
dave_milbut
Mar 22, 2009
It’s Schroedinger’s cat.

I thought that one was dead. :)
BL
Bob Levine
Mar 22, 2009
I thought that one was dead. :)

It is, but the backup is running just fine. <g>

Bob
DM
dave_milbut
Mar 22, 2009
but it’s still dead… ;)
F
Freeagent
Mar 22, 2009
Not necessarily, Dave. It’s just a probability (but try to tell the cat that).

OK, OK. Different metaphor: how about the guy with one hand on the stove and the other in a bucket of ice. That’s how long before your drives will fail, RAID or no RAID. There’s just no telling either way.
BC
Bart_Cross
Mar 22, 2009
I haven’t used RAID on any of my systems. I use multiple drives as a JBOD and carefully configure the system so that drives that would be used simultaneously are on different controllers.
DE
David_E_Crawford
Mar 22, 2009
Raymond,
Did you find time to play with the SSD experiment? If so, did you notice any performance gain or loss?
RK
Raymond_Keller
Apr 2, 2009
I was unable to get SSD performance values before the artist RMA’d the X25-E.

Ultimately, five Seagate Barracuda 7200.12s were reportedly installed by the artist (as a RAID 0, presumably), but he continued to have problems. He seems uninterested in having me help diagnose where the bottleneck is, so I have no benchmarks to share for that setup.

My guess is that he lost confidence in my being able to help after the system failed to perform to his desired levels when it was first built. I hear he’s enlisting the help of someone with nine years of Photoshop experience. I hope he can do something. If I hear any information about their progress I will share it.
DE
David_E_Crawford
Apr 2, 2009
Thanks Raymond.

Sounds like he will find out the hard way that sometimes a system can’t just be tossed together and expected to go POOF and be perfect.

