RayLucchesi
192 comments posted · 0 followers · following 0
8 years ago @ RayOnStorage Blog - VMware VVOLs potential... · 0 replies · +1 points
Thanks for the comment. Yes, the queue management issues for VVOLs are sort of an internal storage subsystem problem, but they could present some interesting impacts to the unwary. Rgds, Ray
8 years ago @ RayOnStorage Blog - Existential threats · 0 replies · +1 points
Ray
8 years ago @ RayOnStorage Blog - MCS, UltraDIMMs and me... · 0 replies · +1 points
Ray
8 years ago @ RayOnStorage Blog - Two dimensional magnet... · 0 replies · +1 points
Ray
8 years ago @ RayOnStorage Blog - Veeam's upcoming V8 vi... · 0 replies · +1 points
Ray
9 years ago @ RayOnStorage Blog - MCS, UltraDIMMs and me... · 0 replies · +1 points
Ray
10 years ago @ RayOnStorage Blog - EMC ViPR virtues & vex... · 0 replies · +1 points
Thanks for the comment. I would have to say that ViPR has a ways to go to prove its overall value, which just isn't there today. From my perspective, how well they play in the data plane will determine how useful it is in the long run. But they have gone out of their way to leave the data plane alone as much as they can.
Ray
10 years ago @ RayOnStorage Blog - EMC ViPR virtues & vex... · 0 replies · +1 points
Ray
10 years ago @ RayOnStorage Blog - The antifragility of d... · 0 replies · +1 points
Ray
10 years ago @ RayOnStorage Blog - The antifragility of d... · 0 replies · +1 points
1) Wear leveling with defect skipping is not the same as defect skipping alone. The latter should exhibit more randomized wear failures than the former, which is what I am getting at. Yes, there are many different types of drive and SSD failures out there, but given a "mature" development process (which may never happen in my lifetime), failure rates should be governed more by attributes of the componentry and technology than by code. But my main concern is with the variance of the failure rate. I have yet to see any statistics that show the variance of disk or SSD failure rates, which has a distinct bearing on the discussion.
2) Yes, SSDs read much faster than disks and as such may have a faster rebuild time. But not all SSDs write (sequentially) faster than disks, so this may slow down rebuild time. I have no stats on SSD RAID group rebuild times to know if they are significantly faster than disk, but I am guessing that as MLC SSD capacities go up and MLC write times slow down, someday MLC SSD RAID group rebuilds won't be that much faster than those of 15Krpm disks of comparable capacity.
3) I am not as worried about all-flash arrays or even hybrid disk-flash arrays; their vendors should understand these issues much better than I do. However, as you say, a DIYer doesn't necessarily share this knowledge and needs to be aware of these concerns.
4) As I say in the post, a more randomized view of predictive maintenance is one of the things that can help. A more conservative approach to when to swap a drive out doesn't necessarily give drives a wider distribution of failure rates.
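The rebuild-time argument in point 2 comes down to capacity divided by sustained write bandwidth. A minimal back-of-envelope sketch, assuming rebuild is bounded by the replacement device's sequential write rate (the capacity and bandwidth figures below are illustrative assumptions, not measurements from the post):

```python
# Back-of-envelope rebuild-time estimate for a single RAID group member.
# Assumption: rebuild speed is limited by the sustained sequential write
# bandwidth of the replacement device, ignoring parity-read overhead.

def rebuild_hours(capacity_gb: float, seq_write_mb_s: float) -> float:
    """Hours to fill a replacement device at a sustained write rate."""
    return (capacity_gb * 1000) / seq_write_mb_s / 3600

# Hypothetical devices: a 600 GB 15Krpm disk vs. a larger MLC SSD whose
# sustained sequential write rate has slowed as density went up.
disk_hours = rebuild_hours(600, 180)
ssd_hours = rebuild_hours(1000, 250)
```

With these illustrative numbers the SSD rebuild actually takes longer than the disk's, which is the point above: higher MLC capacity and slower MLC writes can erase the SSD rebuild advantage.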
My original goal in writing the post was to show how antifragility can be applied to RAID groups, disk and SSD alike, and what we can do to make SSDs better RAID group participants. I had no intention of belittling the performance and other advantages that come with SSDs, which we all know so well.