Quote:I specifically said it improves transfer rates. Increasing transfer rates by 50% or even 100% is not a large real world performance benefit. A very large portion of average transactions is latency, which is the large advantage of SSDs. Real world benchmarks with RAID have proved this in every review I've seen that bothers to test beyond using a synthetic benchmark or doing tests which are essentially equivalent to a synthetic sequential read test. My own experience with RAID 5 and a hardware RAID controller also left me with no actual perception of speed improvement.
Here is a pretty in-depth article on reasonably real-world benchmarks. A little old, but nothing has changed that will change the conclusion: http://www.storagereview.com/articles/2004...html?page=0%2C5
Did you look at when this article was written? It's five years out of date. Since then, both RAID controller technology and the HDDs themselves have improved, using larger caches to fetch information more quickly and prepare for future requests. With caching, data can be shifted to system memory faster, letting the whole system respond more quickly. Even one of the initial graphs shows how much read time is affected by having multiple drives (read time on a single drive is twice that of two drives in RAID 0, about 1.8 ms vs. 0.9 ms). Likewise, in the five years since that article was written, seek times have dropped to half of what it lists (going from around 8-9 ms to 4-4.5 ms).
Quote:When considering the initial point I was making (that SAS is a non factor to home users because the solution I outlined outperforms SAS in every way and also costs less), moving to RAID also will increase latency. Latency is a large factor in the end-user perception of speed. Consider that drives have somewhere around 100 MB / sec internal transfer rates, some are faster, but this is a nice round number.
Cost depends on whether you have a controller or not. Newer motherboards are now coming with two to four SAS-capable ports on them, meaning you don't have to purchase a separate controller if you want SAS capability, only the SAS drives themselves. If you look at the prices of SSDs and SAS drives, you'll find you're typically paying twice as much for an SSD of approximately the same size as a SAS drive (from Newegg: a Hitachi 300 GB SAS drive vs. a Super Talent 256 GB SSD).
Quote:Consider transfer of a relatively large file for the average user... 1MB, this would take 10 ms ( 1 MB / 100MB / sec = 0.010 sec). But the drive must find the spot on the drive where the data is first. Average seek times are on the order of 5-10 ms for 15k RPM drives and 10-15 ms for 7200 RPM drives. In an ideal world, only one seek is necessary, but in real life, you have fragmented files that may require 2 or 3 seeks until it's done.
So in the best case, user-perceived transfer speed is about 50/50 on transfer and latency. In reality it's somewhere between that and 75/25 (with 75 being seek latency).
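The transfer-vs-seek split in the quote above can be checked numerically. This is a minimal sketch using the quote's own assumptions (100 MB/s transfer, a 1 MB file, roughly a 10 ms seek, and 1 vs. 3 seeks for the unfragmented and fragmented cases):

```python
def seek_fraction(n_seeks, seek_ms=10.0, file_mb=1.0, rate_mb_per_s=100.0):
    """Fraction of total service time spent seeking rather than transferring."""
    transfer_ms = file_mb / rate_mb_per_s * 1000  # 1 MB at 100 MB/s -> 10 ms
    total_ms = n_seeks * seek_ms + transfer_ms
    return n_seeks * seek_ms / total_ms

seek_fraction(1)  # 0.5  -> the 50/50 best case
seek_fraction(3)  # 0.75 -> the 75/25 fragmented case
```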
SSDs still have the same issue as HDDs: they slow down when performing non-sequential reads and writes, just as a spindle drive does. If you look at the specifications of an SSD, you will always see them listed as sequential reads and writes, never random reads and writes. Articles I've seen from Tom's Hardware, AnandTech, and a variety of other hardware websites that have performed true non-sequential tests comparing SSDs and spindle drives have shown that in non-sequential reads/writes, SSDs are little better than their spindle-based brethren, showing speeds usually 3 to 4 orders of magnitude less than their sequential read and write speeds.
Quote:RAID 0 doubles the transfer rate, but generally gives a very small penalty to seek latency, we can assume this zero for the sake of argument. Still, moving from a 15k SAS to a 7200 RPM or even a 10k RPM SATA setup will increase latency by 50-100%, for what could be a net zero benefit over SAS, to a potential downgrade vs. 15k SAS.
Again, you're not taking into account how little needs to be read from each disk in a RAID 0 array. Because of how RAID 0 works, information is evenly split across the drives in the array, so each drive only has to read its share of the sectors and report that data back. If you take out seek time, as you're doing in your example, the amount of data each drive reads drops dramatically as more drives are introduced (a single drive vs. a two-drive RAID 0 is a 50% decrease in read time, vs. a four-drive RAID 0 a 75% decrease, and vs. an eight-drive RAID 0 an 87.5% decrease).
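The 1/n scaling above can be sketched directly. Assuming a uniform stripe and seek time factored out, each of n drives reads 1/n of the data:

```python
def read_time_ms(total_mb, n_drives, rate_mb_per_s=100.0):
    """Per-drive read time for a striped read, ignoring seek time."""
    per_drive_mb = total_mb / n_drives
    return per_drive_mb / rate_mb_per_s * 1000

single = read_time_ms(1, 1)
decreases = {n: 1 - read_time_ms(1, n) / single for n in (2, 4, 8)}
# decreases -> {2: 0.5, 4: 0.75, 8: 0.875}, i.e. 50%, 75%, 87.5%
```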
Likewise, as spindle speed goes up, seek time drops, so a 15k RPM drive will find the data in a little under half the time a 7.2k RPM drive will.
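The rotational component of that latency scales directly with spindle speed (the actuator's seek contribution varies per drive and isn't modeled here): average rotational latency is half a revolution, or 0.5 × 60 / RPM seconds.

```python
def rotational_latency_ms(rpm):
    """Average rotational latency: half a revolution."""
    return 0.5 * 60.0 / rpm * 1000

rotational_latency_ms(15000)  # 2.0 ms
rotational_latency_ms(7200)   # ~4.17 ms, a bit over twice the 15k figure
```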
Quote:SSD removes almost all latency, and dominates all mechanical drives in random performance, this is a large chunk of what people actually do, so it has a large perceived improvement from an end user.
No, it does not dominate in random reads. As I noted above, random reads cause SSDs just as many problems as they cause spindle drives. Where SSDs blow spindle drives out of the water is sequential reads and writes.
You're right on the domination, but it's in sequential reads (which is what most end users are doing, not random reads).
Quote:Again, I mentioned this, you seem to glossed over major points in my post. I'll explain in more detail. SSD will track where writes are heavy and when they get more usage in one area of the SSD, it puts some block of data there that doesn't get written much and does the heavy writing somewhere else. If you're reading about average lifespans of 2 years from modern SSD drives, you're reading either outdated literature or someplace other than the places I'm reading:
http://www.storagesearch.com/ssdmyths-endurance.html
http://communities.intel.com/message/68106
Estimates are on the order of 10-15 years unless you re-write your drive every few minutes. The future is today.
If you look at the articles from storagesearch, even he has contradictory information on that page, saying in one place 51 years, in another 15 years, and in still another 5 years (from one of the more recent articles linked on that page). You'll also note that some of those lifespan numbers decrease as SSD size increases, which is the opposite of what he alludes to in the article, where life expectancy should increase with drive size.
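For reference, the kind of back-of-envelope endurance estimate behind those figures looks like this. The inputs here are illustrative assumptions (SLC-class 2,000,000 write cycles per cell, perfect wear leveling, a 64 GB drive written continuously at 80 MB/s), not the article's exact numbers, but they land near the 51-year figure:

```python
def endurance_years(cycles, capacity_gb, write_mb_per_s):
    """Years until every cell is written `cycles` times, assuming
    perfect wear leveling and continuous sequential writes."""
    total_bytes = cycles * capacity_gb * 1e9
    seconds = total_bytes / (write_mb_per_s * 1e6)
    return seconds / (365 * 24 * 3600)

endurance_years(2_000_000, 64, 80)  # ~51 years
```

Note how the estimate grows linearly with capacity, which is why smaller lifespan numbers on larger drives look contradictory.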
Quote:SSD is a reality. This coming from someone who works manufacturing media for mechanical drives. Large scale adoption of SSD drives puts me out of a job. They are feasible today for the people who would be willing to shell out for a SAS drive.
I'm not doubting SSDs are here to stay, but I am saying the performance is not at the levels you're stating. You are looking specifically at sequential reads and writes on SSDs and comparing them to random reads and writes on spindle drives, which is comparing apples and oranges. What you need to do instead, as the articles I've read at Tom's and Anand's do, is compare sequential to sequential and random to random, and then look at the performance of each technology.
Right now I wouldn't go with SSDs, because the price-to-performance isn't there yet. (You're paying about 8 times as much for an SSD as for a similarly sized 7.2k RPM spindle drive, and about twice as much for an SSD as for a 15k RPM SAS drive.)
Quote:I don't want the absolute best reliability. I want "good enough" while being practical and usable. Again, the initial point I was making was designing a storage system better than SAS for cheaper.
Anything RAID 5 is impractical due to controller requirements. Parity bit calculations require real hardware controllers to have any reasonable write performance, which = big $$$, definitely more expensive than a basic SAS controller.
If you look at motherboards out there now, you can buy one with a built-in RAID 5 controller for maybe $50 to $75 more than a board whose controller only does RAID 0, 1, and 1+0. Likewise, what really makes add-in RAID controllers expensive is not the RAID controller chip itself, but the connectors, the onboard memory, and the interface with the motherboard over the PCI bus. With the controller on board, it interfaces with the PCI bus directly, can use the motherboard's memory, and can use the connectors attached directly to the motherboard to reach the HDDs.
Quote:The other problem with RAID 5 is the issues that if your controller dies, you data is gone until you buy another controller. I used to use RAID 5. It works, but for home use it's not very practical, the controller becomes a factor in your reliability calculation, combined with the costs, it is not a practical solution for someone who is dedicating less than $1k to their storage system.
How is this any different than a controller for SATA drives? If your controller goes, it goes and you're not going to be able to access your data until you have a working controller available.
Quote:Mirroring is much more practical. A drive dies, and you have all your info on another identical, bootable drive with no special controller requirements, no special anything. When upgrading all your new motherboard needs is a SATA port, since there are plenty of ways to do mirroring without RAID. This is what I do now. The reliability of the system with 2 drives is already approaching cosmic coincidence level of probability, you don't really need more than that.
Practical yes, reliable no. If you rank RAID levels by reliability from highest to lowest, it goes: 5+6, 1+6, 1+5, 1+0, 6, 5, 1, 0 (I leave out RAID 3 and 4 because no one really uses those anymore). For general home use, 1+0 is the more reliable choice and is also very fast. It requires four drives to pull off, but at today's prices you can get a terabyte that is as fast as RAID 0 while also mirrored for around $200 (which would only get you about 64 GB on a lone SSD).
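The capacity math for that 1+0 setup can be sketched as follows (the four 500 GB drives at roughly $50 each are illustrative assumptions consistent with the $200 figure above):

```python
def raid10_usable_gb(n_drives, drive_gb):
    """RAID 1+0 stripes across mirrored pairs, so half the raw space is usable."""
    assert n_drives >= 4 and n_drives % 2 == 0, "needs an even number of drives, minimum 4"
    return n_drives * drive_gb // 2

raid10_usable_gb(4, 500)  # 1000 GB usable from four 500 GB drives
```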
Quote:The other issue with reliability is that no matter how reliable your array is, there is a real threat of natural catastrophe (earthquake, hurricane, flooding, fire, etc..) This is where the 3rd drive in a firesafe or off-site comes in. Reliability of 2 drives is enough. Then add the third for the catastrophes.
Again, the point I was making was SCSI (SAS) in home usage. There are 2 main advantages to SAS: reliability and performance. My example was one that would outperform SAS in both areas while at the same time being cheaper. The point you bring up is RAID 5+6? A 12-drive setup beats SAS in both areas and is cheaper? I'm not sure that in real-world performance it's actually going to beat a 15k RPM SAS drive, for one. It definitely won't beat it on cost, for two: a 12-port RAID 5/6 card that doesn't cripple write performance is probably around the same cost as 600 GB of SAS drives.
The point I was making is that at the highest level of reliability, RAID 5+6 is the pinnacle: nothing short of a natural disaster or direct sabotage is going to cause you to lose data. I wasn't talking about a home environment there, but about maximum reliability. For a home user, RAID 1+0 is the highest level of reliability you should look at (it requires a minimum of four drives). You stated that RAID 1 is highly reliable, but it's not as reliable as you claim. In most cases it's going to be good enough, but not in all cases (where you may want both speed and reliability).
Sith Warriors - The only class that gets a new room added to their ship after leaving Hoth: they get a Broomcloset
Einstein said Everything is Relative.
Heisenberg said Everything is Uncertain.
Therefore, everything is relatively uncertain.