• MTK@lemmy.world · 20 hours ago

    I generally agree, it won’t take long for SSDs to be cheap enough to justify the expense. HDDs are in a way similar to CDs/DVDs: they had their time, they even lasted much longer than expected, but eventually the newer technology became cheap enough that the slightly lower price no longer made sense.

    SSDs win on all counts for live systems, and long-term cold storage goes to tape. Not a lot of reasons to keep HDDs around.

    • xthexder@l.sw0.com · 19 hours ago

      As a person hosting my own data storage, tape is completely out of reach. The equipment to read archival tapes would cost more than my entire system. It’s also got extremely high latency compared to spinning disks, which I can still use as live storage.

      Unless you’re a huge company, spinning disks will be the way to go for bulk storage for quite a while.

      • MTK@lemmy.world · 12 hours ago

        Define “quite a while”

        Sure, enterprise is likely to make the switch first, but that’s also likely to kick-start the price reduction for consumers. So I actually don’t think it’s that far away. I would guess we are about 5 years away from SSDs being the significant majority of consumer storage technology by volume.

        Even now, as a self-hoster it’s pretty reasonable to use SSDs if you are talking about single-digit TB. Sure, SSDs are about 2x the price, but we are talking about a difference of maybe 60 USD if you only need 2 TB.

        • xthexder@l.sw0.com · 12 hours ago

          Based on current trends, I’d say we might get SSDs and HDDs at the same cost per GB around 2030. That’s based on prices being 12-13x higher in 2015, and around 5x higher now. SSD cost efficiencies are slowing down, but there will also be a big change in demand once the prices get close: because SSDs have other advantages, people will switch as soon as it’s economical.
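
          For what it’s worth, a straight-line extrapolation of those ratios does land right around 2030. A quick sketch (the 12.5x and 5x figures are from the comment; the linear model and the 2015/2025 anchor years are my assumptions):

          ```python
          # Linear extrapolation of the SSD/HDD cost-per-GB price ratio.
          # Illustrative only: 12.5x in 2015 and 5x "now" (assumed 2025) are
          # the figures quoted above; a linear trend is an assumption.
          ratio_2015 = 12.5
          ratio_2025 = 5.0

          slope = (ratio_2025 - ratio_2015) / (2025 - 2015)  # ratio change per year
          parity_year = 2025 + (1.0 - ratio_2025) / slope    # ratio reaches 1x

          print(round(parity_year, 1))  # ≈ 2030.3
          ```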

          I’ve currently got a 200TB storage array using enterprise HDDs (shout out to Backblaze’s HDD failure rate publications), and I definitely would not have been able to afford 200TB of enterprise SSDs.

          • MTK@lemmy.world · 9 hours ago

            Yeah, for anything over 10 TB for an individual consumer it will take time.

      • Marud@lemmy.marud.fr · 19 hours ago

        Well, tape is still relevant for the 3-2-1 backup rule. I worked at a pretty big hosting company where we would write out 400 TB of backup data each weekend. It’s the only medium that gives you a real, secure, fully offline copy that doesn’t depend on another online hosting service.

        • xthexder@l.sw0.com · 18 hours ago

          If you’re storing petabytes of data, sure, but when a tape drive costs $8k+ (the only price I could find that wasn’t “Call for quote”) and you’re storing less than 500 TB, it’s cheaper to buy hard drives.
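
          As a rough sanity check on that: with an $8k drive up front, tape only pays off once the per-TB media savings cover the drive. The per-TB prices below are ballpark assumptions, not quotes:

          ```python
          # Rough tape-vs-HDD break-even sketch. The $8k drive figure is from
          # the comment above; both per-TB prices are assumed for illustration.
          tape_drive_cost = 8000.0  # one-time LTO drive cost (from the comment)
          tape_per_tb = 5.0         # assumed tape media cost per TB
          hdd_per_tb = 15.0         # assumed enterprise HDD cost per TB

          # Capacity at which media savings have paid off the drive:
          break_even_tb = tape_drive_cost / (hdd_per_tb - tape_per_tb)
          print(break_even_tb)  # 800.0 TB under these assumptions
          ```

          Under these (very debatable) numbers the break-even sits well above 500 TB, consistent with the point above.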

          I’m not sure how important two types of media are these days. I personally have all my larger data on hard drives, but with multiple off-site copies and RAID redundancy. Some people count “cloud” as another type of storage, but that’s just “somebody else’s hard drive”.

  • AnUnusualRelic@lemmy.world · 23 hours ago

    I’m about to build a home server with a lot of storage (relatively speaking, around 6 or 8 × 12 TB as a ballpark), and so far I haven’t even considered anything other than spinning drives.

    • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 21 hours ago

      For servers, physical space is also a huge concern. 2.5” HDDs cap out at around 6 TB I think, while you can easily find an 8 TB 2.5” SSD anywhere. We have 16 TB drives in one of our servers at work and they weren’t even that expensive. (Relatively.)

    • jj4211@lemmy.world · 22 hours ago

      The disk cost difference is now about 3-fold, rather than an order of magnitude.

      Disks don’t make up as much of the cost of these solutions as you’d think, so a disk-based solution with similar capacity might be more like 40% cheaper rather than 90% cheaper.

      The market for pure-capacity storage is well served by spinning platters, for now. And there’s little reason to iterate on your storage subsystem design: the same design you had in 2018 can keep up with modern platters. Compare that to SSDs, where the form factor has evolved and the interface gets a revision with every PCIe generation.
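
      The 3x-disk-price versus 40%-system-savings gap follows directly from disks being only part of the total solution cost. A minimal sketch, assuming (purely for illustration) that disks are about a third of the HDD system’s cost:

      ```python
      # Why a 3x disk-price gap yields only ~40% system-level savings.
      # The 3x ratio is from the comment; the one-third cost share is an
      # assumption chosen for illustration.
      disk_price_ratio = 3.0     # SSD vs HDD cost per TB
      disk_cost_fraction = 1 / 3 # assumed disk share of HDD system cost

      hdd_system = 1.0  # normalize the HDD-based system cost to 1
      ssd_system = (hdd_system * (1 - disk_cost_fraction)
                    + hdd_system * disk_cost_fraction * disk_price_ratio)

      savings = 1 - hdd_system / ssd_system  # HDD system is cheaper by this
      print(f"{savings:.0%}")  # 40%
      ```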

    • Nomecks@lemmy.ca · 22 hours ago

      Spinning-platter capacity can’t keep up with SSDs. HDDs are just starting to break the 30 TB mark while SSDs are shipping at 50+ TB. The cost delta per TB is closing fast. You can also have always-on compression and dedupe in most cases with flash, so you get better utilization.

      • Fluffy Kitty Cat@slrpnk.net · 9 hours ago

        The cost per terabyte is why hard disk drives are still around. Once SSDs cost only maybe 10% more, hard drives will be obsolete.

      • SaltySalamander@fedia.io · 21 hours ago

        You can also have always on compression and dedupe in most cases with flash

        As you can with spinning disks. Nothing about flash makes this a special feature.

        • enumerator4829@sh.itjust.works · 18 hours ago

          See for example the storage systems from Vast or Pure. You can increase the compression window size and dedupe far smaller blocks. Fast random IO also lets you do that “online” in the background. In the case of Vast, you also have multiple readers on the same SSD doing that compression and dedupe.

          So the feature isn’t that special. What you can do with it in practice changes drastically.

        • Nomecks@lemmy.ca · 20 hours ago

          The difference is you can use inline compression and dedupe in a high performance environment. HDDs suck at random IO.

    • Natanael@infosec.pub · 23 hours ago

      It’s losing its cost advantage as time goes on. Long-term storage is still on tape (and that’s actively developed too!), flash is getting cheaper, and spinning disks have inherent bandwidth and latency limits. They’re probably not going away entirely, but their main use cases are being squeezed from both ends.

  • pr0sp3kt@lemmy.dbzer0.com · 1 day ago

    I’ve had a terrible experience with HDDs all my life: slow as hell, sector loss, corruption, OS corruption… I am traumatized. I got an 8 TB NVMe drive for less than $500, and since then I haven’t had a single problem (well, except that after a power failure, BTRFS CoW tends to act weird and sometimes doesn’t boot; you need manual intervention).

  • hapablap@lemmy.sdf.org · 1 day ago

    My sample size of myself has had one drive fail in decades, and it was a solid-state drive. Thankfully it failed in a strangely intermittent way and I was able to recover the data. But still, it surprised me, as one would assume solid state would be more reliable. The spinning rust has proven very reliable. Regardless, I’m sure SSDs will be/are better in every way.

    • DSTGU@sopuli.xyz · 22 hours ago

      I believe you see the main issue with your experience: the sample size. With a small enough sample you can experience almost anything. Wisdom is knowing what you can and can’t extrapolate to the entire population.

      • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 21 hours ago

        I have one HDD that survived 20+ years, and an aliexpress SSD that died in 6 months. Therefore all SSDs are garbage!!!

        That’s also the only SSD I’ve ever had fail on me, and I’ve had them since 2011. In that same time I’ve had probably 4 HDDs fail on me. Even so, I know to use data from companies like Backblaze that have infinitely more drives than I do.

  • NeuronautML@lemmy.ml · 1 day ago

    I doubt it. SSDs are subject to quantum tunneling. This means that if you don’t power up an SSD once every 2-5 years, your data is gone. HDDs have no such qualms. So long as they still spin, there’s your data, and when they no longer do, you still have the heads inside.

    So there’s a use case that SSDs will never replace: cold data storage. I use HDDs for my cold off-site backups.

    • n2burns@lemmy.ca · 1 day ago

      Nothing in this article is talking about cold storage. And if we are talking about cold storage, as others have pointed out, HDDs are also not a great solution. LTO (magnetic tape) is the industry standard for a good reason!

      • NeuronautML@lemmy.ml · 18 hours ago

        Tape storage is the gold standard, but it’s just not realistically applicable to small-scale operations or personal data storage. Proper long-term-storage HDDs do exist and are perfectly adequate for the job, as I specified above, and I can attest to this from personal experience.

    • MonkderVierte@lemmy.ml · 1 day ago

      You’re wrong. HDDs need to be powered up about as frequently as SSDs, because the magnetization gets weaker over time.

      • NeuronautML@lemmy.ml · 18 hours ago

        Here’s a copy-paste from Superuser that will hopefully show you that what you said is incorrect, in a way that expresses my thoughts exactly:

        Magnetic Field Breakdown

        Most sources state that permanent magnets lose their magnetic field strength at a rate of 1% per year. Assuming this is valid, after ~69 years, we can assume that half of the sectors in a hard drive would be corrupted (since they all lost half of their strength by this time). Obviously, this is quite a long time, but this risk is easily mitigated - simply re-write the data to the drive. How frequently you need to do this depends on the following two issues (I also go over this in my conclusion).

        https://superuser.com/questions/284427/how-much-time-until-an-unused-hard-drive-loses-its-data
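
        For what it’s worth, the ~69-year figure in that quote checks out as simple compound decay at 1% per year:

        ```python
        import math

        # Years for the field strength to halve, assuming the quote's
        # 1%-per-year loss rate (i.e. 99% retained each year).
        annual_retention = 0.99
        half_life_years = math.log(0.5) / math.log(annual_retention)

        print(round(half_life_years, 1))  # ≈ 69.0
        ```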

      • floquant@lemmy.dbzer0.com · 21 hours ago

        Note that for HDDs, it doesn’t matter whether they’re powered or not: the platter isn’t “energized” or refreshed during operation the way an SSD is. Your best bet is to have some kind of parity to identify and repair those bad bits.

    • floquant@lemmy.dbzer0.com · 1 day ago

      Sorry dude, but bit rot is a very real thing on HDDs. They’re magnetic media, which degrades over time. If you leave a disk cold for 2-5 years, there’s a very good chance you’ll get some bad sectors. SSDs aren’t immune to bit rot either, but it’s not down to quantum tunneling, at least not any more than your CPU is affected by it.

      • NeuronautML@lemmy.ml · 17 hours ago

        I didn’t mean to come across as saying that HDDs don’t suffer bit rot. However, there are specific long-term-storage HDDs that are built to be powered up sporadically and to resist external magnetic influence on the tracks. In a proper storage environment they will last over 5 years without being powered up and still retain all their data. I know because I’ve used them in this exact scenario for over two decades. Conversely, there are no such long-term-storage SSDs.

        SSDs store information as trapped charge, which most certainly leaks away over time, through quantum tunneling as well as generalized charge leakage. As the insulation loses effectiveness, the potential barrier drops, and what is normally a manageable effect, much like in the CPU as you said, grows beyond the scope of error-correction techniques. This is a physical limitation that cannot be overcome.

  • Korhaka@sopuli.xyz · 1 day ago

    Probably at some point, as prices per TB continue to come down. I don’t know anyone buying a laptop with an HDD these days, and I can’t imagine ever buying one for a desktop again either. I’ve still got a couple of old ones active (one is 11 years old), but I do plan to replace them with SSDs at some point.

  • NeoNachtwaechter@lemmy.world · 1 day ago

    Haven’t they said that about magnetic tape as well?

    Some 30 years ago?

    Isn’t magnetic tape still around? Isn’t even IBM one of the major vendors?

    • n2burns@lemmy.ca · 1 day ago

      Anyone who has said that doesn’t know what they’re talking about. Magnetic tape is unparalleled for long-term/archival storage.

      This is completely different. For active storage, solid state has been much better than spinning rust for a long time; it’s just been drastically more expensive. What’s being argued here is that it’s more performant and, while it might be more expensive up front, less expensive to run and maintain.

    • doodledup@lemmy.world · 1 day ago

      NVMe is terrible value for storage density. There’s no reason to use it except when you need the speed and low latency.

      • jj4211@lemmy.world · 1 day ago

        There’s a cost associated with making that determination and managing the storage tiering. When NVMe is only 3x more expensive per unit of data at scale, and at the cheapest end you can get either a good-enough HDD or a good-enough SSD for an OS volume at the same price, then it just makes sense for the OS volume to be an SSD.

        As for “but 3x is a pretty big gap”: that’s true, and it does drive storage subsystem choices, but as the saying has long been, disks are cheap, storage is expensive. Managing a mixed HDD/SSD setup is generally more expensive than the disk cost difference anyway.

        BTW, NVMe vs. non-NVMe isn’t the real distinction; it’s NAND vs. platter. You could put platters behind an NVMe interface and it would be about the same as SAS-interfaced or even SATA-interfaced platters. NVMe carried a price premium for a while, mainly because of marketing rather than technical costs; nowadays NVMe isn’t too expensive. One could argue that the number of PCIe lanes from the system seems expensive, but PCIe switches aren’t really more expensive than SAS controllers, and CPUs have plenty of innate PCIe lanes now.

  • solrize@lemmy.world · 2 days ago

    HDDs were a fad; I’m waiting for the return of tape drives. 500 TB on a $20 cartridge, and I can live with the 2-minute seek time.

  • Sixty@sh.itjust.works · 2 days ago

    I’ll shed no tears, even as a NAS owner, once we get equivalent-capacity SSDs without breaking the bank :P

    • Appoxo@lemmy.dbzer0.com · 1 day ago

      Considering the high prices for high-density SSD chips…
      why are there no 3.5" SSDs with low-density chips?

      • jj4211@lemmy.world · 1 day ago

        Not enough of a market

        The industry answer is: if you want that much storage, get something like 6 EDSFF or M.2 drives.

        3.5" is a useful form factor for platters, but it isn’t particularly needed to hold NAND chips. Instead of gating all those chips behind a single connector, you can have 6 connectors to drive performance. Again, that matters less for a platter-based design, which is unlikely to saturate even a single 12 Gb/s link under most realistic access patterns, but SSDs can keep a 128 Gb/s link busy even with utterly random IO.

        Tiny drives mean more flexibility. The same storage product can go into NAS boxes, servers, desktops, the thinnest laptops, and embedded applications, maybe with tweaked packaging and cooling. A product designed to host that many SSD boards behind a single connector wouldn’t be trivial to adapt to any other use case, would bottleneck performance on a single interface, and would pretty much be guaranteed to cost more to manufacture than selling the components as 6 separate drives.
        Tiny drives means more flexibility. That storage product can go into nas, servers, desktops, the thinnest laptops and embedded applications, maybe wirh tweaked packaging and cooling solutions. A product designed for hosting that many ssd boards behind a single connector is not going to be trivial to modify for any other use case, bottleneck performance by having a single interface, and pretty guaranteed to cost more to manufacturer than selling the components as 6 drives.