• GreenKnight23@lemmy.world · 3 months ago

    No thanks, Seagate. The trauma of losing my data to a botched firmware with a ticking time bomb kinda put me off your products for life.

    See you in hell.

    • ZILtoid1991@lemmy.world · 3 months ago

      Can someone recommend a hard drive that won’t fail immediately? Internal, not SSD (the cheap ones die even sooner). I need it for archival purposes, not speed or fancy new tech; for that I already have two SSDs.

      • AdrianTheFrog@lemmy.world · 3 months ago

        I think refurbished enterprise drives usually have a lot of extra protective hardware that helps them last a very long time. Seagate advertises a mean time to failure of ~200 years on its Exos drives at a moderate level of usage. I feel like it would almost always be a better choice to get more refurbished enterprise drives than fewer new consumer drives.
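
        As a back-of-the-envelope aside, an MTTF quoted in centuries is easier to reason about as an annualized failure rate. Here is a minimal sketch under the usual exponential failure model, using the ~200-year figure quoted above (the commenter's number, not an official spec-sheet value):

        ```python
        import math

        # MTTF as quoted above (~200 years); a population average, not a
        # promise that any individual drive will last that long.
        mttf_hours = 200 * 365 * 24          # ~1.75 million hours

        # Exponential failure model: AFR = 1 - exp(-hours_per_year / MTTF)
        hours_per_year = 8760
        afr = 1 - math.exp(-hours_per_year / mttf_hours)

        print(f"Implied annualized failure rate: {afr:.2%}")  # roughly 0.5% per drive-year
        ```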

        I personally found an 8TB Exos on ServerPartDeals for ~$100, which seems to be in very good condition after checking the SMART data (a quick way to script that check is sketched below). I’m just using it as a backup, so there isn’t any data on it that isn’t also somewhere else, and I didn’t bother with redundancy.

        I’m not an expert, but this is just from the research I did before buying that backup drive.
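
        If you want to run the same sanity check on a used or recertified drive, here is a minimal sketch that calls smartmontools’ smartctl from Python; the device path is a placeholder, and the exact attribute names can differ between drive models:

        ```python
        import subprocess

        DEVICE = "/dev/sdX"  # placeholder; point this at the drive you are checking

        # Dump the vendor SMART attribute table via smartmontools.
        out = subprocess.run(
            ["smartctl", "-A", DEVICE],
            capture_output=True, text=True, check=True,
        ).stdout

        # Attributes worth eyeballing on a second-hand drive.
        interesting = ("Power_On_Hours", "Reallocated_Sector_Ct", "Current_Pending_Sector")

        for line in out.splitlines():
            if any(name in line for name in interesting):
                print(line)
        ```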

  • solrize@lemmy.ml · 3 months ago

    Well, largest this week. And as for this line:

    “Yeah, $800 isn’t a small chunk of change, but for a hard drive of this capacity, it’s monumentally cheap.”

    Nah, a 24TB is $300 and some 20TBs are even lower in $ per TB.

    • Armand1@lemmy.world · edited · 3 months ago

      I got some 16TB drives recently for around $200 each, though they were manufacturer recertified. Usually a recertified drive will save you 20-40%. Shipping can be a fortune though.

      EDIT: To be clear, these were manufacturer-recertified drives, not refurbished ones.

        • pulsewidth@lemmy.world · 3 months ago

          I would absolutely not use refurbs personally. As part of the refurb process they wipe the SMART data, which means zero power-on hours listed, zero errors, zero rewrite counts, and so on. You have absolutely no idea what the drive’s previous life was.

            • Glitchvid@lemmy.world · 3 months ago

            If you’ve got a RAID array with 1 or 2 parity drives, then manufacturer-recertified drives are fine; those are typically drives that aged out before being deployed, or were traded in when a large array was upgraded.

            If you’re really paranoid you should be mixing manufacture dates anyway, so keep some factory-new drives and then add the recerts so the drive pools have a healthy split.

    • Victor@lemmy.world · 3 months ago

      I paid $600+ for a 24 TB drive, tax free. I feel robbed. Although I’m glad not to shop at Newegg.

      • PancakesCantKillMe@lemmy.world · 3 months ago

        Yes, fuck Newegg (and Amazon too). I’ve been using B&H for disks and have no complaints about them. They have the Seagate IronWolf Pro 24TB at $479 currently, but last week it was on sale for $419. (I only look at disks with 5-year warranties.)

        I wasn’t in a position to take advantage since I’d already made my disk purchase this go-around, so I’ll wait for the next deep discount if the timing works out.

        • scarabic@lemmy.world · 3 months ago

          Christ, remember when Newegg was an actual store? Now they’re just a listing service for the scummiest tier of retailers and drop shippers. What a shame.

        • solrize@lemmy.ml · edited · 3 months ago

          I hate Amazon, but I haven’t been following the news about Newegg and have been buying from them now and then. No problems so far, but yeah, B&H is also good. So is centralcomputer.com if you’re in the SF Bay Area. Actual stores.

          • PancakesCantKillMe@lemmy.world · 3 months ago

            Newegg was a nerd’s paradise 10+ years ago. I would spend thousands each year on my homelab back then. They had great customer service and bent over backwards for their customers. Then they got bought out and squeezed, and they passed that squeeze right down to the customers: accusing customers of damaging parts, lots of slimeball stuff. They also wanted to be like Amazon, so they started selling beads, blenders, and other assorted garbage alongside tech gear.

            After a couple of minor incidents with them I saw the writing on the wall and went to Amazon, which was somewhat okay at the time. Once Amazon started getting bad, I turned to B&H and fleaBay. I don’t buy as much electronic stuff as I used to, but when I do, these two are working… so far.

  • daggermoon@lemmy.world · 3 months ago

    I wanna fuck this HDD. To have that much storage on one drive when I currently have ~30TB shared between 20 drives makes me very erect.

  • Punkie@lemmy.world · 3 months ago

    Yeah, but it’s Seagate. I have worked in data centers, and Seagate drives had the most failures of all the drives I managed, yet somehow they’re still in business. I’d say I was RMAing 5-6 Seagate drives a month, versus only 4-5 Western Digital drives a year.

    • brap@lemmy.world · 3 months ago

      I hear you. I’m not sure I’ve ever had a Seagate drive not fail on me.

    • NιƙƙιDιɱҽʂ@lemmy.world · edited · 3 months ago

      For affordable set-it-and-forget-it cold storage, this is incredible. For anything actively being touched, yeah, definitely a pass.

  • needanke@feddit.org · 3 months ago

    What is the use case for drives that large?

    I ‘only’ have 12TB drives, and yet my ZFS pool already needs about two weeks to scrub it all. With something like this it would literally not be done before the next scheduled scrub.

    • tehn00bi@lemmy.world · edited · 3 months ago

      Jesus, my pool takes a little over a day, but I’ve only got around 100 TB. How big is your pool?

    • Appoxo@lemmy.dbzer0.com · 3 months ago

      High-capacity storage pools for enterprises.
      Space is at a premium; saving space should (or at least could) translate into better pricing and availability.

    • SuperUserDO@sh.itjust.works · 3 months ago

      There is an enterprise storage shelf (i.e., a chassis full of drives that hooks up to a server) made by Dell that holds 1.2 PB (yes, petabytes). So there is a use, but it’s not for consumers.

      • grue@lemmy.world · 3 months ago

        That’s a use-case for a fuckton of total capacity, but not necessarily a fuckton of per-drive capacity. I think what the grandparent comment is really trying to say is that the capacity has so vastly outstripped mechanical-disk data transfer speed that it’s hard to actually make use of it all.

        For example, let’s say you have these running in a RAID 5 array, and one of the drives fails and you have to swap it out. At a 190 MB/s max sustained transfer rate (the figure for a 28TB Seagate Exos; I assume this new one is similar), you’re talking about over two days just to rebuild the replacement from the surviving drives’ data and parity and get the array out of degraded mode! At some point these big drives stop being suitable for that use case simply because the vulnerability window is so large that the risk of a second drive failure causing data loss is too great.
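
        A rough sketch of that rebuild-window arithmetic (the 190 MB/s figure is the assumption from the comment above; real rebuilds are usually slower because the array keeps serving I/O while it rebuilds):

        ```python
        # Best-case RAID rebuild window for a 36 TB replacement drive.
        capacity_bytes = 36e12      # 36 TB, decimal terabytes as marketed
        rebuild_rate = 190e6        # assumed 190 MB/s sustained write to the new drive

        hours = capacity_bytes / rebuild_rate / 3600
        print(f"Best-case rebuild: {hours:.0f} hours (~{hours / 24:.1f} days)")
        # -> about 53 hours, i.e. a bit over two days in degraded mode
        ```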

    • ipkpjersi@lemmy.ml · edited · 3 months ago

      What drives do you have exactly? I have 7x 6TB WD Red Pro drives in raidz2 and I can do a scrub in less than 24 hours.

      • needanke@feddit.org · 3 months ago

        I have 2x 12TB white-label WD drives (harvested from external drives, but datacenter drives according to the serial numbers) and one 16TB Toshiba white-label (purchased directly, also meant for datacenters) in a raidz1.

        How full is your pool? Mine is about two-thirds full, which I think impacts scrubbing. I also frequently access the pool, which delays scrubbing.
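
        For a sense of how far a two-week scrub is from the ideal case, here is a rough sketch; the ~150 MB/s per-disk sequential read rate is an assumption, and fragmentation, small records, and concurrent access can stretch the real time by a large factor:

        ```python
        # Idealized ZFS scrub estimate: each disk reads roughly its own share of
        # the allocated data, so wall-clock time ~ (allocated per disk) / (read rate).
        disk_capacity = 12e12        # 12 TB data disks, as in the pool above
        pool_fullness = 2 / 3        # pool described as about two-thirds full
        read_rate = 150e6            # assumed ~150 MB/s sustained sequential reads

        hours = disk_capacity * pool_fullness / read_rate / 3600
        print(f"Ideal sequential scrub time: ~{hours:.0f} hours")
        # -> ~15 hours; a two-week scrub points to fragmentation or a busy pool
        ```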

  • Optional@lemmy.world · edited · 3 months ago

    So how much data would I lose when it dies?

    Edit for those who didn’t read the smirk: yes, 36TB. The point, as someone answered below, is that if you’re using a drive this big you’d better have your data recovery procedures on fleek.

    • NuXCOM_90Percent@lemmy.zip · 3 months ago

      Assuming you aren’t striping, up to 36 TB. If you follow even halfway decent practices with basically any kind of RAID other than 0, hopefully 0 Bytes.

      The main worry with stuff like this is that it potentially takes a while to recover from a failed drive even if you catch it in time (alert systems are your friend). And 36 TB is a LOT of data to work through and recover, which means a LOT of stress on the remaining drives for a few days.

      • kevincox@lemmy.ml · 3 months ago

        “aren’t striping”

        I think you mean “are striping”.

        But even with striping you have backups, right? Local redundancy is for availability, not durability.

        • NuXCOM_90Percent@lemmy.zip · 3 months ago

          Words hard

          And I would go so far as to say that nobody who is buying 36 TB spinners is doing offsite backups of that data. Any org doing offsites of that much data is almost guaranteed to be using a tape drive of some form, because… they pay for themselves pretty fast and are much better for actual cold-storage backups.

          Seagate et al. keep pushing these truly massive spinners, and I really do wonder who the market is for them. They are overly expensive for cold storage, and basically any setup with that volume of data is going to be better off slowly rotating out smaller drives: partially because of recovery times, and partially because nobody but a sponsored YouTuber is throwing out their 24 TB drives just because 36 TB hit the market.

          I assume these are a byproduct of some actually useful tech, sold to help offset the costs to the folks who maybe REALLY REALLY REALLY want 72 TB in their four-bay Synology.

          • kevincox@lemmy.ml · 3 months ago

            “And I would go so far as to say that nobody who is buying 36 TB spinners is doing offsite backups of that data.”

            Was this a typo? I would expect that almost everyone who is buying these is doing offsite backups. Who has this amount of data density and is OK with losing it?

            Yes, they are quite possibly using tape for these backups (either directly or through some cloud service), but you still want offsite backups. Otherwise one bad fire and you lose it all.

    • Psythik@lemmy.world · edited · 3 months ago

      More like zero, because modern AAA games require NVMe (or at least an SSD), and this is a good old-fashioned 7200 RPM drive.

    • walden@sub.wetshaving.social · 3 months ago

      Man, I used to LOVE defragmenting drives. I felt like I was actually doing something productive, and I just got to sit back and watch the magic happen.

      Now I know better.

  • wise_pancake@lemmy.ca · 3 months ago

    It will take about 36 hours to fill this drive at 270 MB/s.

    That’s a long time to backup your giraffe porn collection.