  • Drive failures have almost nothing to do with access when the drives are mechanical. Most failures come from bearing or solder-interconnect failures over time.

    Also, most seeding happens in smaller chunks that are read and cached if popular… meaning fewer drive hits than a 1:1 read per upload.

    You will almost always have drives fail from other causes, like heat, power, or old age, before wear from seeding would ever be enough to matter.

    I have drives in excess of 10 years old, with several seeds that have been active for many of those years, that are still running just fine.


  • Private trackers are closed communities for sharing torrents. Often you can interview to get access, or occasionally one will have open sign-ups. These usually have strict requirements to maintain a reasonable seeding ratio on your downloads, to prevent greedy users from ruining sharing performance for everyone.

    Redacted is one of these communities, based strictly around music and maintaining quality; low-quality encodes are not allowed. It is harder to get into, and it has very strict seeding requirements to maintain.

    Information about who they are and how to apply for access can all be found at https://interviewfor.red/



  • If you have access to certain music-focused private torrent trackers, many will do spotlight articles on independent or smaller artists who are also members.

    This kind of sharing is often welcomed and valued, so it could even be a way into some of those communities.

    Redacted does this, and I’ve been introduced to some really good music this way.

    Alternatively, as others have noted, Bandcamp is a good way to offer your music as well. If you go this route, even when setting the music as free, you might make a little money… I’ve often tipped a bit via the “Pay what you want” pricing tier.


  • For my larger boxes, I only use SuperMicro. Most other vendors do weird shit to their backplanes that makes them incompatible, or charge license fees for their IPMI/DRAC/lights-out management. Any reputable reseller of server gear will offer SuperMicro.

    The disk-to-RAM ratio concern is niche; I’ve almost never run into it outside of large data-warehouse or database systems (not what we’re doing here). Most of my machines run nearly idle even while serving several active streams or 3 GB/s data moves on only 16 GB of RAM. I use the CPU being maxed out as a warning that one of my disks needs checking, since resilvering or running degraded in ZFS chews CPU.
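
    As a minimal sketch of that check, assuming ZFS with the stock zpool tooling (the pool name is a placeholder):

      # Print only pools with problems; says "all pools are healthy" otherwise
      zpool status -x

      # Look for an active resilver or scrub that would explain a CPU spike
      zpool status tank | grep -E 'scan:|resilver'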

    That said, hypervisors eat RAM. Whatever machine you want handling torrents, transcoding, etc., give that box RAM and either a well-supported GPU or a recent Intel Quick Sync chip.
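
    A quick sanity check that hardware transcode is visible, assuming ffmpeg and the libva-utils package are installed:

      # List the hardware acceleration methods this ffmpeg build supports
      ffmpeg -hide_banner -hwaccels

      # On Intel Quick Sync boxes, vainfo lists the codec profiles the iGPU exposes
      vainfo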

    For organizing across the arrays, I use RAIDed SSDs for downloads, with the torrent client moving files to the destination host for seeding on completion.

    I run single instances of Radarr and Sonarr; instead of multiple instances, I update the root folder for “new” content any time I need to point at a new machine. I just have to keep the current new-media destination in sync between the Arr and the torrent client for that category.
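
    One way to eyeball that sync is the Arrs’ v3 APIs, which expose the configured root folders; the hostnames and API keys below are placeholders (jq is just for readability):

      # List Radarr's configured root folders (default port 7878; key under Settings > General)
      curl -s -H "X-Api-Key: $RADARR_API_KEY" http://radarr.local:7878/api/v3/rootfolder | jq '.[].path'

      # Same idea for Sonarr (default port 8989)
      curl -s -H "X-Api-Key: $SONARR_API_KEY" http://sonarr.local:8989/api/v3/rootfolder | jq '.[].path'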

    The Arr stacks have gotten really good lately with path management; you just need to ensure the mounts available to them are set correctly.

    In the event I need to move content between two different boxes, I pause the seed and use rsync to duplicate the torrent files, then change the path and recheck the torrent. Once that’s good, I either nuke and reimport in the Arr, or lately, with better naming conventions on the hosts, I can use hardlink-preserving copies. Beware: this is a pretty complex route unless you are very comfortable in Linux and rsync!
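
    A minimal sketch of that move, assuming SSH access between the boxes; the paths are placeholders, and -H is what preserves the hardlinks:

      # Pause the torrent in the client first, then copy with attributes and hardlinks intact
      rsync -aH --progress /tank/tv/Show.S01/ nas2:/tank/tv/Show.S01/

      # Then point the torrent's save path at the new location and force a recheck before resuming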

    I’m using OMV on bare metal personally. My Proxmox box doesn’t even have OMV; it’s a mini PC for transcoding. I see no problem running OMV inside Proxmox, though. My bare-metal boxes are dedicated to just NAS duties.

    For what it’s worth, keep tasks as minimal and simple as you can. Complexity where it’s not needed can be a pain later. My NAS machines are largely identical in base config, with only the machine name and storage-pool name differing.

    If you don’t need a full hypervisor, I’d skip it. Docker has gotten great in its abilities. The easiest Docker box I have was just Ubuntu with Dockge. It keeps its configs in a reliable path, so it’s easy to back them up.
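
    A sketch of that backup, assuming Dockge’s suggested defaults of /opt/stacks for compose files and /opt/dockge/data for its own config (those paths are assumptions; check your install):

      # Snapshot the compose stacks and Dockge's data directory into a dated tarball
      tar czf dockge-backup-$(date +%F).tar.gz /opt/stacks /opt/dockge/data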



  • Most of my drives are in the 3 TB/4 TB range… Something about that era produced some reliable disks; newer disks have really had more issues. A few boxes run some 8 TB or 12 TB drives, and I keep some external 8 TB drives for evacuation purposes, but I don’t think I trust most options lately.

    HGST / Toshiba seem to have done well by me overall, but that’s certainly subjective.

    I have 2 Seagates I need to pull from one of the older boxes right now, but they are 2 TB and well past their due:

    root@Mizuho:~# smartctl -a /dev/sdc | grep -E "Vendor|Product|Capacity|minutes"
    Vendor:               SEAGATE
    Product:              ST2000NM0021
    User Capacity:        2,000,398,934,016 bytes [2.00 TB]
    Accumulated power on time, hours:minutes 41427:43

    root@Mizuho:~# smartctl -a /dev/sdh | grep -E "Vendor|Product|Capacity|minutes"
    Vendor:               SEAGATE
    Product:              ST2000NM0021
    User Capacity:        2,000,398,934,016 bytes [2.00 TB]
    Accumulated power on time, hours:minutes 23477:56

    Typically I’m a Debian/Ubuntu guy. It’s the easiest multi-tool for my needs.

    I usually use OpenMediaVault for my simple NAS needs.

    Proxmox and XCP-ng for hypervisors. I was involved in the initial development of OpenStack, and have much love for classic Xen itself (screw Citrix and their mistreatment of XenServer).

    My Docker hosts run either Dockge or the compose plugin under OMV, leaning more toward Dockge lately for simplicity and eye candy.

    Overall, I’ve had my share of disk failures, usually from being sloppy. I only trust software RAID, as I have a better shot at recovery if I’m stupid enough to store something critical on less than N+2.
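
    For reference, N+2 in ZFS terms is raidz2; a minimal sketch with placeholder device names (use your real /dev/disk/by-id paths):

      # Create a pool that survives any two simultaneous drive failures
      zpool create tank raidz2 \
        /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
        /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
        /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6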

    I usually buy only previous-generation drives, and even then only when the price absolutely craters. The former is due to being bitten by new models crapping out early, and the latter due to being too poor to support my bad habits.

    Nearly all of my SATA disks came from externals, but that’s become tenuous lately… SMR disks are getting stuck into them more and more, and manufacturers are getting sneakier about hiding shit design.

    Used SAS drives from a place with a solid warranty seem to be the most reliable. About half my fleet was bought used, and I’ve only lost about 1/4 of those in under 5 years of active run time.




  • I personally have dedicated machines per task.

    8x-SSD machine: runs the Arr stack services; temporary download and work destination.

    4-5x misc 16-bay boxes: raw storage, NFS-shared, ZFS underlying drive config. What’s on them changes on a whim, but usually it’s 1x for movies, 2x for TV, etc. Categories can be spread across multiple places.

    2-3x 8-bay boxes: critical storage. Different drive geometry config, higher resilience. These are the hypervisors; I run a mix of Xen and Proxmox depending on need.

    All get 10 Gb interconnects, with critical stuff (nothing Arr, for sure) like personal vids and photos pushed to small encrypted storage like Backblaze.
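
    One way to do that encrypted push is rclone with a crypt remote layered over a Backblaze B2 bucket; the remote and path names here are placeholders:

      # "secret-b2" is an rclone crypt remote wrapping a B2 bucket (set up via rclone config)
      rclone sync /tank/photos secret-b2:photos --progress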

    The NFS-shared stores, once you get everything mapped, allow some smooth automation to migrate things around for maintenance and such.
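
    A minimal sketch of the ZFS side of that sharing, with placeholder pool, dataset, and subnet:

      # Export a dataset over NFS straight from ZFS, restricted to the storage subnet
      zfs set sharenfs='rw=@10.0.0.0/24' tank/movies

      # On another box, mount it for a migration or maintenance shuffle
      mount -t nfs nas1:/tank/movies /mnt/movies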

    Mostly it’s all 10-year-old or older gear. 10 Gb fiber cards can be had off eBay for a few bucks; just watch out for compatibility and the cost of transceivers.

    8-port SAS controllers can be had the same way, new off eBay from a few vendors; just explicitly look for “IT mode” so you don’t get a RAID controller by accident.

    SuperMicro makes quality gear for this… Used units can be affordable, and I’ve had excellent luck. Most have a great IPMI controller for simple diagnostic needs too. Some of the best SAS backplanes are made by them.

    Check Backblaze’s disk stats on their blog for drive suggestions!

    Heat becomes a huge factor, and the drives are particularly sensitive to it… Running hot shortens lifespan. Plan accordingly.

    It’s going to be noisy.

    Filter your air in the room.

    The rsync command is a good friend in a pinch for data evacuation.
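
    For example, a resumable evacuation push, with the hostname and paths as placeholders:

      # --partial keeps interrupted transfers resumable; rerun the same command to continue
      rsync -aH --partial --progress /tank/critical/ rescue-box:/backup/critical/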

    Your servers are cattle, not pets… If one is ill, sometimes it’s best to put it down (wipe and reload). If you suspect hardware, get it out of the mix quickly; test and/or replace before risking your data again.

    You are always closer to dataloss than you realize. Be paranoid.

    Don’t trust SMART’s overall verdict. Learn how to read the full report. A Current_Pending_Sector count above 0 is always failure… Remove that disk!
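
    A quick check of that attribute on ATA/SATA drives, assuming smartmontools is installed (the device name is a placeholder):

      # Print the raw Current_Pending_Sector count; anything above 0 means pull the disk
      smartctl -A /dev/sda | awk '/Current_Pending_Sector/ {print $10}'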

    Keep 2 thumb drives with your installer handy.

    Keep a repo somewhere with the basics of your network configs… ideally sorted by machine.

    Leave yourself a backdoor network… Most machines will have a 1 Gb port, and it might be handy when you least expect it. Setting up a LAGG with those 1 Gb ports as fallback for the higher-speed fiber can save headaches later too…
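
    A minimal active-backup bond on Linux with iproute2, using placeholder interface names (make it persistent via netplan or /etc/network/interfaces):

      # Create a bond that prefers the 10 Gb fiber link and falls back to 1 Gb copper
      ip link add bond0 type bond mode active-backup miimon 100
      ip link set enp1s0 down && ip link set enp1s0 master bond0   # 10 Gb fiber
      ip link set enp2s0 down && ip link set enp2s0 master bond0   # 1 Gb copper fallback
      echo enp1s0 > /sys/class/net/bond0/bonding/primary           # prefer the fiber link
      ip link set bond0 up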