By DMing me you consent to your messages being shared with whomever I wish, whenever I wish, unless you specify otherwise

  • 1 Post
  • 38 Comments
Joined 2 years ago
Cake day: June 26th, 2023

  • I don’t believe so. Maybe someone’s written a script on GitHub; I haven’t looked.

    A thing I like about LazyLibrarian is that it just keeps rerolling until success. You probably miss good files just because LL couldn’t parse the folder structure or something, but it’s set and forget.

    Perhaps this could be modified to work, e.g. with the wait time set to zero and the minimum file size set to zero.

    Not that I use LL, I just think it’s neat… from a pure onlooker’s POV.





  • +1

    I’m running a media/backup server for 4 households on a single N100 mini PC and a couple of USB drives. It’s a “good enough”, low-cost, high wife-acceptance-factor entry point into self-hosting. It’ll happily age into a firewall if I want to build a better box later on.

    It’s revealing what I do and don’t need vs what I want. It’s teaching me what people use, what they don’t, and where I might want to go in the future.

    If I could go again I’d probably get an N100 2-bay Ugreen thing. Then it’d age into a local backup and I wouldn’t have to deal with USB drives.


  • I must have been having more basic problems than you. I found LLMs present the most common solution, and generally the most common way of setting something up is the “right way”, at least for a beginner. Then I’d quiz it on what Docker Compose environments do, what “ports: ####:####” meant, how I could route one container through another. All very basic stuff. Challenge: ask GPT

    what does "ports: - ####:####" mean in a docker compose?

    Then tell me it doesn’t spit out something a hobbyist could understand, immediately start applying, and that is generally correct. Beginners: still verify what GPT spits out.
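
    For reference, here’s roughly what that ports mapping looks like in a compose file (the service name, image, and port numbers are placeholders of my own; the #### in the comment above stays whatever you need it to be):

```yaml
services:
  jellyfin:                  # placeholder service name and image
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"          # "host:container" short syntax: traffic hitting
                             # the host on 8096 is forwarded to port 8096
                             # inside the container
```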

    By the time I wanted to do non-standard stuff I was better equipped with the fundamentals of hobbyist deployment and how to coax an LLM into doing what I needed. It won’t write an Nginx config for you, or an ACL file, but with the documentation and an LLM you can teach yourself to write one.

    It goes without saying that I’d take the output of the LLM to Google for verification, then back to the LLM for a hobbyist’s explanation, back to Google for verification… Also, all details are placeholders: don’t give it your email, API keys, domains, nothing. Learning to scrub your input there so it becomes a habit everywhere is a bonus too.

    Properly made software has great documentation and logs, if you know how to access those logs and read the documentation (both skills in themselves)… Not to mention that not all software is “properly made”; some of it is bare bones and Just Works™. And work it does. That’s absolutely not a criticism of FOSS projects: I love your stuff, keep making it, and I’ll keep finding ways to teach myself to use it.




  • Big words. I hope, though don’t trust, they can live up to them. But if Tailscale goes, I’m just plain fucked. That’s certainly an indicator they’re worth some money to me, but there’s many a FOSS project in line before I get to paying a VC-backed one.

    As an aside, an interesting service would be a fund-allocation type thing. You donate £x, tick which services you use, and the funds get divvied up by what you use. Only able to donate £10 but use a lot of services? Each service gets very little: too little to be worth donating as an individual, so little the individual often doesn’t bother. But on aggregate (with dozens or hundreds of users) it would add up to a worthwhile donation. I thought of round-robining my donations: Pi-hole gets £10 this month, Jellyfin the next, Audiobookshelf the month after that… but yikes, the admin.

    Funds are donated when £x is accrued at the end of the month, and the service maintains itself by earning interest on the funds held through the month. Idealistic, ripe for abuse, and out of my league to write and administrate. I promise I’d publish all the finances to keep me honest, though.
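
    To make the aggregation arithmetic concrete, a toy sketch in shell (all numbers are made-up placeholders, and it assumes every donor ticks the same number of services):

```shell
donors=200          # hypothetical donor count
donation_p=1000     # each donor gives £10, counted in pence
services=20         # services each donor ticks

# Split one donation evenly across the ticked services.
per_donor_share_p=$(( donation_p / services ))              # 50p: not worth sending alone
# Pool every donor's share for a single service.
pooled_per_service_p=$(( donors * donation_p / services ))  # 10000p = £100: worth sending

echo "per-donor share: ${per_donor_share_p}p"
echo "pooled per service: ${pooled_per_service_p}p"
```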





  • Hardware-wise I’d go AIO. A mini PC and a pair of mirrored USB drives is my setup. I have an off-site backup running: another mini + USB. Finally, I have an inherited laptop as a redundant network box/local backup/Immich compute. I have 5 households on my network, and aside from Immich spiking in resources (hence the laptop), I have overhead to spare.

    An N100 mini (or N150, N200, whatever) is cheap enough and powerful enough for you to jump in and decide if you want to spend more later. They’re small, quiet, reasonable value for money, an easy wife-acceptance factor, and can age into a bunch of devices if you decide self-hosting isn’t for you. I’d make a retro console out of any spare mini.

    This way, when spending £x00s on a server, you’ll have some idea of what you actually need/want. The N100 can then age into a firewall/network box/local backup/etc. if/when you upgrade.

    All that said, an AIO storage-compute box is where I’m headed. I now know I need a dedicated graphics card for the Immich demand. I now know I want a graphics card for generative-AI hobby stuff. I know how much storage I need for everyone’s photos and favorite entertainment: the permanent stuff. I know how much storage I need for stuff being churned: the temporary stuff. I now know I don’t care about high availability/clusters. I now know… Finally, the ‘Wife’ has grown used to having a server in the house: it’s a thing I have, and do, which she benefits from. So a bigger, more expensive, and probably louder box is an easier sell.




  • The update went fine on a bare-metal install. Customising the webUI port is a little easier now: instead of editing lighttpd.conf, I think you can do it in the UI.

    I struggled to find some settings; I looked for ages for the API token. Found it in All Settings: Expert, then scroll half a mile down to the webUI API section.

    I also struggled with adding CNAMEs in bulk. I thought you could do that in the old UI; you might be able to in the new UI. I just ‘one by one’d them.

    The Docker update went flawlessly.

    I have an LXC still to go, which is a task for another day, unless TTeck’s updater beats me to it.



  • My main storage is a mirrored pair of HDDs. Versioning is handled here.

    It Syncthings an “important” folder to a local backup with only 1 HDD.

    The local backup Syncthings to my parents’ house, onto a single SSD.

    My setup could be better: if I put the versioning on my local backup it’d free space on my main storage, and I could migrate to dedicated backup software, Borg maybe, over Syncthing. But Syncthing is what I knew and understood when I was slapdashing this together. It’s a problem for future me.

    I’ve been seriously considering an EliteDesk G4 or a Dell/Lenovo equivalent as a backup machine. Mirrored drives. Enough oomph to HA the things using the “important” files: Immich, Paperless, etc.


  • My big problem is remote stuff. None of my users have aftermarket routers that let them easily manipulate their DNS. One has an Android modem thing which is hot garbage. I’m using a combination of making their Pi their DHCP server, and one user is running on Avahi.

    Chrome, the people’s browser of choice, really, really hates HTTP, so I’m putting them on my garbage ######.xyz domain. I had plans to one day deal with HTTPS, just not this day. Locally I only use the domain for Vaultwarden, so the domain didn’t matter. But if people are going to be using it then I’ll have to get a more memorable one.

    System updates have been a faff. I’m ‘ssh’ing over Tailscale. When Tailscale updates it kicks me out, naturally. Which interrupts the session, naturally. Which stops the update, naturally. It also fucks up dpkg beyond what --configure -a can repair. I’ll learn to run updates in the background one day, or include Tailscale in unattended-upgrades. Honestly, I should put everything into unattended-upgrades.
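
    One way around the dropped-session problem is to run the upgrade as a transient systemd unit, detached from the SSH session (a sketch, assuming a Debian-based box with systemd; the unit name is my own):

```shell
# The upgrade runs as its own unit, so a tailscale restart mid-update
# no longer kills dpkg along with the SSH session.
sudo systemd-run --unit=detached-upgrade apt-get -y full-upgrade

# Reconnect later and watch progress:
journalctl -u detached-upgrade -f

# If an earlier interrupted run left dpkg half-configured:
sudo dpkg --configure -a
```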

    Locally everything works as intended, which is nice. Everything also works remotely for my fiancée and me, all as intended, which is also nice. My big project is coalescing what I’ve got into something rational: I’m on the “make it good” part of the “make it work > make it good” cycle.