Thanks for your response. Much appreciated. Do I understand it correctly that I’ll be able to add more drives later in JBOD mode, but I’ll simply have to power it off before adding or swapping drives?
I’m in the same situation as you, more or less… I have three new 22TB drives that need an enclosure, preferably for JBOD (no hardware RAID needed) but I can’t figure out which ones are actually good products… I don’t mind using a random-brand product if it’s actually solid.
I find it very difficult to figure out which ones will support my 22TB drives. And for some of them, it seems, it’s impossible to add new drives to empty slots later (because of hardware RAID, I guess?), which has made me hesitant to buy one with more slots than I have drives, in case the extra slots can’t be utilized later on anyway…
I was looking at the QNAP TR-004 which was mentioned by someone else somewhere on Lemmy some months ago, but IIRC it would be impossible to use the fourth slot later if the drive isn’t included in the hardware RAID configuration…
EDIT: I have also been looking into so-called “backplanes” as an alternative, since they seem to do the job and are cheaper, but I’m unsure if I’ll need a PC chassis/case/tower for that to actually work?
If you find something good (products or relevant info), feel free to share it with me.
Kodi/LibreELEC + JellyCon add-on works great!
When looking up my static IP, the location I get is my ISP’s, not my address. Do you happen to live near some central infrastructure of your ISP? (If it seems otherwise, I’m not trying to debunk what you said - I’m just asking curious questions!)
Got any recommendations for a good DAS with a fair price, either new or used?
I’m looking for something to place three 22 TB drives in, eventually to be expanded in the future, so I’m looking for a DAS with at least three 3.5" bays.
It refers to some old forum signature iirc. I saw an explanation of it in some other Lemmy thread some time ago, though I don’t exactly remember where or when.
Yes, I think that’s the way to go. If the paperless-ngx team doesn’t believe in following that path, someone else will probably fork the project and do it, or build something with similar capabilities “from scratch”. Then, it’ll be interesting to see what comes of open-source models with capabilities similar to GPT-4Vision… 🤯
a “tl,dr” bot would probably not even need high end hardware, because it does not matter if it takes ten minutes for a summary.
True, that’s a good take. Tl;dr for the masses! Do you think an internal or external tl;dr bot would be embraced by the Paperless community?
It could either process the (entire or selected) collection “behind the scenes”, adding the new tl;dr entries to the files based on some general settings/prompt tuned for the desired output – or it could work on demand, per document, using either the general settings or custom ones, though that could become a flow-breaking bottleneck where the hardware isn’t powerful enough to keep up with you. That only seems like a temporary problem to me, though, since hardware, LLMs etc. will keep advancing and getting more powerful/efficient/cheap/noice.
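The “behind the scenes” batch variant could be sketched roughly like this – a minimal, hypothetical example, assuming a paperless-ngx instance and a local OpenAI-compatible LLM endpoint (all URLs, the token and the model name are placeholders, not real values):

```python
# Hypothetical tl;dr bot sketch. Assumes a paperless-ngx instance and a local
# OpenAI-compatible LLM endpoint; URLs, token and model name are placeholders.
import json
import urllib.request

PAPERLESS_URL = "http://localhost:8000"                 # assumed paperless-ngx host
LLM_URL = "http://localhost:11434/v1/chat/completions"  # assumed local LLM endpoint

def build_tldr_prompt(title: str, content: str, max_chars: int = 4000) -> str:
    """Trim long documents so a small local model's context window isn't exceeded."""
    return (f"Write a three-sentence tl;dr of the document '{title}':\n\n"
            f"{content[:max_chars]}")

def fetch_documents(token: str) -> list:
    """paperless-ngx exposes documents (with their OCR'd text) via its REST API."""
    req = urllib.request.Request(
        f"{PAPERLESS_URL}/api/documents/",
        headers={"Authorization": f"Token {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]

def summarize(title: str, content: str) -> str:
    """Send the prompt to the local LLM; 'local-model' is a placeholder name."""
    body = json.dumps({
        "model": "local-model",
        "messages": [{"role": "user", "content": build_tldr_prompt(title, content)}],
    }).encode()
    req = urllib.request.Request(
        LLM_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Usage (commented out, since it needs the live services):
# for doc in fetch_documents("your-api-token"):
#     print(doc["title"], "->", summarize(doc["title"], doc.get("content") or ""))
```

Running it overnight would sidestep the “ten minutes per summary” problem entirely.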
a chat bot do not belong into paperless
Right – but, conversely, Paperless definitely does belong in some chatbots!
I’m not interest in sending my documents to open AI.
You wouldn’t have to. There are plenty of well-performing open-source models that speak an OpenAI-compatible API, so you can substitute OpenAI’s models simply by pointing at a different URL with a different API key.
You can run these models in the cloud, either self-hosted or “as a service”.
Or you can run them locally on high-end consumer-grade hardware, some even on smartphones, and the models are only getting smaller and more performant with very frequent advancements in training, tuning and prompting. Some of these open-source models already claim to outperform GPT-4 in some regards, so this solution seems viable too.
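To make the substitution concrete, here’s a minimal sketch: the only things that change between a hosted and a self-hosted backend are the base URL, the key and the model name (all the values below are placeholders):

```python
# Sketch of calling any OpenAI-compatible chat endpoint; the URLs, keys and
# model names below are placeholders, not a specific provider's values.
import json

def chat_request(base_url: str, api_key: str, model: str, prompt: str) -> dict:
    """Build the HTTP request; its shape is identical for every compatible backend."""
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# Same code path, two different backends:
hosted = chat_request("https://api.openai.com/v1", "sk-placeholder", "gpt-4", "Hi")
local = chat_request("http://localhost:11434/v1", "unused", "some-local-model", "Hi")
```

That’s the whole trick: swap the URL and key, keep the rest of your code unchanged.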
Hell, you can even build and automate your own specialized agents in collaborating “crews” using frameworks, and so much more…
Though, I’m unsure if the LLM functionality should be integrated into Paperless, or rather implemented by calling the Paperless API from the LLM agent. I see how both ways could fit some specific uses.
(with scan to SMB)
So the scanner saves the file in the SMB share(s), then Paperless(-ngx) will automatically process it?
Maybe Paperless, with an LLM API integration to chat with the documents – able to refer to and verify against Paperless’ concrete results – would be quite useful.
Edit: Oh, this is already being discussed on their GitHub. Of course it is!
Interesting.
For now, my old 3rd-party reddit apps on Android still work with the simple workaround of being a mod (of my own hidden subreddit): some mods needed 3rd-party apps to do their work, so reddit apparently kept access open for all mod accounts.
Are you saying, theoretically if I had 100s of TB (I don’t… yet!) on mounted drives (local or NFS shares), I could back it all up to Crashplan, and keep the retention as long as the files still exist on my device(s)? Sounds amazing, but what’s the cost of restoring the data? They’re not being very loud about that part on their website.
Interesting! Which dongle did you mod to connect it to the radio?
What did you do/use to turn it into a Bluetooth receiver/speaker?
When I cast from the Jellyfin app on my phone (or the web app) to either the Jellyfin app on my Android TV box or to Kodi (through the JellyCon/Jellyfin addon or DLNA), the content plays independently of my phone. If I disconnect from the device I’m casting to in the Jellyfin app, the content keeps playing – it’s not streamed through my phone – and I can reconnect to regain remote control. I’d guess it’s the same for Linux clients.

If not, you can use Kodi with the JellyCon addon (and not the Jellyfin addon, since that will sync the library to Kodi, which is unnecessary here). You will need a screen to set it up, but once that’s done (plus auto-launching Kodi at boot, if you wish), it will work headlessly if necessary as a client to cast to. Another reason to use Kodi is the very wide variety of formats it supports.
It seems it has a single widget, “status widget”.