The features sounded good enough for me to click with intent to buy (as a firewall/router), but no SFP cage and no PCIe expansion slot mean I can’t use it with fiber. And with just one 10Gb port, the most it can pass through is 2.5Gb/s, since anything entering the 10Gb port has to leave through one of the 2.5Gb ports (and that assumes the rest of the board is up to the task).
Looks like it would be nice for a small home server.
You would need to run the LLM on the system that has the GPU (your main PC). The front-end (typically a WebUI) could run in a Docker container and make API calls to your LLM system. Unfortunately, that requires the model to stay loaded in VRAM on your main PC, severely reducing what you can do with that computer, GPU-wise.
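A minimal sketch of what those API calls look like, assuming the GPU machine runs an OpenAI-compatible server (e.g. llama.cpp's llama-server or Ollama) and the LAN address and port below are placeholders you'd swap for your own:

```python
import json
import urllib.request

# Assumption: the main PC's LAN address and the server's port.
# Replace with whatever your LLM server actually listens on.
LLM_HOST = "http://192.168.1.50:8080"

def build_chat_request(prompt, model="local-model"):
    """Build the JSON body for an OpenAI-style /v1/chat/completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt):
    """Send the prompt to the GPU box and return the model's reply text."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{LLM_HOST}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]
```

The front-end container only needs network access to that host; the model weights, and the VRAM they occupy, stay on the GPU machine the whole time.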