There is, check out the Music Assistant add-on for Home Assistant.
The dev of this developed Caddy? Hmm… at least there’s talent behind it. I’m a little worried about creating that sort of record, but this guy seems earnest in wanting to liberate personal data.
The OS can’t even get to the point of loading newer CPU microcode without that outdated, embedded microcode. The reason it can persist is that there aren’t many good ways to see what that UEFI microcode actually is once it’s installed. On top of that, only the UEFI itself tells you that it has successfully updated; there is no more authoritative system to verify that against. So the virus could just lie, say it’s gone, and you would never know. Hence the need to treat it as the worst-case scenario: that it never leaves.
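As a rough illustration of that trust problem, here’s a minimal sketch (assuming Linux on x86, where `/proc/cpuinfo` exposes a `microcode` field) that prints the microcode revision the CPU reports. The point is that this is still just asking the platform what it claims is loaded; there’s nothing independent to check it against.

```python
# Minimal sketch, Linux/x86 only: print the microcode revision each CPU reports.
# This only shows what the platform *claims* is loaded -- which is exactly the
# trust problem above; there is no independent source to verify it against.
def reported_microcode_revisions(path="/proc/cpuinfo"):
    revisions = set()
    with open(path) as f:
        for line in f:
            if line.startswith("microcode"):
                revisions.add(line.split(":", 1)[1].strip())
    return revisions

if __name__ == "__main__":
    print("CPU-reported microcode revision(s):", reported_microcode_revisions())
```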
Hey, that’s really fair, thanks for being honest :)
Except that doesn’t at all explain the wider recall of 100 million units. Not every single one of those airbags was faulty. First of all, how could we even know? Testing an airbag is a potentially dangerous thing to do, let alone at the enormous scale that would require under-qualified people to run the tests. Secondly, it’s not a 100% failure rate. If it were, it would have been picked up far sooner than the time it took to sell 100 million units. And if it happened just as severely regardless of a unit’s age, it would have been picked up during crash-testing. What actually happened was an analysis of statistical averages that showed a far higher rate of failure than there should have been.
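To make that last step concrete, here’s a toy sketch of that kind of field-failure analysis (every number is made up for illustration, not Takata’s actual figures): you compare the failure rate observed in the field against the rate the design is supposed to have, and a big enough gap is what triggers a recall.

```python
# Toy field-failure comparison with made-up numbers (illustrative only).
expected_rate = 1e-6          # assumed design-spec failure rate
observed_failures = 300       # hypothetical failure reports from the field
units_in_field = 10_000_000   # hypothetical units sold

observed_rate = observed_failures / units_in_field
print(f"expected: {expected_rate:.1e}, observed: {observed_rate:.1e}, "
      f"~{observed_rate / expected_rate:.0f}x the spec'd rate")
```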
The similarities to me come from a comparison to Schrödinger’s cat. In the airbag example, you don’t know if the unit in front of you is going to fail until you “open the box” by crashing. With the AMD vulnerability, you don’t know if your motherboard has been infected by a virus/worm/etc. until a “crash” or other signs of suspicious behaviour.
In both cases, the solution to the vulnerability removes that uncertainty, allowing you to use the product to its original full extent.
Look at it this way: imagine if this vulnerability existed in the ECU/BCU of a self-driving-capable car. At any point someone could bury a piece of code so deeply you can’t ever be sure it’s gone. Would you want to drive that car?
Sorry, I reread it and I understand now that you were referencing the AMD chip in a comparison. I guess I still would compare it most to the Takata airbag situation. You’re right that nothing happens on its own, but once you’ve “crashed the car” then it kind of is a lot like an airbag not going off. It infects your computer on a hardware level, and any future OS running off that motherboard is potentially vulnerable in a way that’s impossible to tell.
“this window only breaks if you’ve already crashed the car”
No, it’s usually more like “this thing will break and cause a car crash” or “this thing will murder everyone in the vehicle if you crash”. And companies still will not fix it. Look at the Ford Pinto: executives very literally wrote off people’s deaths as a cost of doing business, when the cars would turn into fireballs during even low-speed rear-end collisions, potentially burning down the car that hit them too.
Edit: I mean, just look at the Takata airbag recall. 100 million airbags from 20 different carmakers recalled because they wouldn’t activate during a crash.
I always did? A friend pointed out to me once the “correct” pronunciation. I like this way more.
Ah okay, thank you heaps for clarifying :) That’s awesome that you’ve been able to limit the overhead like that, I’m excited to test it out!
That doesn’t necessarily seem to be the case:
Does this automatically use nvidia-patch in the container drivers to unlock as many NVENC streams as possible? I believe, from their documentation, that it’s possible to use the patch with docker, with an unpatched host.
Otherwise, is this something that could be implemented? I’m happy to submit a feature request if needed :)
Yeah, I think that’s the general idea. They are separate instances of Steam that could be signed into different accounts. So yeah, if you’re doing multiplayer of one game, each account would need to own it. That would be the exact same limitation at a LAN party anyway. This just lets you host said LAN party on a single beefy box, and use thin clients for each gamer, like an RPi4, a tablet or even an Apple/Android TV.
As far as I can tell, it’s creating container VMs that have Steam installed inside separately.
How do you imagine that geoblocking content works if IP addresses don’t expose where you live?
And you’d better get off the internet right now if your concern is exposing your IP, because it was never secret to begin with.
qaz could be using any of dozens of different methods to obfuscate their IP from the wider internet to write their comment, Tor or a VPN to name just a couple.
Same with email.
You’re forgetting that the card would still be receiving its 75W of power from the PCIe bus. This is what powers cards that don’t have extra power connectors.
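For a quick sense of the numbers, here’s a back-of-envelope sketch of a card’s maximum in-spec power budget (slot and connector limits are the well-known spec figures; the helper function itself is just for illustration):

```python
# Rough power-budget sketch: PCIe slot power plus any auxiliary connectors.
# Spec limits: x16 slot ~75 W, 6-pin ~75 W, 8-pin ~150 W.
PCIE_SLOT_W = 75
AUX_CONNECTOR_W = {"6-pin": 75, "8-pin": 150}

def max_board_power(aux_connectors=()):
    """Upper bound on what a card can draw while staying within spec."""
    return PCIE_SLOT_W + sum(AUX_CONNECTOR_W[c] for c in aux_connectors)

print(max_board_power())                    # slot-only card: 75 W
print(max_board_power(("8-pin", "6-pin")))  # 75 + 150 + 75 = 300 W
```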
In my experience, Sunshine/Moonlight are a little bit more performant. But what’s nicer about them is that they are far more configurable, at the cost of being less ready to go out of the box.
Edit: By this I mean you can do things like run bat/bash scripts on connect as well as disconnect. You can also launch straight into games themselves rather than needing to connect to Big Picture mode first.
Okay, so full disclosure, I haven’t used Netris at all yet, but I have used Sunshine/Moonlight extensively.
Moonlight is an app that’s compatible with the Nvidia Gamestream protocol. You can stream directly from GeForce Experience to Moonlight, but Nvidia have deprecated it. Thankfully, an open-source implementation of the Gamestream server exists called Sunshine, which is fully compatible with Moonlight (I don’t know how much of this you already know, but other people will read it too). However, due to limitations in Nvidia’s design, the Gamestream protocol is a 1:1 connection. You get the display out from your PC, and GeForce Experience/Sunshine handles launching the app. So if you want a single card to handle two different gamers at once, you have to split it up and create VMs, then install Sunshine in each one. These resource partitions are often also static.
Netris, on the other hand, is based on GeForce Now. Nvidia based GeForce Now on Gamestream as far as the connection between client device and server goes, but the software Nvidia runs on their servers is designed to dynamically scale hardware to accommodate multiple clients. It handles getting however many 720p, 1080p or 4K streams out of a specific card, and can often split them unevenly when needed. It also handles syncing of cloud saves and the creation and destruction of VMs. So to me it seems Netris is the full package needed for sticking a 3080 in a server and having 4-5 users all be able to utilise the one card to game concurrently.
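To illustrate the architectural difference, here’s a purely conceptual toy model (none of these names come from Sunshine or Netris): the Gamestream/Sunshine approach dedicates a whole card, or a static VM slice of it, to one seat, while a GeForce Now-style scheduler packs multiple streams of different sizes onto one card until its encode capacity is used up.

```python
# Toy model only -- hypothetical names, not any real Sunshine/Netris API.
from dataclasses import dataclass, field

# Rough per-stream encoder "cost" in arbitrary capacity units (illustrative).
STREAM_COST = {"720p": 1, "1080p": 2, "4k": 4}

@dataclass
class Gpu:
    capacity: int                 # total encode capacity of the card
    sessions: list = field(default_factory=list)

    def used(self):
        return sum(STREAM_COST[s] for s in self.sessions)

    def try_add(self, quality):
        """GeForce Now-style packing: accept a stream only if capacity remains."""
        if self.used() + STREAM_COST[quality] <= self.capacity:
            self.sessions.append(quality)
            return True
        return False

# Gamestream/Sunshine model: one seat per card (or per static VM partition).
dedicated = Gpu(capacity=8, sessions=["4k"])   # whole card, single gamer

# Netris/GeForce Now model: one card shared dynamically between seats.
shared = Gpu(capacity=8)
for want in ["1080p", "1080p", "720p", "4k", "1080p"]:
    print(want, "accepted" if shared.try_add(want) else "rejected (card full)")
print("shared card sessions:", shared.sessions)
```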
This will hopefully grow to become an excellent choice for smaller-time cloud providers to compete with Nvidia, and for self-hosting on a beefy CPU setup with SSD storage so it can handle multiple gamers at once. However, if you just want to stream a single PC for a single gamer (or even two seats using a VM running on your desktop), then Sunshine & Moonlight are going to be the better choice.
I think it entirely depends on your use case and hardware. I have a rack server; I need the extra power relatively frequently, as well as the 16x 2.5" bays and the 4 NICs. A rack server is a fairly power-efficient package to get all those features in. However, it means that I am limited to discrete graphics, as Xeons don’t have Intel QSV. There’s also no monitor connected and no 3D rendering happening, so the card is gonna idle at <5W and probably only use 20-30W while transcoding. Compared to a system that’s idling at ~250W, that’s nothing.
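Putting rough numbers on that last comparison (using the figures above, which are estimates for my setup, not measurements):

```python
# Back-of-envelope overhead of the discrete GPU relative to the server's idle draw.
server_idle_w = 250   # rough idle draw of the rack server
gpu_idle_w = 5        # rough dGPU idle draw (no display, no 3D)
gpu_transcode_w = 30  # rough dGPU draw while transcoding

print(f"idle overhead:        {gpu_idle_w / server_idle_w:.0%}")       # ~2%
print(f"transcoding overhead: {gpu_transcode_w / server_idle_w:.0%}")  # ~12%
```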
It does still have some issues, but it has been heavily worked on for 12-18 months at this point. It has taken huge strides, and if you’re on the beta channel you’ll see lots of work being done.