Let's assume we develop the capacity to create virtual worlds that are nearly indistinguishable from the real world. We hook you up to a machine and you now find yourself in what is effectively a parallel reality where you get to be the king of your own universe (if you so desire). Nothing is off limits - everything you've ever dreamt of is possible. You can be the only person there, you can populate it with unconscious AI that appears conscious, or you can have other people visit your world and visit theirs as well, or spend time in "public worlds" with millions of other real people.
Would you try it, and do you think you'd prefer it over the real world? Do you see it as a negative from an individual perspective if a significant part of the population basically spends their entire lives there?
I question the ethics of ruling over AI subjects and the premise of “anything goes”.
Me too.
Who I am doesn’t really change based on the perceived humanity of other humanoids. I can’t even complete the Dark Brotherhood quests in Skyrim.
No way am I up for getting all Westworld on an AI.
This is where we start getting into the realm of philosophy as it relates to science-fiction-esque "true" Artificial Intelligence.
Taking the post at face value, these AI persons that populate your individual pocket dimension would be, for all intents and purposes, sentient artificial minds, or at least controlled by one central mind.
So does that AI deserve human rights? Do laws apply to them and to interactions had with them? If all they know is humanity, then are they also "human"? Is this theoretically infinitely intelligent supercomputer even capable of truly understanding humanity, emotions, and life in all of its facets?
I fully accept that I am getting too deep into this funny internet post, but there have been hundreds upon hundreds of books, thought experiments, and debates over this EXACT premise. Short answer is there is no answer. It's Schrödinger's morality lol
That's why I said AI that seems conscious
What's the difference between seeming conscious and being conscious?
We literally have no idea and have not figured out a good way to test this.
We do know. Consciousness is what you're experiencing now. Then again, general anesthesia is what non-consciousness feels like. Nothing. It by definition cannot be experienced.
What we don't know is how to measure it. There's no way to confirm that something is or isn't conscious.
That's true from my pov, but I can't really prove it. It's kinda like the biggest "Trust me bro" that we all assume is true.
Not digging into the ethics, just the ideas are fascinating.
Yeah I agree. The only thing one can be 100% sure of is that they're conscious themselves.
Consciousness means that you’re capable of having a subjective experience. It feels like something to be you.
If you only seem conscious then you can't experience anything. You might as well not exist at all.
I guess it depends on how realistic the fake consciousness is. Is it indistinguishable from real consciousness? Or would I be acutely aware that every relationship I create is fake? I mean, I guess if we're claiming it absolutely is not real, then I'll always know that, and it kinda taints the whole idea. It makes me wonder about the whole concept. Like, if we did find a way to determine consciousness somehow, could that knowledge interfere with building an emotional relationship with an indistinguishable but fake conscious AI?
It's not fake consciousness per se, but a character that acts as if it were conscious despite the fact that it's not. A so-called "philosophical zombie".
You could have real relationships with other real people in the simulation. AI could be your barista, driver, random people in the city, etc.
How do you test that? How do you know that the people around you are actually conscious and don't just seem to be? If you can't experience anything, how do you fake consciousness? And is this fake consciousness really any less real than ours? I think anything that resembles consciousness well enough to fool people could be argued to be real, even if it's different from ours.
I don't think it matters in this case. I decided that they are not conscious and only seem to be because I didn't want this thread to turn into a debate about whether it's immoral to abuse AI systems or not.
I think it matters a great deal! I would like to believe that not only would I not use such a system, I would actively fight to have it made illegal.
Why? That's like making it illegal to kick your Roomba.
No. I'm very certain that my Roomba is not conscious. But if we can't tell whether or not these people are conscious, then I don't think it's right to have this power over them. A better parallel than a Roomba would be an animal.