• 0 Posts
  • 26 Comments
Joined 1 year ago
Cake day: July 15th, 2023



  • Like, it feels like this should be the kind of money to put a real dent in the problem… but I worry that the corruption of local governments and their associated contractors will soak up a lot of it on tangential things (e.g. the lead pipe crosses under a really old road at one point; guess we’ll need to tear up the road for 10 miles in each direction of the crossing and then repave the whole thing, just to be sure).

    Edit: modifying example for clarity.



  • Au contraire, mon ami. I think the community has mistaken what I mean (probably my fault; I didn’t think my original comment through very thoroughly and accept responsibility if I communicated it poorly).

    I mean to imply that Biden has made good choices in his appointments, but that his ability to speak and his general lack of charisma are the reasons he’s not trumping Trump (pun intended) in the polling for the upcoming election.

    I would define Trump as a strong, but bad, leader due to his charisma and his ability to take ownership of his people’s actions (even if he takes “liberties” in defining who “his people” are). In my workplace, I would want someone who speaks highly of the actions of me and my team.

    I don’t see that from Biden.

    As such, I would not describe Biden as a strong leader, but with the caveat that “good” and “strong” exist on independent axes of the “leadership chart”.


  • I would argue that what you’ve described is a good leader.

    To me a strong leader is someone who gets out in front of their team and acts as a strong face for them: someone who makes sure the team gets all the accolades and recognition for their good work, who keenly runs damage control for their mistakes, and who talks up their team at every opportunity.

    I feel Biden is failing as that strong leader.

    Unless you’re chronically online, you probably aren’t aware of the recent actions of the NLRB, nor of some of the other wins the people Biden appointed have racked up over the term of his presidency. He’s not out there blasting some of the absolute W’s his team has gotten, and I think that’s showing in the lackluster polling Biden is getting atm.

    The implication of what I’ve said that I want to be clear on: a strong leader isn’t necessarily a good leader, nor is a good leader necessarily a strong leader.

    The downvotes I’m getting say the wider community disagrees with this assessment, and in my mind that is what it is. I feel that not recognizing this distinction makes one more inclined to overlook how their voting peers can be swayed toward strong but bad leaders (e.g. Trump), and will thus make said person less able to influence their voting peers to change their vote.








  • Liquor Bottle by Herbal T. It has a nice faux-upbeat rhythm with jazzy kinda beats, but the lyrics are dark. Definitely helps me keep a sane face on the dark days:

    And that’s why / I keep a

    A liquor bottle in the freezer ♪

    In case I gotta take it out ♫

    Mix me a drink

    To help me

    Forget all the things

    In my life that I worry about ♪ ♫


  • Right.

    I don’t mean to say that the mechanism by which human brains learn and the mechanism by which AI is trained are 1:1 directly comparable.

    I do mean to say that the process looks pretty similar.

    My knee-jerk reaction is to analogize it as comparing a fish swimming to a bird flying. Sure, there are some important distinctions (e.g. birds need to generate lift while fish can rely on buoyancy), but in general the two do look pretty similar (i.e. they both take a fluid medium and push against it to generate thrust).

    And so with that, it feels fair to say that learning, the storage and retrieval of memories/experiences, and the way that stored information shapes our subconscious (and probably conscious too) reactions to the world around us seem largely comparable to the processes that underlie the training of “AI” and LLMs.


  • That’s not what I intended to communicate.

    I feel the Turing machine portion is not particularly relevant to the larger point. Not to belabor the point, but to be as clear as I can be: I don’t think nor intend to communicate that humans operate in the same way as a computer; I don’t mean to say that we have a CPU that handles instructions in a (more or less) one at a time fashion with specific arguments that determine flow of data as a computer would do with Assembly Instructions. I agree that anyone arguing human brains work like that are missing a lot in both neuroscience and computer science.

    The part I mean to focus on is the models of how AIs learn, specifically in neural networks. There might be some merit in likening a cell to a transistor/switch/logic gate for some analogies, but for the purposes of talking about AI, I think comparing a brain cell to a node in a neural network is most useful.

    The individual nodes in a neural network will each have minimal impact on converting input to output, yet each one does influence the processing from one to the other. And with the way we train AI, how each node tweaks the result will depend solely on the past input that has been given to it (see the rough sketch below).

    In the same way, when met with a situation, our brains will process information in a comparable way: that is, any given input will be processed by a practically uncountable number of neurons, each influencing our reactions (emotional, physical, chemical, etc.) in minuscule ways based on how our past experiences have “treated” those individual neurons.

    In that way, I would argue that the processes by which AI are trained and operated are comparable to that of the human mind, though they do seem to lack complexity.
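    A purely illustrative toy sketch of that node idea (mine, made up for the analogy; the function names, numbers, and training rule aren’t from any real AI system): a single “node” whose tiny contribution to the output ends up shaped entirely by the past inputs it was trained on.

```python
# Hypothetical toy "node": its weights start random and end up shaped
# solely by the past inputs/targets it has seen, loosely mirroring the
# point above about individual nodes/neurons.
import random


def train_node(weights, inputs, target, lr=0.1):
    """Nudge the node's weights a tiny amount toward producing `target`."""
    output = sum(w * x for w, x in zip(weights, inputs))
    error = target - output
    return [w + lr * error * x for w, x in zip(weights, inputs)]


random.seed(0)
weights = [random.uniform(-0.1, 0.1) for _ in range(3)]
for _ in range(200):  # repeated "experience" with the same input
    weights = train_node(weights, [1.0, 0.5, -0.2], target=0.7)

# After training, this one node's output for that input is close to 0.7.
print(sum(w * x for w, x in zip(weights, [1.0, 0.5, -0.2])))
```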

    Ninjaedit: I should proofread my post before submitting it.


  • Yes? I think that depends on your specific definition and requirements of a Turing machine, but I think it’s fair to compare the amalgamation of cells that is me to the “AI” LLM programs of today.

    While I do think that the complexity of input, output, and “memory” of LLM AIs is limited in current iterations (and thus makes them feel like a far cry from “human” intelligence), I do think the underlying process is fundamentally comparable.

    The things that make me “intelligent” are just a robust set of memories, lessons, and habits that allow me to assimilate new information and experiences in a way that makes sense to (most of) the people around me. (This is abstracting away that this process is largely governed by chemical reactions, but considering consciousness appears to be just a particularly complicated chemistry problem reinforces the point I’m trying to make, I think).


  • and exercise caution when you’re unsure

    I don’t think that fully encapsulates a counterpoint, but I think it has the beginnings of a solid counterpoint to the argument I’ve laid out above (again, it’s not one I actually devised, just one that really put me on my heels).

    The ability to recognize when it’s out of its depth does not appear to be something modern “AI” can handle.

    As I chew on it, I can’t help but wonder what it would take to have AI recognize that. It doesn’t feel like it should be difficult to have a series of nodes along the information-processing matrix to track “confidence levels”. Though, I suppose that’s kind of what is happening when the creators of these projects try to keep them from processing controversial topics. It’s my understanding those instances act as something of a short circuit (if you will), where, when confidence “that I’m allowed to talk about this” drops below a certain level, the AI will spit out a canned response instead of actually attempting to process the input against the model.
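    A rough, purely hypothetical sketch of that short-circuit idea (the threshold, the confidence score, and the canned response are all invented for illustration; this isn’t how any actual product is known to work):

```python
# Hypothetical guard around a model: if the made-up "allowed to talk about
# this" confidence score is too low, short-circuit to a canned reply.
CANNED_RESPONSE = "Sorry, I can't get into that topic."


def respond(prompt, model, confidence_threshold=0.5):
    # Assumed interface: the model returns an answer plus a confidence score.
    answer, allowed_confidence = model(prompt)
    if allowed_confidence < confidence_threshold:
        return CANNED_RESPONSE  # short circuit: canned reply, not model output
    return answer


def toy_model(prompt):
    """Stand-in 'model' for demonstration only."""
    return (f"Echo: {prompt}", 0.2 if "controversial" in prompt else 0.9)


print(respond("tell me about birds", toy_model))       # normal model output
print(respond("something controversial", toy_model))   # canned response
```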

    The above is intended as more of a brain dump than a coherent argument. You’ve given me something to chew on, and for that I thank you!


  • I have to say no, I can’t.

    The best decision I could make is a guess based on the logic I’ve determined from my own experiences that I would then compare and contrast to the current input.

    I will say that “current input” for humans seems to be broader than what is achievable for AI, and the underlying mechanism that lets us assemble our training set (read as: past experiences) into useful and usable models appears to be more robust than current tech. But to the best of my ability to explain it, this appears to be a comparable operation to what is happening with the current iterations of LLM/AI.

    Ninjaedit: spelling



  • Point of further support: Hawaiians have a weird (to us haoles) love of Las Vegas, going as far as to call it “the Ninth Island”. I mean, if you live on a tropical paradise, where are you supposed to go for vacation?

    And Elvis is (or at least has a rep as being) super popular and iconic in Vegas. I could definitely see some of that influence flowing back from the Ninth Island to Hawaii.