• 8 Posts
  • 425 Comments
Joined 2 years ago
Cake day: March 19th, 2024

  • What happens if someone refuses to do any chores in a shared household? There are already plenty of situations where people do work for free because it’s in their own interest. In groups like households, people take turns taking out the bins and cleaning. In a communist society, people will take turns doing the necessary work. If someone refuses, maybe something is wrong in their life and they need help. At the end of the day, there’s no economic coercion in a classless society. If one in a million people doesn’t work, whether for an understandable reason (disability, depression, personal issues, etc.) or none at all, then let them. What else are you going to do? Work or starve? Incarceration? The point of the universal emancipation that communism brings is to do away with those evils.


  • communism@lemmy.ml to Asklemmy@lemmy.ml · Why is all of Lemmy politics? · 15 days ago

    Depends on what you define as “politics”, but setting aside “everything is politics”, my Lemmy feed is mostly tech stuff. Just subscribe to communities that fit your interests. That said, many interests will be under-represented on Lemmy, as I think the user base skews technical, political, or both.



  • A Notesnook notebook with whatever info I need to administer the system: e.g. what the different ports are used for and why the firewall policies are what they are, sometimes write-ups after a troubleshooting session, etc.

    The Notesnook instance is self-hosted too, but if the server goes down, the notebook will still be available locally.
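
    For illustration, a minimal sketch of the kind of entry such a notebook might hold; every port, subnet, and service here is hypothetical, not taken from my actual setup:

    ```
    ## Firewall / ports (why the rules are what they are)
    - tcp/22:   SSH, allowed only from the LAN (10.0.0.0/24); no reason to face the internet
    - tcp/443:  reverse proxy, fronts the Notesnook sync server
    - tcp/8080: internal dashboard; blocked at the firewall, listens on localhost only
    ```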


  • The relevance for me personally is whether they can be useful for programming and whether they’re accessible to run locally. I’m not interested in feeding my data to a datacentre, and my AMD GPU doesn’t support ROCm, so LLMs run slow as fuck for me. So, generally, I avoid them.

    LLMs consistently produce lower-quality, less correct, and less secure code than humans, though they do seem to be getting better. I might be open to using them to generate unit tests if only they ran faster on my PC. I tried deepseek, llama3.1, and codellama; each takes an hour or more to answer a programming question running on my CPU alone, as my GPU doesn’t support ROCm. So it’s really not feasible for anything.

    Depending on what you count as AI, I think some of the long-established predictive ML, like autosuggestions that learn your input patterns, is fine and helpful. And maybe if I get a supported GPU I won’t mind using local LLMs for some things. But generally I’m not dying to use them; I can do things myself.
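
    As a minimal sketch of what that CPU-only setup looks like, assuming a llama.cpp-based runtime via llama-cpp-python (the runtime and the model file are my assumptions, not named above):

    ```python
    import time
    from llama_cpp import Llama  # llama-cpp-python; runtime choice is an assumption

    llm = Llama(
        model_path="models/codellama-7b.Q4_K_M.gguf",  # hypothetical local GGUF file
        n_ctx=4096,      # context window
        n_gpu_layers=0,  # 0 = pure CPU inference, the slow path described above
    )

    start = time.time()
    out = llm("Write a unit test for a function that reverses a string.", max_tokens=256)
    print(out["choices"][0]["text"])
    print(f"Elapsed: {time.time() - start:.0f}s")  # CPU-only inference can take a very long time
    ```

    With a ROCm- or CUDA-supported GPU, raising n_gpu_layers is what moves the work off the CPU.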


  • My favourite unusual one is Sichuan pepper powder on garlic bread. It originated when I was rummaging through my spices for things to add to garlic bread and really liked the result. I now add it to garlic bread, pizzas, that sort of thing.

    Cumin is also a great all-purpose spice I put on many things: cumin+turmeric for curry-flavoured things, but also cumin+salt+pepper+rosemary+garlic granules for anything roasted.


  • That’s concerning. If it were “I generated a function with an LLM and reviewed it myself”, I’d be much less concerned, but 14k added lines and 10k removed lines is crazy. We already know that LLMs don’t generate code whose quality is up to scratch…

    I won’t use PostgreSQL with ntfy, and I’ll keep an eye on whether they continue down this path for other parts of ntfy. If so, I’ll have to switch to another UP provider.
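
    For what it’s worth, sticking with ntfy’s existing SQLite message cache should just mean not opting into the new backend; a minimal server.yml sketch (the URL and path are hypothetical, and only long-standing options are shown):

    ```yaml
    # /etc/ntfy/server.yml (sketch; values are hypothetical)
    base-url: "https://ntfy.example.com"
    cache-file: "/var/cache/ntfy/cache.db"  # SQLite cache, the pre-PostgreSQL default
    ```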