• 0 Posts
  • 6 Comments
Joined 1 year ago
Cake day: June 12th, 2023


  • Bro he’s saying that you’re supposed to realize how fucked up it is (and ideally be revolted) that corporations - who don’t give a shit about you or anyone else - team up to prevent bright young adults from having a career and being able to afford to live, as payback for exposing their inhumanity/making them look foolish.

    Instead you’re over here like “yeah I lick corporate boot and will gladly accept being stepped on if I get to keep my career.” This girl is a hero for standing up to the likes of Cloudflare, and we should all aspire to have her courage.



  • Depends on your hardware and how far you’re willing to go. For serious development I think you need at least 12-16 GB of VRAM, but there are still some things you can do with ~8. If you just have a CPU, you can still test some models, but generation will be slow.
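
    For example, here’s a minimal sketch of running a quantized model on CPU with llama-cpp-python. The model filename is a placeholder for whatever quantized file you download from Hugging Face, and n_gpu_layers is where you’d offload layers if you do have a bit of VRAM:

    ```python
    # Minimal sketch: run a quantized model on CPU (or partly on GPU) with
    # llama-cpp-python. The model path is a placeholder for whatever
    # quantized file you've downloaded from Hugging Face.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-7b-q4.bin",  # placeholder filename
        n_ctx=2048,       # context window
        n_gpu_layers=0,   # 0 = pure CPU; raise this if you have some VRAM to spare
    )

    out = llm("Q: What can I run on 8 GB of VRAM? A:", max_tokens=32, stop=["\n"])
    print(out["choices"][0]["text"])
    ```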

    I’d recommend trying out the oobabooga webui (text-generation-webui). It should work with quite a few models on Hugging Face. Hopefully I don’t get in trouble for recommending a subreddit, but r/localllama has a lot of other great resources and is a very active community. They’re doing exactly what you want.

    As far as your other questions…

    1. Accessing it on your phone is going to be tricky. You would most likely want to host it somewhere, but I’m not sure how easy that is for someone without a bit of software background. Maybe there is a good service for this; Hugging Face might offer something. (For a quick-and-dirty option, see the first sketch after this list.)

    2. Cross-thread referencing is an interesting idea. I think you would need to create a log store of all your conversations and then embed those into a vector store (like Milvus, Weaviate, or Qdrant). This is a little tricky since you have to decide how to chunk your conversations, but it is doable. The next step is somewhat open ended. You could always query your vector store with any questions you are already sending your model, and then pass any hits to the model along with your original question. Alternatively, you could tell the model to check for other conversations and trigger a function call to do this on command. (A rough sketch of the query-and-prepend approach is the second one after this list.) A good starting point might be this example, which makes references to a hardware manual in a Q&A-style chatbot.

    3. Using an LLM with Stable Diffusion: not especially sure what you are hoping to get out of this. Maybe to reduce boilerplate prompt writing? But yes, you can finetune a model to handle this and then have the model execute a function that calls Stable Diffusion and returns the results (the last sketch below shows a bare-bones version). I am pretty sure LangChain provides a framework for this. LangChain is almost certainly a tool you will want to become familiar with.
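
    For question 1, one quick-and-dirty option (a sketch, not a real hosting setup, and the generate function here is just a stand-in for your actual model call) is to wrap generation in a Gradio interface and use its temporary public share link, which you can open from your phone’s browser:

    ```python
    # Sketch: expose a locally running model through a temporary public URL
    # you can open from a phone. generate() is a placeholder for your real
    # model call (e.g. the llama-cpp example above).
    import gradio as gr

    def generate(prompt: str) -> str:
        return "model output for: " + prompt  # replace with your local model

    demo = gr.Interface(fn=generate, inputs="text", outputs="text")
    demo.launch(share=True)  # prints a public *.gradio.live link; your machine must stay on
    ```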
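
    For question 2, here’s a rough sketch of the store-and-retrieve loop, using sentence-transformers for embeddings and an in-memory Qdrant collection. The collection name, chunking (one point per past message), and embedding model are all placeholder choices, not the one right way to do it:

    ```python
    # Sketch: embed past conversation chunks into a vector store, then pull
    # the closest matches back in before asking the model a new question.
    from qdrant_client import QdrantClient
    from qdrant_client.models import Distance, PointStruct, VectorParams
    from sentence_transformers import SentenceTransformer

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings
    client = QdrantClient(":memory:")                   # swap for a real server later

    client.recreate_collection(
        collection_name="conversations",
        vectors_config=VectorParams(size=384, distance=Distance.COSINE),
    )

    # 1) Index: one point per conversation chunk (here, one per past message)
    past_messages = ["...chunked text from earlier threads..."]
    vectors = embedder.encode(past_messages)
    client.upsert(
        collection_name="conversations",
        points=[
            PointStruct(id=i, vector=vec.tolist(), payload={"text": msg})
            for i, (vec, msg) in enumerate(zip(vectors, past_messages))
        ],
    )

    # 2) Retrieve: embed the new question and pull the top matches
    question = "What did we decide about X last week?"
    hits = client.search(
        collection_name="conversations",
        query_vector=embedder.encode(question).tolist(),
        limit=3,
    )
    context = "\n".join(hit.payload["text"] for hit in hits)

    # 3) Ask: prepend the retrieved context to the prompt you send your model
    prompt = f"Context from earlier conversations:\n{context}\n\nQuestion: {question}"
    ```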
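
    And for question 3, a bare-bones version of “the model writes the prompt, a function runs Stable Diffusion” using the diffusers library. expand_prompt() is a stand-in for whatever local LLM you end up running, and in practice you’d probably wire the function call up through LangChain’s tool/agent machinery rather than calling it directly:

    ```python
    # Sketch: have a language model expand a short idea into a detailed image
    # prompt, then hand that prompt to Stable Diffusion via diffusers.
    import torch
    from diffusers import StableDiffusionPipeline

    def expand_prompt(idea: str) -> str:
        # placeholder: call your local LLM here to turn a rough idea into a
        # detailed, boilerplate-heavy Stable Diffusion prompt
        return f"{idea}, highly detailed, dramatic lighting, 4k"

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")  # drop .to("cuda") and float16 if you're CPU-only (it will be slow)

    prompt = expand_prompt("a lighthouse in a storm")
    image = pipe(prompt).images[0]
    image.save("lighthouse.png")
    ```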


  • There is absolutely alt-right content on Lemmy. That said, it is mostly drowned out. When I joined Lemmy 3 weeks ago I saw several alt-right communities with only 1 member. These people were trying to build echo chambers by themselves. Other Lemmy users would come in and start posting articles from regular media, which pretty much shut them down. I think it’s great that users here don’t want to allow it to be a safe haven for that kind of stuff, unlike the numerous other “free speech” websites that actively encourage it.


  • I think mods don’t want to lose the subreddits they’ve built up. It’s hard to onboard users into the fediverse, and migrating would mean those communities take a big hit. Perhaps it’s hard for the mods to onboard with Lemmy too? But I agree that everything that protest did was ultimately toothless. Now Reddit is just removing mods and installing their own pro-Reddit mods.

    It’s all kind of unfortunate. Reddit controls a massive, mature set of communities that are ultimately very convenient and easy to access. Lemmy, in comparison, is a little tricky to get started with. That said, I love the smaller communities with fewer trolls, no ads, and no bots. I plan to heavily reduce my Reddit usage and hopefully transition more and more to Lemmy.