he/him

Alts (mostly for modding)

@sga013@lemmy.world

(Earlier also had @sga@lemmy.world for a year before I switched to @sga@lemmings.world, now trying piefed)

  • 39 Posts
  • 354 Comments
Joined 11 months ago
Cake day: March 14th, 2025




  • as someone who does some kind of science - titles are often made fancy and designed for absurdity. The decision to study something is usually more logical than picking a random animal to test. For example, some people in the group may already have been studying that specific frog line for some reason (maybe just for its gut): they may have observed that these frogs live long lives, decided to find out why, and come to the conclusion that it is this gut bacterium. Or maybe they already knew of this bacterium and found out where they could source more of it.

    but sometimes it is pure luck - you accidentally mess up an experiment, spill some unrelated gut juice from a frog in a separate experiment, and it just so happens to work, so now you study it closely.

    I have absolutely no idea what happened in this one, and I am not a biologist, so I do not know the usual way, but it is usually one of these.



  • sga@piefed.social to Linux Phones@lemmy.ml · Any Kiwix client?
    3 days ago

    You can install a browser extension (I think it is just called Kiwix) which can load offline zim files. The problem is that the UX is very bad (you need to load a zim file each time, or change it manually). On desktop, my answer was to manually unpack all zim files (using zim-tools), arrange them in a controlled directory structure, recompress them into a mountable file format, and separately maintain a list of all files inside. While using it, I have something hand-rolled to mount the archive, select a suitable file, and open it in a browser - yes, it is a lot of work, but I do kinda have an offline search engine now.
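    A minimal sketch of the "maintain a file list and pick one" step of a hand-rolled setup like this (the index format - one extracted page path per line - and the example paths are made up for illustration; the real setup mounts the recompressed archive first):

    ```python
    # Toy index search: given a list of extracted page paths, return the
    # ones matching a query. One path per line is a hypothetical index
    # format, not what zim-tools actually produces.

    def search(index_lines, query):
        """Return paths containing every query word (case-insensitive)."""
        words = query.lower().split()
        return [p for p in index_lines if all(w in p.lower() for w in words)]

    if __name__ == "__main__":
        index = [
            "wikipedia/A/Black_hole.html",
            "wikipedia/A/Black_bear.html",
            "wiktionary/A/frog.html",
        ]
        hits = search(index, "black hole")
        print(hits[0])  # wikipedia/A/Black_hole.html
    ```

    The selected path would then be opened in a browser against the mounted archive.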






  • I rarely use it, mostly to do sentiment/grammar analysis for some formal stuff/legalese. I kinda rarely use llms (1 or 2 times a month)(I just do not have a use case). As for how good: tiny models are not good in general, but that is because they do not have enough capacity to store knowledge, so my use case is often purely language processing. Though I have previously used one for a work demo to generate structured data from unstructured data. Basically, if you provide the info, they can perform well (so you could potentially build something to fetch web search results, feed them into the context, and use it - many such projects are available; basically something like perplexity, but open).
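    The "feed search results into context" pattern mentioned above can be sketched roughly like this (the function name and snippet format are made up for illustration; the assembled string would be sent to whatever local model you run):

    ```python
    # Sketch of grounding a small model with retrieved text: the snippets
    # carry the facts, so the model only has to do language processing.
    # build_prompt is a hypothetical helper, not from any library.

    def build_prompt(question, snippets):
        """Prepend numbered source snippets to the user's question."""
        context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
        return (
            "Answer using only the sources below.\n\n"
            f"{context}\n\n"
            f"Question: {question}\n"
        )

    if __name__ == "__main__":
        print(build_prompt("Who wrote Dune?", ["Dune is a novel by Frank Herbert."]))
    ```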



  • sga@piefed.social to Science Memes@mander.xyz · HD 137010 b
    6 days ago

    adding to this comment: the best way we currently know to extract this energy is using spinning black holes, with a theoretical efficiency of ~42% (answer to the universe) (src: a MinutePhysics video precisely on this). The naive solution of just touching them gets something like 0.01-0.1% of the total energy, so in the bad case we need a trillion years.
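    For reference, the ~42% figure matches the standard thin-disk accretion efficiency around a maximally spinning (extremal Kerr) black hole - a textbook general-relativity result, added here for context rather than taken from the comment: the efficiency is the binding energy fraction released by matter reaching the innermost stable circular orbit,

    ```latex
    \eta_{\max} \;=\; 1 - \frac{E_{\mathrm{ISCO}}}{mc^{2}} \;=\; 1 - \frac{1}{\sqrt{3}} \;\approx\; 0.423
    ```

    i.e. about 42% of the rest-mass energy, versus well under 1% for just dropping matter straight in.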





  • further clarification - ollama is a distribution of llama.cpp (and it is a bit commercial in some sense). Basically, in ye olde days of 2023-24 (decades ago in llm space, as they say), llama.cpp was a server/cli-only thing. It would provide output in the terminal (that is how I used to use it back then), or via an API (an OpenAI-compatible one, so if you used OpenAI stuff before, you can easily swap over). Many people wanted a gui (a web-based chat interface), so ollama back then was a wrapper around llama.cpp (there were several others, but ollama was relatively mainstream). Then, as time progressed, ollama "allegedly enshittified", while llama.cpp kept getting features (a web ui, the ability to swap models at run time (back then that required a separate llama-swap), etc.). Also, the llama.cpp stack is a bit "lighter" (not really - they are both web tech, so as light as js can get) and first party(ish - the interface was done by the community, but it is in the same git repo), so more and more local llama folk kept switching to a llama.cpp-only setup (you could use llama.cpp with ollama, but at that point ollama was just a web ui, and not a great one; some people preferred comfyui, etc). Some old timers (like me) never even tried ollama, as plain llama.cpp was sufficient for us.
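    The OpenAI-compatible API bit means the same request shape works against a local llama.cpp server as against OpenAI. A minimal sketch of that request body (the model name is a placeholder, and nothing is actually sent anywhere here - you would POST it to the server's /v1/chat/completions endpoint):

    ```python
    # Build the JSON body for an OpenAI-style chat completion request.
    # "local-model" is a placeholder name; llama.cpp servers generally
    # ignore or report the loaded model regardless.
    import json

    def chat_request(model, user_message):
        """Return the JSON body for a /v1/chat/completions call."""
        return json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        })

    if __name__ == "__main__":
        print(chat_request("local-model", "hello"))
    ```

    Swapping between a local server and a hosted one is then mostly a matter of changing the base URL.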

    As the above commenter said, you can do very fancy things with llama.cpp. The best thing about it is that it works with both cpu and gpu - you can use both simultaneously, as opposed to vllm or transformers, where you almost always need a gpu. This simultaneous use is called offloading: some of the layers are kept in system memory instead of vram, hence the vram-poor population used ram (this also kinda led to ram price inflation, but do not blame llama.cpp for it, blame people). You can do some of this on ollama (as ollama is a wrapper around llama.cpp), but that requires the ollama folks to keep their fork up to date with the parent, as well as expose the said features in the ui.




  • Not me personally, but at my uni, a small gate which leads to the nearest subway station and saves about 2-3 mins compared to the longer route recently started requiring an entry at exit. A recorded entry at arrival kinda makes sense, because uni premises are generally treated as a "safe space" (I am not currently speaking of the surveillance nature of this; that is a separate discussion). But they require an entry at exit, which does not make sense for many reasons - almost everyone inside is there for some reason, so what is the point of asking for this? Students/staff have a pass to get in without registering, just by showing it from afar, but they are not even accepting that. Their reason: apparently the admin wants to close that door (because it is a door, and someone needs to keep watch over it), so they want a record of how many people use it. And instead of doing a simple count, or using the already nearby cctv cams to get a rough headcount, they want people to register themselves going out - defeating the main point of that door: it was quicker. Now it takes just as long (there is a queue, as people have to wait for others to finish registering themselves). I timed myself going the longer route, and if I had stood there instead, it would have taken longer. It is absurd to me.