🃏Joker@sh.itjust.works to Technology@lemmy.world · 1 day ago
Make illegally trained LLMs public domain as punishment (www.theregister.com)
FaceDeer@fedia.io · 11 hours ago
I’ve been working with local LLMs for over a year now. No guardrails, and many of them fine-tuned against censorship. They can’t output arbitrary training material verbatim.
Llama 3 was trained on 15 trillion tokens, for both the 8B and 70B parameter versions. That puts the parameters-to-training-tokens ratio at roughly 1:1000, not 1:7.
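A quick back-of-the-envelope check of that ratio, assuming the publicly reported ~15 trillion token figure and the nominal 8B/70B parameter counts (numbers mine, just to illustrate the arithmetic):

```python
# Rough sketch: compare parameter counts to the ~15T training tokens
# reported for Llama 3. Figures are approximate.
training_tokens = 15_000_000_000_000  # ~15 trillion tokens

for name, params in [("Llama 3 8B", 8_000_000_000),
                     ("Llama 3 70B", 70_000_000_000)]:
    ratio = training_tokens / params
    print(f"{name}: 1 parameter per ~{ratio:,.0f} training tokens")

# Llama 3 8B:  1 parameter per ~1,875 training tokens
# Llama 3 70B: 1 parameter per ~214 training tokens
# Either way, it's orders of magnitude away from 1:7.
```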