This could be huge, but we’ll need to wait and see. The economic and ecological footprint of LLMs is problematic.
That said, will this actually help, or will they just use 3T-parameter models to outcompete competitors’ 1T-parameter models on GPUs? Really, this is more about small-scale models competing with midsize models. Like, this could bring something as capable as GPT-3.5 down to hardware you could actually afford to run, right?
That would be really compelling for my sector (education), where there’s a lot of concern about student data privacy. I could definitely pitch building a local LLM server for around $5K that could handle a dozen or so simultaneous users. That would be enough for a small school district.
I feel for the teacher if they don’t have a continuing contract yet. You’re completely dependent on staying in the good graces of your principal to have a job the next year, and you’ll only be recommended for a continuing contract with the principal’s support.
But if the teacher had a continuing contract, then they probably should have told the principal to censor the student’s work themselves if they wanted it done. Or at least asked for the instruction to do so in writing.