Avram Piltch is the editor in chief of Tom’s Hardware, and he’s written a thoroughly researched article breaking down the promises and failures of LLM AIs.

  • maynarkh@feddit.nl · 10 months ago

    No, repeated extrapolation eventually results in making everything that ever could be made, while constant interpolation would result in creating the same “average” work over and over.

    The difference is infinite vs zero variety.
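    The contrast can be sketched numerically. In this hypothetical toy model (not from the comment itself), each “work” is a single number, an interpolation step blends two existing works into their midpoint, and an extrapolation step pushes past a pair of existing works. Repeated interpolation collapses the population toward one average value (variance goes to zero), while repeated extrapolation keeps spreading it out (variance grows without bound):

    ```python
    import random
    import statistics

    def interpolate_step(vals):
        # New work = midpoint of two randomly chosen existing works.
        # Each step roughly halves the population variance.
        return [(random.choice(vals) + random.choice(vals)) / 2 for _ in vals]

    def extrapolate_step(vals):
        # New work = point 1.5 of the way from a toward b, i.e. beyond b.
        # Each step multiplies the population variance (here by about 2.5x).
        pairs = [(random.choice(vals), random.choice(vals)) for _ in vals]
        return [a + 1.5 * (b - a) for a, b in pairs]

    random.seed(0)
    start = [random.uniform(0, 1) for _ in range(200)]

    vals_interp = list(start)
    vals_extrap = list(start)
    for _ in range(50):
        vals_interp = interpolate_step(vals_interp)
        vals_extrap = extrapolate_step(vals_extrap)

    # Interpolation-only variety shrinks toward zero; extrapolation explodes.
    print("interpolation variance:", statistics.pvariance(vals_interp))
    print("extrapolation variance:", statistics.pvariance(vals_extrap))
    ```

    The exact coefficients are arbitrary; the point is only the direction of the two trends (variance shrinking versus growing).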