It’s honestly really sad what’s been happening recently. Reddit with the API pricing on 3rd party apps, Discord with the new username change, Twitter with the rate limits, and Twitch with their new advertising rules (although that has been reverted because of backlash). Why does it seem like every company is collectively on a common mission of destroying themselves in the past few months?

I know the common answer is something along the lines of “because companies only care about making money”, but I still don’t get why it seems like all these social media companies have suddenly agreed to screw themselves during pretty much the period of March–June. One that sticks out to me especially is Reddit CEO Steve Huffman’s (u/spez) comment, “We’ll continue to be profit-driven until profits arrive”. Reading this pisses me off on so many levels. I wouldn’t even have to understand the context behind his comment to say, “I am DONE with you, and I am leaving your site”.

Why is it like this? Does everyone feel the same way? I’m not sure if it’s just me but everything seems to be going downhill these days. I really do hope there is a solution out of this mess.

  • PabloDiscobar@kbin.social · 13 points · edited · 1 year ago

    For reddit and twitter it’s also induced by the threat of AI. Twitter and Reddit host a lot of content: organized, sorted, coherent. It’s invaluable for training an AI, and these companies don’t want to let it go for free. They want control over it, so they are making it very hard for AI companies to harvest their content. The fact that it’s happening now is because AI companies are probably rushing to copy as much data as possible before laws are passed to put limits on them.

    It will be the same for the fediverse: our content will be scraped by AIs. Our content is freely visible, organized, sorted and scored. We should be careful about that. If you are not a professional publisher or a public person, then you should probably think about rotating your username as often as possible.

    edit: But also, with the rise of TikTok, a lot of countries are now suspicious about the soft power of those apps and are ready to legislate against them. The EU already has: it voted fines against them and is regularly getting money out of them. The taboo is gone; you can attack those companies, and it works. They were supposed to be out of reach, but they are not.

    Also, there is no special genius behind Twitter; as far as I know they hold no patents on anything. If someone manages to become more popular than them on the same principle, then Twitter is done. Gravity will do the rest and users will move to a different platform. People are using it because people are using it. So the model is fragile and the value is questionable.

    • fearout@kbin.social · 7 points · 1 year ago

      What’s so bad about giving AI models something to learn on? Add LLM-tier accounts to your social media company and have at it. And fix data/traffic issues by giving users the ability to use their own tokens/API keys/whatever to limit bandwidth without affecting end users as significantly as the current decisions did.

      That way you could detect and address rogue scrapers while still working with LLM creators who are open to an honest training integration. And if your company can’t really detect the difference between users and LLM crawlers after implementing something like this, well, then those crawlers don’t really affect the company as much as the CEOs would like to pretend.

      • ExistentialOverloadMonkey@kbin.social · 7 points · edited · 1 year ago

        The fuckwits at reddit and twitter HQs think they own that data. Data they didn’t create, or even contribute to. They imagine that by providing server space, they somehow own the content. As if the government owned the cars that use the roads, or an airline claimed to own the travelers’ baggage. Greedy bastards without shame.

      • PabloDiscobar@kbin.social · 6 points · edited · 1 year ago

        > What’s so bad about giving AI models something to learn on?

        From a user point of view? A lot. So far, AI has made itself the champion of fakery: fake news, fake pictures, fake videos, fake history, fake identities. Do you think AI will be used for your own good? Do you think your private data is farmed for your own good? I don’t.

        I posted an example about fake identities and fake posters on Twitter. This is the end goal. This is where the money generated by the AI will come from.

        > That way you could detect and address rogue scrapers while still working with LLM creators who are open to an honest training integration. And if your company can’t really detect the difference between users and LLM crawlers after implementing something like this, well, then those crawlers don’t really affect the company as much as the CEOs would like to pretend.

        Twitter and Reddit probably want to be their own LLM creators. They don’t want to leave this market to another LLM. Also it doesn’t take a lot of API calls to generate the content that will astroturf your product.

        Anyway, the cat is out of the bag and this data will be harvested. Brands will astroturf their products using AI processes. People are not stupid and will realize the trick being played on them. We are probably heading toward platforms requiring fully authenticated access.

        • FaceDeer@kbin.social · 3 points · 1 year ago

          > From a user point of view? A lot. So far, AI has made itself the champion of fakery: fake news, fake pictures, fake videos, fake history, fake identities. Do you think AI will be used for your own good? Do you think your private data is farmed for your own good? I don’t.

          That’s addressing whether the mere existence of LLMs is “good” or not. That’s not going to be affected by whether someone changes their username every couple of months or whether some particular social media site makes their content annoying to scrape. LLMs exist now, and they’re only getting better; attempting to staunch the flow of genies out of the bottle at this stage is futile.

          Personally, I’m actually rather pleased that my comments on Reddit over the years are factoring into how LLMs “think.” All that ranting about the quality of Disney’s Star Wars movies not only convinced all the people who read it (I assume) but will now also convince our future AI overlords too.