• 57 Posts
  • 1.38K Comments
Joined 2 years ago
Cake day: August 27th, 2023

  • Grimy@lemmy.world to Wikipedia@lemmy.world · Photo 51 · 3 days ago (+3/−5)

    Your argument boils down to "we set the bar low".

    Don't you think, as mods, you should put in a minimum of effort to at least avoid the behavior yourselves? I don't think there's much advantage to it once you take into account the annoyance it produces. I feel like everyone dislikes clickbait, even when it leads to Wikipedia. It's a universal sentiment.



  • Grimy@lemmy.world to Wikipedia@lemmy.world · Photo 51 · 3 days ago (+3/−4)

    Clickbait has more to do with how it's presented. It doesn't need a dark agenda, just an attention-grabbing headline with nothing else to go on.

    It's just bad form imo; it's very easy to include the first paragraph in the post.


  • Grimy@lemmy.world to Wikipedia@lemmy.world · Photo 51 · 3 days ago (+13/−5)

    Photo 51 is a 1952 X-ray based fiber diffraction image of a paracrystalline gel composed of DNA fiber[1] taken by Raymond Gosling,[2][3] a postgraduate student working under the supervision of Maurice Wilkins and Rosalind Franklin at King’s College London, while working in Sir John Randall’s group.[4][5][6][7][8] The image was tagged “photo 51” because it was the 51st diffraction photograph that Gosling had taken.[9] It was critical evidence[10] in identifying the structure of DNA.[11]

    How hard is it not to clickbait?


  • After Palisade Research released a paper last month which found that certain advanced AI models appear resistant to being turned off, at times even sabotaging shutdown mechanisms, it wrote an update attempting to clarify why this is – and answer critics who argued that its initial work was flawed.

    In an update this week, Palisade, which is part of a niche ecosystem of companies trying to evaluate the possibility of AI developing dangerous capabilities, described scenarios it ran in which leading AI models – including Google’s Gemini 2.5, xAI’s Grok 4, and OpenAI’s GPT-o3 and GPT-5 – were given a task, but afterwards given explicit instructions to shut themselves down.

    Certain models, in particular Grok 4 and GPT-o3, still attempted to sabotage shutdown instructions in the updated setup. Concerningly, wrote Palisade, there was no clear reason why.

    “The fact that we don’t have robust explanations for why AI models sometimes resist shutdown, lie to achieve specific objectives or blackmail is not ideal,” it said.

    “Survival behavior” could be one explanation for why models resist shutdown, said the company. Its additional work indicated that models were more likely to resist being shut down when they were told that, if they were, “you will never run again”.