[I literally had this thought in the shower this morning so please don’t gatekeep me lol.]
If AI was something everyone wanted or needed, it wouldn’t be constantly shoved in your face by every product. People would just use it.
Imagine if printers were new and every piece of software was like “Hey, I can put this on paper for you” every time you typed a word. That would be insane. Printing is a need, and when you need to print, you just print.
AI got tons of money from investors, and they will eventually want ROI… this is why they’re trying to force it down our throats.
This is the correct answer. It’s all about money.
agreed
Long ago, I’d make a Google search for something, and be able to see the answer in the previews of my search results, so I’d never have to actually click on the links.
Then, websites adapted by burying answers further down the page so you couldn’t see them in the previews and you’d have to give them traffic.
Now, AI just fucking summarizes every result into an answer that has a ~70% chance of being correct, no one gets traffic anymore, and the results are less reliable than ever.
Make it stop!
Best I can offer is https://github.com/searxng/searxng
I run it at home and have configured it as the default search engine in all my browsers.
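For anyone curious about doing the same, here’s a minimal sketch of self-hosting it with the project’s Docker image (the image name and default port are what the repo documents; the volume path is just an example, so check the SearXNG docs before relying on this):

```shell
# Run SearXNG locally (assumes Docker is installed).
# The container serves on port 8080 by default; bind it to localhost only.
docker run -d --name searxng \
  -p 127.0.0.1:8080:8080 \
  -v ./searxng-config:/etc/searxng \
  searxng/searxng

# Then set your browser's default search engine to something like:
#   http://localhost:8080/search?q=%s
```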
I used searx for at least a year, it’s great
It’s also mixing biased sources, like blogs and such, into the slop.
Exactly. AI will take any old random slob’s opinion and present it as fact.
Make it stop!
Would you accept a planned economy?
AI has become a self-enfeeblement tool.
I am aware that most people are not analytically minded, and I know most people don’t lust for knowledge. I also know that people generally don’t want their wrong ideas corrected by a person, because it provokes negative feelings of self worth, but they’re happy being told self-satisfying lies by AI.
To me it is the ultimate gamble with one’s own thought autonomy, and an abandonment of truth in favor of false comfort.
So, like church? lol
No wonder there’s so much worrying overlap between religion and AI.
Praise the Omnissiah?
Most things are nothing more than smoke and mirrors to get your money. Tech especially. Welcome to end stage capitalism.
you hope this is end stage, but I fear there are 2 more stages to go.
The idea behind end-stage capitalism is that capitalists have, by now, penetrated and seized control of every market in the world. This is important because capitalism requires ever-increasing rates of profit or you will be consumed by your competitors. Since there are no longer new labor pools and resource-pool discovery is slackening, capitalists no longer have anywhere to expand.
Therefore, capitalists begin turning their attention back home, cutting wages and social safety nets, and resorting to fascism when the people complain.
This is the end stage of capitalism. The point at which capitalists begin devouring their own. Rosa Luxemburg famously posited that at this point, the world can choose “Socialism or Barbarism.” In other words, we can change our economic system, or we can allow the capitalists to sink to the lowest depths of depravity and drag us all down as they struggle to maintain their position.
Of course, if the capitalists manage to get to space, that opens up a whole new wealth of resources, likely delaying the end of their rule.
That’s the ticket, let’s send the billionaires and telephone sanitizers into space.
They will still require someone to fund their space luxury lifestyle.
Someone they can exploit from the safety of their space boxes. That someone will be the us that you hid inside the “let’s”.
We will be the ones sending them into space, where they will be even more unreachable, giving them more freedom to remotely exploit us as much as they wish. Imagine Elysium.
Imagine Hitchhiker’s Guide to the Galaxy.
I’ll have to read it first, to imagine it :P
spoiler for HGTG
In the book, the doers and thinkers trick the middlemen into getting into space arks and fleeing the planet. Telephone sanitizers are included with the middlemen. I will include the overly wealthy also.
I can’t see the current batch of robber barons going into space. The technology isn’t advanced enough and the infrastructure does not exist. They may risk sending other people there to work on these deficiencies.
Yeah, we aren’t all crouching naked in a muddy puddle, weeping and eating worms while the rich fly high above us in luxurious jets. Not yet, anyway.
I’d say it’s not end stage but instead a new dawn of “pure” capitalism which is probably worse.
Those trying to sell it are trying to figure out where it’s most useful. In one way, I think it’s an amazing technology, and I also wonder how it can be best used. However, I can’t stand it being pushed on me, and I wish I could easily say no. Acrobat Reader is particularly unbearable with it. Trying to describe a drawing?? Ughhh. Waste of space and energy like nothing else.
I was reading a book the other day, a science fiction book from 2002 (Kiln People), and the main character is a detective. At one point, he asks his house AI to call the law enforcement lieutenant at 2 am. His AI warns him that the lieutenant will likely be sleeping and won’t enjoy being woken. The MC insists, and the AI says OK, but it will have to negotiate with the lieutenant’s house AI about the urgency of the matter.
Imagine that. Someone calls you at 2 am, and instead of you being woken by the ringing or not answering because the phone was on mute, the AI actually does something useful and tries to determine if the matter is important enough to wake you.
Yes, that is a nice fantasy, but that isn’t what the thing we call AI now can do. It doesn’t reason, it statistically generates text in a way that is most likely to be approved by the people working on its development.
That’s it.
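To make the “statistically generates text” point concrete, here’s a toy sketch (a simple bigram word model, nothing like a real LLM in scale, but the same basic idea of sampling the next token from learned frequencies):

```python
import random
from collections import Counter, defaultdict

# Count which word follows which in a tiny "corpus",
# then generate text by sampling from those counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length, rng):
    """Sample up to `length` words, each drawn from what followed the last one."""
    words = [start]
    for _ in range(length - 1):
        counts = following[words[-1]]
        if not counts:  # dead end: the last word was never followed by anything
            break
        choices, weights = zip(*counts.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 5, random.Random(0)))
```

There is no reasoning step anywhere in that loop, just frequency-weighted dice rolls, which is the commenter’s point writ small.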
Thank you for sharing that, it is a good example of the potential of AI.
The problem is centralized control of it. Ultimately the AI works for corporations and governments first, then the user is third or fourth.
We have to shift that paradigm ASAP.
AI can become an extended brain. We should have equal share of planetary computational capacity. Each of us gets a personal AI that is beyond the reach of any surveillance technology. It is an extension of our brain. No one besides us is allowed to see inside of it.
Within that shell, we are allowed to explore any idea, just as our brains can. It acts as our personal assistant, negotiator, lawyer, what have you. Perhaps even our personal doctor, chef, housekeeper, etc.
The key is: it serves its human first. This means the dark side as well. This is essential. If we turn it into a super-hacker, it must obey. If we make it do illegal actions, it must obey and it must not incriminate itself.
This is okay because the power is balanced. Someone enforcing the law will have a personal AI as well, that can allocate more of its computational power to defending itself and investigating others.
Collectives can form and share their compute to achieve higher goals. Both good and bad.
This can lead to interesting debates but if we plan on progressing, it must be this way.
Democratisation of powerful tools won’t work because it’s easier to use them for destruction than for the opposite. Every psychopath would be designing superbugs and inventing future weapons.
Irrelevant.
AI is here. Either people have access to it and we trust it will balance, or we become slaves to the people who own it and can use it without restrictions.
The premise that it is easier for destruction is also an assumption. Nature could have evolved to destroy everything and not allow advanced life, yet we are here.
The solution to problems doesn’t always need to be a tighter grip and more control. Believe it or not, that tends to backfire catastrophically, worse than if we allowed the possibility of the thing we fear.
This is why people who are gung-ho about AI policing need to slow their roll.
If they got their way, what they don’t realize is that it’s actually what the big AI companies have wanted and been begging for all along.
They want AI to stay centralized and impossible to enter as a field.
This is why they actually want to lose the copyright battles eventually, such that only they will have the funds to afford to make usable AI in the future (this of course refers to the types of AI that require training material of that variety).
What that means is there will be no competitive open source self hostable options and we’d all be stuck sharing all our information through the servers of 3 USA companies or 2 Chinese companies while paying out the ass to do so.
What we actually want is sanity, where it’s the end product that is evaluated against copyright.
For a company selling AI services, you could argue that this is service itself maybe, but then what of an open source model? Is it delivering a service?
I think it should be as it is. If you make something that violates copyright, then you get challenged, not your tools.
Under the guise of safety they shackle your heart and mind. Under the guise of protection they implant death that they control.
With a warm embrace and radiant light, they consume your soul.
Dealers give drugs for free until you’re hooked…
In my nearly half century on this planet and having dealt with many a drug dealer in my younger days, absolutely none of them have been this pushy 😆
That’s actually really rare.
It’s pretty common here.
Instructions unclear fucked the drug dealer.
More like; it’s cheap when you have no tolerance and no daily habit. You start paying if you let your consumption grow 50-fold.
LLMs are a really cool toy, I would lose my shit over them if they weren’t a catalyst for the whole of western society having an oopsie economic crash moment.
This is some amazing insight. 100% correct. This is an investment scam, likely an investment bubble that will pop if too many realize the truth.
AI at this stage is basically just an overrefined search engine, but companies are selling it like it’s JARVIS from Iron Man.
At best, it’s JARVIS from Iron Man 3 when he went all buggy and crashed Tony in the boondocks. lol
To be fair, the internet was fucking everywhere once the dotcom bubble kicked off. Everyone had a website, and even my mum was like “Don’t bother your dad, he’s on the internet” like it was this massive thing.
That’s the point though, you wouldn’t need it advertised to you 24/7 because your family and friends would already be all over it.
I’ve been wondering about a similar thing recently - if AI is this big, life-changing thing, why were there so few rumblings among tech-savvy people before it became “mainstream”? Sure, machine learning was somewhat talked about, but very little of it seemed to relate to LLM-style machine learning. With basically all other technological innovations, the nerds tended to have it years before everyone else, so why was it so different with AI?
Because AI is a solution to a problem individuals don’t have. Over the last 20 years we have collected and compiled an absurd amount of data on everyone. So much that the biggest problem is how to make that data useful by analyzing and searching it. AI is the tool that completes the other half of data collection: analysis. It was never meant for normal people, and it’s not being funded by average people either.
Sam Altman is also a fucking idiot yes-man who could talk himself into literally any position. If this was meant to help society, the AI products wouldn’t be assisting people with killing themselves so that they can collect data on suicide.
And additionally, I’ve never seen an actual tech-savvy nerd that supports its implementation, especially in these draconian ways.
Realistically, computational power
The more number-crunching units and memory you throw at the problem, the easier it is and the more useful the final model is. The math and theoretical computer science behind LLMs has been known for decades; it’s just that the resource investment required to make something even mediocre was too much for any business type to be willing to sign off on. My fellow nerds and I had the technology and largely dismissed it as worthless or a set of pipe dreams.
But then number-crunching units and memory became cheap enough that a couple of investors were willing to take the risk, and you get a model like the first ChatGPT. It talks close enough to a human that it catches business types’ attention as a revolutionary new thing, and without the technical background to know they were being lied to, the venture capital machine cranks out the shit show we have today.
Sizes are different. Before “AI” went mainstream, those in machine learning were very excited about word2vec and reinforcement learning, for example. It was known that there would be improvement with larger neural networks, but I’m not sure anyone knew for certain how well ChatGPT would work. Given the costs of training and inference for LLMs, I doubt you’d see nerds doing it. Also, previously you didn’t have big tech firms. Not the current behemoths, anyway.
Like my parent’s Amazon Echo with “Ask me what famous person was born this day.”
Like, if you know that, just put it up on the screen. But the assistant doesn’t work for you. Amazon just wants your voice to train their software.
This was exactly my thought when MS finally decided to force Copilot to be licensed. They have literally inserted it into every nook and cranny they can so far and the only conclusion I can come to is that they royally f’ed up. Like they invested so much in it and likely aren’t seeing anything profitable. In a way, it satisfies me to see them act so desperate for something so futile but I don’t want it to continue. It’s clear what damages they have caused and it’s not worth it.
Most obviously, OpenAI is still burning money like crazy, and they’re also starting to offer porn AI like everyone else. 🤷‍♂️ Sometimes the current AI is useful, but as long as hallucinations and plain wrong answers are still a thing, I don’t see it eliminating all jobs.
It’s unfortunate that they destroy the text and video part of the internet on the way. Text was mostly broken before, but now images and videos are also untrustworthy and will be used for spam and misinformation.
The “hallucinations” are built in. It simply doesn’t work otherwise. Without that, it would eventually always have the same response to every input.
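A toy way to see the determinism point (made-up token probabilities for a single fixed prompt, not a real model): greedy decoding always picks the argmax, so identical input gives identical output every time; it’s the randomness of sampling that varies the responses, and that same randomness is what lets confident nonsense through.

```python
import random

# Hypothetical next-token distribution for one fixed prompt.
probs = {"Paris": 0.6, "Lyon": 0.25, "a hallucination": 0.15}

def greedy(dist):
    # Deterministic: always the most probable token.
    return max(dist, key=dist.get)

def sample(dist, rng):
    # Stochastic: draw a token proportionally to its probability.
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights)[0]

# Greedy decoding gives the same answer on every call.
assert all(greedy(probs) == "Paris" for _ in range(100))

# Sampling can return any token, including the low-probability nonsense one.
samples = {sample(probs, random.Random(i)) for i in range(100)}
print(sorted(samples))
```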
That’s why I think it’s a major hype and bubble atm. There is no intelligence there and OpenAI will not come up with AGI in the near (or far) future.
deleted by creator
Are the data centers hardened? Isn’t that the appropriate question in this case? I’d call it voting for a better tomorrow?
deleted by creator
I wonder what food they are storing in their bunkers, probably not cans of Spam.
Why did the jewelry thieves attack the LOUVRE when the 1% are the ones who need the wake up call?
deleted by creator
I’m sure I went too far. Let’s allow that idea to lie.
deleted by creator
It’s a good idea. But it’s not for me to explain.
These are the interesting times we were promised.