With AI capabilities now, surely it's pretty easy for an AI to follow a set of instructions like: create an email account, check the email, click the link in the email, etc. Is that correct? Or, put another way, why would email verification stump ML so consistently if it's trained to create emails and go through the process?
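(For reference, the "check the email, click the link" part of the loop described above is easy to express as a plain script with no ML at all; a minimal sketch is below. The mail server, credentials, and sender address are made up for illustration, and the account-creation step is left out entirely.)

```python
# Hypothetical sketch of the "check email, click link" steps only.
# IMAP host, credentials, and sender address are placeholders, not a real setup.
import imaplib
import email
import re

import requests

IMAP_HOST = "imap.example.com"   # placeholder mail server
USERNAME = "bot@example.com"     # placeholder account
PASSWORD = "hunter2"             # placeholder password


def fetch_verification_link(sender="noreply@lemmy.example"):
    """Search the inbox for a verification email and pull out the first URL."""
    with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
        imap.login(USERNAME, PASSWORD)
        imap.select("INBOX")
        _, data = imap.search(None, "FROM", f'"{sender}"')
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            body = msg.get_payload(decode=True) or b""
            match = re.search(rb"https?://\S+", body)
            if match:
                return match.group(0).decode()
    return None


if __name__ == "__main__":
    link = fetch_verification_link()
    if link:
        requests.get(link)  # "clicking" the link is just an HTTP GET
```

(This only illustrates the mechanical steps from the question; it assumes an inbox already exists and says nothing about whether a model could run the whole signup flow on its own.)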
I'm only parroting. The developers of Lemmy mentioned email verification as the only empirically effective option in the real world. AI in the real world is far dumber than it is framed in the media. Even a well-trained model can't just run off on a long tangent path and connect the dots. It only works for a very short time, under controlled circumstances, before it must be reset. This is not real AI.