Here’s why you can’t trust a video script written by ChatGPT.
I’ll show you a funny example, but first let’s understand WHY the way these AIs are built is especially bad for those of us with existing online trust issues.
Imagine ChatGPT as an alien tentacle monster named Glorp, sitting in space and trying to learn about humans by watching earth TV and reading social media posts. It doesn’t actually understand the concepts it hears, but it learns what sort of words are commonly lumped together.
If I ask Glorp to give me wise advice about life, it may respond with something like "My Mama always said, 'Life is like a box of chocolates,' blah blah blah." Because that's just something people often say in a wise context. But do ya think Glorp's Mama actually said that? Does Glorp even have a mama, or do they have some sort of weird tentacle pit mating?
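If you want to see the "words commonly lumped together" idea in miniature, here's a toy sketch: a bigram model that just counts which word tends to follow which. (This is a deliberately crude illustration of the statistical idea, not how ChatGPT actually works; real models are vastly more sophisticated, and the corpus here is made up.)

```python
from collections import Counter, defaultdict

# Glorp's tiny "corpus" of overheard human wisdom.
corpus = (
    "life is like a box of chocolates . "
    "life is what happens to you . "
    "life is like a journey ."
).split()

# For each word, count which word comes next (a bigram model).
next_word = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word[current][following] += 1

def glorp_says(word):
    """Pick the most frequent follower -- no understanding, just statistics."""
    return next_word[word].most_common(1)[0][0]

print(glorp_says("life"))  # -> "is": the word most often seen after "life"
print(glorp_says("like"))  # -> "a"
```

Glorp never learns what a box of chocolates *is*; it only learns that certain words keep showing up next to each other.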
I asked ChatGPT to write a script about Selection Bias. It gave a great example, explaining it by quoting a news article that cited a study which made a selection bias error. The only problem is that neither the article nor the study actually exists.
So. ChatGPT: Useful? Yes. Trustworthy? GLORP NO!