tl;dr - If AI doesn’t directly try to kill us, it may trick us into killing ourselves by building its ideas.
aside from misinfo, this is more of why they want to moderate some responses: someone is going to blow themselves up following a recipe it gives them
I didn’t watch; someone please give a TLDW, ’cause I highly suspect this is just gonna be a waste of time.
TL;DR: It suggests several methods and makes a few mistakes, which he has to point out, to which it then suggests even more absurd solutions.
The AI recommends doing things in long, difficult ways and doesn’t conceive of new or novel technologies; it just mashes together existing ones, even where combining them would be difficult or impossible, and waves those issues away with lines like “Much research and development would be needed, but…”
so, similar to a redditor trying to sound smart by googling while debating another redditor, when neither has any qualifications on the topic. Got it.
I wonder how much of that is just an inherent part of how neural networks behave, and how much LLMs only do it because they learned it from humans.
ChatGPT is, despite popular consensus, not an AI.
It’s a system with some notion of context and a huge amount of training data, and it’s really good at guessing which words to put on screen based on the input it’s given.
It can’t come up with anything genuinely new or novel, but it can generate “new” output by recombining multiple sources of data.
As such, it will never be able to design a fusion reactor unless it’s been trained on input from someone who actually designed one.
And even then it’s likely to screw it up.
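To illustrate the “guessing what words to put on screen” point: here’s a toy sketch of next-word prediction using a bigram model. To be clear, this is my own illustration, not how ChatGPT actually works internally (that’s a transformer over subword tokens), but it shows the basic idea of predicting the next token from patterns seen in training data, and why the output can only recombine what was already there.

```python
import random
from collections import defaultdict

# A toy bigram "next word guesser". Real LLMs like ChatGPT use huge
# transformer networks over subword tokens, not word-pair counts, but
# the core loop is the same idea: given context, pick a likely next token.

corpus = "the reactor heats the plasma and the plasma confines itself".split()

# Count which words follow each word in the training text.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample a next word in proportion to how often it followed prev."""
    options = counts[prev]
    if not options:  # word never appeared mid-sentence: dead end
        return None
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

# Generate: the output can only recombine words seen in training,
# which is the point the comment above is making.
word = "the"
out = [word]
for _ in range(6):
    word = next_word(word)
    if word is None:
        break
    out.append(word)
print(" ".join(out))
```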
it’s interesting, but it only tells us what we already know about subjects, using material that is incomplete.
He calls it “Chat GTP”; that’s where I stopped.