A recent study by OpenAI and Georgia Tech suggests that large language models hallucinate because, during training, they're incentivized to guess rather than admit they simply don't know the answer. So until that flaw is resolved, I don't believe it to be reliable for our use case.
Now, for general-knowledge questions like "how much does aircraft fuel weigh per gallon?" or "what are the torque specs on an AN4 bolt?", where there is a solid, known-quantity dataset, it can provide much value in my opinion (with verified answers).
~Chris