I never noticed the globe icon. Thanks! I feel AI should be treated like a person, or even a friend: unless you know them well, you don't necessarily trust everything you hear. Some people spread rumors; others are more careful. I would think an AI could at least be programmed to warn the reader when an answer is inferred from multiple sources rather than explicitly stated in them. It also seems unable to say it doesn't know something. I've had it provide sources, too, and when I looked them up, they were completely hallucinated. I'm sure it will improve. So I've concluded that the overall discussion can open me up to new ideas and possible source locations, but anything important needs to be verified.
You're right that ChatGPT generally avoids admitting it doesn't know something. There are ways to help with that - which would be a great idea for another post. Thanks for that idea!
It's also good to know that it can hallucinate while *still* providing sources. I don't believe I've noticed that yet, but I'll watch for it in the future. The bottom line is that we need to verify anything AI tells us.