Risks of ChatGPT
Having fiddled with ChatGPT for a while now, I have identified the following two risks.

1. Hallucination. It sometimes just makes stuff up. This is particularly annoying when you give it the information yourself and it still goes off on a tangent, inventing things that were not in the source material. For example, when I gave it some meeting notes, it went off on a tangent about NP-completeness in computer science. OK?!?

2. Memory errors. The memory feature they have added messes up a lot. I have noticed that it makes two kinds of mistake:

   - It recites what it gave you previously, even if the topic was utterly different. For example, I gave it some meeting notes to summarise and it spat out some Python code I had asked for last year. Totally irrelevant!

   - It recites what it gave you previously because the content was, in its opinion, similar. I created a new chat, gave it the meeting notes, and it gave me the PREVIOUS summary of the previous set of meeting notes. In one case it was even worse! It g...