Risks of ChatGPT

Having fiddled with ChatGPT for a while now, I have identified the following two risks.

1. Hallucination. It sometimes just makes stuff up. This is particularly annoying when you give it the source information yourself and it still goes off on a tangent, inventing things that were never in the source. For example, when I gave it some meeting notes, it went off on a tangent about NP-completeness in computer science. OK?!?

2. Memory errors. The memory feature they have added messes things up a lot. I have noticed two kinds of mistakes:

  1. It recites what it gave you previously, even if the topic was utterly different. For example, I gave it some meeting notes to summarise and it spat out some Python code I had asked for last year. Totally irrelevant!
  2. It recites what it gave you previously because, in its opinion, the content was similar. I created a new chat, gave it a set of meeting notes, and it gave me the PREVIOUS summary of the previous set of meeting notes. In one case it was even worse: it gave me a synopsis of the second Dune movie. I had earlier given it a YouTube transcript to summarise (the transcript was very long, and the review video was even longer, so I didn't feel like watching it or even reading the transcript). Then, when I gave it some meeting notes to summarise, on a topic not remotely related to sci-fi, it just digressed about Dune!

Workarounds:

  • I've found that it's important to read what it produces very carefully and compare it to what you expected. And if you give it meeting notes to summarise, create a new chat to summarise the next batch rather than continuing in the old one.
  • Also, turn off the memory feature and delete what it knows about you (go into settings; under memory there's an option to delete its memories). It's better to make it recreate everything every time; otherwise it gets repetitive. If you're comfortable with the API, see the sketch after this list.
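
One more option, for the technically inclined: if you talk to the model through the API instead of the chat app, there's no memory feature to fight in the first place. Every API call is stateless unless you pass in earlier messages yourself. Here's a minimal sketch, assuming the official openai Python package, an OPENAI_API_KEY set in your environment, and the model name "gpt-4o" (swap in whichever model you actually use):

```python
# Stateless summarisation: each call starts from a clean slate, like
# opening a brand-new chat for every batch of notes.
# Assumptions: the openai package is installed, OPENAI_API_KEY is set,
# and "gpt-4o" is a model name you have access to.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarise_notes(notes: str) -> str:
    """Summarise one batch of meeting notes with no carried-over context."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; use whichever you prefer
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarise the meeting notes the user provides. "
                    "Use only the text given; do not add outside facts."
                ),
            },
            {"role": "user", "content": notes},
        ],
    )
    return response.choices[0].message.content


print(summarise_notes("Notes from Monday's planning meeting: ..."))
```

Because the function sends only the notes you pass it, last year's Python code, the previous summary, or a Dune synopsis can't sneak back in.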
