This is the second of two posts on Generative AI.

Late last year (November), OpenAI released ChatGPT to the public. However, it took a few months for the internet at large to catch on and realise the potential of this new tool. The trend started to take off in December, and by February everyone was talking about it.

ChatGPT is an example of a "Generative AI". To recap, let's see what this means. For convenience, let's call AIs that are not generative "regular" AIs.

Regular or "Narrow" AI (Artificial Intelligence)

Regular AI, often referred to as "narrow" or "specific" AI, focuses on building systems that can perform specific tasks or solve specific problems. These AI systems are designed to operate within predefined boundaries and excel at specialised tasks. For example, a regular AI could be created to classify images, play chess, or process natural language. Another example is the AI that chooses your words in predictive text.
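To make the predictive-text example concrete, here is a minimal sketch of the idea behind a next-word suggester: a bigram model that counts which word tends to follow which. Real phone keyboards use far more sophisticated models; this toy version (function names and the sample corpus are my own invention) just shows the "narrow AI" flavour of the task.

```python
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Build a bigram model: for each word, count the words that follow it."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model: dict, word: str):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

model = train("the cat sat on the mat and the cat slept on the sofa")
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

The system does exactly one narrow job, within the boundaries of its training text, and nothing else. That is the contrast with generative AI, which produces open-ended new content.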
Author note: I wrote the article below after seeing the "Sassy Justice" deepfake online in 2021. Since then the matter has become much more serious with the proliferation of online AI tools over the past year.

Update: The proposal has been updated to include copyright of one's own voice, in the wake of the Stephen Fry incident and another example of cyberstalking with facial recognition.

Update 2: The USA is considering legislation along these lines: https://petapixel.com/2023/10/16/no-fakes-act-seeks-to-ban-unauthorized-ai-generated-likenesses

Proposal to United Nations General Assembly (UNGA)

Pre-empting the problem of deepfake videos

Background

We presently stand on the edge of an abyss in which social media threatens to uproot our world order and cast us into chaos, as we saw with the recent attack (6 January 2021) on the American Capitol by conspiracy theorists incentivised on social media.

Deepfake videos are motion pictures that are created to look like they depict
Having fiddled with ChatGPT for a while now, I have identified the following two risks.

1. Hallucination. It sometimes just makes stuff up. This is particularly annoying when you give it the information and it goes off on a tangent, inventing things that were not in the source material. For example, when I gave it some meeting notes, it went off on a tangent about NP-completeness in computer science. OK?!?

2. Memory errors. The memory feature that they have added messes up a lot. I have noticed that it makes two kinds of mistake:

- It recites what it gave you previously, even if the topic was utterly different. For example, I gave it some meeting notes to summarise and it spat out some Python code that I had asked for last year. Totally irrelevant!

- It recites what it gave you previously because the content was similar, in its opinion. I created a new chat, gave it the meeting notes, and it gave me the summary of the PREVIOUS set of meeting notes. In one case it was even worse! It gave
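One crude safeguard against hallucinated additions like the NP-completeness tangent above is to check each sentence of the output against the source notes: a summary sentence whose words barely overlap the source is worth a second look. This is a rough sketch I put together for illustration (the function, threshold, and sample strings are all my own assumptions), not a reliable hallucination detector.

```python
import re

def suspect_sentences(source: str, summary: str, threshold: float = 0.5):
    """Flag summary sentences whose words barely overlap the source text.

    Low overlap does not prove the model invented something, but it is a
    cheap signal that a sentence deserves a manual check.
    """
    source_words = set(re.findall(r"[a-z']+", source.lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", summary.strip()):
        words = set(re.findall(r"[a-z']+", sentence.lower()))
        if not words:
            continue
        overlap = len(words & source_words) / len(words)
        if overlap < threshold:
            flagged.append(sentence)
    return flagged

notes = "Team agreed to ship the billing fix on Friday and review the roadmap."
summary = ("The team will ship the billing fix on Friday. "
           "NP-completeness is a class of decision problems.")
print(suspect_sentences(notes, summary))  # flags only the off-topic sentence
```

A word-overlap heuristic like this will miss paraphrased fabrications and can flag legitimate rewording, so it is a triage tool at best; but it costs nothing to run and would have caught the tangent in my meeting-notes example.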