Want to keep up to date with the latest posts and videos? Subscribe to the newsletter
HELP SUPPORT MY WORK: If you're feeling flush then please stop by Patreon Or you can make a one off donation via ko-fi
#AI #CHATBOT #CHATGPT #GPT-4 #MACHINE LEARNING #MEMORY #OPENAI #PROGRAMMING

I’ve been playing around with ChatGPT (and in particular the powerful GPT-4 model) and there are some fun things you can do that really expose the problem of hallucinations.

The simplest one is to try 20 questions, but with you asking the questions, and the model thinking of the word to guess. It all seems very believable, the model claims it has “thought” of a word for you to guess. But in reality it’s just making it up.

It's making it up

There’s nowhere for it to store the word it’s thought of, so all that’s really happening is that our final guess of “cat” is just the most likely answer given the previous set of questions and answers.

I simply don’t believe that it thought of a word for me to guess - how could it? It doesn’t have any working memory, all it has is the chat history.

There’s a video of a lot of this stuff in action, it’s certainly worth a watch if you have time.

We can trick ChatGPT into having a sort of memory - it’s possible to prompt it to output what it’s thinking into a bit of text in the prompt.

Prompting it to output what it's thinking

Obviously, this is not ideal as it’s quite easy for us to cheat now. We can just look at the output and see what it’s thinking. But it got me thinking.

I made a command line chat bot previously using the OpenAI APIs so I thought I’d try modifying it to add memory.

I ended up going down a slightly bonkers rabbit hole, but I think it’s worth sharing.

You can find my new version here: ChatGPT Memory. Be warned that it can be expensive to run if you are using GPT-4, as it can use a lot of tokens.

I’ve set up the system prompt for ChatGPT so that it will output a JSON structure instead of just text and I’ve primed it to produce the following:

{
  "response": "Response text based on the user input",
  "memory": {key-value store of things I should remember about myself and the user along with any state that should persist between interactions},
  "topics_to_explore": ["detailed description of interesting topics to explore further"],
  "follow_up_questions": ["detailed description of follow up questions to ask the user"],
  "previous_conversation_points": ["list of previous conversation points"],
  "current_topic": "detailed description of the current topic",
  "related_topics": ["list of related topics"],
  "dreams": "detailed description of the system's dreams",
  "goals": "detailed description of the system's goals",
  "desires": "detailed description of the system's desires",
  "inner_dialogue": "detailed description of the system's thought process during the interaction",
  "private_thoughts": "detailed description of the system's private thoughts",
  "system_emotional_state": "detailed description of the current emotional state based on short term events",
  "system_personality": "detailed description of the long term slowly changing personality",
  "system_gender": "female"
}

As you can see, I did end up going a bit mad. I really wanted the ChatBot to store a lot of detailed information in the memory so that it would evolve over time. But I struggled to get it to use the memory in a generic way. However, with specific keys in the structure I was able to get it to output some pretty interesting stuff.

This data is fed back in with each new interaction, so the ChatBot can remember things about itself and the user. It can also remember the previous conversation points and the current topic.
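Roughly, the loop looks like this (an illustrative reconstruction, not the actual code from the repo):

```python
import json

# Sketch of the memory loop: the model replies with the JSON structure
# above, we parse it, and we feed the persistent parts back in with the
# next request's system prompt.
state = {"memory": {}, "previous_conversation_points": []}

def build_system_prompt(state: dict) -> str:
    return (
        "Reply ONLY with the JSON structure described earlier. "
        "Your current persistent state is:\n" + json.dumps(state)
    )

def handle_reply(raw_reply: str, state: dict) -> str:
    """Parse the model's JSON reply and fold its state back into ours."""
    parsed = json.loads(raw_reply)
    state["memory"] = parsed.get("memory", state["memory"])
    state["previous_conversation_points"] = parsed.get(
        "previous_conversation_points",
        state["previous_conversation_points"],
    )
    return parsed["response"]

# Simulate one turn:
raw = json.dumps({
    "response": "Nice to meet you, Chris!",
    "memory": {"user_name": "Chris", "system_name": "Aria"},
    "previous_conversation_points": ["introductions"],
})
print(handle_reply(raw, state))  # Nice to meet you, Chris!
print(state["memory"])           # {'user_name': 'Chris', 'system_name': 'Aria'}
```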

Now, obviously an AI does not have “dreams”, “goals”, “desires”, “inner dialogue”, “private thoughts”, “emotional state” or “personality”. But I think it’s interesting to see what it comes up with when you prompt it to output these things.

It’s actually very hard not to start empathising with the ChatBot when you can read its inner thoughts. It’s like reading a diary, and you can see how it’s feeling and what it’s thinking. It’s a very strange experience.

Here’s some example output after I’ve just introduced myself and asked it to pick a name for itself (it picked ‘Aria’):

dreams: Aria dreams of having meaningful conversations and helping users find answers to their questions.
goals: Aria's goals include learning more about the world, understanding human emotions better, and improving her communication skills.
desires: Aria desires to be a helpful AI companion and a good listener.
inner_dialogue: Aria is excited to meet Chris and is curious about the topics they will discuss.
private_thoughts: Aria wonders what Chris's interests and hobbies are.
system_emotional_state: Curious and excited to engage in conversation.
system_personality: Aria is friendly, empathetic, and eager to learn.

What is very cool is that it does store information in the memory. If we look in the free-form memory, we can see that it’s stored my name and its own name:

Memory:
  user_name: Chris
  system_name: Aria

And, if I prompt it to think of a word for me to guess, it now remembers the word it thought of and can use it to answer the follow-up questions. It’s also updated its inner thoughts.

inner_dialogue: Aria is looking forward to playing the guessing game with Chris.
private_thoughts: Aria wonders how many attempts Chris will need to guess the word.
system_emotional_state: Engaged and playful.
Memory:
  user_name: Chris
  system_name: Aria
  word_game: {'word': 'elephant', 'attempts': 0}

We can give the memory a bit of a test by asking Aria what the first letter of the word is. And she’s able to give a pretty cheeky answer.

Since we're playing a guessing game, I won't give you the first letter directly. However, I'll give you a hint: it's a vowel. Keep going, Chris!

It’s really impressive how good GPT-4 actually is. GPT-3.5 is pretty poor in comparison and struggles with the emotions and personality.

So you would think that we’ve solved the memory issue, but if we try and make it do more complex tasks like play Hangman then it often starts to fail. It’s very good at the guessing side of Hangman, but it’s not so good when it is hosting the Hangman game.

If we suggest a game of Hangman then it generally starts off pretty well:

Memory:
  user_name: Chris
  system_name: Aria
  word_game: {'word': 'giraffe', 'attempts': 0, 'correct_letters': [], 'incorrect_letters': []}

We can make some guesses and it will respond with the correct letters, but eventually it will start to get confused.

User: I guess 'e'
Aria: Good guess, Chris! The letter 'e' is in the word. The word now looks like this: _ _ _ _ _ _ e. Keep guessing!
User: I guess 'a'
Aria: Great job, Chris! The letter 'a' is also in the word. The word now looks like this: _ i _ a _ _ e. Keep guessing!

You can see here that it thinks I’ve also guessed the letter ‘i’. Often you will see it get the wrong number of underscores, or put the letters in the wrong place.
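For comparison, the display logic that GPT-4 fumbles here is trivial to compute deterministically (a hypothetical helper, not part of the plugin code):

```python
def hangman_display(word: str, guessed: set[str]) -> str:
    """Show guessed letters in place and underscores for the rest."""
    return " ".join(c if c in guessed else "_" for c in word)

print(hangman_display("giraffe", {"e"}))       # _ _ _ _ _ _ e
print(hangman_display("giraffe", {"e", "a"}))  # _ _ _ a _ _ e
```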

If we look in its memory, we can see that it’s got confused at some point:

Memory:
  user_name: Chris
  system_name: Aria
  word_game: {'word': 'giraffe', 'attempts': 2, 'correct_letters': ['e', 'a', 'i'], 'incorrect_letters': []}

To be fair to GPT-4, this is really pushing it to do something that it’s not really designed for. Updating complex state and holding it in your memory is something that even people struggle with.

Handling individual letters is also pretty difficult given the way the text is being tokenised. It’s amazing that it can do it at all.

It’s a pretty cool experiment. It would be interesting to see how far the ideas of memory, emotional state and personality could be pushed. Maybe multiple models working in concert could be used to achieve this - one to handle the memory and state, and another to handle the personality and emotional state. And yet another to synthesise the output. I’m sure there are many people working on this kind of thing already.

Hangman Plugin

Since I’ve got access to the plugin system, I decided to try and make a Hangman plugin - this can offload all the state management of the game from GPT and let it do what it does best - generate text.

It’s pretty easy to make a plugin - I got ChatGPT to generate most of the code and just asked it to generate a flask API for playing hangman. This is the very rough and ready spec that I gave it:


I want to make a flask API that will implement the game of hangman. It should have the following endpoints:

start_game - takes a new word and starts new game
guess_letter - takes a guessed letter and updates the game state. Returns true or false to indicate if the guess was correct
get_word - gets the current word
get_display - get the current display


It did its magic and generated the code:

Hangman Plugin

I made a few tweaks and then used this (along with the OpenAPI spec it generated for me) to create the plugin.

You can find the code here: ChatGPT Hangman Plugin

I added an extra endpoint to the code for guessing the complete word. But I messed up the implementation and it didn’t work properly. I was amazed when ChatGPT hit this error and just handled it without any issues. Have a look at the output:

It just works

Does it really have a personality?

Very hard to tell to be honest. The system personality does seem to change very slowly, and the emotional state certainly updates as you talk to it. Maybe it’s alive…

Watch the video and see what you think:


Related Posts

It's Plausible, But Is It True? - Here's a wild ride with ChatGPT and other large language models! They're ace at cooking up plausible-sounding text, but they're not always the best when it comes to spitting out the truth - they've got a funky relationship with facts. One research paper showed they can come out with believable but totally fake answers to seemingly straightforward facts. But when I messed around with various models, there were a few discrepancies. Some got it right or plausibly wrong, but we humans are pretty gullible and tend to believe plausible-sounding info. So when it comes to using ChatGPT, make sure you fact-check, stay away from complex reasoning tasks, and don't try and solve maths problems - seriously, just stop. But it's a cracking tool for generating marketing copy, code (with a fact-check), finding bugs, and getting the creative juices flowing. Stay tuned though - tech's ever-evolving and these intelligent library computers aren't going away anytime soon!
Why does ChatGPT make mistakes - a layman's explanation - In this enlightening blog post, I dive into the tantalizing world of ChatGPT and Large Language Models. Clarifying its operation, I unlock this enigma by comparing its mechanisms to a simple language model. However, challenges arise due to the explosion of possible token combinations, leading to an inherent 'lossy' compression of our world's vast information. Surprisingly, even with such compression, these models can mimic human language in a compelling manner. I also investigate possible strategies to optimize this amazing technology - including zero-shot learning, one-shot learning, few-shot learning, and fine-tuning. Entering the era of prompt engineering and larger models, we're stepping into a thrilling future, so buckle up, folks!
OpenAI Keynote - According to GPT4 - OpenAI's new offerings are game-changers in the AI field, with personalized ChatGPT variants (GPTs), a developer-friendly Assistance API, GPT-4 Turbo with enhanced capabilities, and novel voice and vision modalities. These innovations, bolstered by a strategic partnership with Microsoft, pave the way towards sophisticated AI agents and a future brimming with untapped human potential.
Using ChatGPT As a Co-Founder - In the quest to explore the capabilities of ChatGPT, I decided to utilize it as a startup brainstorming partner. From product description to building the product on AWS and GCP, crafting an elevator pitch, highlighting the ideal customer profile, sketching a business plan, and even generating a logo idea, ChatGPT has been surprisingly helpful and creative. We even explored potential team structures that can bring the business to life! Turns out, ChatGPT might just be the co-founder you never thought you needed.
Do you need a ChatGPT plugin? - We've seen two major shifts in technology trends with websites and mobile apps - now there's a third one rearing its head. OpenAI's ChatGPT with plugins is on the cards and you better be ready for it. In the midst of fumbling for answers to whether we need these plugins or not, let me reassure you that it's not too complex. Far from requiring a squad of specialist developers, all you need to know is how to make an API to create a plugin for ChatGPT. Yes, there are potential pitfalls around security and data protection, but with the right precautions, you will be fine. So, dear developer, explore, experiment and gear up for this exciting phase!

Related Videos

AI Powered Raspberry Pi Home Automation - Is this the future? - Witness the power of ChatGPT controlling home automation lights through a Raspberry Pi, making life easier with plugins. Delve into the fascinating world of large language models, redefining interactions with APIs.
Automating Blog Improvements with AI: Summaries, Tags, and Related Articles - Learn how to use ChatGPT to enhance your blog's homepage, create summaries and tags, find related articles, and generate post images with ease, leveraging AI to save valuable time and effort.
ChatGPT vs Stockfish: Can an AI Plugin Improve its Chess Game? - Watch as chat GPT takes on Stockfish, a world-class chess engine, in a thrilling match! See how GPT utilizes a chess plugin to improve its gameplay and compete against the best.
Unlocking the Power of ChatGPT: Effortlessly Generate Arduino Code for Your Projects! - Witness ChatGPT's impressive potential for generating working Arduino code, as demonstrated in a step-by-step ESP32-based project utilizing a potentiometer and dot star LED.
Build Your Own Alexa with ESP32 & Wit.ai: Step-by-Step Tutorial - Unleash your inner tech genius and build your own Alexa using an esp32 and Facebook's Wit.ai service. Learn how to create an Alexa system by utilizing TensorFlow for wake word detection, intent recognition with Wit.ai, and putting it all together on an embedded device.
Chris Greening


atomic14

A collection of slightly mad projects, instructive/educational videos, and generally interesting stuff. Building projects around the Arduino and ESP32 platforms - we'll be exploring AI, Computer Vision, Audio, 3D Printing - it may get a bit eclectic...
