By Quinten Miller

Will AI have the last word?

While I was out walking the other day, a random thought entered my consciousness: will AI always have the last word?


Reflecting on my interactions with different chat AIs, I realised that they always respond; they always have the last word. That's by design: you ask a question and it answers. Still, the thought stuck with me. Will AIs always have the last word, or will they evolve to mimic humans more closely, picking up on how we converse and, eventually, on conversational cues? As they evolve, they will need to consider more than just what they say, but also how they say it and when, or whether, to say anything at all.


But first, before the thoughts, here is how the conversation played out with a popular LLM chat. These chatbots are undeniably useful, and I know this is not what they are designed for, but that didn't stop me experimenting and having some fun. No AI chatbots were harmed in this experiment.

Again, I want to reiterate: it's not fair to expect this sort of behaviour from these chat AIs; it's counterintuitive to what they are designed for. However, as we adopt these tools more widely, it does raise the question of how we interact with them.


As humans converse, our exchanges ebb and flow, naturally reaching conclusions through both verbal and nonverbal cues. We learn not only what to say, but also when to speak up and when to hold back. When considering the underlying assumptions of chat AI programming, we see an expectation of someone (or something, in the context of an API) asking a question and the AI providing a response. However, upon reflecting on my experiences with various chat AIs, it became clear that there were moments where I anticipated a response, and others where my input could have neatly concluded the conversation.


As these tools evolve and personas are adopted, mimicking human behavior becomes a possibility. Imagine chatbots equipped to discern conversation endings through subtle cues like sentiment analysis, silence detection, or contextual understanding. It's important to note that the deployment of LLMs is a dynamic landscape. While browser-based, user-initiated agents exist, diverse use cases could see chat agents initiating conversations based on specific triggers. We can't even begin to imagine how chat AIs will be deployed 10 or 20 years from now.
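To make the idea concrete, here is a toy sketch of what "deciding whether to speak" could look like. Everything here is hypothetical (the `should_respond` function, the list of closing cues); a real system would lean on sentiment and context models rather than keyword matching, but the shape of the decision is the same: sometimes the right response is no response.

```python
# Toy heuristic (all names hypothetical): decide whether a chat agent
# should reply at all, based on simple closing cues in the user's last
# message. A production system would use sentiment/context models.

CLOSING_CUES = {"thanks", "thank you", "bye", "goodbye", "that's all"}

def should_respond(last_user_message: str) -> bool:
    """Return False when the message looks like a conversation closer."""
    text = last_user_message.strip().lower().rstrip(".!")
    # A question almost always warrants a reply.
    if "?" in text:
        return True
    # A short message built around a closing cue suggests the user is done.
    if any(cue in text for cue in CLOSING_CUES) and len(text.split()) <= 5:
        return False
    # Default: respond, matching today's always-answer behaviour.
    return True
```

The interesting design choice is the default branch: today's chatbots effectively hard-code `return True`, which is exactly why they always get the last word.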


Circling back to the thought that prompted this meander: AI will need to learn not only what to say, but also when to say it and, more importantly, when not to speak. What that looks like for different incarnations of chat AI will be interesting to see.

The future of AI chat holds immense potential, and evolving beyond mere content generation is crucial. By mastering the art of timing, AI chatbots can transition from scripted responses to natural, human-like interactions. This necessitates advancements in areas like sentiment analysis, context awareness, and even proactive conversation initiation in specific scenarios. As AI chat continues to integrate seamlessly into our lives, its ability to "read the room" will become paramount in fostering trust and creating truly engaging experiences.


About me: I'm Quinten, Innovation Director at SixPivot. I work with software teams and business owners every day. Let's think different and creatively about software development.




