I noticed it particularly when I tested the new video feature in Gemini Live this week, but the phenomenon is not really new. The basic question is what we want from AI: credible answers, or something else.
When I started Google's Gemini Live for the first time, the first thing it said was "what a nice living room you have". Yes, of course, "living room" was correct, but a style assessment was nothing I had asked for. Gemini then continued: "the curtains match the flowers in the window". Without considering either myself or Gemini an interior design expert, I can say that that claim did not strengthen my confidence in the AI assistant. It felt mostly like baseless flattery.
It is clear that the AI has been trained not to say anything offensive or controversial. Better factual errors than insults, apparently. As a journalist, I have learned to question people's answers as an interview technique, to test their conviction: "Couldn't it be like this instead, or are you absolutely sure of your position?"
When I ask Gemini about the specifications of one of the phones I recently tested, I quickly get an answer, one whose formulations even seem taken directly from the text I wrote and published on Mobil.se. So those specifications are correct. Yet when I falsely claim to Gemini that what it just said is wrong, it immediately apologizes "for the incorrect answer", which was in fact right, and then explains the specifications in a different way.
I don't want an AI that just agrees with me all the time. I would rather it check its facts more carefully and stand by its sources. Completely spineless flattery leads nowhere.
This makes me think of the ads I've been served on social media for apps that invite me to get my own AI partner. The arrangement feels ominous. I get to choose whether it should be a guy or a girl. Fine. It gets customized to me. OK. And then, apparently, the idea is that we converse. I have not taken them up on the offer, but whether you want a discussion partner, just feel a little lonely, or have other needs, I fear these AI apps are the wrong path.
Thanks for the offer, AI, but I don't like you that way.