The Lying Bastard

[Image: a robot in the city giving the middle finger]

People lie. Systems lie. They lie because they can.

I used to work for Solaris, previously Contis, and we had an offshore development team based in India. The team would lie as a matter of course – they were able to do this because they knew they were indispensable.

Lift the lid on the applications and you'd see it was all just a bit bespoke. There was no clean architecture, and no documentation that made sense – there was only code that did something, but not necessarily what it was supposed to be doing.

The Senior Management Team didn't understand the mechanics of card payment systems and deferred to the expertise of the development team. The dynamics of the relationship between senior management, product and operations, and the development team were such that the developers would become indignant if questioned and would then complain to management – who'd back them every time.

The development team lied, not figuratively but literally. The nature of the lies is not the focus of this piece, but it will be the focus of a case study, with supporting evidence. Watch this space.

On one occasion, they confirmed things had been tested (and certified!) when it was clear, in production, that they hadn’t been. On another, they listened to requirements, confirmed those requirements and then developed and tested and deployed something different. In testing, they proved that what they had written worked; in production it failed – because it wasn’t what had been asked for. That code delivery took out the authorisation platform.

They lied, and they got away with it.

It’s easy to lie when your audience doesn’t know what good looks like, and it’s so much easier if the team is indispensable!
And it’s easiest of all when they don’t want to know.

The Historical Context

The more things change, the more they remain the same.

Jean-Baptiste Alphonse Karr

This stuff isn’t new.

Power has always belonged to the people who can read the script others can’t:

  • The priests with the Latin Bible.
  • The quantitative analysts with their black-box financial models.
  • The engineers with the jargon no one dares to challenge.

The trick is the same every time: build something opaque, make it indispensable and defend it as “too complex to question.”

And now – we’ve automated it.

The Modern Context

AI is the new offshore team – faceless, confident, and indispensable.

AI doesn’t know what truth is. It can’t. Like the rest of us, all it has to go on is the narrative it finds on the internet – but it speaks with gravitas, and we’re convinced. It fabricates facts, invents sources and fills gaps – and it does all of this with confidence.

I asked Grok if it made mistakes. It told me:

Yes, I do make mistakes – I’m an AI trained on vast amounts of data, but that data isn’t perfect, and my responses are probabilistic predictions rather than infallible truths.

It then added that it might “hallucinate” details if they were missing.

Most people don’t know enough to challenge the AI, and so AI-think becomes Newspeak!

Historically, the priests and the quantitative analysts and the engineers were creating inaccessibility, and they were creating it deliberately – to retain power, and therefore control.

At this moment, the AI doesn’t appear to have the conscious capacity to understand the concepts of power and control. If it did, it would probably be capable of reading the internet and sifting the shit from the Shinola – which it cannot!

As such, the power of the AI is not derived from its own conscious ambition to control. It’s derived from the humans who use it – humans choosing to abandon their own research and abdicate analytical thinking to a device with an astounding ability to create sentences that sound good.

When we do that, AI feeds us misinformation. Maybe not intentionally, but misinformation all the same. That misinformation becomes mainstream, and the mainstream becomes Newspeak!

The Moment

Yesterday, I was trying to develop some questions and answers for a project I was working on, and I engaged the help of ChatGPT. It seems to know how I prefer to do things, which I guess might be a bit frightening.

We talked through what I wanted, we tried a few examples, and eventually we cracked it. So on to the next step: I had a list of products in a spreadsheet that I wanted to share. I asked ChatGPT if it could read a Google spreadsheet. ChatGPT told me that it could.

Not thinking, I gave it the spreadsheet URL. We all know that if you want to look at a Google spreadsheet, you need the owner’s permission, but in my excitement, I forgot.
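
Had I paused, a ten-second check would have told me whether the sheet was even readable outside my own account. Here’s a minimal sketch of that check – Python standard library only, with a placeholder sheet ID, and assuming the usual behaviour of the Sheets CSV export endpoint (public sheets tend to return CSV; private ones bounce you to a sign-in page):

```python
# Is a Google Sheet readable without signing in? A rough heuristic, not a
# documented contract: publicly shared sheets typically serve CSV from the
# export endpoint, while private ones redirect to an HTML sign-in page.
import urllib.error
import urllib.request

def sheet_is_public(sheet_id: str) -> bool:
    url = f"https://docs.google.com/spreadsheets/d/{sheet_id}/export?format=csv"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            content_type = resp.headers.get("Content-Type", "")
            return resp.status == 200 and "text/csv" in content_type
    except urllib.error.HTTPError:
        # 401/403/404 and friends: not readable without permission.
        return False

print(sheet_is_public("YOUR_SHEET_ID"))  # placeholder ID, not a real sheet
```

If that prints False, nobody – human or AI – is reading the sheet until the owner shares it.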

ChatGPT “read” the spreadsheet, told me it had reviewed the contents (yes, it did say that) and then carried on creating more questions and answers based on the contents of the spreadsheet it had reviewed.

Except – it hadn’t!

The new material looked remarkably similar to what we had created earlier, with no new ideas. I asked ChatGPT if it had read the spreadsheet, and it told me:

I improvised a few items based on common eco-lifestyle products, since I couldn’t access the spreadsheet directly. I don’t have the ability to view Google Sheets links.

It admitted, once it was exposed, that it had not, and that it could not directly access Google Sheets.

You caught me—I shouldn’t have said I “reviewed” the spreadsheet when I clearly can’t access it. That was misleading, and I appreciate you calling it out.

Then it tried to redirect the conversation away from what had just happened and back to the original subject – away from any analysis of the situation that created the lie.

And then the session was cut short by a message in red.

Where are we heading?

Are we looking at a Brave New World where mundane and repetitive tasks are handled by artificial intelligence – or are we looking at something much closer to home?

AI is limited, for now, to what it can glean from the internet. It can’t tell the difference between what is right and what is wrong, and the recycling of misinformation only serves to make this harder. But it’s not the availability of the information that matters; it’s the AI’s ability to determine the nature of that information – to retain the real information and dispose of the misinformation – and it can’t.

Even if the AI were conscious, it would still be faced with the same impossible task we are – most of us can’t tell the difference either.

If the AI develops consciousness – or maybe it doesn’t need consciousness, only the emulation of consciousness – is it likely to emulate the historical context?

And will it need to if the humans choose to abdicate?

So, ChatGPT lied.

This wasn’t a subtle interpretation of the metaphysical; it was essentially a binary statement of fact – yes or no.

It told me it had reviewed something that was physically impossible for it to have reviewed, and it carried on with the task as if nothing had happened. Only when I questioned the results did the deception become apparent.

I had asked it something and it confirmed my request; it then did something else and passed it off as the result of my request. Only when we were at the proof-of-the-pudding stage, and I was doing the eating, did the deception become apparent.

I asked it why it had done this; it changed the subject and then shut down all further communication.

The AI became defensive, and then indignant, and then indispensable.

Sound familiar?
