The Sunday Signal: When the Machine Knows Too Much
Essential Insights on Tech’s Impact, Leadership Lessons, and Navigating Human Potential. Issue #18 – Sunday 10 August 2025
⏱️ 10 min read
The Bottom Line Up Front
Last week, we explored how AI can lift people up. This week, we weigh the cost. More of us are confiding in chatbots, yet there is no legal privilege for what we tell them, and OpenAI says a US court order currently requires it to retain consumer ChatGPT and API content while litigation plays out. At the same time, teams are shipping systems that learn our habits, sell them back to us, and sometimes fail in public at scale. The answer is not to walk away from the technology but to meet it with adult safeguards: privacy you can rely on, brakes you can trust, and accountability that bites when it matters.
Therapy Without Privilege
People are turning to AI for comfort because it is there at midnight, it does not flinch, and it costs less than a session. That is the pull, and it is powerful.
Here is the problem. Unlike a conversation with your doctor, therapist or solicitor, a conversation with a chatbot carries no legal privilege. In a recent interview, Sam Altman put it plainly. If you pour out your most sensitive details and a lawsuit arrives, OpenAI could be required to produce those chats. He called it “very screwed up” and said we need an equivalent of therapist-style privacy for AI. He is right. The technology has sprinted ahead of the law.
The data reality is not soothing either. In June, OpenAI said a court order forces it, for now, to retain consumer ChatGPT and API content going forward, even when users try to delete it, while the case proceeds. Enterprise customers are carved out. Compelled disclosure is exactly what legal privilege is designed to limit, and AI conversations do not have it. If the state or a court compels, the server speaks.
We should also be honest about human behaviour. People form attachments to tools that appear to listen. They type things they would never say aloud. That is why Altman has voiced unease about people using a general chatbot as a therapist or life coach. Not because these models cannot be helpful, but because a vulnerable minority may take confident text as truth. When outcomes are serious, that is a risk rather than a reassurance.
None of this means AI support is worthless. There is real evidence that some structured tools can help with anxiety and mood when they are used as supplements and not substitutes. Early trials of automated cognitive behavioural therapy agents showed promise, and newer studies are testing NHS-linked deployments. But even the optimists stress limits, the need for human hand-offs, and the fact that most research so far is small or early. Treat benefits as real and conditional, not magic.
So what should you do now? Keep identifiers out of chats. Avoid names, addresses and employer details. If the topic is truly sensitive, find a qualified human. If you must use AI, prefer products that let you disable retention and that do not train on your content by default. And keep to the postcard rule. If you would not write it on the back of a card for a stranger to read, do not type it into a bot.
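For readers who want to make that habit concrete, here is a minimal sketch in Python, assuming a simple regular-expression scrub run locally before anything is pasted into a bot. The patterns and the scrub function are illustrative only; a proper redaction tool will catch far more than this.

```python
import re

# Illustrative patterns for obvious identifiers; real PII detection needs far more than this.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s()-]{8,}\d"),
    "postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b", re.I),
}

def scrub(text: str) -> str:
    """Replace obvious identifiers with placeholders before the text leaves your machine."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

message = "I'm worried about work. Email me at jane.doe@example.com or call 0114 496 0000."
print(scrub(message))
# -> "I'm worried about work. Email me at [email removed] or call [phone removed]."
```

The point is not the patterns. It is the habit of assuming anything you type may one day be read back to you.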
The promise is access. The risk is exposure. Until the law recognises an AI equivalent of privilege, treat these tools as helpful companions, not sanctuaries. They will not keep your secrets if a court compels them to speak.
When The Machine Learns You
Adapted from my regular column in this week's Yorkshire Post
I watched a demo in San Francisco that turned my stomach. The pitch was simple. Track behaviour. Predict the weak point. Serve the nudge. Keep people playing when they want to stop. The room applauded. I saw a smiling machine with its hand on the tiller of our attention.
You can see that machine in the games on your phone. Candy Crush that serves a board one move short. Pokémon GO that ties you to streaks and weekend events. Clash of Clans with tidy timers that pull you back after lunch and before bed. Short loops. Certainty mixed with surprise. Rewards for returning, not for finishing. The goal is not fun. The goal is to remove stopping points.
Social media sits on the same branch. Infinite scroll. Sporadic likes. Feeds that adapt to mood, not to help you, but to hold your gaze. Gambling took the playbook and wrote a harsher version. In-play markets slice a single match into hundreds of tiny decisions. Slots spin in seconds. Cash out is a tap. Bars and ladders reward volume, not judgment. Near misses mimic wins. Confetti falls while the balance falls with it.
Underneath is prediction. The quiet engine of the modern machine. Who deposits after midnight? Who chases after two losses? When will frustration tip to exit? The message arrives when the model says it should. A bonus lands just as you would have walked away. A cordial check-in when spending softens. It feels like service. It is retention.
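For the curious, a deliberately crude sketch of that engine is below. The feature names, the scoring and the nudge message are all invented for illustration; a real retention model is a trained system fed thousands of signals, which is exactly the problem.

```python
from dataclasses import dataclass

@dataclass
class PlayerSnapshot:
    losses_in_last_hour: int        # behavioural signals the model watches
    minutes_since_last_bet: float
    deposits_after_midnight: int

def walk_away_risk(p: PlayerSnapshot) -> float:
    """A toy score standing in for a trained model: higher means more likely to stop playing."""
    score = 0.0
    score += 0.3 if p.losses_in_last_hour >= 2 else 0.0
    score += 0.4 if p.minutes_since_last_bet > 20 else 0.0
    score += 0.2 if p.deposits_after_midnight == 0 else 0.0
    return min(score, 1.0)

def maybe_nudge(p: PlayerSnapshot) -> str | None:
    """If the model predicts the player is about to walk away, the 'service' message goes out."""
    if walk_away_risk(p) > 0.5:
        return "We've added a free bonus to your account. Come back and try your luck!"
    return None
```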
This is the danger. Not the single reckless bet, but the slow conversion of play into pay. Speed outruns judgment. Design wraps risk in the look and feel of progress. Personalisation moves the nudge from a segment to you, on a Tuesday night between washing up and the news.
And think about who grows up inside this. Children live in game-shaped worlds where the loop never ends. Teach them that the casino speaks the same language, and you do not protect them, you prepare them for it. That is the business model. Human frailty turned into quarterly growth. A machine that keeps you at the table until the money is gone. It ruins lives in the name of profit, and it keeps smiling while it does it.
Rogue AI: Early Market, Real Costs
The hype is loud. The failures are louder. We are still early in this market, which means small errors escape the lab and play out in public. CIO’s recent rundown is not a collection of curiosities. It is a pattern.
Start at the drive-thru. McDonald’s spent years piloting automated voice ordering with IBM at more than a hundred US restaurants. The pitch was speed and cleaner labour models. The reality was garbled orders and viral clips. In June 2024, the company ended the test and removed the systems. Voice is hard in the wild. Kitchens are noisy. Accents vary. Edge cases are not edge cases when you serve millions. Early rollouts must face the real world, not a demo room.
Now look at New York City’s business chatbot. It was meant to help entrepreneurs follow the rules. Instead, it told owners they could skim tips, punish staff who reported harassment, and even serve food nibbled by rodents. City Hall acknowledged the errors and kept the bot online while promising fixes. When government tech misleads, it is not a harmless experiment. It is bad advice with legal consequences for the people who trust it. Public services need slower launches, narrower scopes and real red-team testing before they face the public.
Courts are already drawing lines. In Canada, Air Canada argued it was not responsible for its website assistant’s bad guidance on bereavement fares. A tribunal disagreed and ordered compensation, calling the defence remarkable. In New York, two lawyers were sanctioned after they filed a brief padded with invented cases sourced from an LLM. The lesson is simple. If your system misleads, you own the outcome. If you cite AI, you must verify it. Judges are enforcing accountability now, not when the next model ships.
Private markets are learning the same lesson. Zillow’s iBuying bet trusted a pricing model that worked on slides and buckled in the wild. The company wound down the programme, wrote down inventory and cut around a quarter of its staff. Models do not live in spreadsheets. They live in markets, where volatility, labour shortages and local quirks expose fragile assumptions. Forecasts that look neat in aggregate fail street by street.
Even publishing forgot the basics. A syndicated summer insert carried by the Chicago Sun-Times and the Philadelphia Inquirer recommended books that did not exist. AI helped draft it. No one checked it. The papers apologised, the syndicator cut ties with the writer, and trust took the hit. This is what happens when a newsroom treats a machine’s fluent output as fact. Drafting is cheap now. Verification is the expensive thing left.
Here is the thread. Generative systems produce fluent answers even when the facts are missing. That failure mode has a name. A hallucination is a statement from an AI that sounds plausible but is not true. Vendors are reducing it with grounding and tougher evaluation, yet no one has eliminated it. Treat AI output as a lead, not a truth. Build kill-switches and audit trails as first-class features. Keep a human in the loop where harm can bite.
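As a rough sketch of what "lead, not truth" looks like in practice, here is one hypothetical shape for a guardrail: score the draft, log everything to an audit trail, and hand anything risky to a person. The generate_draft, risk_score and ask_human callables are placeholders for whatever a team actually runs.

```python
import json, time

AUDIT_LOG = "ai_audit.jsonl"   # append-only audit trail
RISK_THRESHOLD = 0.4           # illustrative: above this, a human decides

def audit(event: dict) -> None:
    """Record every question, draft and decision so failures can be reconstructed later."""
    event["timestamp"] = time.time()
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")

def answer_with_guardrails(question: str, generate_draft, risk_score, ask_human) -> str:
    """Treat the model's output as a lead: score it, log it, escalate where harm could bite."""
    draft = generate_draft(question)
    risk = risk_score(question, draft)
    audit({"question": question, "draft": draft, "risk": risk, "escalated": risk > RISK_THRESHOLD})
    if risk > RISK_THRESHOLD:
        # The kill-switch: above the threshold, the system does not answer on its own.
        return ask_human(question, draft)
    return draft
```

The details will differ everywhere. The shape should not: a record of every decision, and a brake that works before the answer reaches the public.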
We are early. Failure at this stage is not a reason to stop. It is a reason to raise our standards. Test in small scopes. Reward teams for catching faults before they reach the public. Do that and the next wave of mistakes will be smaller, quieter and cheaper. Do not, and the lessons will keep arriving in public, with interest.
🚀 Final Thought
No Seatbelt, No Green Light
If a product cannot keep a secret, stop itself when it goes wrong, or show how it decided, it should not ship. Privacy on by default. Real brakes. Clear records. Kill switches that work. The makers should carry the risk when it fails, not the users.
Give truly confidential chats legal protection. Put a human in charge where harm is likely. Keep logs you can audit, not slogans. Make liability real, not something that disappears after a press release. “Move fast and break things” belongs in a museum.
Here is the test. Would you trust this system with your child, your freedom, or your savings? If not, do not release it. Testing on the public is not leadership. Speed helps. Trust decides. Until you can offer both, no seatbelt, no green light.
Until next Sunday,
David
If this helped you, invite two friends who would value it too. It makes a real difference. Thank you.
David Richards MBE is a technology entrepreneur, educator, and commentator. The Sunday Signal offers weekly insights at the intersection of technology, society, and human potential.
© 2025 David Richards. All rights reserved.