Digits and Andromynas

AI and the real world still don't talk to each other

The more sophisticated technologies become, the more evident their inability to function in a reality full of incompatibilities and ecosystems that do not communicate with each other.

A boy using a smart speaker.
21/09/2025
4 min

We live surrounded by promises about how artificial intelligence (AI) will revolutionize our lives. Every day, new chatbots appear that claim to understand us better than we understand ourselves, virtual assistants that promise to anticipate our desires, and algorithms that will supposedly make our lives easier than we imagined. The rhetoric is seductive: we are on the threshold of an era in which machines will think for us and automate even the smallest details of our daily lives.

The reality, however, is much more prosaic. It is true that AI is making its way into multiple areas of our lives, but with one fundamental limitation: it only works relatively well in those activities and services that are developed entirely in the form of software and data. And even in this seemingly favorable territory, the limitations are more noticeable than they would have us believe. If I may, I will give you two personal examples.

First, although I no longer contribute to Catalunya Ràdio's news services, and not by choice, every morning I listen to the newscast on Catalunya Informació—the current radio branch of the new 3CatInfo—using the various connected speakers I have scattered around the house. Since I'm not at all interested in sports, I always—always—give the order to turn off the device or switch to background music during the last five minutes of the program, when the sports news comes on. This ritual has been repeated every day, without exception, for more than two years.

A true smart speaker should be able to detect this very obvious pattern of behavior and put it into practice automatically. After all, we have assistants that boast about knowing our musical tastes, that claim to detect when we're in a good mood, or that suggest movies based on highly sophisticated algorithms. But none of my speakers—neither Alexa nor Google Home—is capable of learning this simple preference and executing it autonomously. AI, it seems, has a surprisingly selective memory.

If the limitations in the purely digital realm are already evident, AI's disconnection from the physical world is even more blatant. This is illustrated by my second example: the difficulty of creating a seemingly simple automation, namely adjusting my smartwatch's alarm according to the early-morning weather forecast.

I usually go out for a run on the paths around my house at about 7 a.m., aiming to be sitting at my computer to work by 9 a.m. To avoid disturbing the person sleeping next to me, I wake up with the silent vibration of my connected watch. Some mornings, though, if I look out the window and see it's raining, I go back to bed for another hour. But going back to sleep after being woken is not the same. What I want is for the watch to wake me at 7:00 only if no rain is forecast for the next two hours, and to let me sleep until 8:00 if I can't go out.

On paper, this could be automated by combining existing technologies: weather forecasting, data analysis, and device control. Nothing beyond the reach of current AI. But in practice, it's not that simple.

Apple's problems

In the Apple ecosystem, the situation is disappointing. Despite the supposedly seamless integration between iPhone and Apple Watch, such automation is impossible. The iOS Shortcuts app can check the weather and even create custom voice commands, but it lacks the ability to set alarms automatically for "security reasons." The result? I could receive a notification on my iPhone recommending an early alarm, but I'd have to keep an eye open to set it manually. So much integration for nothing.

Android, being more open, offers more possibilities, but they're not simple. The automation I need only works with Samsung's Galaxy Watch 8 and requires me to install and pay for several apps: Tasker (€3.49), AutoApps (free but required as a license manager), and AutoWear (€1.69). But this is just the beginning of the ordeal.

The actual setup is a maze. First, I have to obtain a free OpenWeatherMap API key and locate the exact GPS coordinates of my house. Next, in Tasker, I have to create a "profile" that runs every day at 6:45 AM, schedule an HTTP GET action that queries the aforementioned weather API using a complex URL containing latitude, longitude, and an authentication key, then write JavaScript code that parses the JSON response, determines whether the chance of rain is above 50%, sets variables to calculate the appropriate alarm time (7:00 AM if it isn't expected to rain, 8:00 AM if it is), and finally sends the command to the watch via AutoWear to create the alarm.
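To make that chain of steps more concrete, here is a minimal sketch of the decision logic, written in TypeScript rather than as an actual Tasker JavaScriptlet, and assuming OpenWeatherMap's free 5-day/3-hour forecast endpoint, which reports a probability of precipitation per interval; the coordinates and API key are placeholders.

```typescript
// Minimal sketch of the alarm-decision logic, not a ready-made Tasker task.
// Assumes OpenWeatherMap's 5-day/3-hour forecast endpoint; LAT, LON and
// API_KEY are placeholders to be replaced with real values.

const LAT = 41.4;            // placeholder latitude of the house
const LON = 2.1;             // placeholder longitude of the house
const API_KEY = "YOUR_KEY";  // placeholder OpenWeatherMap API key

async function pickAlarmTime(): Promise<string> {
  const url =
    `https://api.openweathermap.org/data/2.5/forecast` +
    `?lat=${LAT}&lon=${LON}&appid=${API_KEY}`;

  const response = await fetch(url);
  const forecast = await response.json();

  // Each forecast entry covers a three-hour window and includes "pop",
  // the probability of precipitation (0 to 1). The first entry is enough
  // to cover the two hours after a 7:00 wake-up.
  const rainChance: number = forecast.list?.[0]?.pop ?? 0;

  // More than a 50% chance of rain: sleep until 8:00; otherwise wake at 7:00.
  return rainChance > 0.5 ? "08:00" : "07:00";
}

pickAlarmTime().then((time) => console.log(`Set the watch alarm for ${time}`));
```

In the actual automation, this logic would run inside the Tasker profile triggered at 6:45 AM, and the chosen time would be passed to AutoWear to create the alarm on the watch.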

This whole process involves navigating dozens of cryptic menus, enabling special system permissions (Accessibility, Device Management), disabling battery saver for all the apps involved, setting up Tasker global variables and logical conditions, and crossing my fingers that the Bluetooth connection between the phone and the watch holds up. And even then, if the watch is from another brand, such as Withings, Garmin, or even an older Samsung, you can forget about it: each ecosystem is a bubble, unable to communicate with the others.

This technological labyrinth reveals a frustrating paradox: the more sophisticated technologies become, the more evident their inability to function in the real world, full of incompatibilities, differing standards, and ecosystems that don't talk to each other. AI can write poetry, generate spectacular images, and hold seemingly coherent conversations, but it fails miserably when it comes to interacting with devices outside its own walled garden.

The result is that, despite having assistants that claim to understand natural language and anticipate our needs, we are unable to program basic tasks without becoming systems engineers. AI can simulate human conversations with astonishing fluency, but it can't set an alarm based on the weather forecast if the user doesn't master concepts like REST APIs, JSON parsing, conditional programming, and device-to-device communication protocols.

Naturally, my two examples are just first-world problems, entirely secondary. But while tech companies continue to sell us the promise of a more comfortable and automated life, an artificial intelligence that genuinely improves our lives without complicating them is still far from reality. In the meantime, I'll have to keep turning off my speakers manually when the sports news comes on and rely on the previous evening's weather forecast to set my alarm.
