On the Usage of AI
Owen Silva - July 25th, 2025

Perhaps Alec Watson said it best when he described the phenomenon of an ‘offloading of thought’.
In the name of convenience, it would appear we are collectively more than willing to give up our agency, privacy, and dignity. What am I talking about?
Recently, a non-technical friend of mine, attempting to fix issues with their internet, spent $200 on a number of products an AI assured them would do just that. Spoiler alert: they did not (but that is beside the point).
Later that same day, I overheard someone discussing vacation destinations and declaring they would ‘just ask AI’ where to go.
These are all seemingly innocent use cases - after all, the same questions have been answered by technology (specifically by Google Search) for years, albeit a little bit less efficiently.
However, hearing this rhetoric today has me incredibly concerned.
If AI fulfills roughly the same purpose that a Google Search once did, why is its rapidly increasing ubiquity so disquieting? The answer, as far as I can see, is that the two are fundamentally different.
Yes, pure Google Search is still an algorithm (and AI at that - no, not generative AI, but sturdy old machine learning), and one whose incentive structure runs counter to the unadulterated pursuit of knowledge.
But it is still nothing more than an aggregator.
That is the key: One still needed to click through the results, to rely on individual, independent websites for the actual information. While Google (or any search provider) could control what results we did or did not see, they still lacked some power - they could not yet control the content.
Out of necessity, we still had the privilege of interpreting information on our own. However, today, when we turn to AI to answer those same questions, we are allowing a corporation to think for us. The result of this increasing reliance? Google, Microsoft, OpenAI (same thing), [insert corporation here], have achieved total dominion over the information we see.
That is what is so concerning about this trend: rather than researching, considering a variety of sources, and coming to our own conclusions, we are being outright told what to think. And we are all complicit in this - because it’s so convenient.
Seemingly never before have so few corporations had such control over how we perceive the world. To those saying this has always been the case - perhaps even pointing to the past, before the internet, when information was consumed in print and dictated by a handful of media conglomerates - my response is only this: past generations did not need to engage with that content.
They had a choice about which newspaper to read, if any at all. Today, however, we take away our own choice by conditioning ourselves to instantly turn to AI whenever we have the slightest curiosity.
And who can blame us? AI is the ‘hot new buzzword’ that has companies and investors pushing it into every product. The news is being summarised by AI, our communications are being both penned and digested by AI, our phone calls, our calendars, our photos - all of our intake of information about the real world is, for the most part, being consolidated and controlled by these companies. I am not claiming that Meta, nor Google, nor OpenAI, is using this control to subtly influence you (yet), but the possibility - the capacity - for them to do exactly that is terrifying.
I think that should startle all of us. There are a myriad of reasons not to use (generative) AI, each of which warrants its own discussion: its outsized environmental impact, its theft of intellectual property, its use to justify invasive data collection, how it is destroying academia, and how we do not truly learn when using it. But I urge you to avoid it at all costs, at least for this reason.
And, to be clear, my issue here is not with the technology itself. I do find generative AI revolutionary in its capacity to produce comprehensible output from natural language input. I do not doubt that it is the future, whether I like it or not. My issue is with the trend forming around it.
I am troubled by how normalised ‘offloading thought’ is becoming, and by how every company is shoehorning unwanted AI integrations into its products to please shareholders. Most of all, I am worried about our willingness as a society to accept all of this. Make no mistake: this tendency is not limited to generative AI.