AI is probably Evil. No one cares.
AI is a looming inevitability. Every developer I've spoken to recently has AI lingering in the back of their mind like some waking nightmare. As developers, however, there's little room for overt skepticism. We have a job to do, and that job is understanding and building tech. No one wants to come out and say "I'm afraid that AI will take my job", but that fear is palpable, especially in tech.
Let's talk Efficiency
Every time I see an article or comment claiming that AI won't cost anyone their job, that it will merely "make coders more efficient", I want to scream. Well, not really, more like a psychic roar, because it's a really asinine idea at face value for a mess of reasons. First off, no one cares about the scientific definition of the word 'efficiency', but let's think about that definition for a moment:
the ratio of the useful work performed by a machine or in a process to the total energy expended or heat taken in.
It's not that complicated. A process is more efficient if it uses less energy relative to the work produced. By that definition, AI is among the least efficient resources invented. You can't swing a digital hammer without finding articles about the vast, vast amount of resources required to train models at the scale of ChatGPT or Gemini. Huge sums of energy, water, physical elements pulled from the planet to create armies of GPUs, all of which emit unending rivers of heat.
When you ask GPT for help bootstrapping an application, it's not efficient. It's the very opposite of efficient, because AI tools don't care about the ratio of energy compared to the amount of work, they care about speed.
The human brain is orders of magnitude more efficient than any computing system we've built. Our asynchronous, self-healing neural nets can be powered by freakin' corn for a tiny fraction of the energy cost of generative AI. Deploying these tools makes your coders vastly less efficient according to the technical definition of the word. As I said, no one gives a shit about the technical definition of the word...but if you're going to pretend to be an objective scientist when asserting that AI isn't coming for our jobs, get your terms correct.
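To put rough numbers on that, here's a back-of-envelope sketch in Python. The brain and GPU wattages are ballpark public figures, and the cluster size is an assumption I invented purely for illustration:

```python
# Back-of-envelope "efficiency" comparison: power burned to do cognitive work.
# All numbers are rough, public ballpark figures; the cluster size is made up.

BRAIN_WATTS = 20            # a human brain runs on roughly 20 watts
GPU_WATTS = 700             # one modern datacenter GPU draws around 700 watts
GPUS_PER_CLUSTER = 10_000   # hypothetical size of a training/inference cluster

cluster_watts = GPU_WATTS * GPUS_PER_CLUSTER
print(f"Cluster draw: {cluster_watts / 1e6:.1f} MW")
print(f"That's the power budget of ~{cluster_watts // BRAIN_WATTS:,} human brains.")
```

Whatever you think the cluster actually produces, that's the denominator it has to justify.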
Isn't AI Just a Fad?
Hang on...before we talk about how many people are going to be screwed out of a job, let's take a step back. Isn't generative AI just one of many fads that will go the way of NFTs and Blockchain? Sometimes there's this idea that technology has an inevitable vector given enough time and money. People say that about Apple's Vision Pro: sure, the first generation isn't there, but it'll get better. So too will AI.
If that's true, why has Google been working on self-driving cars since 2009? Even with well over a billion dollars invested in the project, as of 2024 it doesn't seem like self-driving cars will be ready for general release anytime soon. If anything, there's a good possibility that they will be scaled back, as the program has proven buggy and unpopular. It may be another twenty years before fully autonomous cars are really viable, if ever. My point is that tech isn't inevitable.
AI isn't a fad, though, because the first artificial neural networks date back to the 1950s. Machine learning has been around just as long. This isn't like NFTs, where the value proposition was shaky to begin with; these are tried-and-true concepts that have already had seventy years of iteration and use.
In 2019 (well before GPT's fame), three computer scientists were awarded the Turing Award for their work on deep learning networks: Geoffrey Hinton, Yoshua Bengio, and Yann LeCun. Now sometimes called the "godfathers" of AI (which is annoying in its own right, come on), two of these experts are now worried that AI won't be safe.
Consider Bengio's perspective written in 2024:
In summary, it appears that consciousness is physically-grounded and within the reach of science. The temptation to build it into AIs is strong, and it might even improve their abilities to the benefit of all. But consciousness is intrinsically intertwined with moral status in human psychology, and we need to tread both carefully and slowly before creating minds that are at once similar and alien to our own and could threaten our societies and our future.
His assertion is that consciousness (or something like it) is likely within reach of science. What I find interesting is this idea that improving the AI might "benefit all", because to me that seems a bit naive. If science can create a synthetic thinking being, as one of the experts in the field asserts is likely, what do you really think it will be used for? Solving the world's many problems...or making money for the corporations paying huge sums of money for the hardware?
Gee, I wonder.
The reality is that there are still firms making money off NFTs and Web3 shenanigans. Bitcoin is still a valued asset despite being worthless as a real currency. If the state of investor hype has proven anything, it's that it can lend nascent tech almost unlimited inertia so long as investors "see the potential". There are stories every day about generative AI disasters that are truly hilarious, but they also reveal how eager people are to use it in production, despite the risks and mishaps.
A Horrible Feedback Loop
Honestly, let's put aside the paranoia about world-ending or species-threatening AI behaviors. We've all seen at least one Terminator movie; we get the idea. Let's focus instead on the immediate effects of GPT-like capabilities on our society, because let's be realistic...the idea of generative AI is revolutionary. Consider the impact automation has had on industrial processes across the world in the last century. Machines that can automate physical tasks have absolutely changed the nature of our world.
Computer programmers have been useful because they can create even more abstract forms of automation dealing in digital data, and that has revolutionized the world in the form of search engines, social media, and every bit of internet-based automation we depend on. Now, AI 'promises' to take things even one step further, essentially allowing neural nets to "automate thought".
Think about the feedback loops working even now, while AI is relatively "primitive". Large tech layoffs mean more and more people pushing their code onto GitHub as samples, which means more data available for AIs to train against (legally or not), which means better AI tools and therefore more people looking for jobs.
Further, it's well known that big layoffs lead to more big layoffs, because corporations already can't be bothered to think for themselves. This "copycat effect" gets worse when you consider things like AI tooling, because as soon as one firm believes that AI tools are good enough that they can cut headcount, others will absolutely follow. They already are, like sheep with no shepherd, because that's how this thing called capitalism works.
If an AI can program, it can probably market, too. It's already doing that. If it can market, it can do customer support, and we all know how AI chatbots are being (infamously) pushed out even before they're ready. So...this same feedback loop can apply to industries beyond tech, and it will, eventually, if the tool is proven effective.
As firms cut head counts and deploy AI across-the-board because "everyone else is doing it" (truly, that is a powerful enough reason), what happens to the economy as a whole? Companies will race to cut costs and cut costs, therefore ensuring that no one can afford to buy, well, anything, and therefore putting more and more pressure on firms to cut more and more costs.
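Here's a toy sketch of that spiral, just to show its shape. Every parameter below is invented; this is a cartoon of the argument, not a forecast:

```python
# Toy model of the layoff/demand feedback loop described above.
# Every coefficient here is made up for illustration; it predicts nothing.

def simulate(quarters: int = 12) -> None:
    workers = 1_000_000   # employed workers in our pretend economy
    demand = 1.0          # consumer demand, normalized to 1.0

    for q in range(1, quarters + 1):
        # Firms cut heads in proportion to how weak demand looks...
        cut_rate = 0.05 * (1.1 - demand)
        workers = int(workers * (1 - cut_rate))
        # ...and demand falls as fewer people have paychecks to spend,
        # which makes next quarter's cuts even deeper.
        demand *= 1 - cut_rate * 0.8
        print(f"Q{q:02d}: workers={workers:>9,}  demand={demand:.3f}")

simulate()
```

The cuts start tiny and accelerate, because each round of layoffs weakens the very demand that justified the headcount in the first place.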
The entire thing seems like a big yarn of stupid, but that's...kind of the point. Humans have a bad, bad, bad track record of using technology wisely, and an even worse one of prioritizing short-term competition even when it leads, ultimately, to mutual ruin. It doesn't even matter if AI is actually good as a product; once the inertia starts flowing towards these tools, it can't be stopped.
But AI will create jobs!
Another tiring cliche I hear is that AI will create more jobs than it destroys. Okay, that's a nice thought...with absolutely no evidence behind it. It's also somewhat irrelevant, because the issue isn't job creation, it's job loss. It isn't as if the entire population is a fluid that will simply spill from one bucket into another as trends change, especially when AI has the potential to shrink so many buckets at the same time. It's hard to take estimates on this topic too seriously, but it also isn't a huge stretch to suggest it could affect as much as 40% of all jobs, as the IMF has estimated.
I also sometimes hear silly things like, "you will still find work if you're good!", which is true...but also not. How do firms know whether you're good or not? More and more hiring pipelines are turning to...you guessed it...AI! There are entire universes between a firm's perception of you (based on your resume and interview) and how good you actually are, and that universe only gets larger as automated tools are deployed.
Being skilled at your job doesn't make you safe, and the problem with that claim is that it implies everyone who gets cut was "bad". Also, if being "good" means being in the top 0.05% of applicants (it isn't that uncommon to see jobs with over 2,000 applicants), something is very wrong with the industry. The one-in-24 odds of the Hunger Games seem not so bad, eh?
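For the morbidly curious, here's the arithmetic, using only the numbers from the paragraph above:

```python
# Comparing the odds from the paragraph above. Nothing here is estimated;
# it's just the two figures already quoted, divided out.

applicants = 2_000
job_odds = 1 / applicants    # "top 0.05%" of a 2,000-person pool = one hire
tribute_odds = 1 / 24        # one Hunger Games victor out of 24 tributes

print(f"Job:   {job_odds:.4%}")      # 0.0500%
print(f"Arena: {tribute_odds:.4%}")  # 4.1667%
print(f"The arena gives you ~{tribute_odds / job_odds:.0f}x better odds.")
```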
These concerns about job loss aren't new, though, as this MIT article explores. However, there's an important point that is missed in cold economic analysis -- that there's a difference between unemployment and, well, crap employment. Economists do not care about that as a metric, and the entire way unemployment is measured is sharply criticized by many experts. It's not enough to say that "people will find work again" when one of the major issues with our economy is wealth inequality. The type of job matters; not every job provides an actual livelihood that empowers consumers to spend money.
It's a fact that wealth inequality has been climbing for decades, and it doesn't take a Nobel Prize economist to understand how tools like AI will lead to lower wages even as inflation crawls upward. Further, it's kind of silly to insist that just because similar fears around tech existed historically, the outcome will be exactly the same. That's not "economic science"; that's wild optimism that relies on the lazy implication that all technological advancement affects economies in the same way. It pretends that this advancement is nothing new and therefore its impact is predictable.
Frankly, this article hasn't aged well, and it was written in 2024! This idea that "we" choose the future of AI and how it impacts jobs is really strange and naive. How this tech is used won't be decided by democratic assent; it will be decided by employers, as recent and ongoing layoffs so easily prove. Capitalism is not a form of democracy, despite what some people insist. For someone so concerned about treating AI like a "magic genie", this assertion that job losses "won't be so bad" isn't based on anything material beyond an apples-to-oranges historical view.
No More Voice
There's another impending effect that AI will have on the way we perceive content: the ability to steal our voices. Eh, our metaphorical voices, because it can already mimic our physical ones. It's already a fact that anything you've ever posted to Reddit will be consumed by Google's AI in exchange for $60 million (since they own the content and you never did). This post will be consumed even though it's self-hosted, like it or not, because that's how these bots work. Anything you've ever written online has the potential to be eaten by AI and spat out the other end as it mashes your words through its algorithms.
This is an extra twist of the knife for anyone that posts about tech online. Articles are written for people, not robots, but now the robot can consume that knowledge and repackage it (sometimes for a fee). In essence, it can put people out of work...and rub salt in the wound by outright stealing anything they've ever written about tech, too. But good luck actually proving this murky new world of infringement, and even if you could...who knows how similar lawsuits will work out. You might be out of luck as AI scrapers claim the entire Internet is "fair use".
Why all this focus on jobs...?
Let's remember what not having work actually means in our society: it means you probably die. The nature of our system is that we need companies to pay us to do work, and that relationship has always been...well, conflicted, because obviously firms don't want to pay money out, they want to rake money in. It's also been a system of mutual benefit, though, because workers with solid paychecks are workers that can buy things.
This can easily become a snowball that keeps building until it's out of control. No firm really cares about the wider economy, they care about their survival and their bottom line. If everyone sheds "human costs" and shunts more work into AI, everyone will end up losing...even as corporations slash ever more head counts, they'll wonder why no one is buying their stuff.
As computer programmers, we're often supposed to be clinical and technical and embrace all tech because it's important we showcase this zeal to employers. Of course I want to build things using whatever x, y, z technology! However, at a certain point maybe we do need to take a moral stance and look to the future we want to build together. That future requires human beings, and sadly even that basic idea isn't a given, anymore.
Don't get me wrong, I'm not saying I wouldn't want to work with AI, machine learning or deep learning, given the opportunity. This is what makes having an opinion about the topic so dangerous -- an employer might read this and think, "this guy has too many opinions, and I might need them to work on some deep learning problem"...which seems to be another feedback loop driving the tech forward. Engineers are so afraid of AI taking their job, they want to work on AI so they can have a job.
At the very least, can we all take a step back and appreciate the absurdity of it all...? Yeah, I appreciate it about that much, too.