I recently read the first entry in a blog that I have zero additional context for. I don't know who is writing it. I don't know the general gestalt the author intends. However, based on the strength of the single post there so far, I added it to my daily blog aggregator agent. The post is "The Machines Are Fine. I'm worried about us." and it is, ultimately, a reflection on the lurking hazard posed by artificial intelligence to the future of academia.
The author begins with a hypothetical for our consideration. Two PhD astrophysics students are given similar tasks and guided through the process of writing their first paper for publication. They both appear to work through the process and at the end they both deliver quality, publishable work. On the surface, both students encounter typical obstacles one would expect while learning how to "do astrophysics". At the end of the scenario we learn that one student performed the work in the traditional method. The other leveraged AI, specifically an LLM agent, to do the work.
The gist of the cautionary tale is that the first student has learned how to "science". The second student has learned how to prompt. From the outside, they both satisfy the requirements of the task. They've both contributed to the body of academic work. However, only one of them has developed the tools to continue to do the work of astrophysics unsupervised.
The author continues exploring the ramifications of this problem for the future of academia and, to no one's surprise, comes to the conclusion that if we follow the trajectory of the second student, we will lose the quality of understanding the work. If the goal is product, AI and LLMs are the ticket. If the goal is knowledge, then we need to be intentional about how we utilize these tools.
That's four paragraphs to get you to where I'm at in this moment and what I wanted to discuss.
In the last year I've watched AI agents revolutionize my workplace. The sheer amount of code generated by my teammates and myself has increased by an order of magnitude. I, myself, have integrated agentic coding into my workflows at a level I would have scoffed at previously, mostly because I did not believe that the tooling was capable of this work.
This publishing platform was taken from concept to functional application in an accelerated manner. As a single coder working on a hobby project in my free time, the learning and development work behind Lurchbox would have taken me at least another six months to accomplish. In the six months I did spend working on this I learned and implemented automated pipelines that deploy the application as a container. I learned the .NET Entity Framework for data persistence. I learned the Blazor framework. All of this was accomplished with the help of an agentic AI generating several thousand lines of code. And there's a metric ton of documentation written along the way.
However, I can also explain to you the underlying architecture of the application. I can walk you through the various services and models that make up the moving parts. I can discuss the tradeoffs made along the way and the reasoning behind the choices I made. Lurchbox is informed by my decades (geeze, I'm frickin' old) of experience as a software engineer. If coding agents and LLMs were to disappear from the face of the Earth, I would be able to continue to maintain and develop additional features--albeit at a slower pace.
In the last year I've also watched junior developers lean into agentic coding. I have also spent a great deal of my days unwinding the inexplicable implementations they've committed to version control. I've spent entire weeks fixing production issues that were introduced by developers without the ability to critically assess the code they've written. Agent-generated code can often fail in spectacularly inscrutable ways. The LLM does not understand code in the way a human does. The LLM does not always anticipate the peculiarities of a specific operational environment.
In the last year I've watched management blindly advocate wholesale adoption of coding agents at the expense of learning how to code. I can read the writing on the wall. We're not developing junior developers into senior developers any longer. The obvious expectation is that the LLM is going to be able to turn a collection of junior developers into a fleet of architects. The company I work for, for example, is exclusively hiring entry level developers in affordable markets outside the United States. If I'm still employed as a software engineer two years from now, I will be totally surprised.
I know I'm a middling engineer. I have no pretense as to my own abilities. The value I bring to the table, however, is the years of experience. I can smell bad code from a distance. Inefficient, illogical architecture is something I can recognize at a gut level. When I leverage a coding agent, this is only the first step for me. I review the output. I refactor what the agent gives me. I enforce sensible architecture. I write tests for edge cases. This is a skill born from endless hours of toil and frustration; of cleaning up messes in production.
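To make that review step concrete, here is a hypothetical, simplified sketch (the function and its bug are invented for illustration, not taken from any real agent output): an agent-generated helper that passes the happy path, and the edge-case test I would add during review that forces a hardening pass before merge.

```python
# Hypothetical agent-generated helper: correct on the happy path.
def percent_change(old, new):
    """Return the percent change from old to new."""
    return (new - old) / old * 100

# Review step 1: the obvious case the agent was prompted for works.
assert percent_change(100, 150) == 50.0

# Review step 2: an edge-case test exposes a division-by-zero the
# agent never anticipated (old == 0), so I harden it before merging.
def percent_change_reviewed(old, new):
    """Percent change from old to new, rejecting a zero baseline."""
    if old == 0:
        raise ValueError("percent change is undefined from a zero baseline")
    return (new - old) / old * 100
```

The point isn't this particular bug; it's that the refactor only happens because experience tells you which edge cases to probe.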
There are no shortcuts here. While the tools are here to stay, and there are tremendous benefits to them, they are not the replacement for education and experience that thought leaders in the industry wish they were. Vibe coding is a path to learning, sure, but if it is treated as a substitute for learning and experience, then we will have lost understanding. We may know which buttons to press, but we won't know why we need to press them. And we will have a lifetime of learning to catch up on if we hope to fix things when the wrong buttons are pressed.
I understand that this blog post probably comes across as alarmist and self-serving. But when I read about other fields experiencing the same thing, I think we probably do need to have a serious conversation. We need to be intentional about what we expect from LLMs and AI. We need to understand what the trade-offs to unprecedented productivity (as it is being sold) mean for our future ability to maintain what we've built. We need to plan for a future where we still understand the product the tools we're using generate and are able to evaluate the quality of that product.