Using LLM in June 2025
Vibe coding and the use of generative chat models are, at this point in time (June 2025), probably the norm for a significant portion of the public. From the workplace to personal use, it is quite common to hear “why don’t you ask XXX (insert your preferred LLM) for YYY?” A non-exhaustive list of uses for these models includes,
- drafting and redrafting documents
- searching for inspiration
- preliminary “fact” checking or “fact” index gathering
- coding
- satisfying emotional needs
Responsibility
It is, nonetheless, an undeniably powerful tool. With great power comes great responsibility, or does it? Which entity is responsible for the generated content? Well, it is probably not the model, nor the company that owns the model weights and training dataset, especially when things go south 1. An interesting way to think about it: responsibility for the model output is basically a collective responsibility of every human with access to the internet, and of every content author who willingly or unknowingly had their content uploaded and crawled – since an LLM is just a stochastic parrot. Jokes aside, responsibility probably lies with the entity that decides the content is appropriate to publish. Reality might be a bit different, but at the individual level this should hold true.
It is interesting to watch, as a bystander, how certain events play out. AI plagiarism is an example. How would one react,
- when their content is actually hand crafted but deemed as AIGC? or
- when their content is AIGC but is deemed as hand crafted?
The latter probably comes down to mismatched seller-buyer expectations, especially for buyers who value hand-crafted work highly. At this point in time, it is still possible to distinguish AIGC from hand-crafted content if one spends a lot of time looking at both and has access to ground truth. The former, however, will be an uphill battle. How would one prove that the AIGC discriminator (another model, or even a human) is reaching the wrong conclusion? 2
Until the next breakthrough in technology, the law, or even social consensus, it will likely be handled case by case.
Impact
Individual
Vibe coding, at the very least, alleviates some cognitive load. Most simple to medium tasks can be handled quite well with prompting and patience. It allows those without experience to deploy a profitable website 3, but at what cost? The two recent experiences described below hopefully paint a better picture of how to coexist with vibe coding for now.
the author mainly works with python for web development, deep learning and automation, and has varying degrees of experience in C, C#, golang and javascript;
The static gantt renderer. This is a project that aims to render gantt charts, similar to how a static site generator works, and acts as an extension to the author’s hugo static site for tracking projects. The design is simple: there is a web component for the user to key in events, and any new change to the database triggers a rebuild of the gantt charts. Simple enough, except that the author never actually got familiar with go beyond brute-force googling and stackoverflow searching. Having an LLM helps significantly, especially with JSON marshalling/unmarshalling, testing, structuring the project and, in general, developing the renderer. Is the app working? Yes. Has the author gotten better with go? Probably not. There are things that require practice to actually get good at. Tab - modify - run - debug is not a good cycle. For beginners, the tabbing step is disproportionately short, too short to even stay in short-term memory, yet issues will not surface until it is too late, since one can get away with quite a lot unnoticed. Getting validation in a matter of seconds is a significantly addictive behavior.
this feels different from copying solutions from a classmate’s workbook for some reason. it might be because copying by hand is still a relatively slow process, and one can still force oneself to think.
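The rebuild-on-change design described above is language-agnostic, even though the actual renderer is written in go. Below is a minimal sketch of that loop in python (the author’s main language), polling a hypothetical events database file and triggering a rebuild when it changes; the paths, the `rebuild_gantt` placeholder and the hugo build step are illustrative assumptions, not the project’s real code.

```python
import os
import subprocess
import time

DB_PATH = "data/events.db"        # hypothetical path to the events database
BUILD_CMD = ["hugo", "--minify"]  # assumes the site is rebuilt with hugo


def rebuild_gantt() -> None:
    """Placeholder: regenerate the gantt chart assets, then rebuild the site."""
    # ... read events from DB_PATH and write out the chart HTML/SVG here ...
    subprocess.run(BUILD_CMD, check=True)


def watch(db_path: str, interval: float = 2.0) -> None:
    """Poll the database file's mtime and rebuild whenever it changes."""
    last_mtime = os.path.getmtime(db_path)
    while True:
        time.sleep(interval)
        mtime = os.path.getmtime(db_path)
        if mtime != last_mtime:
            last_mtime = mtime
            rebuild_gantt()


if __name__ == "__main__":
    watch(DB_PATH)
```

A simple mtime poll is enough for a single-user setup like this; a filesystem-watcher library would only matter at larger scale.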
The analytics cli. This is a project that aims to develop a pipeline for signal processing and delivering insights. It is written in python, but signal processing is a rather unexplored area for the author. Working with the discrete fourier transform and numpy feels like staring into the void. What should go where? Why is the coefficient 20 used for calculating the magnitude spectrum in that implementation? How should the visualization be interpreted? Despite having some very superficial understanding of the 1D fourier transform, it does not translate well into writing something that works. The LLM here functions more as a search engine than as something dumping code onto the author’s face, as the questions are now more specific and the author has some working experience with numpy.
still, the DFT concept was slightly solidified after watching 3B1B, the book’s complementary youtube video 4
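For the curious, the coefficient 20 comes from the decibel convention: decibels are defined on power, and power scales with the square of amplitude, so 10·log10(|X|²) = 20·log10(|X|). A minimal numpy sketch with a made-up test signal (not the project’s actual pipeline):

```python
import numpy as np

# made-up test signal: a 50 Hz sine sampled at 1 kHz for one second
fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)

# DFT of the real-valued signal and the matching frequency bins
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

# magnitude spectrum in dB: 20*log10(|X|), since power ~ amplitude squared
mag_db = 20 * np.log10(np.abs(X) + 1e-12)  # small epsilon avoids log(0)

print(freqs[np.argmax(mag_db)])  # the peak should sit at ~50 Hz
```

If the quantity being plotted were a power spectrum (|X|² rather than |X|), the coefficient would be 10 instead.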
Conclusion? Only start vibe coding when one is certain one can ask specific questions; otherwise, search. Vibe coding comes too early for inexperienced users, users who have not gone through the “manual” process. As the supposed replacement for search, it is even worse than search in the sense that it is almost effortless. The effects of search are still not fully studied and contained, and the next revolution is already here 5.
never trust the saying “don’t remember things that you can search for.” make your brain work, to a certain degree, as there is also a saying “use it or lose it”; it will be interesting to see an equivalent of 5 in the LLM era
Job Security
There have been many rounds of big tech layoffs 6 since 2022, even before the release of ChatGPT in Nov 2022. Together with other examples, such as companies sustaining themselves on only 20% of the headcount and big tech rapidly adopting LLMs to modify their products’ codebases, it paints a gloomy picture, especially for inexperienced engineers. It is a strong signal that suggests “it is probably okay to replace some humans with AI, by having other humans check, review and be accountable for the generated code”. As a strong believer in the importance of hands-on work, it is natural to ask “how should one close the experience gap?”. There are two gaps between someone who does not know anything about writing code and someone who can deploy useful, working code: one is learning to think like a computer, the other is being able to translate requirements into code (and potentially learning some domain knowledge). LLMs as accelerators quickly fill the first gap and, to a certain extent, the second. The size of the second gap varies by domain: an LLM can likely fill it where information is publicly available, but not in highly specialized domains, e.g. computational lithography 9. On the other hand, as the rest of the industries absorb the tech talent, it could potentially bring a modernization of tech infrastructure. Again, having context and domain knowledge is critical to being effective.
the notion of replacing humans with AI doesn’t stop at the tech industry; every talent pipeline will be affected to varying degrees. it is worthwhile to think about the value proposition one can bring to a company, as it should slowly become a differentiator between a good and an unfit candidate.
In the short to mid term, having a deep understanding of a highly specific domain seems beneficial for job security. It is also good news for those who want to do more; never before has one been able to test something out within a matter of seconds.