Business Central blog

AI Insanity

Previously, we thought that in the future robots would sweep the streets while humans engaged in creative activities. Now the future has arrived, but it's the robots that are being creative, and humans are sweeping the streets. This text is unusual for me. Typically, I write about the technical aspects of Business Central, preferring to use programming languages and technical analysis. This article, however, will be more of a general reflection on my view of AI as a whole.


The idea of AI is far from new. Science fiction writers have been writing about intelligent machines and artificial intelligence for well over a century. The first working neural networks appeared as far back as the 1950s and 1960s! So why has the topic become so hyped in recent years? Much of the credit goes to OpenAI: their work, and the public release of ChatGPT in 2022, sparked a new wave of public interest. Of course, the topic has periodically been in the limelight before, such as AlphaGo's landmark victory over Lee Sedol in 2016, or AlexNet's record-setting image-recognition results in 2012. At the same time, the topic of AI is full of speculation, such as claims that AI will soon replace us all. Optimists expect the emergence of AGI within five years, while pessimists wait for their jobs to be taken over by machines.
AI is whatever hasn't been done yet.
Larry Tesler
What really amazes me is the inflation of the concept of what AI really is. Each new practical success in this field pushes the boundary of our ignorance and raises our expectations. It's becoming harder for us to call everyday tools 'AI'—a phenomenon known as the AI effect. Moreover, critics of generative models claim they are no different from T9 autocomplete, arguing that the technology will amount to nothing. Yes, of course, models predict the next words with a certain probability, but they do so based on thousands of connections and relationships between words and objects embedded in the model during training, which is actually similar to how the human brain works. Thus, such arguments are not valid.
No one can deny reality now. Copilot, ChatGPT, Deep Blue, AlphaGo, Gemini, DALL-E, MidJourney, Sora, and other AI models, visible and invisible, have already entered our lives. Unfortunately, such rapid development in recent years leaves extensive room for speculation, both intentional and unintentional.
So, will AI soon take our jobs, replace us, and throw us out? I am confident this will not happen for a very long time, and here are my thoughts on why I believe this.
One thing humans do very poorly is predict the future. Often, predictions are made by wise and respected minds who possess the most advanced knowledge of their times. Yet they are constantly mistaken, merely proving that the world is too random for any predictions. Randomness is too difficult for the human mind to grasp. Of course, mathematics, which deals fairly well with probabilities, comes to mind, but even it cannot predict complex outcomes, given the infinite number of input parameters and the amount of interdependent randomness spread over time. Even the best scientists still cannot find a general solution to the seemingly simple three-body problem, let alone something more complex. Here are some interesting unfulfilled predictions from experts and specialists:
When the Paris Exhibition closes, electric light will close with it and no more be heard of.
Erasmus Wilson
X-rays will prove to be a hoax.
Lord Kelvin
Radio has no future.
Lord Kelvin
Fooling around with alternating current (AC) is just a waste of time. Nobody will use it, ever.
Thomas Edison
Does that ring any bells?
AI will take over coding, making learning optional.
Jensen Huang, CEO of Nvidia
We will get there within 5 years (on when to expect human-level AI).
Jensen Huang, CEO of Nvidia
Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, and we will have multiplied the intelligence – the human biological machine intelligence of our civilization – a billion-fold
Ray Kurzweil
The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like DeepMind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year time frame. 10 years at most.
Elon Musk
By no means do I wish to doubt these experts; on the contrary, they are absolute authorities in their fields. Sometimes their quotes are taken out of context, stripped of the surrounding reasoning. Partly, such statements are deliberately made to stir interest. But this is how the human love for hype and sensation works. The most scandalous and shocking claims are picked up to broadcast the next sensation: soon programmers will be unnecessary! Unfortunately, the public tends not to delve deeper into the issue, and news outlets echo one another, only amplifying the tsunami of madness around AI.
In reality, no one has any real understanding of what will happen next. And it's not just about randomness (although that matters too). It's about the misunderstanding of how AI and intellect actually work: for example, why generative AIs choose certain words, and why weights take particular values. There are no guarantees that the next significant increase in the size of LLMs will proportionally improve quality. Perhaps we are already nearing a plateau where quality will only slightly improve. For instance, it would be interesting to observe the results of the new dense Llama model with 405 billion parameters from Meta, which is still in training. Notably, it is being trained on 15 trillion tokens and, most importantly, 10 million human annotations, which seems to be the highest number among the known LLMs.
There is also the problem of data quality. As is well known, LLMs require a huge amount of data for training, and they are far from exhausting the data supply. The problem, however, is that data vary in utility, and the amount of truly high-quality data is limited. An even more critical issue is energy consumption. Currently, training such large neural networks requires an enormous amount of electricity. This problem already constrains the development of AI, and no solution has been found yet.
These are the issues related to the development of AI that together indicate that a slowdown in development is very likely, and soon. But even when the main problems are solved, it does not mean that AI will replace humans, at least not immediately. In fact, some professions will simply transform into new ones, just as a carriage driver who managed a team of horses is now a car driver. Such transformations have occurred many times throughout human history. New professions replace outdated ones. This is a normal and inevitable process of progress.
Take, for example, the profession of a software engineer. Not so long ago, a company called Cognition AI claimed that their model, Devin AI, is already a full-fledged full-stack developer. They even released a video in which their CEO talked about it. But in reality, things are not quite as described: if you try Devin AI, you will find that it falls significantly short of something like ChatGPT. Moreover, analysis of the video suggests that they themselves are aware of the problem and deliberately misrepresent the model's actual level.
The software developer profession has long been a target of such predictions, even before AI. I'm sure experienced developers hear this quite frequently. First, Visual Basic was supposed to simplify development to a level any schoolchild could handle; then, with low-code and no-code, supposedly every housewife could create a product without developers. Technologies and systems proliferate, yet the demand for developers only grows! It seems to me that the developer profession will be one of the last to be replaced. After all, writing code is not even the hardest or most significant part of the job. The main challenge in development is transforming a client's wishes into a working product, and often the client doesn't even understand what they want. We developers are precisely engaged in formalizing functional requirements using programming languages.
I increasingly encounter code on the internet generated by generative models, mainly as advice to questioners on resources like StackOverflow, LinkedIn, Twitter, and so forth. Often this code is subpar; sometimes it's simply wrong. And this isn't a problem of generative AI, but a mistake of people who mindlessly copy "solutions" without delving into the problem. The internet is already short on useful, high-quality content. It feels like, through the mindless use of AI, trash content will displace the few crumbs that remain. This applies not only to code. Of course, there's already plenty of poor content; just consider the prevalence of low-quality marketing texts on websites. But no one is better off with more of it. And generative AIs can already produce tons of such soulless text at the snap of a finger.
So what should we do? First, realize that AI is already here and we have to deal with it. You need to decide for yourself whether you are a participant or a spectator. It's best to calmly treat the technology as a tool, a kind of assistant. For example, generative AIs can help with routine tasks: fix a file, write a regular expression, summarize a text, suggest an answer to a question, and so on. But you must always validate the result yourself, as these systems can make mistakes and hallucinate.
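To make that advice concrete, here is a minimal sketch of what validating an AI suggestion might look like. The scenario is hypothetical: suppose an assistant suggested a regex for ISO-style dates (YYYY-MM-DD). Instead of pasting it in blindly, you check it against cases you control, which also exposes its limits:

```python
import re

# Hypothetical: a regex an AI assistant might suggest for YYYY-MM-DD dates.
# Treat it as a draft to be verified, not as a finished solution.
SUGGESTED = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def looks_like_iso_date(s: str) -> bool:
    """Return True if s matches the AI-suggested pattern."""
    return SUGGESTED.fullmatch(s) is not None

# Validate the suggestion with cases you choose yourself:
assert looks_like_iso_date("2024-04-23")      # expected match
assert not looks_like_iso_date("23-04-2024")  # expected rejection
# The pattern still accepts impossible dates -- exactly the kind of
# subtle gap that only human review catches:
assert looks_like_iso_date("2024-99-99")
```

A few hand-picked test cases took seconds to write, and they immediately reveal that the suggestion checks the shape of a date but not its validity. That is the spirit of using AI as an assistant: accept the draft, then verify it.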
Next, it wouldn't hurt to understand the terminology, such as what AI, neural network, machine learning, generative AI, AGI, and LLM mean.
It's also important to keep track of progress in the AI field to get a more complete picture of what's happening and where the technology currently stands. For this, I recommend reading and watching leaders in the AI field and studying the blogs of OpenAI, Microsoft, Meta, and Google.

I would like to highlight lmsys, a place where different generative text models compete against each other. People vote for the best answers in blind comparisons, and a ranking of models is built from these votes. Moreover, you can try any model for free right there. For example, have you heard about Claude 3 Opus from Anthropic? It's indeed a powerful model that is already at the level of the latest GPT-4 releases, and perhaps even better at some tasks! In general, lmsys is a valuable resource for getting acquainted with the current state of the art.
April 23, 2024