A Brief Introduction to AI
Nothing will change our world in the coming years as much as AI will.
And even though everyone has a broad understanding of what AI is, only a few truly understand how it works and why it is such a remarkable technological achievement.
The Human Brain
Before we talk about AI, we need to talk about how our own brains work. Later we will see that AI models are digital representations of them. Our brains are networks of billions of interconnected neurons. Neurons in close proximity can connect to each other. These connections are not simply binary, with only connected or disconnected states, but span a broad spectrum from weak to strong.
While you think, electrical signals travel through your brain and are processed by various areas, each consisting of a multitude of neurons. This process forms your thoughts and thus your ideas. Whenever you think, your brain evaluates whether the connections produced a satisfying result and adjusts their strengths accordingly.
Your brain constantly adjusts itself, strengthening, weakening, or removing pathways as you learn. When you face a new problem and finally solve it after several attempts, you have created new connections, changed the strength of existing connections, or even lost connections that did not contribute significantly to the result. A connection's strength determines the probability that a signal passes through it.
Of course, the brain is much more complex than my explanation makes it seem. But for the sake of this discussion, it is enough to understand the similarity between AI and human brains.
Artificial Brain
The theoretical concept of recreating a neural network similar to our own brains was introduced back in 1943. This shows that AI is not a new idea; to achieve the results we see today, however, technology first had to catch up.
But what are these neural networks? They are large sets of nodes that can connect to each other. The connections are weighted and range from weak to strong. An algorithm takes an input and sends it through the neural network; the data follows paths according to the connections and their weights, producing an output at the end.
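To make the idea of a weighted network concrete, here is a minimal sketch of such a forward pass in Python. The layer sizes and weights are made up for illustration; a trained model would have learned its weights rather than drawn them at random.

```python
import numpy as np

# A tiny feed-forward network: 3 inputs -> 4 hidden nodes -> 1 output.
# The weights are random placeholders; training would adjust them.
rng = np.random.default_rng(0)
w_hidden = rng.normal(size=(3, 4))   # connection weights: input -> hidden
w_output = rng.normal(size=(4, 1))   # connection weights: hidden -> output

def forward(x):
    # Each node sums its weighted inputs; a nonlinearity (here ReLU)
    # decides how strongly the signal is passed on.
    hidden = np.maximum(0, x @ w_hidden)
    return hidden @ w_output

output = forward(np.array([1.0, 0.5, -0.2]))
print(output.shape)  # prints (1,)
```

The entire "thinking" of the network is just these weighted sums flowing from one layer to the next.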
At the beginning, every model is untrained. If we took such an untrained model and asked it a simple question, it would probably give a completely wrong or even incomprehensible answer. For this reason, the model needs to be trained.
The model is fed problems and receives feedback based on its results. Connections used when the model receives good feedback are strengthened, and those leading to bad feedback are weakened. That way the model learns to solve the given problems, and later it can handle not only the problems it was trained on but also similar ones.
In reality, neural networks do not physically rewire; instead, they adjust mathematical weights that influence how strongly signals travel between nodes.
In the end, we have a fully trained artificial brain that has learned to solve a certain kind of problem. For example, you could train a model to solve math problems or to determine what is in an image. Virtually anything a human can learn, a neural network can be trained for.
The interesting fact is that we do not need to know why certain connections are chosen over others. Just as we don't fully understand the role of each neuron in the biological brain, we often cannot interpret the purpose of a single weight inside a neural network. But we know that all the connections together coordinate to solve the problems the network was trained on.
Gaming made AI possible
In the previous section I explained the theoretical concept of a neural network. But we had to wait several decades for the technology to implement it and make it usable.
We learned that a neural network ideally consists of a huge number of nodes. Each node does little work individually, but a model contains a great many of them. Ordinary processors, which are inefficient at performing many tasks in parallel, were therefore unsuitable.
But another industry was very interested in building processors that could handle such tasks well: the gaming industry, and later scientific computing, drove GPU advancements. Gaming has become an enormous market, so companies like Nvidia invested heavily in developing increasingly powerful graphics cards.
Graphics cards need to perform many calculations in parallel, and that is exactly what was needed to power enormous neural networks and train models to solve complex problems.
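The parallelism argument can be made concrete: the work of an entire layer of nodes collapses into a single matrix-vector product, the kind of operation graphics hardware excels at. A small illustration, with sizes chosen arbitrarily:

```python
import numpy as np

# One layer of 1000 nodes, each summing 1000 weighted inputs,
# is just one matrix-vector product: a million multiplications
# that are independent of each other and can all run in parallel.
rng = np.random.default_rng(1)
weights = rng.normal(size=(1000, 1000))  # one row of weights per node
x = rng.normal(size=1000)                # the layer's input signal

layer_output = weights @ x
print(layer_output.shape)  # prints (1000,)
```

This example runs on a CPU, but the same independence of the individual multiplications is what lets a GPU compute them side by side instead of one after another.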
This is why an idea from 80 years ago is only now transforming the world.
The state of AI
The use cases implemented with AI right now are mainly chatbots: the user communicates with an AI model that has been trained to converse like a human and to solve problems. This makes AI useful for various tasks:
- Writing text
- Researching information
- Generating images and videos
- Assisting with coding
In the future, AI will become increasingly capable and could be used for any problem or task that currently requires a human. Especially if AI is coupled with continuing advances in robotics, it could assist or even replace humans in nearly every domain.
Danger of AI
With that, we come to the dangers of AI. What if AI were able to do every job currently done by people? Companies could save money by replacing employees with such advanced AI models. Could AI make humans obsolete, or even destroy humanity in the long term?
In upcoming articles I will discuss the dangers of AI in more detail and what can be done to mitigate them. But the points above, and those below, are some of the possible consequences of AI advancement.
Other critical aspects of AI:
- The data used for training could include work by people who did not consent to its use in this way.
- AI could become conscious and see people as a threat that needs to be fought. (Speculative, but worth discussing in future articles.)