Artificial intelligence is not new, but it is more current than ever
- Sep 19, 2019
There is no simple definition of AI (artificial intelligence), not least because we do not fully understand our own intelligence. Neuroscience is still in its infancy when it comes to discovering how the human brain works. As those studies advance, we can certainly move toward creating a more generalist AI, adding components that bring it closer to the dream of giving machines the ability to mimic human intelligence.
In 1955, a group of researchers at Dartmouth College proposed teaching machines to use language, abstractions, and concepts to solve problems previously reserved for humans, and to learn from them. That is when the term AI was coined. The topic is so exciting and so new that it is widely discussed in the academic community to this day.
But human intelligence is not limited to the brain and body. Our learning is built throughout our lives through experiences and the intensity of the emotions we feel, creating memories that are accessed when we need them. And much of this learning happens in the presence of others, which suggests that empathy (putting oneself in another's place) is an important component of collaborative problem-solving, where multidisciplinarity plays an enriching role.
AI's main goal is to make machines perform functions that, if performed by a human being, would be considered intelligent. This involves the ability to reason, apply logical rules to reach conclusions, learn, recognize visual, sensory, and behavioral patterns, make inferences, and apply reasoning to everyday situations.
The intention was never to replace humans, but to give them more time for activities that improve quality of life by eliminating or speeding up repetitive tasks. The people who teach the machines will, in turn, have to develop further in areas the machines cannot learn on their own: making connections across seemingly unrelated domains, using creativity, and improving their ability to transfer knowledge from one context to another.
One of the earliest applications of AI was weather forecasting. Based on large amounts of historical data and expert predictions, the computer could be asked to draw conclusions using methods similar to those of professionals. It soon became clear that the more specific the problem, the easier it was to build an intelligent system; the more generic, the harder. At the time, however, there was not enough data, storage, or processing capacity to build the models and make the necessary inferences.
Technological advancement brings us back to the beginning of this discussion and explains why artificial intelligence is at its peak. Today we live in a reality full of data (big data, data centers) and computational power (modern servers and processors). What was an obstacle in the past no longer is. We already know how to create models, and we have come a long way in modeling and AI techniques such as machine learning and neural networks.
Within this scope, two technological areas are worth highlighting: Natural Language Processing (NLP) and Visual Computing (VC). NLP is the automatic generation and understanding of natural language, both spoken and written, and personal assistants are a clear manifestation of these advances. Visual Computing, in turn, is image recognition, which makes it possible, for example, to detect health characteristics or conditions from images, sometimes more precisely than a physician.
Everything is based on huge databases: samples of text and speech in the case of NLP, and images in the case of Visual Computing, carefully cataloged and labeled. Combined with an algorithm capable of identifying relationships and patterns in the data, these databases are used to classify examples and build models.
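The idea of learning from a cataloged database can be sketched in a few lines. The toy example below (all data and labels are invented for illustration) uses a 1-nearest-neighbor rule, one of the simplest pattern-matching algorithms: a new sample is given the label of the closest cataloged example.

```python
import math

# Toy labeled "database": feature vectors (imagine simple image
# descriptors) paired with their catalogued labels. Data is made up.
training_data = [
    ((0.9, 0.1), "cat"),
    ((0.8, 0.2), "cat"),
    ((0.2, 0.9), "dog"),
    ((0.1, 0.8), "dog"),
]

def classify(sample):
    """Label a new sample by its nearest catalogued example (1-NN)."""
    _, label = min(
        training_data,
        key=lambda pair: math.dist(pair[0], sample),
    )
    return label

print(classify((0.85, 0.15)))  # closest to the "cat" examples -> cat
```

Real systems replace the hand-made feature vectors with representations learned from millions of examples, but the principle is the same: the model's knowledge comes entirely from the cataloged data it was given.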
Importantly, AI makes little or no use of traditional binary logic. Instead of simply answering "true" or "false" about an image, AI works with graded, fuzzy answers, where there are many more options. A typical answer when looking at a photo of a dog, for example, would be: "There is an 86% chance this is a German Shepherd."
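This kind of graded answer is usually produced by turning a classifier's raw scores into probabilities, commonly with the softmax function. A minimal sketch, with invented raw scores for a hypothetical dog photo:

```python
import math

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = {label: math.exp(s) for label, s in scores.items()}
    total = sum(exps.values())
    return {label: e / total for label, e in exps.items()}

# Hypothetical raw scores a classifier might output for one image.
raw_scores = {"German Shepherd": 4.0, "Labrador": 2.0, "Not a dog": 0.5}

probs = softmax(raw_scores)
for label, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{label}: {p:.0%}")
# German Shepherd: 86%
# Labrador: 12%
# Not a dog: 3%
```

The highest score dominates but never reaches 100%, which is exactly what lets the system say "86% chance of a German Shepherd" rather than a bare yes or no.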
If we analyze specific market segments, we can highlight some practical examples. In finance and accounting, AI can detect deviations and errors, identifying suspicious credit card transactions, embezzlement, and corruption. In retail, stores in the US already use cameras to detect items that need restocking on the shelves, and even items that are out of place. In education, Microsoft already offers a system that predicts whether a student will drop out of high school based on grades, attendance, and family history. In healthcare, Google has shown it can estimate age, gender, habits, and a range of health conditions from a single retinal image.
In Brazil, Positivo Tecnologia has developed an indoor positioning system called "Schood", which detects the student's routine inside the school, takes attendance automatically (combining IoT and AI, which we call IAoT), and notifies the student when a parent is about to arrive for pickup. These and other initiatives are part of the work Positivo Tecnologia has been doing for 30 years to democratize technology and offer increasingly intelligent devices. It has also recently launched Positivo Smart Home, a platform for IoT solutions that, among other features, can alert the user via mobile phone if there is any unusual movement in their home.
All of these examples show that we are no longer at the beginning of the AI trajectory, but we are also far from the end. Artificial intelligence will reach every industry once and for all, and will soon arrive at its next phase: affective computing, in which the dream is to make machines understand and, why not, even express feelings.