From AI to Generative AI: Understanding the Magic Behind Our Machines
Connectionism vs Symbolism: The Yin and Yang of AI
Your brain, the top energy consumer in your body, is hard at work right now as you read and interpret this text - burning more calories than a "The Office" rewatch marathon would! Good job! This little-understood organ is also the most complex, mysterious, and powerful one we have. For as long as we've been able to wonder, we've been trying to crack its code, mimic its functions, and understand what lies behind the intelligence it possesses. How can we create an AI - a second brain - capable of understanding, learning, and reasoning like ours? Even the most casual observer can see that we need to wrap our minds around what intelligence really is before we can dream of creating an artificial version. So let's contemplate: what is intelligence? Is it the result of a bustling network of neurons, all working in harmony through bio-electrical signals and neurotransmitters? Or is it the product of a complex symbolic representation of our world, where concepts are intertwined through logical rules and reasoning? Since the field's earliest projects, this quest has given rise to different schools of thought, each on a mission to decipher the meaning of intelligence. These schools have evolved over time, with two making their mark in our modern day: "Connectionism" and "Symbolism." Each has its own characteristics, strengths, and weaknesses.
Symbolism: Reasoning Through Rules
Symbolic AI is similar to using a large book of rules to solve problems. Imagine you have a puzzle, and for every piece of the puzzle, there's a specific rule in the book that tells you where it goes. Symbolic AI works by following these rules to understand things and make decisions. It's like playing a game where you always know the rules, and you use them to figure out how to win.
This school of AI focuses on organizing knowledge and applying logical principles to work with it. It is centered on the belief that intelligence emerges from manipulating symbols according to rules. Known as Symbolic AI or classical AI, this method serves as a foundational AI approach: it prioritizes logic, rules, and symbolic reasoning as tools for tackling problems and representing knowledge. This approach held sway from the mid-1950s to the mid-1990s, an era in which the AI community was deeply engrossed in building machines that could display broad intelligence.
Its early dominance was characterized by a strong belief in its potential to achieve artificial general intelligence. Notable successes, such as the Logic Theorist and Samuel's Checkers Playing Program, showcased the potential of machines to mimic human-like reasoning and decision-making processes. These developments ignited high expectations and painted an optimistic future for AI.
ℹ️ The Logic Theorist was the first program designed to mimic human-like reasoning and decision-making processes. It was developed by Allen Newell, J.C. Shaw, and Herbert A. Simon in 1955. The program was capable of proving mathematical theorems and was considered a significant milestone in the history of AI.
ℹ️ Samuel's Checkers Playing Program was developed by Arthur Samuel in 1959. The program was capable of learning to play checkers and was considered a significant milestone in the history of AI.
Despite initial enthusiasm, Symbolic AI faced significant challenges. The complexity of real-world problems and the limitations of symbolic approaches in addressing them led to a period of disappointment known as the first AI Winter. A second resurgence, driven by the potential of expert systems to store corporate knowledge, encountered similar issues: knowledge acquisition was slow, large knowledge bases were hard to maintain, and the systems struggled to adapt to unexpected situations. These setbacks resulted in another calm period, the second AI Winter, which dampened the excitement for Symbolic AI.
In response to these setbacks, AI researchers took a step back and tackled the core issues slowing their progress, developing new methodologies to handle uncertainty and knowledge acquisition. These included hidden Markov models, statistical models that predict future states from past ones; Bayesian reasoning, which uses probability to make decisions from incomplete information; and statistical relational learning. They also advanced symbolic machine learning techniques such as decision-tree learning, which uses a tree-like graph of tests to reach a decision; case-based learning, which draws on past experiences; and inductive logic programming, which induces logical rules from examples. All these innovations aimed to make AI systems more reliable and applicable across a wider range of scenarios.
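To make the Bayesian reasoning idea concrete, here is a minimal sketch of using Bayes' theorem to update a belief from incomplete information. The scenario and all the probabilities are invented for illustration:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
# Illustrative scenario: a spam filter sees the word "offer" in an email
# and must decide how likely the email is to be spam.

p_spam = 0.3                 # prior: 30% of all mail is spam (assumed)
p_word_given_spam = 0.8      # "offer" appears in 80% of spam (assumed)
p_word_given_ham = 0.1       # "offer" appears in 10% of legitimate mail (assumed)

# Total probability of seeing the word at all (law of total probability)
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Posterior: probability the email is spam, given that it contains "offer"
p_spam_given_word = p_word_given_spam * p_spam / p_word

print(f"P(spam | 'offer') = {p_spam_given_word:.2f}")  # → 0.77
```

The key point is that the system reaches a confident conclusion (about 77%) from uncertain, incomplete evidence - exactly the kind of situation where purely rule-based reasoning struggles.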
Symbolism is not exclusive to AI; it is also utilized and studied in other fields such as education, psychology, and philosophy. Looking back in history, symbolic AI builds upon the work of philosophers and mathematicians who developed formal systems of logic to represent and manipulate knowledge - from Al-Khwarizmi and Aristotle to Leibniz and Boole. Some of the most renowned symbolic AI systems are expert systems, such as the medical-diagnosis system MYCIN and the chemistry system DENDRAL, which employ a set of rules to make decisions. You can also see rule-based reasoning at work in voice assistants like Siri, Alexa, and Google Assistant: while these products lean heavily on machine learning for speech recognition, their command handling is a good everyday illustration of symbolic reasoning. When you ask Alexa to play a podcast on Spotify, a set of rules maps your request to an action: "play" signifies start playing, "podcast" refers to a series of audio episodes, and "Spotify" is the app you wish to use. No learning from data is needed to interpret the request - but by the same token, rule-based systems excel at reasoning with symbols and are not particularly adept at learning from data.
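A toy version of such a rule-based command handler might look like the following. Every name, rule, and phrase here is invented for illustration - real assistants are vastly more sophisticated - but it captures the symbolic-AI spirit: every decision comes from an explicit, human-written rule.

```python
# A toy rule-based command parser in the spirit of symbolic AI:
# no training data, just hand-written symbol-to-action rules.

RULES = {
    "play": "START_PLAYBACK",   # verb -> action
    "stop": "STOP_PLAYBACK",
}

APPS = {"spotify", "radio"}     # known target apps

def parse_command(utterance: str) -> dict:
    """Map an utterance to an action using hand-written rules only."""
    words = utterance.lower().split()
    action = next((RULES[w] for w in words if w in RULES), "UNKNOWN")
    app = next((w for w in words if w in APPS), None)
    # Anything that is not a keyword is treated as the content description
    content = [w for w in words if w not in RULES and w not in APPS and w != "on"]
    return {"action": action, "app": app, "content": " ".join(content)}

print(parse_command("Play my favorite podcast on Spotify"))
# → {'action': 'START_PLAYBACK', 'app': 'spotify', 'content': 'my favorite podcast'}
```

Because every rule is visible, you can trace exactly why the system chose an action - the transparency symbolic AI is known for. The flip side is equally visible: any phrasing the rules don't anticipate falls straight through to "UNKNOWN".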
The advantage of symbolic AI is that it's relatively straightforward to understand and debug. It's akin to having an open book of rules you follow to solve a problem or make a decision, so trust in the system is higher. Another advantage is that the results are explainable: when a symbolic AI makes a decision, the process is transparent. Unlike the connectionist approach, a symbolic AI system can explain why it made a particular decision, and it doesn't require a large dataset to learn. The disadvantage is that it's not very good at handling complex or messy inputs (e.g., Big Data). Identifying objects in images, for example, is challenging for symbolic AI: this kind of task relies on pattern recognition acquired through extensive training, and reproducing it with symbolic rules would require an enormous rule set that is difficult to define. Most scenarios where the AI must learn from data, handle uncertainty, and scale to large problems pose a challenge for symbolic AI.
★ Symbolic AI offers the advantage of being easily comprehensible and debuggable.
