The definition of AI may differ depending on who you ask. Back in the 1950s, the fathers of the field would have described AI as any task performed by a machine that would previously have been considered to require human intelligence. In essence that is still true, but it is a rather broad definition.
A better definition of the term would be the ability of a computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.
Types of AI
At the highest level, AI can be split into two categories:
Narrow AI
Narrow AI is what we see in the computers around us today: intelligent systems that have been taught, or have used machine learning, to carry out specific tasks without being explicitly programmed to do so.
This type of AI is evident in the speech and language recognition behind virtual assistants on mobile devices and in the vision-recognition systems onboard autonomous cars.
Unlike humans, these systems can only learn, or be taught, how to perform narrowly defined tasks.
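To make that concrete, here is a minimal sketch of a narrowly defined task being learned from labelled examples rather than hand-written rules. It assumes the scikit-learn library is available; the dataset and model choice are purely illustrative, not a recommendation.

```python
# Minimal sketch: a narrow AI "learns" one specific task (classifying iris
# flowers) from labelled examples instead of explicit rules.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)                       # learn the task from examples
print("accuracy:", model.score(X_test, y_test))   # how well it does that one task
```

The resulting model can classify iris flowers reasonably well, but that is all it can do; nothing it learned transfers to any other task.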
General AI
General AI is a different kettle of fish altogether. It is the kind of adaptable intellect humans have: a flexible form of intelligence capable of learning to do vastly different tasks, anything from hairdressing to programming, and of reasoning about a wide variety of topics based on past experience.
You have probably seen this kind of AI in movies, most famously The Terminator. It is all fiction, though, and doesn't really exist yet. How soon we can achieve general AI remains a rather sore topic among AI experts.
AI research has focused on five human-like abilities in the development of intelligent systems: learning, reasoning, problem solving, perception, and language.
AI Abilities
Learning
AI systems employ different forms of learning, the simplest being trial and error.
For instance, take an intelligent system trying to solve a mate-in-one chess problem. It might try moves at random until a mate is found. The system then stores the solution together with the position so that the next time it encounters that position, it recalls the solution.
This simple memorization of individual items and procedures is called rote learning, and it is rather easy to implement on a computer.
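As a rough sketch of rote learning, the snippet below memorizes the solution to each chess position it has already solved and only falls back to random trial and error for new positions. The helper names and the way positions and moves are represented are assumptions made purely for illustration; this is not a real chess engine.

```python
import random

# Rote learning: remember the solution to each position we have already
# solved, and only search when the position is new.
memory = {}  # position -> winning move, memorized verbatim

def find_mate_in_one(position, legal_moves, is_mate):
    """Hypothetical trial-and-error search: try random moves until one mates."""
    moves = list(legal_moves)
    random.shuffle(moves)
    for move in moves:
        if is_mate(position, move):
            return move
    return None

def solve(position, legal_moves, is_mate):
    if position in memory:        # seen this exact position before?
        return memory[position]   # recall the stored solution (rote learning)
    move = find_mate_in_one(position, legal_moves, is_mate)
    if move is not None:
        memory[position] = move   # memorize the position together with its solution
    return move
```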
Reasoning
Reasoning is, at its core, drawing inferences from a set of circumstances. Reasoning can be deductive or inductive.
Inductive reasoning is common in science, where data is collected and tentative models are developed to describe and predict future behavior.
Deductive reasoning, on the other hand, is common in mathematics and logic, where elaborate structures of irrefutable theorems are built up from a small set of basic axioms and rules.
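The toy snippet below contrasts the two styles: a deductive step that applies a known rule to known facts, and an inductive step that generalizes a tentative model from a few observations. The rules and sample data are invented for illustration.

```python
# Deductive: apply a known rule to known facts; the conclusion is guaranteed.
facts = {"it_is_raining"}
rules = [("it_is_raining", "the_ground_is_wet")]   # if A then B
for premise, conclusion in rules:
    if premise in facts:
        facts.add(conclusion)
print(facts)  # {'it_is_raining', 'the_ground_is_wet'}

# Inductive: generalize a tentative model from observations; the conclusion
# is only probable. Here we estimate y ~ 2x from a few samples and use the
# model to predict an unseen case.
samples = [(1, 2.1), (2, 3.9), (3, 6.2)]
slope = sum(y / x for x, y in samples) / len(samples)   # crude estimate
print(slope * 10)  # predicted y for x = 10, roughly 20
```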
Problem Solving
In AI, problem solving can be defined as a systematic search through a range of possible actions in order to reach some predefined goal or solution. A problem-solving method may be general purpose, applicable to a wide range of problems, or special purpose, tailored to a particular problem.
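A minimal sketch of problem solving as systematic search is shown below, using breadth-first search over a made-up puzzle (reach a target number using only "+3" and "*2" moves). The puzzle is purely illustrative; any state space with a successor function would do.

```python
from collections import deque

# Systematic search: explore possible actions breadth-first until the
# predefined goal state is reached.
def bfs(start, goal, successors):
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path                      # sequence of actions to the goal
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return None                              # goal unreachable

def successors(n):
    return [("+3", n + 3), ("*2", n * 2)]

print(bfs(1, 11, successors))  # ['+3', '*2', '+3']  (1 -> 4 -> 8 -> 11)
```

Breadth-first search is a general-purpose method; a special-purpose solver would exploit knowledge specific to the problem to prune the search.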
Perception
Perception involves scanning the environment with various sensors, real or artificial, and decomposing the scene into separate objects and their spatial relationships.
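As a rough illustration of the "decompose the scene into objects" step, the sketch below thresholds an image and reports a bounding box for each foreground shape. It assumes the OpenCV package is installed and that a file named scene.jpg exists (a hypothetical filename); real perception systems use far more robust detection models.

```python
import cv2  # assumes the opencv-python package is installed

# Read an image of the environment, separate foreground shapes from the
# background, and report where each object sits in the frame.
image = cv2.imread("scene.jpg")                     # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# findContours' return shape differs between OpenCV versions; [-2] grabs the
# contour list in either case.
contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)   # spatial relationship of the object
    print(f"object at ({x}, {y}), size {w}x{h}")
```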
Language
Writing computer programs that seem able, in restricted contexts, to respond in human language to questions and statements is relatively easy. These programs may not actually understand the language, but in principle they could reach a point where their command of a language is near-identical to a human's.
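A classic example of such a restricted-context program is a rule-based responder that matches surface patterns and returns canned replies, in the spirit of early chatbots like ELIZA. The patterns and replies below are invented for illustration; the program understands nothing of what is said.

```python
import re

# A restricted-context language program: match surface patterns, echo canned
# responses, understand nothing.
RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! How can I help you today?"),
    (re.compile(r"my name is (\w+)", re.I), "Nice to meet you, {0}."),
    (re.compile(r"\bweather\b", re.I), "I'm afraid I can't see outside."),
]

def respond(utterance: str) -> str:
    for pattern, reply in RULES:
        match = pattern.search(utterance)
        if match:
            return reply.format(*match.groups())
    # no rule matched: fall back to a generic prompt
    return "Tell me more."

print(respond("Hi there"))             # Hello! How can I help you today?
print(respond("My name is Ada"))       # Nice to meet you, Ada.
print(respond("What's the weather?"))  # I'm afraid I can't see outside.
```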
Challenges in AI
Still Not as Accurate as Humans
Machine learning algorithms are largely left to learn and develop on their own, and a simple error in the early stages can render all later inferences inaccurate. The amount of fine-tuning and optimization, on top of the data and algorithms, that would be required to build a system as accurate as a human is staggering. A human can tell the difference between a dog and a cat 99% of the time; Egyptian sphinxes might cause a moment's confusion, though.
Way Too Many Resources
In a rather large AI system, there can be a great deal of data to process and a lot of compute, memory, and bandwidth to manage. All of this slows the system down, and we don't want that.
With edge AI computing, however, you can get around that.
Edge AI Computing
This technology lets you deploy AI applications near the user, at the edge of the network, hence the name "edge". Because the data is processed where it is generated rather than being sent to a central cloud computing facility, processing is faster and the whole system becomes more responsive. Edge computing is made possible by devices like an AI box.
An AI box is a small, low-power device designed for easy deployment on-site or on the move. It provides real-time, secure computer-vision AI monitoring for a variety of applications.
Lack of Technical Knowledge
Unless you are an AI expert or work for an AI chip company, you probably don't know exactly how it all works. To integrate and deploy AI applications in your enterprise, you need to understand current AI trends and developments as well as their shortcomings. A lack of the necessary technical knowledge may keep you from going all in.