
Artificial Intelligence

Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of success at some goal.


The term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". As machines become increasingly capable, mental faculties once thought to require intelligence are removed from the definition.


For instance, optical character recognition is no longer perceived as an example of "artificial intelligence", having become a routine technology. Capabilities currently classified as AI include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go), self-driving cars, intelligent routing in content delivery networks, and interpreting complex data.


We are a complete solution provider for problems related to Artificial Intelligence. We also provide a gateway to learn and work with this technology, which is changing people's lives.


The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects. General intelligence is among the field's long-term goals. Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, logic, and methods based on probability and economics.

The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy, neuroscience and artificial psychology.

The field was founded on the claim that human intelligence "can be so precisely described that a machine can be made to simulate it". This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been explored by myth, fiction and philosophy since antiquity. Some people also consider AI a danger to humanity if it progresses unchecked. Attempts to create artificial intelligence have experienced many setbacks, including the abandonment of perceptrons in 1970, the Lighthill Report of 1973, the second AI winter of 1987–1993 and the collapse of the Lisp machine market in 1987.


In the twenty-first century, AI techniques, both hard (using a symbolic approach) and soft (sub-symbolic), have experienced a resurgence following concurrent advances in computer power, the size of training sets, and theoretical understanding. AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science. Recent advances in AI, and specifically in machine learning, have contributed to the growth of Autonomous Things such as drones and self-driving cars, and have become the main driver of innovation in the automotive industry.


Goals

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.


Reasoning, problem solving

Early researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.
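
As an illustration of this style of step-by-step search (a sketch under assumed rules, not a reconstruction of any particular historical program), the classic water-jug puzzle can be solved by exhaustively exploring states with breadth-first search; the function name and the 4- and 3-litre capacities are chosen only for the example:

    # Illustrative sketch: breadth-first search over puzzle states, in the
    # spirit of early step-by-step reasoning programs.
    from collections import deque

    def solve_jugs(cap_a=4, cap_b=3, goal=2):
        """Return a sequence of (a, b) states leaving `goal` litres in a jug."""
        start = (0, 0)
        parents = {start: None}
        queue = deque([start])
        while queue:
            a, b = queue.popleft()
            if a == goal or b == goal:
                path, state = [], (a, b)      # walk back to the start state
                while state is not None:
                    path.append(state)
                    state = parents[state]
                return list(reversed(path))
            successors = [
                (cap_a, b), (a, cap_b),       # fill either jug
                (0, b), (a, 0),               # empty either jug
                (a - min(a, cap_b - b), b + min(a, cap_b - b)),  # pour a into b
                (a + min(b, cap_a - a), b - min(b, cap_a - a)),  # pour b into a
            ]
            for nxt in successors:
                if nxt not in parents:
                    parents[nxt] = (a, b)
                    queue.append(nxt)
        return None

    print(solve_jugs())  # a shortest solution, e.g. [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2)]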

For difficult problems, algorithms can require enormous computational resources—most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical for problems of a certain size. The search for more efficient problem-solving algorithms is a high priority.
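
A rough back-of-the-envelope sketch of why this happens (the branching factor of 10 is an arbitrary assumption, not a measurement): if every state has about 10 legal successors, an exhaustive search to depth d must consider on the order of 10^d states.

    # Illustrative only: the number of candidate states grows exponentially.
    for depth in (5, 10, 20, 40):
        print(depth, 10 ** depth)   # by depth 40 there are 10**40 states to consider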


Human beings ordinarily use fast, intuitive judgments rather than the step-by-step deduction that early AI research was able to model. AI has made progress using "sub-symbolic" problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside the brain that give rise to this skill; and statistical approaches to AI mimic the human ability to guess.
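
A minimal sketch of the sub-symbolic style (a toy example, not a model of the brain): a single artificial neuron learns the logical AND function from examples alone, without any explicit rule being written down. The learning rate and number of passes are arbitrary choices for the illustration.

    # Toy perceptron: adjusts weights from examples instead of symbolic rules.
    import random

    random.seed(0)
    weights = [random.uniform(-1, 1) for _ in range(2)]
    bias = random.uniform(-1, 1)
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    for _ in range(20):                       # a few passes over the examples
        for (x1, x2), target in data:
            output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = target - output
            weights[0] += 0.1 * error * x1    # nudge the weights toward the target
            weights[1] += 0.1 * error * x2
            bias += 0.1 * error

    print([1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
           for (x1, x2), _ in data])          # learned outputs: [0, 0, 0, 1]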


Cybernetics and brain simulation

In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England. By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.


Symbolic

When access to digital computers became possible in the mid-1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford and MIT, and each developed its own style of research. John Haugeland named these approaches to AI "good old fashioned AI" or "GOFAI". During the 1960s, symbolic approaches achieved great success at simulating high-level thinking in small demonstration programs, while approaches based on cybernetics or neural networks were abandoned or pushed into the background. Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.


Cognitive simulation

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the mid-1980s.


Logic-based

Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms. His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning. Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe, which led to the development of the programming language Prolog and the science of logic programming.
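
A minimal sketch of the logic-programming idea (a Prolog-style Horn clause written in Python purely for illustration; the family facts are invented): a relation is derived from stored facts by inference rather than being stored itself.

    # Toy logic-programming example: facts plus one rule, queried by inference.
    facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

    def grandparent(x, z):
        # grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
        return any(("parent", x, y) in facts and ("parent", y, z) in facts
                   for (_, _, y) in facts)

    print(grandparent("alice", "carol"))   # True, although no such fact is stored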


Anti-logic or scruffy

Researchers at MIT found that solving difficult problems in vision and natural language processing required ad hoc solutions – they argued that there was no simple and general principle (like logic) that would capture all aspects of intelligent behavior. Commonsense knowledge bases are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time.
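
A small sketch of what "built by hand, one concept at a time" can look like (the concepts and relations below are invented for the example, not taken from any real knowledge base): each entry is hand-authored, and specific exceptions override inherited defaults.

    # Hand-authored commonsense fragment with default inheritance and exceptions.
    commonsense = {
        "bird":    {"is_a": "animal", "can": ["fly"], "has": ["feathers", "wings"]},
        "penguin": {"is_a": "bird",   "cannot": ["fly"]},
        "animal":  {"is_a": "thing",  "needs": ["food", "water"]},
    }

    def can(concept, ability):
        """Walk up the is_a chain; a specific 'cannot' overrides an inherited 'can'."""
        while concept in commonsense:
            entry = commonsense[concept]
            if ability in entry.get("cannot", []):
                return False
            if ability in entry.get("can", []):
                return True
            concept = entry.get("is_a")
        return False

    print(can("penguin", "fly"), can("bird", "fly"))   # False True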


Knowledge-based

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications. This "knowledge revolution" led to the development and deployment of expert systems, the first truly successful form of AI software. The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.
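
A minimal sketch of the expert-system idea (the rules below are invented for illustration and are not medical advice): domain knowledge is written as if-then rules, which are fired repeatedly against a working memory of facts until nothing new can be concluded.

    # Toy forward-chaining rule engine in the style of an expert system.
    rules = [
        ({"fever", "cough"}, "flu_suspected"),
        ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
    ]

    def infer(facts):
        facts = set(facts)
        changed = True
        while changed:                        # keep firing rules until a fixed point
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(infer({"fever", "cough", "short_of_breath"}))
    # adds 'flu_suspected' and then 'refer_to_doctor' to the working memory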


Sub-symbolic

By the 1980s, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems. Sub-symbolic methods manage to approach intelligence without specific representations of knowledge.


Embodied intelligence

This includes embodied, situated, behavior-based, and nouvelle AI. Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive. Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.
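
A toy sketch in the spirit of behavior-based control (the sensor names, thresholds and actions are invented for the example): simple prioritized reflexes map sensor readings directly to actions, with no symbolic world model in between.

    # Behavior-based control sketch: the first behavior with an opinion wins.
    def avoid_obstacle(sensors):
        if sensors["front_distance"] < 0.3:     # metres; threshold is arbitrary
            return "turn_left"

    def seek_light(sensors):
        if sensors["light_right"] > sensors["light_left"]:
            return "turn_right"

    def wander(sensors):
        return "go_forward"                     # default when nothing else fires

    behaviors = [avoid_obstacle, seek_light, wander]   # highest priority first

    def control(sensors):
        for behavior in behaviors:
            action = behavior(sensors)
            if action is not None:
                return action

    print(control({"front_distance": 0.2, "light_left": 0.5, "light_right": 0.9}))
    # 'turn_left': obstacle avoidance suppresses light seeking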


Computational intelligence and soft computing

Interest in neural networks and "connectionism" was revived. Neural networks are an example of soft computing: they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient. Other soft computing approaches to AI include fuzzy systems, evolutionary computation and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.
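
A small sketch of the fuzzy-systems flavour of soft computing (the temperature ranges and fan speeds are invented for the example): instead of a hard true/false threshold, inputs belong to categories by degree, and the output blends the applicable rules.

    # Fuzzy control sketch: graded membership instead of a crisp threshold.
    def membership_warm(temp_c):
        # Triangular: fully 'warm' at 25 C, not at all at 15 C or 35 C.
        return max(0.0, 1.0 - abs(temp_c - 25.0) / 10.0)

    def membership_hot(temp_c):
        # Ramp: starts at 30 C, fully 'hot' from 40 C upward.
        return min(1.0, max(0.0, (temp_c - 30.0) / 10.0))

    def fan_speed(temp_c):
        # Weighted blend of two rules: warm -> half speed, hot -> full speed.
        warm, hot = membership_warm(temp_c), membership_hot(temp_c)
        if warm + hot == 0:
            return 0.0
        return (warm * 0.5 + hot * 1.0) / (warm + hot)

    for t in (20, 28, 33, 41):
        print(t, round(fan_speed(t), 2))   # speed rises smoothly toward 1.0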


Machine learning and statistics

In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI's recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Critics argue that these techniques (with a few exceptions) are too focused on particular problems and have failed to address the long-term goal of general intelligence. There is an ongoing debate about the relevance and validity of statistical approaches in AI, exemplified in part by exchanges between Peter Norvig and Noam Chomsky.
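
A minimal sketch of the statistical flavour (the data points are synthetic and chosen only for the example): fit a straight line by least squares and report a measurable fit error, which is what makes results of this kind easy to verify and compare.

    # Least-squares line fit on synthetic data, with a measurable error.
    xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
    ys = [2.1, 3.9, 6.2, 7.8, 10.1, 12.2]          # roughly y = 2x

    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x

    mse = sum((slope * x + intercept - y) ** 2 for x, y in zip(xs, ys)) / n
    print(round(slope, 2), round(intercept, 2), round(mse, 4))   # slope close to 2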
