We suggest that any theoretical basis for AI has to reconcile the following three requirements, each of which poses separate computational challenges: (i) there has to be some sound notion of deducing new propositions from old ones; (ii) there have to be effective mechanisms for learning from empirical data, if only to ensure that the knowledge base is robust rather than brittle; (iii) these challenges have to be met in a multi-object or relational setting, rather than a single-object propositional one. These three requirements have to be realised together with a feasible amount of computation, certainly not more than polynomial in terms of the relevant parameters.
We shall address these requirements in two complementary ways. First,
we describe what we call a Robust Logic, a system that has
the required properties in the PAC sense of computational learning
theory: a class of relational rules can be learned from examples in a
strong sense of having both attribute efficiency and error resilience, and
the rules can be chained to provide a sound deduction procedure for reasoning
about individual instances. Second, we describe the Neuroidal
Architecture, a proposed architecture for intelligent systems.
On the one hand, the Robust Logic gives a semantics for the
architecture and justifies the efficacy and efficiency of its learning
and reasoning mechanisms. On the other, the architecture can be used
to develop in more detail mechanisms that address issues any scalable
system would be expected to face, such as conflict resolution and
reasoning with incomplete information.
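To make the notion of attribute efficiency concrete, the following sketch implements Littlestone's Winnow algorithm, a standard attribute-efficient on-line learner of the kind invoked in PAC-style analyses; the target function, threshold, and parameter values below are illustrative choices of ours, not taken from the text.

```python
import itertools

# Winnow (Littlestone): for a target disjunction of k out of n attributes
# it makes a number of mistakes that grows only logarithmically with n,
# which is the sense of "attribute efficiency" discussed above.

def winnow_predict(w, x, theta):
    """Predict positive iff the summed weight of the active attributes meets theta."""
    return sum(wi for wi, xi in zip(w, x) if xi) >= theta

def winnow_update(w, x, y):
    """On a mistake, multiplicatively promote or demote the active weights."""
    if y:   # false negative: double the weights of active attributes
        return [wi * 2 if xi else wi for wi, xi in zip(w, x)]
    return [wi / 2 if xi else wi for wi, xi in zip(w, x)]  # false positive

n = 8
theta = n                               # standard threshold choice
target = lambda x: bool(x[0] or x[2])   # hidden 2-literal disjunction

w = [1.0] * n
mistakes = 0
for x in itertools.product([0, 1], repeat=n):
    y = target(x)
    if winnow_predict(w, x, theta) != y:
        mistakes += 1
        w = winnow_update(w, x, y)
```

After one pass over the examples, only the weights of the two relevant attributes have grown large; the irrelevant attributes contribute little, which is what makes the learner resilient to a large number of irrelevant attributes.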