HDS-42: de Champeaux de Laboulaye, D.M.G. (2025) Algorithms in Artificial Intelligence. Doctoral thesis, Universiteit van Amsterdam.
- Text: HDS-42-Denis-de-Champeaux.text.pdf - Published Version - Download (10MB)
- Image: HDS-42-Denis-de-Champeaux.text.djvu - Published Version - Download (1MB)
- Text (Summary): HDS-42-Denis-de-Champeaux.samenvatting.txt - Other - Download (7kB)
Abstract
A.I. has been expanding vigorously in the last 20 years, and the number of publications continues to increase. The field has become so large that a tendency has emerged to split it up into different sections: Computational Linguistics, Deduction, Cognitive Science and Vision. A hidden motivation for this fragmentation may be a desire to escape from the name “Artificial Intelligence”, which arouses strong feelings in some circles. In spite of this centrifugal force the field still (1981) manages to organize conferences where all sections come together.
It is customarily pointed out that substantial progress in all sections of A.I. awaits the capability of storing large amounts of knowledge to be used for intelligent activities. This position is certainly correct, but the snag is that before a lot of knowledge can be amassed, profound insight into the activities to be supported is required; otherwise the knowledge cannot be structured in such a way that relevant facts will be found quickly. Thus we have a real chicken-and-egg situation.
The substance of this thesis concerns algorithms for Search, Program Verification and Deduction. These algorithms perform well without support from massive knowledge. We believe that more such algorithms can be developed. Nevertheless, work in the realm of permanent and temporal knowledge representation is to be recommended. In particular, it is recommended that the main source of inspiration for knowledge representation should not be the generalization of lexicon structures, but the support of knowledge-intensive algorithms. Giving heuristic functions, as they are used in the A*-algorithm, a firm footing in general knowledge representation schemes is an obvious example of the work to be done.
Chapter two deals with the generalization of the uni-directional A*-algorithm to the bi-directional case. A uni-directional theorem says that a shortest path will be found (without exhaustive searching) provided the heuristic has certain properties. This theorem has been generalized to the bi-directional algorithm, as has the so-called “optimality” theorem.
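As a point of reference, the following is a minimal sketch of the standard uni-directional A*-algorithm (in Python, which the thesis does not use; the parameter names `neighbors` and `h` are ours, not the thesis's). With a heuristic that never overestimates the remaining distance, the first path returned is a shortest one, which is the property the uni-directional theorem asserts and the chapter generalizes.

```python
import heapq
from itertools import count

def a_star(start, goal, neighbors, h):
    """Uni-directional A*. `neighbors(n)` yields (successor, edge_cost)
    pairs and `h(n)` estimates the remaining cost to `goal`; if `h` never
    overestimates, the first path found is a shortest one."""
    tie = count()                                 # breaks f-ties without comparing nodes
    open_heap = [(h(start), next(tie), start)]    # entries are (f = g + h, tie, node)
    g = {start: 0}                                # best known cost from start
    parent = {start: None}
    closed = set()
    while open_heap:
        _, _, node = heapq.heappop(open_heap)
        if node == goal:                          # reconstruct path start .. goal
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        if node in closed:
            continue                              # stale heap entry, already expanded
        closed.add(node)
        for succ, cost in neighbors(node):
            new_g = g[node] + cost
            if succ not in g or new_g < g[succ]:  # found a cheaper route to succ
                g[succ] = new_g
                parent[succ] = node
                heapq.heappush(open_heap, (new_g + h(succ), next(tie), succ))
    return None                                   # goal unreachable
```

The bi-directional algorithm of chapter two runs, roughly speaking, two such searches towards each other; the generalized theorems concern when the meeting frontiers may stop without missing a shorter path.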
The results we reported on bi-directional heuristic search suggest there is still room for improvement. Shorter solution paths were found than with uni-directional search, but at higher computational cost. The potential advantage of working simultaneously in both directions, as we humans frequently do, has not yet been formally clarified. Recently we initiated a new bi-directional project to attack this problem anew.
The main results of chapter three (substitution functions coded in LISP) concern automatic verification of code with side effects. The method developed ensures a correct description of side effects for a subset of nasty LISP functions, which includes our newly introduced substitution function SUBSTAD. The verification of several versions, some of which was done completely automatically, reveals that the formal description of some functions is at present practically intractable. For instance, we estimate that the formal description of loop invariants for a particular version of a support function for SUBSTAD requires several orders of magnitude more text than the code itself (bearing in mind that writing formal descriptions is certainly as difficult as programming). This imbalance suggests that the expressive power of computer languages has currently outgrown the expressive power of state-description languages.
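SUBSTAD itself is a LISP function defined in the thesis, and its exact semantics is not reproduced here. Purely as an illustration of the kind of side effect the verification method has to describe, here is a hypothetical Python analogue that substitutes for an atom destructively in a nested list:

```python
def substad(new, old, tree):
    """Hypothetical analogue of the thesis's destructive LISP substitution
    SUBSTAD: replace every occurrence of the atom `old` in the nested list
    `tree` in place. The in-place update is exactly the kind of side effect
    that chapter three's verification method must describe formally."""
    for i, item in enumerate(tree):
        if isinstance(item, list):
            substad(new, old, item)   # recurse into subtrees
        elif item == old:
            tree[i] = new             # destructive update: the side effect
    return tree

expr = ['f', ['g', 'x'], 'x']
substad('y', 'x', expr)
print(expr)                           # ['f', ['g', 'y'], 'y'] - expr itself was mutated
```

A specification of such a function cannot just relate input to output value; it must state what happens to every structure sharing cells with the argument, which is where the descriptive text starts to outgrow the code.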
Although we agree with De Millo et al. that the present verification tools do not lend themselves to practical use, we do not share their conviction that the whole business should be abandoned. Verifiers will probably always run into resource limitations, but it is premature to assume that they will never be able to use mechanisms similar to those that enable humans to circumvent some of these limitations without sacrificing precision.
Chapter four deals with algorithmic deductive modules, and its theoretical results concern obvious requirements for these modules. It is reassuring to observe that when the one-way pattern matcher INSTANCE reports success, one of the arguments can be inferred from the other, which makes INSTANCE sound. It is likewise nice to know that the subproblem recognizer gives maximal problem decompositions. Still more important are the results of implementing the modules. A deduction complex made up of a simple supervisor for these modules, together with a definition applier and a (connection graph resolution) refutation constructor, could solve problems distinctly beyond the capability of the refutation machine alone. The setup is structurally similar to the Hearsay architecture, in which separate specialist modules - here even located in different machines, thus allowing parallel processing - also cooperate. We intend to develop other deductive modules and to give more attention to elaborate supervisors as a means of pushing the deductive limitations further away.
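To make the soundness property concrete, the sketch below is a hypothetical one-way matcher in the spirit of INSTANCE (the thesis's actual module is not reproduced here). Variables, written here as strings beginning with '?', may occur only in the pattern; a successful match therefore returns a substitution exhibiting the second argument as an instance of the first, which is exactly why success licenses an inference.

```python
def instance(pattern, term, subst=None):
    """One-way matching: variables occur only in `pattern`. Returns a
    substitution mapping pattern variables to subterms of `term` when
    `term` is an instance of `pattern`, and None otherwise. (On failure
    `subst` may be left partially filled; good enough for a sketch.)"""
    if subst is None:
        subst = {}
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in subst:                     # variable already bound:
            return subst if subst[pattern] == term else None
        subst[pattern] = term                    # bind the variable
        return subst
    if isinstance(pattern, list) and isinstance(term, list):
        if len(pattern) != len(term):
            return None
        for p, t in zip(pattern, term):
            if instance(p, t, subst) is None:    # every argument must match
                return None
        return subst
    return subst if pattern == term else None    # constants must be identical

print(instance(['p', '?x', ['f', '?x']], ['p', 'a', ['f', 'a']]))  # {'?x': 'a'}
print(instance(['p', '?x', '?x'], ['p', 'a', 'b']))                # None: ?x clash
```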
The relative ease with which fairly complicated problems can be programmed, and the reasonable performance of such programs on a wide range of problem instances, suggest that more attention should be given to real-life applications of A.I. A few years ago we managed, in two months' time, to program a natural language input processor for a nice fragment of Dutch. This was no ad hoc program, but one that used the special ATN language, which supports a wide range of natural languages. With such tools, the development of commercial products becomes feasible. Industry and software houses should jump at these opportunities.
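The ATN language itself is not shown in this abstract. Purely as an illustration of the underlying idea, here is a hypothetical, much-simplified transition-network recognizer in Python; real ATNs additionally have registers, arbitrary tests and backtracking, and the toy grammar and lexicon below are invented for the example.

```python
LEXICON = {'the': 'DET', 'dog': 'N', 'cat': 'N', 'sees': 'V'}

NETS = {                              # state -> arcs; an arc is (kind, label, next_state)
    'S':   [('PUSH', 'NP', 'S1')],    # a sentence starts with a noun phrase...
    'S1':  [('CAT', 'V', 'S2')],      # ...then a verb...
    'S2':  [('PUSH', 'NP', 'DONE')],  # ...then another noun phrase
    'NP':  [('CAT', 'DET', 'NP1')],
    'NP1': [('CAT', 'N', 'DONE')],
}

def traverse(net, words, i=0):
    """Try to traverse `net` starting at word position `i`; return the
    position after the recognized constituent, or None on failure."""
    state = net
    while state != 'DONE':
        for kind, label, nxt in NETS[state]:
            if kind == 'CAT' and i < len(words) and LEXICON.get(words[i]) == label:
                i, state = i + 1, nxt          # consume one word of category `label`
                break
            if kind == 'PUSH':
                j = traverse(label, words, i)  # recursive call into a subnetwork
                if j is not None:
                    i, state = j, nxt
                    break
        else:
            return None                        # no arc applicable: dead end
    return i

words = 'the dog sees the cat'.split()
print(traverse('S', words) == len(words))      # True: the sentence is accepted
```

The point of the grammar-as-network representation is that new constructions, or a new language, are added by writing new networks and lexicon entries rather than new program code.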
Although we have studied quite disparate topics in A.I., the method we employed has been consistently the same. A quick and superficial literature study, guided by fresh intuitions, was translated as rapidly as possible into a running program. Study of the results and the behavior of the program then led to improvements, generalizations and/or complete revision. The literature was subsequently studied more carefully, and some theory was eventually developed. Finally, experiments were performed, using where possible problems from the literature.
This method is time-consuming, and not the way to present Flashy Grand Theories. In fact, we shy away from F.G.T. because there have been too many of them in the past, lending A.I. an exotic albeit questionable reputation. We recommend this method as a way to study A.I. “seriously”.
| Item Type: | Thesis (Doctoral) |
|---|---|
| Report Nr: | HDS-42 |
| Series Name: | ILLC Historical Dissertation (HDS) Series |
| Year: | 2025 |
| Subjects: | Computation, Logic |
| Depositing User: | Dr Marco Vervoort |
| Date Deposited: | 15 Apr 2025 12:25 |
| Last Modified: | 15 Apr 2025 12:25 |
| URI: | https://eprints.illc.uva.nl/id/eprint/2357 |