Monday, February 24, 2020

Design Metaphysics: An Exercise in Abductive Reasoning I


Conceptual impressions surrounding this post are yet to be substantiated, corroborated, confirmed or woven into a larger argument, context or network.

Design, by nature, is theoretical.

in theory: in theory, your idea sounds great, but can it be practically applied? In principle, on paper, in the abstract, all things being equal, in an ideal world; hypothetically, theoretically, supposedly.

The concept of Design Metaphysics has been unintentionally discussed by virtue of the investigative efforts of Paul Thagard and Cameron Shelley in their paper titled "Abductive reasoning: Logic, visual thinking, and coherence". I believe that certain qualities attributable to the concept of Design, in accordance with Design Metaphysics, are unknowingly veiled in the authors' critique of two earlier models of abductive reasoning discussed in the paper.

I have highlighted certain comments that I believe support my position concerning the concept of Design. References can be found throughout the blog sites listed below.




Abductive reasoning: Logic, visual thinking, and coherence* 

Paul Thagard and Cameron Shelley 

Philosophy Department
University of Waterloo
Waterloo, Ontario, N2L 3G1
pthagard@watarts.uwaterloo.ca
© Paul Thagard and Cameron Shelley, 1997

Abstract

This paper discusses abductive reasoning---that is, reasoning in which explanatory hypotheses are formed and evaluated. First, it criticizes two recent formal logical models of abduction. An adequate formalization would have to take into account the following aspects of abduction: explanation is not deduction; hypotheses are layered; abduction is sometimes creative; hypotheses may be revolutionary; completeness is elusive; simplicity is complex; and abductive reasoning may be visual and non-sentential. Second, in order to illustrate visual aspects of hypothesis formation, the paper describes recent work on visual inference in archaeology. Third, in connection with the evaluation of explanatory hypotheses, the paper describes recent results on the computation of coherence.
  
There has been an explosion of recent work in artificial intelligence that recognizes the importance of abductive reasoning---that is, reasoning in which explanatory hypotheses are formed and evaluated. Many important kinds of intellectual tasks, including medical diagnosis, fault diagnosis, scientific discovery, legal reasoning, and natural language understanding have been characterized as abduction. Appropriately, attempts have been made to achieve a more exact understanding of abductive reasoning by developing formal models that can be used to analyze the computational properties of abductive reasoning and its relation to other kinds of inference. Bylander et al. [4] have used their formal model to analyze the computational complexity of abduction and show that in general it is NP-hard. Konolige [16] has used a similar formal model to derive results concerning the relation between abductive reasoning and Reiter's model of diagnosis [16]. While these formal results are interesting and useful, it would be unfortunate if researchers were to conclude that the analyses of Konolige, Bylander et al. have provided a precise understanding of abductive reasoning. We shall discuss numerous important aspects of inference to explanatory hypotheses that are not captured by one or both of the formalisms that these authors have proposed. In particular, we show how these models do not adequately capture abductive discovery using representations that are pictorial, and we argue that abductive evaluation should be conceived in terms of coherence rather than deduction.
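To give a concrete feel for the kind of formal model these authors analyze, here is a minimal sketch (my own toy illustration, not the exact formalism of Bylander et al. or Konolige): abduction cast as a set-cover problem, in which each hypothesis explains a fixed subset of the data and we search for the smallest hypothesis sets that jointly explain everything. All names and the toy diagnostic data are assumptions for illustration; the brute-force search over subsets also hints at why the general problem is NP-hard.

```python
from itertools import combinations

# Toy diagnostic domain (hypothetical names, for illustration only):
# each hypothesis (disease) explains a fixed set of data (symptoms).
explains = {
    "flu":     {"fever", "cough", "fatigue"},
    "strep":   {"fever", "sore_throat"},
    "allergy": {"cough", "sneezing"},
    "mono":    {"fatigue", "sore_throat"},
}

data = {"fever", "cough", "sore_throat"}

def minimal_explanations(explains, data):
    """Return all smallest hypothesis sets that jointly explain the data.

    Brute force over subsets: fine for a toy case, but exponential in
    general, which gestures at why abduction is NP-hard.
    """
    hyps = list(explains)
    for size in range(1, len(hyps) + 1):
        found = [set(combo) for combo in combinations(hyps, size)
                 if data <= set().union(*(explains[h] for h in combo))]
        if found:
            return found
    return []

print(minimal_explanations(explains, data))
# -> [{'flu', 'strep'}, {'flu', 'mono'}, {'strep', 'allergy'}]
```

Note that even this toy domain yields three minimum-size explanations, so the formal model alone does not say which to prefer; that is exactly the gap the paper's discussion of coherence and simplicity addresses.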

(Section 1, The Models, has no commentary.)

2. The limitations

2.1 Explanation is not deduction

First, let us consider the nature of explanation. The analysis of Bylander et al. contains the unanalyzed function e that specifies which hypotheses explain which data. Konolige's account has the apparent advantage that it spells out this relation by using the notion of logical consequence as represented by the turnstile, "⊢". Hypotheses (causes) explain data (effects) if the latter can be deduced from the former and the domain theory. Konolige is thus assuming a deductive account of explanation that has been the subject of several decades of critical discussion in the philosophy of science.
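Schematically, the deductive account Konolige assumes can be written as follows (a standard reconstruction, not a verbatim quotation of his definition; the symbols Σ for the domain theory, A for the hypothesized causes, and O for the observations are my notation):

```latex
% Deductive account of explanation (schematic reconstruction):
% the hypothesized causes A explain observations O, relative to
% domain theory \Sigma, iff
\Sigma \cup A \vdash O
\qquad \text{and} \qquad
\Sigma \cup A \nvdash \bot
% i.e., O follows deductively from the theory plus the hypotheses,
% and the hypotheses are jointly consistent with the theory.
```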

In 1948, Hempel and Oppenheim proposed what has come to be known as the deductive-nomological model of explanation [12]. On this model, an explanation is a deductive argument in which a sentence representing a phenomenon to be explained is derived from a set of sentences that describe particular facts and general laws (nomos is Greek for law). This model provides a good approximation for many explanations, particularly in mathematical physics. But it is evident that the deductive model fails to provide either necessary or sufficient conditions for explanation. See [15] for a comprehensive review of several decades of philosophical discussions of the nature of explanation, and [18] for application to artificial intelligence.

First, there are many explanations in science and everyday life that do not conform to the deductive model. Hempel himself discussed at length statistical explanations in which what is explained follows only probabilistically, not deductively, from the laws and other sentences that do the explaining. Many critics have argued that explanations in such fields as history and evolutionary biology rarely have a deductive form. In both philosophy and AI, researchers have proposed that many explanations can be understood in terms of applications of schemas that fit a phenomenon into a pattern without producing a deductive argument. (For a review of different approaches to explanation in these fields, see [27, 28].) For example, a Darwinian explanation of how a species evolved by natural selection applies a general pattern that cites biological mechanisms and historical facts to suggest how an adaptation might have come about. But the historical record is too sparse and biological principles are too qualitative and imprecise for deductive derivation. Thus Konolige's use of deduction in his characterization of abduction arbitrarily excludes many domains in which hypotheses are formed and evaluated but do not provide deductive explanations.
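Hempel's own stock example of inductive-statistical explanation (paraphrased, so treat the details as illustrative) makes the contrast vivid: the explanans confers only high probability, not certainty, on the explanandum, so no deduction is available:

```latex
% Inductive-statistical explanation, after Hempel (schematic):
% Statistical law:
P(\text{recovery} \mid \text{strep infection} \land \text{penicillin}) = r,
\qquad r \approx 1
% Particular facts: Jones had a strep infection and took penicillin.
% ================================================================= [r]
% Conclusion: Jones recovered.
% The bracketed [r] marks inductive support of strength r,
% not deductive entailment.
```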

Second, the deductive model of explanation does not even provide sufficient conditions for explanation, since there are examples that conform to the model but do not appear to constitute explanations. For example, we can deduce the height of a flagpole from information about its shadow along with trigonometry and laws of optics. But it seems odd to say that the length of a flagpole's shadow explains the flagpole's height. Konolige's subset-minimality requirement serves to rule out some of the cases of irrelevance that philosophers have discussed, for example the explanation that a man is not pregnant because he takes birth control pills. But other examples such as the flagpole show that some additional notion of causal relevance is crucial to many kinds of explanation, and there is little hope of capturing this notion using logic alone. Contrast Pearl's [20] work on Bayesian networks and Peng and Reggia's [21] model of abduction which employ ineliminably intuitive notions of causality. A general model of abduction requires an account of explanation that is richer than deduction.
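The asymmetry is visible in the trigonometry itself. Writing h for the flagpole's height, s for the shadow's length, and θ for the sun's elevation, the deduction runs equally well in both directions, yet only one direction is explanatory:

```latex
\tan\theta = \frac{h}{s}
\quad\Longrightarrow\quad
s = \frac{h}{\tan\theta}
\qquad\text{and equally}\qquad
h = s\,\tan\theta
```

Both derivations satisfy the deductive model, but only the first (the pole's height, together with optics, explains the shadow's length) tracks the causal direction.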

2.2 Hypotheses are layered

Bylander et al. assume that abduction can be characterized by distinguishing sets of data from sets of hypotheses, with explanation consisting of a mapping from the latter to the former. This characterization is adequate for straightforward kinds of medical diagnosis involving a given set of symptoms (data) that can be explained by a given set of diseases. But it neglects the complexity of other kinds of abductive tasks in which the organization of hypotheses and data is more complex. In particular, it rules out a large set of cases where hypotheses explain other hypotheses and selection of the best overall explanation depends on taking these relations into account. For example, in legal inference the availability of a possible motive goes a long way toward increasing the chance of a murder conviction. The data are the various pieces of physical evidence, e.g., that the suspect's fingerprints were found on the murder weapon. The hypothesis is that the suspect is the murderer, and a higher-level hypothesis might be that the suspect hated the victim because of some previous altercation. The plausibility of the lower-level hypothesis comes not only from what it explains, but also from its being explained in turn. This kind of hierarchical explanation, in which hypotheses explain other hypotheses that explain data, is also found in science and medicine; it has been discussed both in the context of Bayesian models of probabilistic belief revision [20] and in the context of explanatory coherence accounts of scientific inference [26, 28].
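A toy sketch of this layered structure (the names, edges, and scoring rule are my own illustrative assumptions, not the authors' model or Pearl's): a hypothesis gains support both from the data it explains and from being explained by a higher-level hypothesis such as a motive.

```python
# Layered explanation, toy version (names, edges, and scoring are
# illustrative assumptions only). Edges point from explainer to explained.
explains_edges = [
    ("hated_victim", "is_murderer"),  # motive explains the main hypothesis
    ("is_murderer", "fingerprints"),  # main hypothesis explains the data
    ("is_murderer", "no_alibi"),
]
data = {"fingerprints", "no_alibi"}

def support(h, edges, data):
    """Crude score: +1 per datum h explains (directly or through a
    hypothesis it explains), +1 if h is itself explained from above."""
    below = {b for a, b in edges if a == h}
    score = len(below & data)
    score += sum(support(b, edges, data) for b in below - data)
    if any(b == h for _, b in edges):
        score += 1  # being explained (e.g., by a motive) adds support
    return score

print(support("is_murderer", explains_edges, data))  # 3: two data + a motive
```

A flat data-to-hypotheses mapping of the Bylander et al. kind has no place for the first edge, which is precisely the point of the legal example.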

Just as Bylander et al. assume that only data are explained, so Konolige assumes that only observed effects are explained by derivation from causes. But as both Bayesian and explanatory coherence analyses allow, causes are often themselves effects and assessment of overall acceptability of explanatory hypotheses must take this into account.

2.3 Abduction is sometimes creative

The definitions of Konolige and Bylander et al. have the unfortunate implication that abduction is always a matter of selection from a known set of hypotheses or causes. Creative abduction often involves the construction of novel hypotheses involving newly formed concepts such as natural selection or AIDS. For Charles Peirce, who coined the term "abduction" a century ago, the introduction of unifying conceptions was an important part of abduction [11, 25], and it would be unfortunate if our understanding of abduction were limited to more mundane cases where hypotheses are simply assembled. Abduction does not occur in the context of a fixed language, since the formation of new hypotheses often goes hand in hand with the development of new theoretical terms such as "atom," "electron," "quark," "gene," "neuron" and "AIDS."


2.4 Hypotheses may be revolutionary

The first clause of Konolige's second definition requires that a hypothesized cause A be consistent with a domain theory. While this requirement may be acceptable for mundane applications, it will not do for interesting cases of belief revision where the introduction of new hypotheses leads to rejection of previously held theories. For example, when Darwin proposed the hypothesis that species evolve as the result of natural selection, he was contradicting the dominant view that species were fixed by divine creation. Konolige's consistency requirement would prevent Darwin's hypothesis from ever being considered as part of an abductive explanation that would eventually supplant the accepted biological/theological domain theory. We cannot simply delete a belief and then replace it with one inconsistent with it, because until the new belief comes into competition with the old one, there is no reason to delete the old one. Belief revision requires a complex balancing of a large number of beliefs and kinds of evidence. Algorithms for performing such balancing are available, such as Bayesian belief revision and connectionist judgments of explanatory coherence. Bylander et al. do not have a consistency condition like Konolige's.
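For a feel of the connectionist style of balancing mentioned here, the following is a minimal sketch loosely in the spirit of Thagard's explanatory-coherence networks (the units, weights, and update rule are my simplifications, not the ECHO algorithm itself): explanatory links excite, competition inhibits, and activation settles toward the more coherent hypothesis rather than requiring prior consistency.

```python
# Connectionist coherence, minimal sketch (my simplification, not ECHO).
# Positive weight = explanatory support, negative = incompatibility.
weights = {
    ("selection", "fossil_record"): 0.4,  # hypothesis explains evidence
    ("selection", "homologies"):    0.4,
    ("creation", "fossil_record"):  0.1,  # rival explains evidence weakly
    ("selection", "creation"):     -0.6,  # rival hypotheses compete
}
units = {"selection": 0.0, "creation": 0.0,
         "fossil_record": 1.0, "homologies": 1.0}
evidence = {"fossil_record", "homologies"}  # clamped to activation 1.0

def settle(units, weights, steps=200, decay=0.05, rate=0.1):
    """Update activations until the network settles; links are symmetric."""
    for _ in range(steps):
        for u in units:
            if u in evidence:
                continue  # evidence units stay clamped
            net = sum(w * units[b] for (a, b), w in weights.items() if a == u)
            net += sum(w * units[a] for (a, b), w in weights.items() if b == u)
            units[u] = max(-1.0, min(1.0, (1 - decay) * units[u] + rate * net))
    return units

print(settle(units, weights))
# 'selection' settles near 1.0; 'creation' is driven down by competition.
```

The point of the sketch: the inconsistent rival is never excluded in advance, as Konolige's consistency clause would demand; it simply loses the competition once the evidence is weighed.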

2.5 Completeness is elusive

Bylander et al. do not count a set of hypotheses as an explanation unless it is complete, i.e., explains all the data. They admit that there might be "partial" explanations, but want completeness as a requirement because of the goal that all the data should be explained. This is a laudable goal, but it does not justify building completeness into the definition of an explanation, since the goal is so rarely accomplished in realistic situations. From medicine to science, it is not typically the case that everything can be explained even by the best of theories. Even the greatest scientific theories, such as Newtonian mechanics, have faced anomalies throughout their histories; for example, the anomalous precession of Mercury's perihelion was not explained until Einstein developed the general theory of relativity. Doctors need to make diagnoses even when some symptoms remain puzzling. The requirement of completeness makes sense only in limited closed domains, such as simple circuits, where one can be assured that everything can be explained given the known causes.


2.6 Simplicity is complex

Many researchers on abduction have seen the need to use simplicity as one of the constraints on inference to the best explanation, i.e., we want not simply a set of hypotheses that explain as much as possible, but also the most economical set of assumptions possible. Hobbs et al. [13] sum this up by saying that we want the most bang (facts explained) for the buck (quantity of assumptions). But simplicity is an elusive notion that is not adequately captured by the relation of subset minimality, which deals only with cases where we can prefer an explanation by a set of hypotheses to an explanation by a superset of those hypotheses. We need a broader notion of simplicity to handle cases where the competing explanations are accomplished by sets of hypotheses that are of different sizes but are not subsets of each other. Such cases are common in the history of science and require richer notions of simplicity than subset minimality [28, 25].
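One crude way to operationalize the "bang for the buck" idea (my illustration, not Hobbs et al.'s metric nor the richer notions cited above): compare competing, non-nested hypothesis sets by facts explained per hypothesis assumed. Note that subset minimality is silent in exactly this case.

```python
# "Bang for the buck": facts explained per hypothesis assumed.
# The two candidate explanations below (hypothetical) are not subsets
# of one another, so subset minimality cannot rank them.
candidates = {
    "E1": {"hypotheses": {"h1", "h2", "h3"},
           "explained":  {"d1", "d2", "d3", "d4"}},
    "E2": {"hypotheses": {"h4", "h5"},
           "explained":  {"d1", "d2", "d3", "d4"}},
}

def bang_for_buck(e):
    """Explanatory coverage per assumption made."""
    return len(e["explained"]) / len(e["hypotheses"])

for name in candidates:
    print(name, bang_for_buck(candidates[name]))  # E1: 1.33..., E2: 2.0

best = max(candidates, key=lambda k: bang_for_buck(candidates[k]))
print("preferred:", best)  # E2: same coverage from fewer assumptions
```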


2.7 Abductive reasoning may use non-sentential representations

Konolige's definition, like virtually all available accounts of abduction, presumes that explanatory hypotheses are represented sententially. But we will now show that some abductive inference is better understood as using pictorial or other iconic representations.

Continued on Design Metaphysics: An Exercise in Abductive Reasoning II

Edited: 03.14.2020, 12.01.2020, 08.06.2023
Find your truth. Know your mind. Follow your heart. Love eternal will not be denied. Discernment is an integral part of self-mastery. You may share this post as long as the author, copyright, and URL https://designmetaphysics.blogspot.com/ are included as the resource and it is shared on a non-commercial, no-charge basis. Please note … posts are continually being edited over time. Copyright © 2023 C.G. Garant. All Rights Reserved. (Fair use notice) AI usage prohibited. You are also invited to visit https://designconsciousness.blogspot.com/ and https://sagariandesignnetwork.blogspot.com and https://www.pinterest.com



