Learning explainable concepts in the presence of a qualitative model.

Title: Learning explainable concepts in the presence of a qualitative model.
Author: Rouget, Thierry.
Abstract: This thesis addresses the problem of learning concept descriptions that are interpretable, or explainable. Explainability is understood as the ability to justify the learned concept in terms of the existing background knowledge. The starting point for the work was an existing system that would induce only fully explainable rules. The system performed well when the model used during induction was complete and correct. In practice, however, models are likely to be imperfect, i.e., incomplete or incorrect. We report here a new approach that achieves explainability with imperfect models. The basis of the system is the standard inductive search driven by an accuracy-oriented heuristic, biased towards rule explainability. The bias is abandoned when there is heuristic evidence that constraining the search to explainable rules alone would cause a significant loss of accuracy. Users can express their relative preference for accuracy versus explainability. Experiments with the system indicate that, even with a partially incomplete and/or incorrect model, insisting on explainability results in only a small loss of accuracy. We also show how the new approach can repair a faulty model using evidence derived from the data during induction.
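The core trade-off the abstract describes — prefer explainable rules, but abandon that bias when it costs too much accuracy — can be sketched as a simple selection rule. This is an illustrative sketch only; the `Rule` representation, the `tolerance` parameter, and all names here are hypothetical and not taken from the thesis itself.

```python
# Hypothetical sketch of the accuracy-vs-explainability bias described
# in the abstract. None of these names come from the thesis.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    accuracy: float       # estimated accuracy on training data
    explainable: bool     # justifiable from the qualitative model?

def select_rule(candidates, tolerance):
    """Prefer the best explainable rule, but abandon the explainability
    bias when it would cost more than `tolerance` accuracy.

    `tolerance` stands in for the user's relative preference for
    accuracy vs. explainability (0.0 = insist on accuracy; larger
    values tolerate a bigger accuracy loss to stay explainable)."""
    best_overall = max(candidates, key=lambda r: r.accuracy)
    explainable = [r for r in candidates if r.explainable]
    if explainable:
        best_expl = max(explainable, key=lambda r: r.accuracy)
        # Keep the bias only while the accuracy loss stays acceptable.
        if best_overall.accuracy - best_expl.accuracy <= tolerance:
            return best_expl
    return best_overall

rules = [Rule("r1", 0.91, False), Rule("r2", 0.88, True), Rule("r3", 0.80, True)]
print(select_rule(rules, tolerance=0.05).name)  # small loss accepted -> r2
print(select_rule(rules, tolerance=0.01).name)  # bias abandoned -> r1
```

With a tolerance of 0.05 the 3-point accuracy loss is accepted and the explainable rule wins; with a tolerance of 0.01 the bias is abandoned and the more accurate, unexplainable rule is returned — mirroring the behaviour the abstract attributes to the system.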
Date: 1995
URI: http://hdl.handle.net/10393/9762

Files in this item

MM11595.PDF (3.059 MB, application/pdf)

