TY - GEN
T1 - Radial restraint
T2 - 27th AAAI Conference on Artificial Intelligence, AAAI 2013
AU - Grosof, Benjamin
AU - Swift, Terrance
PY - 2013/12/1
Y1 - 2013/12/1
N2 - Declarative logic programs (LP) based on the well-founded semantics (WFS) are widely used for knowledge representation (KR). Logical functions are expressively desirable in KR, but their presence makes LP inferencing undecidable. In this paper, we present radial restraint: a novel approach to bounded rationality in LP. Radial restraint is parameterized by a norm that measures the syntactic complexity of a term, along with an abstraction function based on that norm. When a term exceeds a bound for the norm, the term is assigned the WFS's third truth value, undefined. If the norm is finitary, radial restraint guarantees finiteness of models and decidability of inferencing, even when logical functions are present. It further guarantees soundness, even in the presence of non-monotonicity. We give a fixed-point semantics for radially restrained well-founded models, which soundly approximate well-founded models. We also show how to perform correct inferencing relative to such models via SLG_ABS, an extension of tabled SLG resolution that uses norm-based abstraction functions. Finally, we discuss how SLG_ABS is implemented in the engine of XSB Prolog and scales to knowledge bases with more than 10^8 rules and facts.
UR - http://www.scopus.com/inward/record.url?scp=84893369922&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84893369922&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84893369922
SN - 9781577356158
T3 - Proceedings of the 27th AAAI Conference on Artificial Intelligence, AAAI 2013
SP - 379
EP - 386
BT - Proceedings of the 27th AAAI Conference on Artificial Intelligence, AAAI 2013
Y2 - 14 July 2013 through 18 July 2013
ER -