Declarative logic programs (LP) based on the well-founded semantics (WFS) are widely used for knowledge representation (KR). Logical functions add desirable expressivity to KR, but their presence makes LP inferencing undecidable. In this paper, we present radial restraint: a novel approach to bounded rationality in LP. Radial restraint is parameterized by a norm that measures the syntactic complexity of a term, together with an abstraction function based on that norm. When a term exceeds a bound on the norm, the term is assigned the WFS's third truth value, undefined. If the norm is finitary, radial restraint guarantees finiteness of models and decidability of inferencing, even when logical functions are present. It further guarantees soundness, even in the presence of non-monotonicity. We give a fixed-point semantics for radially restrained well-founded models, which soundly approximate well-founded models. We also show how to perform correct inferencing relative to such models via SLGABS, an extension of tabled SLG resolution that uses norm-based abstraction functions. Finally, we discuss how SLGABS is implemented in the engine of XSB Prolog and scales to knowledge bases with more than 10^8 rules and facts.
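To make the idea of a norm-based abstraction concrete, the following is a minimal Python sketch, not the paper's implementation: terms are encoded as nested tuples, the norm is syntactic term depth (one example of a finitary norm), and a term whose norm exceeds the bound is mapped to the truth value undefined. The names `depth`, `restrain`, and the tuple encoding are illustrative assumptions.

```python
# Illustrative sketch of norm-based radial restraint.
# A term is either an atom (a string) or a nested tuple
# ("functor", arg1, arg2, ...), e.g. s(s(0)) is ("s", ("s", "0")).

UNDEFINED = "undefined"  # the WFS's third truth value

def depth(term):
    """A finitary norm: the syntactic depth of a term (atoms have depth 0)."""
    if isinstance(term, tuple):
        return 1 + max((depth(arg) for arg in term[1:]), default=0)
    return 0

def restrain(term, bound):
    """Abstraction function: keep the term if its norm is within the
    bound, otherwise abstract it to undefined."""
    return term if depth(term) <= bound else UNDEFINED

# s(s(s(0))) has depth 3
t = ("s", ("s", ("s", "0")))
print(restrain(t, 5))  # within the bound: the term is kept
print(restrain(t, 2))  # exceeds the bound: abstracted to undefined
```

Because the depth norm is finitary, only finitely many terms fall under any fixed bound, which is what underlies the finiteness and decidability guarantees stated above.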