Functions defined by well-founded recursion are those in which each recursive call has arguments that are smaller (in a suitable sense) than the function's parameters.
In contrast to structural recursion, in which recursive definitions must satisfy particular syntactic requirements, definitions that use well-founded recursion employ semantic arguments.
This allows a larger class of recursive definitions to be accepted.
Furthermore, when Lean's automation fails to construct a termination proof, it is possible to specify one manually.
All definitions are treated identically by the Lean compiler.
In Lean's logic, definitions that use well-founded recursion typically do not reduce definitionally.
The reductions do hold as propositional equalities, however, and Lean automatically proves them.
This does not typically make it more difficult to prove properties of definitions that use well-founded recursion, because the propositional reductions can be used to reason about the behavior of the function.
It does mean, however, that using these functions in types typically does not work well.
Even when the reduction behavior happens to hold definitionally, it is often much slower than for structurally recursive definitions, because the kernel must unfold the termination proof along with the definition.
When possible, recursive functions that are intended for use in types or in other situations where definitional equality is important should be defined with structural recursion.
To explicitly use well-founded recursion, a function or theorem definition can be annotated with a `termination_by` clause that specifies the measure by which the function terminates.
The measure should be a term that decreases at each recursive call; it may be one of the function's parameters or a tuple of the parameters, but it may also be any other term.
The measure's type must be equipped with a well-founded relation, which determines what it means for the measure to decrease.
Explicit Well-Founded Recursion
The `termination_by` clause introduces the termination argument.
Specify a termination measure for recursive functions.
```
termination_by a - b
```
indicates that termination of the currently defined recursive function follows
because the difference between the arguments `a` and `b` decreases.
If the function takes further arguments after the colon, you can name them as follows:
```
def f (a : Nat) : Nat → Nat → Nat :=
  ...
termination_by b c => a - b
```
By default, a `termination_by` clause will cause the function to be constructed using well-founded
recursion. The syntax `termination_by structural a` (or `termination_by structural _ c => c`)
indicates that the function is expected to be structurally recursive on the argument. In this case
the body of the `termination_by` clause must be one of the function's parameters.
If omitted, a termination measure will be inferred. If written as `termination_by?`,
the inferred termination measure will be suggested.
terminationBy ::= ...
    | termination_by (ident* "=>")? term
The identifiers before the optional `=>` can bring function parameters into scope that are not
already bound in the declaration header, and the mandatory term is the termination measure, expressed in terms of the function's parameters, whether introduced in the header or locally in the clause.
Division by Iterated Subtraction
Division can be specified as the number of times the divisor can be subtracted from the dividend.
This operation cannot be elaborated using structural recursion, because the recursive call's argument is computed by subtraction rather than obtained by pattern matching.
The value of n does decrease with each recursive call, so well-founded recursion can be used to justify the definition of division by iterated subtraction.
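A sketch of this definition (the name `div` and the exact phrasing of the base case are illustrative); using a dependent `if` makes the hypothesis about the condition available to the termination proof:

```lean
def div (n d : Nat) : Nat :=
  -- Stop when the divisor is zero or larger than the dividend
  if h : d = 0 ∨ n < d then 0
  -- Otherwise subtract once and count it
  else div (n - d) d + 1
termination_by n
decreasing_by
  -- Goal: n - d < n, provable from h : ¬(d = 0 ∨ n < d)
  omega
```

Because `h` rules out both `d = 0` and `n < d`, `omega` can conclude `n - d < n`.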
`Prod`, ordered lexicographically: `(a₁, b₁) ≺ (a₂, b₂)` if and only if `a₁ ≺ a₂`, or `a₁ = a₂` and `b₁ ≺ b₂`.
Every type that is an instance of the SizeOf type class, which provides a method SizeOf.sizeOf, has a well-founded relation.
For these types, `x₁ ≺ x₂` if and only if `sizeOf x₁ < sizeOf x₂`. For inductive types, a `SizeOf` instance is automatically derived by Lean.
Note that there exists a low-priority instance `instSizeOfDefault` that provides a `SizeOf` instance for any type; its `sizeOf` method always returns 0.
This instance cannot be used to prove that a function terminates using well-founded recursion, because `0 < 0` is false.
Default Size Instance
Function types in general do not have a well-founded relation that's useful for termination proofs.
Instance synthesis thus selects instSizeOfDefault and the corresponding well-founded relation.
If the measure is a function, the default SizeOf instance is selected and the proof cannot succeed.
`p₁ p₂ …` are the parameters of the function definition.
The context of the proof obligation is the local context of the recursive call.
In particular, local assumptions (such as those introduced by `if h : _`, `match h : _ with`, or `have`) are available.
If a function parameter is the discriminant of a pattern match (e.g. by a `match` expression), then this parameter is refined to the matched pattern in the proof obligation.
The overall termination proof obligation consists of one goal for each recursive call.
By default, the tactic decreasing_trivial is used to prove each proof obligation.
A custom tactic script can be provided using the optional `decreasing_by` clause, which comes after the `termination_by` clause.
This tactic script is run once, with one goal for each proof obligation, rather than separately on each proof obligation.
Termination Proof Obligations
The following recursive definition of the Fibonacci numbers has two recursive calls, which results in two goals in the termination proof.
```lean
def fib (n : Nat) :=
  if h : n ≤ 1 then 1
  else fib (n - 1) + fib (n - 2)
termination_by n
```
```
unsolved goals
n : Nat
h : ¬n ≤ 1
⊢ n - 1 < n

n : Nat
h : ¬n ≤ 1
⊢ n - 2 < n
```
Here, the measure is simply the parameter itself, and the well-founded order is the less-than relation on natural numbers.
The first proof goal requires the user to prove that the argument of the first recursive call, namely n-1, is strictly smaller than the function's parameter, n.
Both termination proofs can be easily discharged using the omega tactic.
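For instance, the proof obligations can be discharged explicitly with a `decreasing_by` clause that applies `omega` to every goal:

```lean
def fib (n : Nat) :=
  if h : n ≤ 1 then 1
  else fib (n - 1) + fib (n - 2)
termination_by n
decreasing_by
  -- Two goals: n - 1 < n and n - 2 < n, both under h : ¬n ≤ 1
  all_goals omega
```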
If a parameter of the function is the discriminant of a pattern match, then the proof obligations mention the refined parameter.
```lean
def fib : Nat → Nat
  | 0 | 1 => 1
  | .succ (.succ n) => fib (n + 1) + fib n
termination_by n => n
```
```
unsolved goals
n : Nat
⊢ n + 1 < n.succ.succ

n : Nat
⊢ n < n.succ.succ
```
Additionally, the context is enriched with additional assumptions that can make it easier to prove termination.
Some examples include:
- In the branches of an if-then-else expression, a hypothesis that asserts the current branch's condition is added, much as if the dependent if-then-else syntax had been used.
- In the function argument to certain higher-order functions, the context of the function's body is enriched with assumptions about the argument.
This list is not exhaustive, and the mechanism is extensible.
It is described in detail in the section on preprocessing.
Enriched Proof Obligation Contexts
Here, the non-dependent `if` does not add a local assumption about the condition (that is, whether `n ≤ 1`) to the local contexts in the branches.
```lean
def fib (n : Nat) :=
  if n ≤ 1 then 1
  else fib (n - 1) + fib (n - 2)
termination_by n
```
```
unsolved goals
n : Nat
h✝ : ¬n ≤ 1
⊢ n - 1 < n

n : Nat
h✝ : ¬n ≤ 1
⊢ n - 2 < n
```
Nevertheless, the assumptions are available in the context of the termination proof:
Termination proof obligations in the body of a `for … in` loop are also enriched, in this case with a `Std.Range` membership hypothesis:
This feature requires special setup for the higher-order function under which the recursive call is nested, as described in the section on preprocessing.
In the following definition, identical to the one above except using a custom, equivalent function instead of List.map, the proof obligation context is not enriched:
If no `decreasing_by` clause is given, then the `decreasing_tactic` is used implicitly, and applied to each proof obligation separately.
The tactic decreasing_tactic mainly deals with lexicographic ordering of tuples, applying Prod.Lex.right if the left components of the product are definitionally equal, and Prod.Lex.left otherwise.
After preprocessing tuples this way, it calls the decreasing_trivial tactic.
The tactic decreasing_trivial is an extensible tactic that applies a few common heuristics to solve a termination goal.
In particular, it tries the following tactics and theorems:
- `array_get_dec` and `array_mem_dec`, which prove that the size of an array's elements is less than the size of the array
- `sizeOf_list_dec`, which proves that the size of a list's elements is less than the size of the list
- `String.Iterator.sizeOf_next_lt_of_hasNext` and `String.Iterator.sizeOf_next_lt_of_atEnd`, which handle iteration through a string using `for`
This tactic is intended to be extended with further heuristics using `macro_rules`.
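For example, a hypothetical extension that makes `decreasing_trivial` also try `omega`:

```lean
-- Add another alternative for `decreasing_trivial`; all registered
-- macro_rules alternatives are tried when the tactic runs.
macro_rules
  | `(tactic| decreasing_trivial) => `(tactic| omega)
```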
No Backtracking of Lexicographic Order
A classic example of a recursive function that needs a more complex measure is the Ackermann function:
The measure is a tuple, so every recursive call has to be on arguments that are lexicographically smaller than the parameters.
The default decreasing_tactic can handle this.
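For reference, the standard definition with its tuple measure looks like this:

```lean
def ack : Nat → Nat → Nat
  | 0, n => n + 1
  | m + 1, 0 => ack m 1
  | m + 1, n + 1 => ack m (ack (m + 1) n)
-- The pair (m, n) decreases lexicographically at every call
termination_by m n => (m, n)
```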
In particular, note that the third recursive call has a second argument that is smaller than the second parameter and a first argument that is definitionally equal to the first parameter.
This allowed decreasing_tactic to apply Prod.Lex.right.
It fails, however, with the following modified function definition, where the third recursive call's first argument is provably smaller or equal to the first parameter, but not syntactically equal:
```lean
def synack : Nat → Nat → Nat
  | 0, n => n + 1
  | m + 1, 0 => synack m 1
  | m + 1, n + 1 => synack m (synack (m / 2 + 1) n)
termination_by m n => (m, n)
```
```
failed to prove termination, possible solutions:
  - Use `have`-expressions to prove the remaining goals
  - Use `termination_by` to specify a different well-founded relation
  - Use `decreasing_by` to specify your own tactic for discharging this kind of goal
case h
m n : Nat
⊢ m / 2 + 1 < m + 1
```
Because Prod.Lex.right is not applicable, the tactic used Prod.Lex.left, which resulted in the unprovable goal above.
This function definition may require a manual proof that uses the more general theorem Prod.Lex.right', which allows the first component of the tuple (which must be of type Nat) to be less or equal instead of strictly equal:
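A sketch of such a manual proof, assuming `Prod.Lex.right'` yields a `≤` goal on the first component and a `<` goal on the second (both closed here by `omega`):

```lean
def synack : Nat → Nat → Nat
  | 0, n => n + 1
  | m + 1, 0 => synack m 1
  | m + 1, n + 1 => synack m (synack (m / 2 + 1) n)
termination_by m n => (m, n)
decreasing_by
  · apply Prod.Lex.left; omega
  -- Here the first component only satisfies m / 2 + 1 ≤ m + 1,
  -- so Prod.Lex.right' is needed instead of Prod.Lex.right
  · apply Prod.Lex.right' <;> omega
  · apply Prod.Lex.left; omega
```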
If a recursive function definition does not indicate a termination measure, Lean will attempt to discover one automatically.
If neither `termination_by` nor `decreasing_by` is provided, Lean will try to infer structural recursion before attempting well-founded recursion.
If a `decreasing_by` clause is present, only well-founded recursion is attempted.
To infer a suitable termination measure, Lean considers multiple basic termination measures, which are termination measures of type Nat, and then tries all tuples of these measures.
The basic termination measures considered are:
- all parameters whose types have a non-trivial `SizeOf` instance
- the expression `e₂ - e₁`, whenever the local context of a recursive call has an assumption of type `e₁ < e₂` or `e₁ ≤ e₂`, where `e₁` and `e₂` are of type `Nat` and depend only on the function's parameters (this approach is based on Panagiotis Manolios and Daron Vroon, 2006. “Termination Analysis with Calling Context Graphs”. In Proceedings of the International Conference on Computer Aided Verification (CAV 2006). LNCS 4144.)
- in a mutual group, an additional basic measure used to distinguish between recursive calls to other functions in the group and recursive calls to the function being defined (for details, see the section on mutual well-founded recursion)
Candidate measures are basic measures or tuples of basic measures.
If any of the candidate measures allow all proof obligations to be discharged by the termination proof tactic (that is, the tactic specified by `decreasing_by`, or `decreasing_trivial` if there is no `decreasing_by` clause), then an arbitrary such candidate measure is selected as the automatic termination measure.
A termination_by? clause causes the inferred termination annotation to be shown.
It can be automatically added to the source file using the offered suggestion or code action.
To avoid the combinatorial explosion of trying all tuples of measures, Lean first tabulates all basic termination measures, determining whether the basic measure is decreasing, strictly decreasing, or non-decreasing.
A decreasing measure is smaller for at least one recursive call and never increases at any recursive call, while a strictly decreasing measure is smaller at all recursive calls.
A non-decreasing measure is one that the termination tactic could not show to be decreasing or strictly decreasing.
A suitable tuple is chosen based on the table. This approach is based on Lukas Bulwahn, Alexander Krauss, and Tobias Nipkow, 2007. “Finding Lexicographic Orders for Termination Proofs in Isabelle/HOL”. In Proceedings of the International Conference on Theorem Proving in Higher Order Logics (TPHOLS 2007). LNTCS 4732.
This table shows up in the error message when no automatic measure could be found.
Termination failure
If there is no `termination_by` clause, Lean attempts to infer a measure for well-founded recursion.
If it fails, then it prints the table mentioned above.
In this example, the `decreasing_by` clause simply prevents Lean from also attempting structural recursion; this keeps the error message specific.
```lean
def f : (n m l : Nat) → Nat
  | n + 1, m + 1, l + 1 =>
    [f (n + 1) (m + 1) (l + 1), f (n + 1) (m - 1) l, f n (m + 1) l].sum
  | _, _, _ => 0
decreasing_by all_goals decreasing_tactic
```
```
Could not find a decreasing measure.
The basic measures relate at each recursive call as follows:
(<, ≤, =: relation proved, ? all proofs failed, _: no proof attempted)
            n m l
1) 30:6-25  = = =
2) 31:6-23  = < _
3) 32:6-23  < _ _
Please use `termination_by` to specify a decreasing measure.
```
The three recursive calls are identified by their source positions.
This message conveys the following facts:
- In the first recursive call, all arguments are (provably) equal to the parameters.
- In the second recursive call, the first argument is equal to the first parameter and the second argument is provably smaller than the second parameter. The third parameter was not checked for this recursive call, because it was not necessary in order to determine that no suitable termination argument exists.
- In the third recursive call, the first argument decreases strictly, and the other arguments were not checked.
When termination proofs fail in this manner, a good technique to discover the problem is to explicitly indicate the expected termination argument using `termination_by`.
This will surface the messages from the failing tactic.
Array Indexing
The purpose of considering expressions of the form `e₂ - e₁` as measures is to support the common idiom of counting up to some upper bound, in particular when traversing arrays in possibly interesting ways.
In the following function, which performs binary search on a sorted array, this heuristic helps Lean to find the j-i measure.
The inference procedure selects some suitable measure rather than an optimal or minimal one; this is visible here in the inferred measure, which contains a redundant `j`:
Try this: termination_by (j, j - i)
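A sketch of such a binary search (the name `binSearch` and the precise bounds handling are illustrative); with the explicit measure `j - i`, both recursive calls decrease it:

```lean
def binSearch (xs : Array Nat) (x : Nat) (i j : Nat) : Option Nat :=
  if h : i < j then
    if xs[(i + j) / 2]! = x then some ((i + j) / 2)
    -- Search the upper half: i grows, so j - i shrinks
    else if xs[(i + j) / 2]! < x then binSearch xs x ((i + j) / 2 + 1) j
    -- Search the lower half: j shrinks, so j - i shrinks
    else binSearch xs x i ((i + j) / 2)
  else none
termination_by j - i
decreasing_by all_goals omega
```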
Termination Proof Tactics During Inference
The tactic indicated by `decreasing_by` is used slightly differently when inferring the termination measure than it is in the actual termination proof.
During inference, it is applied to a single goal, attempting to prove < or ≤ on Nat.
During the termination proof, it is applied to many simultaneous goals (one per recursive call), and the goals may involve the lexicographic ordering of pairs.
A consequence is that a `decreasing_by` block that addresses goals individually and which works successfully with an explicit termination argument can cause inference of the termination measure to fail:
```lean
def ack : Nat → Nat → Nat
  | 0, n => n + 1
  | m + 1, 0 => ack m 1
  | m + 1, n + 1 => ack m (ack (m + 1) n)
decreasing_by
  · apply Prod.Lex.left; omega
  · apply Prod.Lex.right; omega
  · apply Prod.Lex.left; omega
```
```
Could not find a decreasing measure.
The basic measures relate at each recursive call as follows:
(<, ≤, =: relation proved, ? all proofs failed, _: no proof attempted)
              x1 x2
1) 629:16-23  ?  ?
2) 630:27-40  _  _
3) 630:20-41  _  _
Please use `termination_by` to specify a decreasing measure.
```
It is advisable to always include a `termination_by` clause whenever an explicit `decreasing_by` proof is given.
Inference too powerful
Because decreasing_tactic avoids the need to backtrack by being incomplete with regard to lexicographic ordering, Lean may infer a termination measure that leads to goals that the tactic cannot prove.
In this case, the error message is the one that results from the failing tactic rather than the one that results from being unable to find a measure.
This is what happens in notAck:
```
failed to prove termination, possible solutions:
  - Use `have`-expressions to prove the remaining goals
  - Use `termination_by` to specify a different well-founded relation
  - Use `decreasing_by` to specify your own tactic for discharging this kind of goal
case h
m n : Nat
⊢ m / 2 + 1 < m + 1
```
In this case, explicitly stating the termination measure helps.
Lean supports the definition of mutually recursive functions using well-founded recursion.
Mutual recursion may be introduced using a `mutual` block, but it also results from `let rec` expressions and `where` blocks.
The rules for mutual well-founded recursion are applied to a group of actually mutually recursive, lifted definitions that results from the elaboration steps for mutual groups.
If any function in the mutual group has a `termination_by` or `decreasing_by` clause, well-founded recursion is attempted.
If a termination measure is specified using `termination_by` for any function in the mutual group, then all functions in the group must specify a termination measure, and the measures must all have the same type.
If no termination argument is specified, the termination argument is inferred, as described above. In the case of mutual recursion, a third class of basic measures is considered during inference, namely for each function in the mutual group the measure that is 1 for that function and 0 for the others. This allows Lean to order the functions so that some calls from one function to another are allowed even if the parameters do not decrease.
Mutual recursion without parameter decrease
In the following mutual function definitions, the parameter does not decrease in the call from g to f.
Nonetheless, the definition is accepted due to the ordering imposed on the functions themselves by the additional basic measure.
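A minimal sketch of this situation, with hypothetical functions `f` and `g`:

```lean
mutual
  def f : Nat → Nat
    | 0 => 0
    | n + 1 => g n    -- the argument decreases here
  def g (n : Nat) : Nat :=
    f n               -- same argument, but `f` can be ordered before `g`
end
```

The extra basic measure lets inference pick a measure under which the call from `g` to `f` descends even though `n` is unchanged.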
Lean preprocesses the function's body before determining the proof obligations at each call site, transforming it into an equivalent definition that may include additional information.
This preprocessing step is primarily used to enrich the local context with additional assumptions that may be necessary in order to solve the termination proof obligations, freeing users from the need to perform equivalent transformations by hand.
Preprocessing uses the simplifier and is extensible by the user.
The preprocessing happens in three steps:
Lean annotates occurrences of a function's parameter, or a subterm of a parameter, with the `wfParam` gadget.
More precisely, every occurrence of the function's parameters is wrapped in `wfParam`.
Whenever a `match` expression has any discriminant wrapped in `wfParam`, the gadget is removed and every occurrence of a pattern match variable (regardless of whether it comes from the discriminant with the `wfParam` gadget) is wrapped in `wfParam`.
The wfParam gadget is additionally floated out of projection function applications.
The annotated function body is simplified using the simplifier, using only simplification rules from the `wf_preprocess` custom simp set.
Finally, any left-over wfParam markers are removed.
Annotating function parameters that are used for well-founded recursion allows the preprocessing simplification rules to distinguish between parameters and other terms.
Preprocessing Simp Set for Well-Founded Recursion
attr ::= ...
    | wf_preprocess
Theorems tagged with the `wf_preprocess` attribute are used during the processing of functions defined
by well-founded recursion. They are applied to the function's body to add additional hypotheses,
such as replacing `if c then _ else _` with `if h : c then _ else _` or `xs.map` with
`xs.attach.map`. Also see `wfParam`.
The `wfParam` gadget is used internally during the construction of recursive functions by
well-founded recursion, to keep track of the parameter for which the automatic introduction
of `List.attach` (or similar) is plausible.
Some rewrite rules in the wf_preprocess simp set apply generally, without heeding the wfParam marker.
In particular, the theorem `ite_eq_dite` is used to extend the context of an if-then-else expression branch with an assumption about the condition. (This assumption's name should be an inaccessible name based on `h`, as is indicated by using `binderNameHint` with the term `()`; binder name hints are described in the tactic language reference.)
Other rewrite rules use the wfParam marker to restrict their applicability; they are used only when a function (like List.map) is applied to a parameter or subterm of a parameter, but not otherwise.
This is typically done in two steps:
A theorem such as List.map_wfParam recognizes a call of List.map on a function parameter (or subterm), and uses List.attach to enrich the type of the list elements with the assertion that they are indeed elements of that list:
This theorem uses the binderNameHint gadget to preserve a user-chosen binder name, should f be a lambda expression.
By separating the introduction of `List.attach` from the propagation of the introduced assumption, the desired `x ∈ xs` assumption is made available to `f` even in chains such as `(xs.reverse.filter p).map f`.
This preprocessing can be disabled by setting the option wf.preprocess to false.
To see the preprocessed function definition, before and after the removal of wfParam markers, set the option trace.Elab.definition.wf to true.
Preprocessing for a custom data type
This example demonstrates what is necessary to enable automatic well-founded recursion for a custom container type.
The structure type Pair is a homogeneous pair: it contains precisely two elements, both of which have the same type.
It can be thought of as being similar to a list or array that always contains precisely two elements.
As a container, Pair can support a map operation.
To support well-founded recursion in which recursive calls occur in the body of a function being mapped over a Pair, some additional definitions are required, including a membership predicate, a theorem that relates the size of a member to the size of the containing pair, helpers to introduce and eliminate assumptions about membership, wf_preprocess rules to insert these helpers, and an extension to the decreasing_trivial tactic.
Each of these steps makes it easier to work with Pair, but none are strictly necessary; there's no need to immediately implement all steps for every type.
```lean
/-- A homogeneous pair -/
structure Pair (α : Type u) where
  fst : α
  snd : α

/-- Mapping a function over the elements of a pair -/
def Pair.map (f : α → β) (p : Pair α) : Pair β where
  fst := f p.fst
  snd := f p.snd
```
Defining a nested inductive data type of binary trees that uses Pair and attempting to define its map function demonstrates the need for preprocessing rules.
```lean
/-- A binary tree defined using `Pair` -/
inductive Tree (α : Type u) where
  | leaf : α → Tree α
  | node : Pair (Tree α) → Tree α
```
A straightforward definition of the map function fails:
```lean
def Tree.map (f : α → β) : Tree α → Tree β
  | leaf x => leaf (f x)
  | node p => node (p.map (fun t' => t'.map f))
termination_by t => t
```
```
failed to prove termination, possible solutions:
  - Use `have`-expressions to prove the remaining goals
  - Use `termination_by` to specify a different well-founded relation
  - Use `decreasing_by` to specify your own tactic for discharging this kind of goal
α : Type u_1
p : Pair (Tree α)
t' : Tree α
⊢ sizeOf t' < 1 + sizeOf p
```
Clearly the proof obligation is not solvable, because nothing connects t' to p.
The standard idiom to enable this kind of function definition is to have a function that enriches each element of a collection with a proof that they are, in fact, elements of the collection.
Stating this property requires a membership predicate.
Every inductive type automatically has a SizeOf instance.
An element of a collection should be smaller than the collection, but this fact must be proved before it can be used to construct a termination proof:
The next step is to define attach and unattach functions that enrich the elements of the pair with a proof that they are elements of the pair, or remove said proof.
Here, the type of Pair.unattach is more general and can be used with any subtype; this is a typical pattern.
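These definitions might be sketched as follows; the names, the `Membership` instance, and the proofs are illustrative rather than definitive:

```lean
/-- Membership in a pair: being its first or second element -/
inductive Pair.Mem (x : α) (p : Pair α) : Prop where
  | fst : x = p.fst → Pair.Mem x p
  | snd : x = p.snd → Pair.Mem x p

instance : Membership α (Pair α) where
  mem p x := Pair.Mem x p

/-- Members of a pair are smaller than the pair itself -/
theorem Pair.sizeOf_lt_of_mem [SizeOf α] {x : α} {p : Pair α}
    (h : Pair.Mem x p) : sizeOf x < sizeOf p := by
  cases p with
  | mk a b =>
    cases h with
    | fst hx => subst hx; simp; omega
    | snd hx => subst hx; simp; omega

/-- Enrich each element with a proof of membership -/
def Pair.attach (p : Pair α) : Pair { x // Pair.Mem x p } where
  fst := ⟨p.fst, .fst rfl⟩
  snd := ⟨p.snd, .snd rfl⟩

/-- Forget the attached proofs; stated for any subtype -/
def Pair.unattach {P : α → Prop} (p : Pair { x // P x }) : Pair α :=
  p.map Subtype.val
```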
This transformation can be made fully automatic.
The preprocessing feature of well-founded recursion can be used to automate the introduction of the Pair.attach function.
This is done in two stages.
First, when Pair.map is applied to one of the function's parameters, it is rewritten to an attach/unattach composition.
Then, when a function is mapped over the result of Pair.unattach, the function is rewritten to accept the proof of membership and bring it into scope.
To keep the example short, the sizeOf_pair_dec tactic is tailored to this particular recursion pattern and isn't really general enough for a general-purpose container library.
It does, however, demonstrate that libraries can be just as convenient in practice as the container types in the standard library.
6.3.7. Theory and Construction
This section gives a very brief glimpse into the mathematical constructions that underlie termination proofs via well-founded recursion, which may surface occasionally.
The elaboration of functions defined by well-founded recursion is based on the WellFounded.fix operator.
The type α is instantiated with the function's (varying) parameters, packed into one type using PSigma.
The WellFounded relation is constructed from the termination measure via invImage.
The function's body is passed to WellFounded.fix, with parameters suitably packed and unpacked, and recursive calls are replaced with a call to the value provided by WellFounded.fix.
The termination proofs generated by the `decreasing_by` tactics are inserted in the right place.
Finally, the equational and unfolding theorems for the recursive function are proved from WellFounded.fix_eq.
These theorems hide the details of packing and unpacking arguments and describe the function's behavior in terms of the original definition.
In the case of mutual recursion, an equivalent non-mutual function is constructed by combining the function's arguments using PSum, and pattern-matching on that sum type in the result type and the body.
The definition of WellFounded builds on the notion of accessible elements of the relation:
A relation r is WellFounded if all elements of α are accessible within r.
If a relation is WellFounded, it does not allow for an infinite descent along the relation.
If the arguments of the recursive calls in a function definition decrease according to a well-founded relation, then the function terminates.
Well-founded relations are sometimes called Artinian or said to satisfy the “descending chain condition”.
A value `x` is accessible if, for all `y` such that `r y x`, `y` is also accessible.
Note that if there exists no `y` such that `r y x`, then `x` is accessible. Such an `x` is called a base case.
Division by Iterated Subtraction: Termination Proof
The definition of division by iterated subtraction can be written explicitly using well-founded recursion.
The definition must be marked `noncomputable` because well-founded recursion is not supported by the compiler.
Like recursors, it is part of Lean's logic.
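A sketch of such an explicit construction, assuming `Nat.lt_wfRel` provides the well-founded `<` relation on `Nat` (the actual elaborator output packs arguments with `PSigma` and differs in detail):

```lean
noncomputable def div (n d : Nat) : Nat :=
  WellFounded.fix Nat.lt_wfRel.wf
    (fun n ih =>
      if h : d = 0 ∨ n < d then 0
      -- `ih` stands in for the recursive call; it demands a proof
      -- that the new argument is smaller
      else ih (n - d) (by omega) + 1)
    n
```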
The definition of division should satisfy the following equations: