The Lean Language Reference

17.12. Bigger Examples

17.12.1. Integrating grind's features.

This example demonstrates how the various submodules of grind are seamlessly integrated. In particular we can

  • instantiate theorems from the library, using custom patterns,

  • perform case splitting,

  • do linear integer arithmetic reasoning, including modularity conditions, and

  • do Gröbner basis reasoning,

all without providing explicit instructions to drive the interactions between these modes of reasoning.

For this example we'll begin with a "mocked up" version of the real numbers, and the sin and cos functions. Of course, this example works without any changes using Mathlib's versions of these!

axiom R : Type -- TODO: a `sorry` here was causing a run-time crash. It's unclear why.
@[instance] axiom instCommRingR : Lean.Grind.CommRing R
axiom sin : R → R
axiom cos : R → R
axiom trig_identity : ∀ x, (cos x)^2 + (sin x)^2 = 1

Our first step is to tell grind to "put the trig identity on the whiteboard" whenever it sees a goal involving sin or cos:

grind_pattern trig_identity => cos x
grind_pattern trig_identity => sin x

Note here we use two different patterns for the same theorem, so the theorem is instantiated even if grind sees just one of these functions. If we preferred to more conservatively instantiate the theorem only when both sin and cos are present, we could have used a multi-pattern:

grind_pattern trig_identity => cos x, sin x

For this example, either approach will work.

Because grind immediately notices the trig identity, we can prove goals like this:

example : (cos x + sin x)^2 = 2 * cos x * sin x + 1 := by
  grind

Here grind:

  • Notices cos x and sin x, so instantiates the trig identity.

  • Notices that this is a polynomial in CommRing R, so sends it to the Gröbner basis module. No calculation happens at this point: it's the first polynomial relation in this ring, so the Gröbner basis is updated to [(cos x)^2 + (sin x)^2 - 1].

  • Notices that the left and right hand sides of the goal are polynomials in CommRing R, so sends them to the Gröbner basis module for normalization.

  • Since their normal forms modulo (cos x)^2 + (sin x)^2 = 1 are equal, their equivalence classes are merged, and the goal is solved.
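To see the Gröbner basis step from the list above in isolation, here is a variant (not part of the original text) in which the identity is supplied as an explicit hypothesis rather than instantiated via the patterns we registered:

example (x y : R) (h : x^2 + y^2 = 1) : (x + y)^2 = 2 * x * y + 1 := by
  grind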

We can also do this sort of argument when congruence closure is needed:

example (f : R → Nat) : f ((cos x + sin x)^2) = f (2 * cos x * sin x + 1) := by
  grind

As before, grind instantiates the trig identity, notices that (cos x + sin x)^2 and 2 * cos x * sin x + 1 are equal modulo (cos x)^2 + (sin x)^2 = 1, puts those algebraic expressions in the same equivalence class, and then puts the function applications f((cos x + sin x)^2) and f(2 * cos x * sin x + 1) in the same equivalence class, and closes the goal.

Notice that we've used an arbitrary function f : R → Nat here; let's check that grind can use some linear integer arithmetic reasoning after the Gröbner basis steps:

example (f : R → Nat) : 4 * f ((cos x + sin x)^2) ≠ 2 + f (2 * cos x * sin x + 1) := by
  grind

Here grind first works out that this goal reduces to 4 * x ≠ 2 + x for some x : Nat (i.e. by identifying the two function applications as described above), and then uses modularity to derive a contradiction.
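As a standalone sanity check (not in the original text), the reduced arithmetic fact should be within reach of cutsat on its own:

example (x : Nat) : 4 * x ≠ 2 + x := by
  grind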

Finally, we can also mix in some case splitting:

example (f : R → Nat) : max 3 (4 * f ((cos x + sin x)^2)) ≠ 2 + f (2 * cos x * sin x + 1) := by
  grind

As before, grind first does the instantiation and Gröbner basis calculations required to identify the two function applications. However, the cutsat algorithm by itself can't do anything with max 3 (4 * x) ≠ 2 + x. Next, grind instantiates Nat.max_def (automatically, because of an annotation in the standard library), which states max n m = if n ≤ m then m else n, and then case splits on the inequality. In the branch 3 ≤ 4 * x, cutsat again uses modularity to prove 4 * x ≠ 2 + x. In the branch 4 * x < 3, cutsat quickly determines x = 0, and then notices 4 * 0 ≠ 2 + 0.
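Again as a standalone check (not part of the original text), the reduced goal described in this paragraph should be provable by grind on its own, via the same Nat.max_def case split:

example (x : Nat) : max 3 (4 * x) ≠ 2 + x := by
  grind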

This has been, of course, a quite artificial example! In practice this sort of automatic integration of different reasoning modes is very powerful: the central "whiteboard" which tracks instantiated theorems and equivalence classes can hand off relevant terms and equalities to the appropriate modules (here, cutsat and Gröbner bases), which can then return new facts to the whiteboard.

17.12.2. if-then-else normalization

This example is a showcase for the "out of the box" power of grind. Later examples will explore adding @[grind] annotations as part of the development process, to make grind more effective in a new domain. This example does not rely on any of the algebra extensions to grind; we're just using:

  • instantiation of annotated theorems from the library,

  • congruence closure, and

  • case splitting.

The solution here builds on an earlier formalization by Chris Hughes, but with some notable improvements:

  • the verification is separate from the code,

  • the proof is now a one-liner combining fun_induction and grind,

  • the proof is robust to changes in the code (e.g. swapping out HashMap for TreeMap) as well as changes to the precise verification conditions.

17.12.2.1. The problem

Here is Rustan Leino's original description of the problem, as posted by Leonardo de Moura on the Lean Zulip:

The data structure is an expression with Boolean literals, variables, and if-then-else expressions.

The goal is to normalize such expressions into a form where:

a) No nested ifs: the condition part of an if-expression is not itself an if-expression

b) No constant tests: the condition part of an if-expression is not a constant

c) No redundant ifs: the then and else branches of an if are not the same

d) Each variable is evaluated at most once: the free variables of the condition are disjoint from those in the then branch, and also disjoint from those in the else branch.

One should show that a normalization function produces an expression satisfying these four conditions, and one should also prove that the normalization function preserves the meaning of the given expression.

17.12.2.2. The formal statement

To formalize the statement in Lean, we use an inductive type IfExpr:

/-- An if-expression is either a boolean literal, a numbered variable,
or an if-then-else expression where each subexpression is an if-expression. -/
inductive IfExpr
  | lit : Bool → IfExpr
  | var : Nat → IfExpr
  | ite : IfExpr → IfExpr → IfExpr → IfExpr
deriving DecidableEq

and define some inductive predicates and an eval function, so we can state the four desired properties:

namespace IfExpr

/-- An if-expression has a "nested if" if it contains
an if-then-else where the "if" is itself an if-then-else. -/
def hasNestedIf : IfExpr → Bool
  | lit _ => false
  | var _ => false
  | ite (ite _ _ _) _ _ => true
  | ite _ t e => t.hasNestedIf || e.hasNestedIf

/-- An if-expression has a "constant if" if it contains
an if-then-else where the "if" is itself a literal. -/
def hasConstantIf : IfExpr → Bool
  | lit _ => false
  | var _ => false
  | ite (lit _) _ _ => true
  | ite i t e => i.hasConstantIf || t.hasConstantIf || e.hasConstantIf

/-- An if-expression has a "redundant if" if it contains
an if-then-else where the "then" and "else" clauses are identical. -/
def hasRedundantIf : IfExpr → Bool
  | lit _ => false
  | var _ => false
  | ite i t e => t == e || i.hasRedundantIf || t.hasRedundantIf || e.hasRedundantIf

/-- All the variables appearing in an if-expression, read left to right, without removing duplicates. -/
def vars : IfExpr → List Nat
  | lit _ => []
  | var i => [i]
  | ite i t e => i.vars ++ t.vars ++ e.vars

/-- A helper function to specify that two lists are disjoint. -/
def _root_.List.disjoint {α} [DecidableEq α] : List α → List α → Bool
  | [], _ => true
  | x::xs, ys => x ∉ ys && xs.disjoint ys

/-- An if expression evaluates each variable at most once if
for each if-then-else the variables in the "if" clause are disjoint from the variables in the "then" clause,
and the variables in the "if" clause are disjoint from the variables in the "else" clause. -/
def disjoint : IfExpr → Bool
  | lit _ => true
  | var _ => true
  | ite i t e =>
      i.vars.disjoint t.vars && i.vars.disjoint e.vars
        && i.disjoint && t.disjoint && e.disjoint

/-- An if expression is "normalized" if it has no nested, constant, or redundant ifs,
and it evaluates each variable at most once. -/
def normalized (e : IfExpr) : Bool :=
  !e.hasNestedIf && !e.hasConstantIf && !e.hasRedundantIf && e.disjoint

/-- The evaluation of an if expression at some assignment of variables. -/
def eval (f : Nat → Bool) : IfExpr → Bool
  | lit b => b
  | var i => f i
  | ite i t e => bif i.eval f then t.eval f else e.eval f

end IfExpr
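As a small sanity check on these definitions (this example is not part of the original development), an expression whose condition is a constant is rejected by normalized; decide should be able to evaluate the closed Boolean computation:

example : (IfExpr.ite (.lit true) (.var 0) (.var 1)).normalized = false := by
  decide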

Using these we can state the problem. The challenge is to inhabit the following type (and to do so nicely!):

def IfNormalization : Type :=
  { Z : IfExpr → IfExpr // ∀ e, (Z e).normalized ∧ (Z e).eval = e.eval }

17.12.2.3. Other solutions

At this point, it's worth pausing and doing at least one of the following:

  • Try to prove this yourself! It's quite challenging for a beginner! You can have a go in the Live Lean editor without any installation.

  • Read Chris Hughes's solution, which is included in the Mathlib Archive. This solution makes good use of Aesop, but is not ideal because

    1. It defines the solution using a subtype, simultaneously giving the construction and proving properties about it. We think it's better stylistically to keep these separate.

    2. Even with Aesop automation, there's still about 15 lines of manual proof work before we can hand off to Aesop.

  • Read Wojciech Nawrocki's solution. This one uses less automation, at about 300 lines of proof work.

17.12.2.4. The solution using grind

Actually solving the problem is not that hard: we just need a recursive function that carries along a record of "already assigned variables" and, whenever it branches on a variable, adds a new assignment in each of the branches. It also needs to flatten nested if-then-else expressions which have another if-then-else in the "condition" position. (This is extracted from Chris Hughes's solution, but without the subtyping.)

Let's work inside the IfExpr namespace.

namespace IfExpr

def normalize (assign : Std.HashMap Nat Bool) : IfExpr → IfExpr
  | lit b => lit b
  | var v =>
    match assign[v]? with
    | none => var v
    | some b => lit b
  | ite (lit true) t _ => normalize assign t
  | ite (lit false) _ e => normalize assign e
  | ite (ite a b c) t e => normalize assign (ite a (ite b t e) (ite c t e))
  | ite (var v) t e =>
    match assign[v]? with
    | none =>
      let t' := normalize (assign.insert v true) t
      let e' := normalize (assign.insert v false) e
      if t' = e' then t' else ite (var v) t' e'
    | some b => normalize assign (ite (lit b) t e)

This is pretty straightforward, but it immediately runs into a problem:

fail to show termination for
  IfExpr.normalize
with errors
failed to infer structural recursion:
Cannot use parameter assign:
  the type HashMap Nat Bool does not have a `.brecOn` recursor
Cannot use parameter #2:
  failed to eliminate recursive application
    normalize assign (a.ite (b.ite t e) (c.ite t e))


Could not find a decreasing measure.
The basic measures relate at each recursive call as follows:
(<, ≤, =: relation proved, ? all proofs failed, _: no proof attempted)
              #1 x2
1) 1296:27-45  =  <
2) 1297:27-45  =  <
3) 1299:4-52   =  ?
4) 1303:16-50  ?  _
5) 1304:16-51  _  _
6) 1306:16-50  _  _

#1: assign

Please use `termination_by` to specify a decreasing measure.

Lean here is telling us that it can't see that the function is terminating. Often Lean is pretty good at working this out for itself, but for sufficiently complicated functions we need to step in to give it a hint.

In this case we can see that Lean is having difficulty with the ite (ite a b c) t e branch, which calls normalize on (ite a (ite b t e) (ite c t e)). Lean has made a guess at a plausible termination measure, based on the automatically generated sizeOf function, but can't prove the resulting goal, essentially because t and e appear multiple times in the recursive call.

To address problems like this, we nearly always want to stop using the automatically generated sizeOf function, and construct our own termination measure. We'll use

@[simp] def normSize : IfExpr → Nat
  | lit _ => 0
  | var _ => 1
  | .ite i t e => 2 * normSize i + max (normSize t) (normSize e) + 1

Many different functions would work here. The basic idea is to increase the "weight" of the "condition" branch (this is the multiplicative factor in 2 * normSize i), so that as long as the "condition" part shrinks a bit, the whole expression counts as shrinking even if the "then" and "else" branches have grown. We've annotated the definition with @[simp] so Lean's automated termination checker is allowed to unfold the definition.
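As a quick check (not part of the original development) that this measure really does decrease on the problematic recursive call, something like the following should go through, assuming omega's support for max on Nat:

example (a b c t e : IfExpr) :
    normSize (.ite a (.ite b t e) (.ite c t e)) < normSize (.ite (.ite a b c) t e) := by
  simp [normSize]
  omega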

With this in place, the definition goes through using the termination_by clause:

def normalize (assign : Std.HashMap Nat Bool) : IfExpr → IfExpr
  | lit b => lit b
  | var v =>
    match assign[v]? with
    | none => var v
    | some b => lit b
  | ite (lit true) t _ => normalize assign t
  | ite (lit false) _ e => normalize assign e
  | ite (ite a b c) t e => normalize assign (ite a (ite b t e) (ite c t e))
  | ite (var v) t e =>
    match assign[v]? with
    | none =>
      let t' := normalize (assign.insert v true) t
      let e' := normalize (assign.insert v false) e
      if t' = e' then t' else ite (var v) t' e'
    | some b => normalize assign (ite (lit b) t e)
termination_by e => e.normSize

Now it's time to prove some properties of this function. We're just going to package together all the properties we want:

theorem normalize_spec (assign : Std.HashMap Nat Bool) (e : IfExpr) :
    (normalize assign e).normalized
      ∧ (∀ f, (normalize assign e).eval f = e.eval fun w => assign[w]?.getD (f w))
      ∧ ∀ (v : Nat), v ∈ vars (normalize assign e) → ¬ v ∈ assign :=
  sorry

That is:

  • the result of normalize is actually normalized according to the initial definitions,

  • if we normalize an "if-then-else" expression using some assignments, and then evaluate the remaining variables, we get the same result as evaluating the original "if-then-else" using the composite of the two assignments,

  • and any variable appearing in the assignments no longer appears in the normalized expression.

You might think that we should state these three properties as separate lemmas, but it turns out that proving them all at once is really convenient, because we can use the fun_induction tactic to assume that all these properties hold for normalize in the recursive calls, and then grind will just put all the facts together for the result:

attribute [local grind] normalized hasNestedIf hasConstantIf hasRedundantIf
  disjoint vars eval List.disjoint

theorem normalize_spec (assign : Std.HashMap Nat Bool) (e : IfExpr) :
    (normalize assign e).normalized
      ∧ (∀ f, (normalize assign e).eval f = e.eval fun w => assign[w]?.getD (f w))
      ∧ ∀ (v : Nat), v ∈ vars (normalize assign e) → ¬ v ∈ assign := by
  fun_induction normalize with grind

The fact that the fun_induction plus grind combination just works here is sort of astonishing. We're really excited about this, and we're hoping to see a lot more proofs in this style!

A lovely consequence of highly automated proofs is that often you have some flexibility to change the statements without changing the proof at all! As an example, the assertion above that "any variable appearing in the assignments no longer appears in the normalized expression" could be stated in many different ways (although not omitted!). The variations really don't matter, and grind can both prove, and use, any of them:

Here we use assign.contains v = false:

example (assign : Std.HashMap Nat Bool) (e : IfExpr) :
    (normalize assign e).normalized
      ∧ (∀ f, (normalize assign e).eval f = e.eval fun w => assign[w]?.getD (f w))
      ∧ ∀ (v : Nat), v ∈ vars (normalize assign e) → assign.contains v = false := by
  fun_induction normalize with grind

and here we use assign[v]? = none:

example (assign : Std.HashMap Nat Bool) (e : IfExpr) :
    (normalize assign e).normalized
      ∧ (∀ f, (normalize assign e).eval f = e.eval fun w => assign[w]?.getD (f w))
      ∧ ∀ (v : Nat), v ∈ vars (normalize assign e) → assign[v]? = none := by
  fun_induction normalize with grind

In fact, it's also of no consequence to grind whether we use a HashMap or a TreeMap to store the assignments; we can simply switch that implementation detail out without having to touch the proofs:

def normalize (assign : Std.TreeMap Nat Bool) : IfExpr → IfExpr
  | lit b => lit b
  | var v =>
    match assign[v]? with
    | none => var v
    | some b => lit b
  | ite (lit true) t _ => normalize assign t
  | ite (lit false) _ e => normalize assign e
  | ite (ite a b c) t e => normalize assign (ite a (ite b t e) (ite c t e))
  | ite (var v) t e =>
    match assign[v]? with
    | none =>
      let t' := normalize (assign.insert v true) t
      let e' := normalize (assign.insert v false) e
      if t' = e' then t' else ite (var v) t' e'
    | some b => normalize assign (ite (lit b) t e)
termination_by e => e.normSize

theorem normalize_spec (assign : Std.TreeMap Nat Bool) (e : IfExpr) :
    (normalize assign e).normalized
      ∧ (∀ f, (normalize assign e).eval f = e.eval fun w => assign[w]?.getD (f w))
      ∧ ∀ (v : Nat), v ∈ vars (normalize assign e) → ¬ v ∈ assign := by
  fun_induction normalize with grind

(The fact that we can do this relies on all the lemmas that grind needs, for both HashMap and TreeMap, having already been annotated in the standard library.)

If you'd like to play around with this code, you can find the whole file here, or in fact play with it with no installation in the Live Lean editor.

17.12.3. IndexMap

In this section we'll build an example of a new data structure and basic API for it, illustrating the use of grind.

The example will be derived from Rust's indexmap data structure.

IndexMap is intended as a replacement for HashMap (in particular, it has fast hash-based lookup), but allowing the user to maintain control of the order of the elements. We won't give a complete API, just set up some basic functions and theorems about them.

The two main functions we'll implement for now are insert and eraseSwap:

  • insert k v checks if k is already in the map. If so, it replaces the value with v, keeping k in the same position in the ordering. If it is not already in the map, insert adds (k, v) to the end of the map.

  • eraseSwap k removes the element with key k from the map, and swaps it with the last element of the map (or does nothing if k is not in the map). (This semantics is maybe slightly surprising: this function exists because it is an efficient way to erase an element when you don't care about the order of the remaining elements. Another function, not implemented here, would preserve the order of the remaining elements, but at the cost of running in time proportional to the number of elements after the erased element.)

Our goals will be:

  • complete encapsulation: the implementation of IndexMap is hidden from the users, and the theorems about the implementation details are private.

  • to use grind as much as possible: we'll prefer adding a private theorem and annotating it with @[grind] over writing a longer proof whenever practical.

  • to use auto-parameters as much as possible, so that we don't even see the proofs, as they're mostly handled invisibly by grind.

To begin with, we'll write out a skeleton of what we want to achieve, liberally using sorry as a placeholder for all proofs. In particular, this version makes no use of grind.

open Std

structure IndexMap (α : Type u) (β : Type v) [BEq α] [Hashable α] where
  indices : HashMap α Nat
  keys : Array α
  values : Array β
  size_keys : keys.size = values.size
  WF : ∀ (i : Nat) (a : α), keys[i]? = some a ↔ indices[a]? = some i

namespace IndexMap

variable {α : Type u} {β : Type v} [BEq α] [LawfulBEq α] [Hashable α] [LawfulHashable α]
variable {m : IndexMap α β} {a : α} {b : β} {i : Nat}

@[inline] def size (m : IndexMap α β) : Nat :=
  m.values.size

def emptyWithCapacity (capacity := 8) : IndexMap α β where
  indices := HashMap.emptyWithCapacity capacity
  keys := Array.emptyWithCapacity capacity
  values := Array.emptyWithCapacity capacity
  size_keys := sorry
  WF := sorry

instance : EmptyCollection (IndexMap α β) where
  emptyCollection := emptyWithCapacity

instance : Inhabited (IndexMap α β) where
  default := ∅

@[inline] def contains (m : IndexMap α β) (a : α) : Bool :=
  m.indices.contains a

instance : Membership α (IndexMap α β) where
  mem m a := a ∈ m.indices

instance {m : IndexMap α β} {a : α} : Decidable (a ∈ m) :=
  inferInstanceAs (Decidable (a ∈ m.indices))

@[inline] def findIdx? (m : IndexMap α β) (a : α) : Option Nat :=
  m.indices[a]?

@[inline] def findIdx (m : IndexMap α β) (a : α) (h : a ∈ m) : Nat :=
  m.indices[a]

@[inline] def getIdx? (m : IndexMap α β) (i : Nat) : Option β :=
  m.values[i]?

@[inline] def getIdx (m : IndexMap α β) (i : Nat) (h : i < m.size := by get_elem_tactic) : β :=
  m.values[i]

instance : GetElem? (IndexMap α β) α β (fun m a => a ∈ m) where
  getElem m a h := m.values[m.indices[a]]'sorry
  getElem? m a := m.indices[a]?.bind (m.values[·]?)
  getElem! m a := m.indices[a]?.bind (m.values[·]?) |>.getD default

instance : LawfulGetElem (IndexMap α β) α β (fun m a => a ∈ m) where
  getElem?_def := sorry
  getElem!_def := sorry

@[inline] def insert (m : IndexMap α β) (a : α) (b : β) : IndexMap α β :=
  match h : m.indices[a]? with
  | some i =>
    { indices := m.indices
      keys := m.keys.set i a sorry
      values := m.values.set i b sorry
      size_keys := sorry
      WF := sorry }
  | none =>
    { indices := m.indices.insert a m.size
      keys := m.keys.push a
      values := m.values.push b
      size_keys := sorry
      WF := sorry }

instance : Singleton (α × β) (IndexMap α β) :=
  ⟨fun ⟨a, b⟩ => (∅ : IndexMap α β).insert a b⟩

instance : Insert (α × β) (IndexMap α β) :=
  ⟨fun ⟨a, b⟩ s => s.insert a b⟩

instance : LawfulSingleton (α × β) (IndexMap α β) :=
  ⟨fun _ => rfl⟩

/-- Erase the key-value pair with the given key,
moving the last pair into its place in the order.
If the key is not present, the map is unchanged. -/
@[inline] def eraseSwap (m : IndexMap α β) (a : α) : IndexMap α β :=
  match h : m.indices[a]? with
  | some i =>
    if w : i = m.size - 1 then
      { indices := m.indices.erase a
        keys := m.keys.pop
        values := m.values.pop
        size_keys := sorry
        WF := sorry }
    else
      let lastKey := m.keys.back sorry
      let lastValue := m.values.back sorry
      { indices := (m.indices.erase a).insert lastKey i
        keys := m.keys.pop.set i lastKey sorry
        values := m.values.pop.set i lastValue sorry
        size_keys := sorry
        WF := sorry }
  | none => m

/-! ### Verification theorems -/

theorem getIdx_findIdx (m : IndexMap α β) (a : α) (h : a ∈ m) :
    m.getIdx (m.findIdx a h) sorry = m[a] :=
  sorry

theorem mem_insert (m : IndexMap α β) (a a' : α) (b : β) :
    a' ∈ m.insert a b ↔ a' = a ∨ a' ∈ m := by
  sorry

theorem getElem_insert (m : IndexMap α β) (a a' : α) (b : β) (h : a' ∈ m.insert a b) :
    (m.insert a b)[a']'h = if h' : a' == a then b else m[a']'sorry := by
  sorry

theorem findIdx_insert_self (m : IndexMap α β) (a : α) (b : β) :
    (m.insert a b).findIdx a sorry = if h : a ∈ m then m.findIdx a h else m.size := by
  sorry

end IndexMap

Let's get started. We'll aspire to never writing a proof by hand; the first step of that is to install auto-parameters for the size_keys and WF fields, so we can omit these fields whenever grind can prove them. While we're modifying the definition of IndexMap itself, let's make all the fields private, since we're planning on having complete encapsulation.

structure IndexMap (α : Type u) (β : Type v) [BEq α] [Hashable α] where
  private indices : HashMap α Nat
  private keys : Array α
  private values : Array β
  private size_keys : keys.size = values.size := by grind
  private WF : ∀ (i : Nat) (a : α), keys[i]? = some a ↔ indices[a]? = some i := by grind

Let's give grind access to the definition of size, and to the private size_keys field:

@[inline] def size (m : IndexMap α β) : Nat :=
  m.values.size

attribute [local grind] size
attribute [local grind _=_] size_keys

Our first sorrys in the draft version are the size_keys and WF fields in our construction of def emptyWithCapacity. Surely these are trivial, and solvable by grind, so we simply delete those fields:

def emptyWithCapacity (capacity := 8) : IndexMap α β where
  indices := HashMap.emptyWithCapacity capacity
  keys := Array.emptyWithCapacity capacity
  values := Array.emptyWithCapacity capacity

Our next task is to deal with the sorry in our construction of the GetElem? instance:

instance : GetElem? (IndexMap α β) α β (fun m a => a ∈ m) where
  getElem m a h := m.values[m.indices[a]]'sorry
  getElem? m a := m.indices[a]?.bind (fun i => (m.values[i]?))
  getElem! m a := m.indices[a]?.bind (fun i => (m.values[i]?)) |>.getD default

The goal at this sorry is

m : IndexMap α β
a : α
h : a ∈ m
⊢ m.indices[a] < m.values.size

Let's try proving this as a stand-alone theorem, via grind, and see where grind gets stuck. Because we've added grind annotations for size and size_keys already, we can safely reformulate the goal as:

theorem getElem_indices_lt (m : IndexMap α β) (a : α) (h : a ∈ m) :
    m.indices[a] < m.size := by
  grind

This fails, and looking at the message from grind we see that it hasn't done much:

[grind] Goal diagnostics ▼
  [facts] Asserted facts ▼
    [prop] a ∈ m
    [prop] m.size ≤ (indices m)[a]
    [prop] m.size = (values m).size
  [eqc] True propositions ▼
    [prop] m.size ≤ (indices m)[a]
    [prop] a ∈ m
  [eqc] Equivalence classes ▼
    [] {Membership.mem, fun m a => a ∈ m}
    [] {m.size, (values m).size}
  [ematch] E-matching patterns ▼
    [thm] size.eq_1: [@size #4 #3 #2 #1 #0]
    [thm] HashMap.contains_iff_mem: [@Membership.mem #5 (HashMap _ #4 #3 #2) _ #1 #0]
  [cutsat] Assignment satisfying linear constraints ▼
    [assign] m.size := 0
    [assign] (indices m)[a] := 0
    [assign] (values m).size := 0

An immediate problem we can see here is that grind does not yet know that a ∈ m is the same as a ∈ m.indices. Let's add this fact:

@[local grind] private theorem mem_indices_of_mem {m : IndexMap α β} {a : α} :
    a ∈ m ↔ a ∈ m.indices :=
  Iff.rfl

Whichever way this proof is going to work, we know the following:

  • It must use the well-formedness condition of the map.

  • It can't do so without relating m.indices[a] and m.indices[a]? (because the latter is what appears in the well-formedness condition).

  • The expected relationship there doesn't even hold unless the map m.indices satisfies LawfulGetElem, for which we need [LawfulBEq α] and [LawfulHashable α].

Let's configure things so that those are available:

variable [LawfulBEq α] [LawfulHashable α]

attribute [local grind _=_] IndexMap.WF

and then give grind one manual hint, to relate m.indices[a] and m.indices[a]?:

theorem getElem_indices_lt (m : IndexMap α β) (a : α) (h : a ∈ m) :
    m.indices[a] < m.size := by
  have : m.indices[a]? = some m.indices[a] := by grind
  grind

With that theorem proved, we want to make it accessible to grind. We could either add @[local grind] before the theorem statement, or write attribute [local grind] getElem_indices_lt after the theorem statement. These will use grind's built-in heuristics for deciding a pattern to match the theorem on.

In this case, let's use the grind? attribute to see the pattern that is being generated:

attribute [local grind?] getElem_indices_lt
getElem_indices_lt: [@LE.le `[Nat] `[instLENat] ((@getElem (HashMap #8 `[Nat] #6 #5) _ `[Nat] _ _ (@indices _ #7 _ _ #2) #1 #0) + 1) (@size _ _ _ _ #2)]

This is not a useful pattern: it's matching on the entire conclusion of the theorem (in fact, a normalized version of it, in which x < y has been replaced by x + 1 ≤ y).

We want something more general: we'd like this theorem to fire whenever grind sees m.indices[a], and so instead of using the attribute we write a custom pattern:

grind_pattern getElem_indices_lt => m.indices[a]

The Lean standard library uses the get_elem_tactic tactic as an auto-parameter for the xs[i] notation (which desugars to GetElem.getElem xs i h, with the proof h generated by get_elem_tactic). We'd like to not only have grind fill in these proofs, but even to be able to omit these proofs. To achieve this, we add the line

macro_rules | `(tactic| get_elem_tactic_extensible) => `(tactic| grind)

(In later versions of Lean this may be part of the built-in behavior.)

We can now return to constructing our GetElem? instance, and simply write:

instance : GetElem? (IndexMap α β) α β (fun m a => a ∈ m) where
  getElem m a h := m.values[m.indices[a]]
  getElem? m a := m.indices[a]?.bind (fun i => (m.values[i]?))
  getElem! m a := m.indices[a]?.bind (fun i => (m.values[i]?)) |>.getD default

with neither any sorrys, nor any explicitly written proofs.

Next, we want to expose the content of these definitions, but only locally in this file:

@[local grind] private theorem getElem_def (m : IndexMap α β) (a : α) (h : a ∈ m) :
    m[a] = m.values[m.indices[a]'h] :=
  rfl

@[local grind] private theorem getElem?_def (m : IndexMap α β) (a : α) :
    m[a]? = m.indices[a]?.bind (fun i => (m.values[i]?)) :=
  rfl

@[local grind] private theorem getElem!_def [Inhabited β] (m : IndexMap α β) (a : α) :
    m[a]! = (m.indices[a]?.bind (m.values[·]?)).getD default :=
  rfl

Again we're using the @[local grind] private theorem pattern to hide these implementation details, but allow grind to see these facts locally.

Next, we want to prove the LawfulGetElem instance, and hope that grind can fill in the proofs:

instance : LawfulGetElem (IndexMap α β) α β (fun m a => a ∈ m) where
  getElem?_def := by grind
  getElem!_def := by grind

Success!

Let's press onward, and see if we can define insert without having to write any proofs:

@[inline] def insert [LawfulBEq α] (m : IndexMap α β) (a : α) (b : β) : IndexMap α β :=
  match h : m.indices[a]? with
  | some i =>
    { indices := m.indices
      keys := m.keys.set i a
      values := m.values.set i b }
  | none =>
    { indices := m.indices.insert a m.size
      keys := m.keys.push a
      values := m.values.push b }

In both branches, grind is automatically proving both the size_keys and WF fields! Note also that in the first branch the set calls m.keys.set i a and m.values.set i b are having their "in-bounds" obligations automatically filled in by grind via the get_elem_tactic auto-parameter.

Next let's try eraseSwap:

@[inline] def eraseSwap (m : IndexMap α β) (a : α) : IndexMap α β :=
  match h : m.indices[a]? with
  | some i =>
    if w : i = m.size - 1 then
      { indices := m.indices.erase a
        keys := m.keys.pop
        values := m.values.pop }
    else
      let lastKey := m.keys.back
      let lastValue := m.values.back
      { indices := (m.indices.erase a).insert lastKey i
        keys := m.keys.pop.set i lastKey
        values := m.values.pop.set i lastValue }
  | none => m

This time grind can't discharge everything on its own: it gets stuck on the well-formedness obligation in the second branch, and reports a (very long) diagnostic that begins:

`grind` failed
case grind.1.1.2.2.1.1.1.1
α : Type u
β : Type v
inst : BEq α
inst_1 : Hashable α
m : IndexMap α β
a : α
b : β
i : Nat
inst_2 : LawfulBEq α
inst_3 : LawfulHashable α
m_1 : IndexMap α β
a_1 : α
i_1 : Nat
h : (indices m_1)[a_1]? = some i_1
w : ¬i_1 = m_1.size - 1
i_2 : Nat
a_2 : α
...
⊢ False
[grind] Goal diagnostics
  [facts] Asserted facts
  [eqc] True propositions
  [eqc] False propositions
  [eqc] Equivalence classes
  [cases] Case analyses
  [ematch] E-matching patterns
  ...
(IndexMap #8 #7 #6 #5) _ _ _ _ #2 #1 #0] [thm] mem_indices_of_mem: [@Membership.mem #5 (IndexMap _ #4 #3 #2) _ #1 #0] [thm] getElem?_def: [@getElem? (IndexMap #7 #6 #5 #4) _ _ _ _ #1 #0] [thm] local_6: [@LE.le `[Nat] `[instLENat] ((@getElem `[HashMap α Nat] `[α] `[Nat] `[fun m a => a ∈ m] `[HashMap.instGetElem?Mem.toGetElem] `[indices m_1] `[a_1] #0) + 1) `[m_1.size]] [thm] local_6: [@LE.le `[Nat] `[instLENat] ((@getElem `[HashMap α Nat] `[α] `[Nat] `[fun m a => a ∈ m] `[HashMap.instGetElem?Mem.toGetElem] `[indices m_1] `[a_1] #0) + 1) `[m_1.size]] [thm] local_7: [@getElem `[HashMap α Nat] `[α] `[Nat] `[fun m a => a ∈ m] `[HashMap.instGetElem?Mem.toGetElem] `[indices m_1] `[a_2] #0] [thm] local_7: [@getElem `[HashMap α Nat] `[α] `[Nat] `[fun m a => a ∈ m] `[HashMap.instGetElem?Mem.toGetElem] `[indices m_1] `[a_2] #0] [thm] local_9: [@getElem `[HashMap α Nat] `[α] `[Nat] `[fun m a => a ∈ m] `[HashMap.instGetElem?Mem.toGetElem] `[indices m_1] `[(keys m_1)[i_2]] #0] [thm] local_9: [@getElem `[HashMap α Nat] `[α] `[Nat] `[fun m a => a ∈ m] `[HashMap.instGetElem?Mem.toGetElem] `[indices m_1] `[(keys m_1)[i_2]] #0] [thm] local_10: [@LE.le `[Nat] `[instLENat] ((@getElem `[HashMap α Nat] `[α] `[Nat] `[fun m a => a ∈ m] `[HashMap.instGetElem?Mem.toGetElem] `[indices m_1] `[(keys m_1)[i_2]] #0) + 1) `[m_1.size]] [thm] local_10: [@LE.le `[Nat] `[instLENat] ((@getElem `[HashMap α Nat] `[α] `[Nat] `[fun m a => a ∈ m] `[HashMap.instGetElem?Mem.toGetElem] `[indices m_1] `[(keys m_1)[i_2]] #0) + 1) `[m_1.size]] [cutsat] Assignment satisfying linear constraints [assign] i_1 := 0 [assign] i_2 := 1 [assign] (keys m_1).pop.size := 2 [assign] (keys m_1).size := 3 [assign] m_1.size := 3 [assign] ((keys m_1).pop.set i_1 ((keys m_1).back ⋯) ⋯).size := 2 [assign] (values m_1).size := 3 [assign] (indices m_1)[a_1] := 0 [assign] (((indices m_1).erase a_1).insert ((keys m_1).back ⋯) i_1)[a_2] := 0 [assign] ((keys m_1).set i_1 ((keys m_1).back ⋯) ⋯).pop.size := 2 [assign] ((keys m_1).set i_1 ((keys m_1).back ⋯) ⋯).size := 3 [assign] (indices m_1)[a_1] := 0 [assign] (indices m_1)[a_2] := 1 [assign] (indices m_1)[(keys m_1)[i_2]] := 1 [assign] (indices m_1)[(keys m_1)[i_2]] := 1 [ring] Rings [ring] Ring `Int` [basis] Basis [_] ↑m_1.size + -1 * ↑(keys m_1).size = 0 [_] ↑(keys m_1).size + -1 * ↑((keys m_1).set i_1 ((keys m_1).back ⋯) ⋯).size = 0 [ring] Ring `Ring.OfSemiring.Q Nat` [basis] Basis [_] ↑((keys m_1).size - 1) + -1 * ↑(m_1.size - 1) = 0 [_] ↑(m_1.size - 1) + -1 * ↑(((keys m_1).set i_1 ((keys m_1).back ⋯) ⋯).size - 1) = 0 [diseqs] Disequalities [_] ¬-1 * ↑(((keys m_1).set i_1 ((keys m_1).back ⋯) ⋯).size - 1) = 0 [grind] Issues [issue] failed to create E-match local theorem for ∀ (h_2 : i_2 + 1 ≤ ((keys m_1).pop.set i_1 ((keys m_1).back ⋯) ⋯).size), ((keys m_1).pop.set i_1 ((keys m_1).back ⋯) ⋯)[i_2]? 
= some ((keys m_1).pop.set i_1 ((keys m_1).back ⋯) ⋯)[i_2] [issue] type error constructing proof for WF when assigning metavariable ?a with i_2 has type Nat of sort `Type` but is expected to have type α of sort `Type u` [issue] type error constructing proof for WF when assigning metavariable ?a with (indices m_1)[a_1] has type Nat of sort `Type` but is expected to have type α of sort `Type u` [issue] type error constructing proof for WF when assigning metavariable ?a with i_2 has type Nat of sort `Type` but is expected to have type α of sort `Type u` [issue] type error constructing proof for WF when assigning metavariable ?a with (indices m_1)[a_1] has type Nat of sort `Type` but is expected to have type α of sort `Type u` [issue] failed to create E-match local theorem for ∀ (h : i_2 + 1 ≤ (keys m_1).size), (keys m_1)[i_2]? = some (keys m_1)[i_2] [issue] failed to create E-match local theorem for ∀ (h : i_1 + 1 ≤ (keys m_1).size), (keys m_1)[i_1]? = some (keys m_1)[i_1] [issue] failed to create E-match local theorem for ∀ (h_2 : i_1 + 1 ≤ (keys m_1).pop.size), (keys m_1).pop.set i_1 ((keys m_1).back ⋯) ⋯ = ((keys m_1).set i_1 ((keys m_1).back ⋯) ⋯).pop [issue] failed to create E-match local theorem for ∀ (h_4 : i_2 + 1 ≤ ((keys m_1).set i_1 ((keys m_1).back ⋯) ⋯).size), ((keys m_1).set i_1 ((keys m_1).back ⋯) ⋯)[i_2]? = some ((keys m_1).set i_1 ((keys m_1).back ⋯) ⋯)[i_2] [issue] type error constructing proof for WF when assigning metavariable ?a with (indices m_1)[(keys m_1)[i_2]] has type Nat of sort `Type` but is expected to have type α of sort `Type u` [issue] type error constructing proof for WF when assigning metavariable ?a with (indices m_1)[(keys m_1)[i_2]] has type Nat of sort `Type` but is expected to have type α of sort `Type u` [issue] failed to create E-match local theorem for ∀ (h : i_2 + 1 ≤ (keys m_1).pop.size), (keys m_1).pop[i_2]? = some (keys m_1).pop[i_2] [grind] Diagnostics [thm] E-Matching instances [thm] WF ↦ 16 [thm] getElem?_neg ↦ 9 [thm] getElem?_pos ↦ 9 [thm] Array.getElem?_eq_none ↦ 5 [thm] HashMap.contains_iff_mem ↦ 5 [thm] Array.getElem?_pop ↦ 2 [thm] Array.getElem?_set ↦ 2 [thm] Array.getElem_pop ↦ 2 [thm] Array.getElem_set ↦ 2 [thm] Array.set_pop ↦ 2 [thm] Array.size_pop ↦ 2 [thm] Array.size_set ↦ 2 [thm] getElem_indices_lt ↦ 2 [thm] mem_indices_of_mem ↦ 2 [thm] Array.back_eq_getElem ↦ 1 [thm] size_keys ↦ 1 [thm] size.eq_1 ↦ 1 [thm] HashMap.contains_erase ↦ 1 [thm] HashMap.contains_insert ↦ 1 [thm] HashMap.getElem?_insert ↦ 1 [thm] HashMap.getElem_insert ↦ 1 [thm] HashMap.mem_erase ↦ 1 [thm] HashMap.mem_insert ↦ 1could not synthesize default value for field 'WF' of 'IndexMap' using tactics{ indices := (m.indices.erase a).insert lastKey i keys := m.keys.pop.set i lastKey values := m.values.pop.set i lastValue } | none => m

This fails while attempting to prove the WF field in the second branch. As usual, grind reports detailed information about its failure state, but it is almost too much to be helpful! Let's look at the model produced by cutsat and see whether we can work out what's going on:

[cutsat] Assignment satisfying linear constraints ▼
  [assign] i_1 := 0
  [assign] i_2 := 1
  [assign] (keys m_1).pop.size := 2
  [assign] (keys m_1).size := 3
  [assign] m_1.size := 3
  [assign] ((keys m_1).pop.set i_1 ((keys m_1).back ⋯) ⋯).size := 2
  [assign] (values m_1).size := 3
  [assign] (indices m_1)[a_1] := 0
  [assign] (((indices m_1).erase a_1).insert ((keys m_1).back ⋯) i_1)[a_2] := 0
  [assign] ((keys m_1).set i_1 ((keys m_1).back ⋯) ⋯).pop.size := 2
  [assign] ((keys m_1).set i_1 ((keys m_1).back ⋯) ⋯).size := 3
  [assign] (indices m_1)[a_1] := 0
  [assign] (indices m_1)[a_2] := 1
  [assign] (indices m_1)[(keys m_1)[i_2]] := 1
  [assign] (indices m_1)[(keys m_1)[i_2]] := 1

This model consists of an IndexMap of size 3, with keys a_1, a_2 and the otherwise unnamed (keys m_1).back ⋯.

Everything looks fine, except the line:

(((indices m_1).erase a_1).insert ((keys m_1).back ⋯) i_1)[a_2] := 0

This shouldn't be possible! Since the three keys are distinct, we should have

(((indices m_1).erase a_1).insert ((keys m_1).back ⋯) i_1)[a_2] =
  ((indices m_1).erase a_1)[a_2] =
  (indices m_1)[a_2] =
  1
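This expected behaviour only involves plain Std.HashMap operations, so we can double-check it in isolation. The following is our own standalone sketch, not part of the original development; it relies on the standard library's grind annotations for HashMap, which we already saw being used in the trace above:

-- Inserting and erasing keys other than `a` should not change the lookup at `a`.
example {κ : Type} [BEq κ] [Hashable κ] [LawfulBEq κ] [LawfulHashable κ]
    (m : Std.HashMap κ Nat) (a b c : κ) (i : Nat)
    (hab : a ≠ b) (hac : a ≠ c) :
    ((m.erase b).insert c i)[a]? = m[a]? := by
  grind

So the hash map reasoning itself is within grind's reach; the suspicious assignment must instead come from what grind knows (or fails to work out) about the keys themselves.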

Now that we've found something suspicious, we can look through the equivalence classes identified by grind. (In the future we'll be providing search tools for inspecting equivalence classes, but for now you need to read through manually.) We find amongst many others:

{a_2,
  (keys m_1).back ⋯,
  (keys m_1)[(keys m_1).size - 1],
  (keys m_1)[i_2], ...}

This should imply, by the injectivity of keys, that i_2 = (keys m_1).size - 1. Since this identity wasn't reflected by the cutsat model, we suspect that grind is not managing to use the injectivity of keys.
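To make precise what we mean by injectivity here: the following lemma is derivable from the WF field, provided the keys[i]? facts that WF is stated in terms of are made available. This is our own sketch, not part of the example's API; the name keys_injective is ours, and we expect grind to close it only because we hand it those getElem? facts explicitly:

private theorem keys_injective {i j : Nat}
    (hi : i < m.keys.size) (hj : j < m.keys.size)
    (h : m.keys[i] = m.keys[j]) : i = j := by
  -- Manufacture the `getElem?` facts that `WF` talks about …
  have h₁ : m.keys[i]? = some m.keys[i] := by grind
  have h₂ : m.keys[j]? = some m.keys[j] := by grind
  -- … after which `grind` can combine them with `WF` and congruence closure.
  grind

This works only because we manufactured the getElem? facts by hand; we would like grind to be able to make such injectivity steps on its own.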

Given the way we've stated the well-formedness condition, as ∀ (i : Nat) (a : α), keys[i]? = some a ↔ indices[a]? = some i, this perhaps isn't surprising: it is expressed entirely in terms of keys[i]? and indices[a]?. Let's add a variant of the well-formedness condition stated using getElem instead of getElem?:

@[local grind] private theorem WF' (i : Nat) (a : α) (h₁ : i < m.keys.size) (h₂ : a ∈ m) :
    m.keys[i] = a ↔ m.indices[a] = i := by
  have : m.keys[i]? = some a ↔ m.indices[a]? = some i := WF m i a
  grind

We can verify that with this available, grind can now prove:

example {m : IndexMap α β} {a : α} {h : a ∈ m} : m.keys[m.indices[a]'h] = a := by
  grind

Trying again with eraseSwap, everything goes through cleanly now, with no manual proofs:

@[inline] def eraseSwap (m : IndexMap α β) (a : α) : IndexMap α β :=
  match h : m.indices[a]? with
  | some i =>
    if w : i = m.size - 1 then
      { indices := m.indices.erase a
        keys := m.keys.pop
        values := m.values.pop }
    else
      let lastKey := m.keys.back
      let lastValue := m.values.back
      { indices := (m.indices.erase a).insert lastKey i
        keys := m.keys.pop.set i lastKey
        values := m.values.pop.set i lastValue }
  | none => m
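As a quick sanity check of the swap-with-last behaviour, we can run a small example. The concrete map and the expected result below are our own illustration, assuming the usual BEq, Hashable, LawfulBEq, and LawfulHashable instances for Nat:

-- `{(1, "a"), (2, "b"), (3, "c")}` inserts `(3, "c")` first, then `(2, "b")`, then `(1, "a")`,
-- so the key order is 3, 2, 1 and the last pair is `(1, "a")`.
#eval (({(1, "a"), (2, "b"), (3, "c")} : IndexMap Nat String).eraseSwap 3).findIdx? 1
-- we expect `some 0`: erasing key 3 moves the last pair into slot 0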

Finally we turn to the verification theorems about the basic operations, relating getIdx, findIdx, and insert. By adding a local grind annotation allowing grind to unfold the definitions of these operations, the proofs all go through effortlessly:

/-! ### Verification theorems -/

attribute [local grind] getIdx findIdx insert

@[grind] theorem getIdx_findIdx (m : IndexMap α β) (a : α) (h : a ∈ m) :
    m.getIdx (m.findIdx a) = m[a] := by grind

@[grind] theorem mem_insert (m : IndexMap α β) (a a' : α) (b : β) :
    a' ∈ m.insert a b ↔ a' = a ∨ a' ∈ m := by
  grind

@[grind] theorem getElem_insert
    (m : IndexMap α β) (a a' : α) (b : β) (h : a' ∈ m.insert a b) :
    (m.insert a b)[a'] = if h' : a' == a then b else m[a'] := by
  grind

@[grind] theorem findIdx_insert_self
    (m : IndexMap α β) (a : α) (b : β) :
    (m.insert a b).findIdx a =
      if h : a ∈ m then m.findIdx a else m.size := by
  grind

Note that these theorems are part of the public API of IndexMap, so we mark them as @[grind]: users who do not have access to our internal @[local grind] annotations can then still use them in their own grind proofs.
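For example, a downstream file that imports this module (a hypothetical sketch, not part of the original example) sees only the public @[grind] theorems, and that is enough for grind:

-- Only `mem_insert` is needed here; none of the private `@[local grind]` lemmas
-- are visible (or required) outside the module.
example (n : IndexMap α β) (x y : α) (v : β) (h : y ∈ n) : y ∈ n.insert x v := by
  grind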

Putting this all together, our prototype API has reached the following state:

macro_rules | `(tactic| get_elem_tactic_extensible) => `(tactic| grind)

open Std

structure IndexMap (α : Type u) (β : Type v) [BEq α] [Hashable α] where
  private indices : HashMap α Nat
  private keys : Array α
  private values : Array β
  private size_keys' : keys.size = values.size := by grind
  private WF : ∀ (i : Nat) (a : α), keys[i]? = some a ↔ indices[a]? = some i := by grind

namespace IndexMap

variable {α : Type u} {β : Type v} [BEq α] [Hashable α]
variable {m : IndexMap α β} {a : α} {b : β} {i : Nat}

@[inline] def size (m : IndexMap α β) : Nat :=
  m.values.size

@[local grind =] private theorem size_keys : m.keys.size = m.size :=
  m.size_keys'

def emptyWithCapacity (capacity := 8) : IndexMap α β where
  indices := HashMap.emptyWithCapacity capacity
  keys := Array.emptyWithCapacity capacity
  values := Array.emptyWithCapacity capacity

instance : EmptyCollection (IndexMap α β) where
  emptyCollection := emptyWithCapacity

instance : Inhabited (IndexMap α β) where
  default := ∅

@[inline] def contains (m : IndexMap α β) (a : α) : Bool :=
  m.indices.contains a

instance : Membership α (IndexMap α β) where
  mem m a := a ∈ m.indices

instance {m : IndexMap α β} {a : α} : Decidable (a ∈ m) :=
  inferInstanceAs (Decidable (a ∈ m.indices))

@[local grind] private theorem mem_indices_of_mem {m : IndexMap α β} {a : α} :
    a ∈ m ↔ a ∈ m.indices :=
  Iff.rfl

@[inline] def findIdx? (m : IndexMap α β) (a : α) : Option Nat :=
  m.indices[a]?

@[inline] def findIdx (m : IndexMap α β) (a : α) (h : a ∈ m := by get_elem_tactic) : Nat :=
  m.indices[a]

@[inline] def getIdx? (m : IndexMap α β) (i : Nat) : Option β :=
  m.values[i]?

@[inline] def getIdx (m : IndexMap α β) (i : Nat) (h : i < m.size := by get_elem_tactic) : β :=
  m.values[i]

variable [LawfulBEq α] [LawfulHashable α]

attribute [local grind _=_] IndexMap.WF

private theorem getElem_indices_lt {h : a ∈ m} : m.indices[a] < m.size := by
  have : m.indices[a]? = some m.indices[a] := by grind
  grind

grind_pattern getElem_indices_lt => m.indices[a]

attribute [local grind] size

instance : GetElem? (IndexMap α β) α β (fun m a => a ∈ m) where
  getElem m a h := m.values[m.indices[a]]
  getElem? m a := m.indices[a]?.bind (fun i => (m.values[i]?))
  getElem! m a := m.indices[a]?.bind (fun i => (m.values[i]?)) |>.getD default

@[local grind] private theorem getElem_def (m : IndexMap α β) (a : α) (h : a ∈ m) :
    m[a] = m.values[m.indices[a]'h] :=
  rfl

@[local grind] private theorem getElem?_def (m : IndexMap α β) (a : α) :
    m[a]? = m.indices[a]?.bind (fun i => (m.values[i]?)) :=
  rfl

@[local grind] private theorem getElem!_def [Inhabited β] (m : IndexMap α β) (a : α) :
    m[a]! = (m.indices[a]?.bind (m.values[·]?)).getD default :=
  rfl

instance : LawfulGetElem (IndexMap α β) α β (fun m a => a ∈ m) where
  getElem?_def := by grind
  getElem!_def := by grind

@[inline] def insert [LawfulBEq α] (m : IndexMap α β) (a : α) (b : β) : IndexMap α β :=
  match h : m.indices[a]? with
  | some i =>
    { indices := m.indices
      keys := m.keys.set i a
      values := m.values.set i b }
  | none =>
    { indices := m.indices.insert a m.size
      keys := m.keys.push a
      values := m.values.push b }

instance [LawfulBEq α] : Singleton (α × β) (IndexMap α β) :=
  ⟨fun ⟨a, b⟩ => (∅ : IndexMap α β).insert a b⟩

instance [LawfulBEq α] : Insert (α × β) (IndexMap α β) :=
  ⟨fun ⟨a, b⟩ s => s.insert a b⟩

instance [LawfulBEq α] : LawfulSingleton (α × β) (IndexMap α β) :=
  ⟨fun _ => rfl⟩

@[local grind] private theorem WF' (i : Nat) (a : α) (h₁ : i < m.keys.size) (h₂ : a ∈ m) :
    m.keys[i] = a ↔ m.indices[a] = i := by
  have : m.keys[i]? = some a ↔ m.indices[a]? = some i := WF m i a
  grind

/--
Erase the key-value pair with the given key,
moving the last pair into its place in the order.
If the key is not present, the map is unchanged.
-/
@[inline] def eraseSwap (m : IndexMap α β) (a : α) : IndexMap α β :=
  match h : m.indices[a]? with
  | some i =>
    if w : i = m.size - 1 then
      { indices := m.indices.erase a
        keys := m.keys.pop
        values := m.values.pop }
    else
      let lastKey := m.keys.back
      let lastValue := m.values.back
      { indices := (m.indices.erase a).insert lastKey i
        keys := m.keys.pop.set i lastKey
        values := m.values.pop.set i lastValue }
  | none => m

/-! ### Verification theorems -/

attribute [local grind] getIdx findIdx insert

@[grind] theorem getIdx_findIdx (m : IndexMap α β) (a : α) (h : a ∈ m) :
    m.getIdx (m.findIdx a) = m[a] := by
  grind

@[grind] theorem mem_insert (m : IndexMap α β) (a a' : α) (b : β) :
    a' ∈ m.insert a b ↔ a' = a ∨ a' ∈ m := by
  grind

@[grind] theorem getElem_insert
    (m : IndexMap α β) (a a' : α) (b : β) (h : a' ∈ m.insert a b) :
    (m.insert a b)[a'] = if h' : a' == a then b else m[a'] := by
  grind

@[grind] theorem findIdx_insert_self
    (m : IndexMap α β) (a : α) (b : β) :
    (m.insert a b).findIdx a =
      if h : a ∈ m then m.findIdx a else m.size := by
  grind

end IndexMap
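As a final smoke test of the assembled API, here is a small evaluation illustrating that insertion order is preserved. The concrete values and the expected outputs are our own illustration, read off from the definitions above:

#eval (({(10, "x"), (20, "y")} : IndexMap Nat String).insert 30 "z").findIdx? 30
-- we expect `some 2`: the key order is 20, 10, 30, so the fresh key 30 is appended last
#eval (({(10, "x"), (20, "y")} : IndexMap Nat String).insert 30 "z").getIdx? 2
-- we expect `some "z"`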

We haven't yet proved all the theorems we would want about these operations (or indeed any theorems about eraseSwap); the interested reader is encouraged to try proving more, and perhaps even releasing a complete IndexMap library!

Summarizing the design principles about encapsulation discussed above:

  • the fields of IndexMap are all private, as these are implementation details.

  • the theorems about these fields are all private, and marked as @[local grind], rather than @[grind], as they won't be needed after we've set up the API.

  • the verification theorems are both marked as @[grind] and proved by grind: the annotation is necessary because we want grind to be able to use these facts even outside the current module, where the @[local grind] theorems are no longer available.