https://www.youtube.com/watch?v=uLCqJLFP7f8
The above link is about seL4, the proven kernel. They have about 1 million lines of proof.

I've been looking at the issue of "proof down to the metal". It seems that seL4 will run on an ARM processor, which is the basis for the Raspberry Pi. I have a Pi and am looking to boot seL4.

There is also the proven Lisp stack which I've previously mentioned.

It seems that it may be possible (in the next hundred years?) to have an Axiom image that proves the GCD algorithms all the way to the metal.

The search continues...

Tim

_______________________________________________
Axiom-developer mailing list
[hidden email]
https://lists.nongnu.org/mailman/listinfo/axiom-developer
You can already define and prove gcd in Idris/Agda/Coq. It's not too hard either.

This weekend I am trying to prove that transitive closure can be computed in Idris. The way I represent it is that I have f : (a -> a -> Type); this forms a type that is inhabited when a statement is true. I can wrap this into another type that represents transitivity. I can get (Transitive f x), from which I can make a set: (Set (Transitive f x)). This type describes a set containing all symbols reachable from 'x' through the relation 'f'.

Idris has some flaws that annoy when using it. Those issues become clear when trying to prove injectivity for certain sorts of functions that have multiple variables. Also it's sometimes quite dumb, forgetting how values were computed too early, and other times it remembers them all too well.

-- Henri Tuhola

On Fri, 20 Sep 2019 at 8:56, Tim Daly <[hidden email]> wrote:
> https://www.youtube.com/watch?v=uLCqJLFP7f8
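For readers who want to see the object being discussed, here is a runnable Python sketch of Euclid's gcd together with the specification a proof would have to establish. This is a test harness, not a proof, and the is_gcd helper is my own name, not a function from Idris, Agda, or Coq:

```python
def gcd(a, b):
    # Euclid's algorithm on non-negative integers
    while b != 0:
        a, b = b, a % b
    return a

def is_gcd(g, a, b):
    # The specification a proof must establish: g divides both
    # arguments, and every common divisor of a and b divides g.
    divides = lambda d, n: n % d == 0
    common = [d for d in range(1, max(a, b) + 1)
              if divides(d, a) and divides(d, b)]
    return divides(g, a) and divides(g, b) and all(divides(d, g) for d in common)

# Exhaustive check on a small range (a test, not a proof)
assert all(is_gcd(gcd(a, b), a, b) for a in range(1, 40) for b in range(1, 40))
```

A dependently typed proof replaces the exhaustive check with an argument that holds for all naturals at once; the runnable version only makes the proof obligation explicit.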
Does the Idris code cover GCD for polynomials?
On 9/20/19, Henri Tuhola <[hidden email]> wrote:
> You can already define and prove gcd in idris/agda/coq. It's not too hard
> either.
[...]
I doubt it does cover that. And I think there would be at least two approaches to implementing GCD for polynomials in Idris.

The obvious approach would be to construct a type that represents polynomials over some base number type and variables. For example, (Polynomial Nat [X]). You would then prove GCD for this structure.

Another approach I can think of would exploit the way type propositions themselves can cause computer-algebra-system-like behavior, reusing variables that the type system is already working with. You can prove (a = b) -> (b = c) -> (a = c), then compose equations together with that. Similarly you can rewrite x to y inside a type if you have (x = y). There's already some support for changing the behavior of the Idris type checker; it could be that some sort of equation simplifier could be implemented with features that Idris already has.

-- Henri Tuhola
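The first approach, an explicit polynomial representation with a Euclidean gcd over a field of coefficients, can be sketched outside any prover. Here is a hedged Python version using exact Fraction arithmetic, with coefficients listed from the highest degree down (pmod, pgcd, and strip are my own names, not functions from Idris or Axiom):

```python
from fractions import Fraction

def strip(p):
    # Drop leading zero coefficients (keep at least one coefficient).
    while len(p) > 1 and p[0] == 0:
        p = p[1:]
    return p

def pmod(a, b):
    # Remainder of polynomial division; coefficients from highest degree down.
    a = list(a)
    while len(a) >= len(b) and any(a):
        f = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= f * b[i]
        a.pop(0)  # leading coefficient is now exactly zero
    return strip(a)

def pgcd(a, b):
    # Euclid's algorithm lifted to polynomials over a field (here: Q).
    a = strip([Fraction(c) for c in a])
    b = strip([Fraction(c) for c in b])
    while any(b):
        a, b = b, pmod(a, b)
    return [c / a[0] for c in a]  # normalize to a monic polynomial

# gcd(x^2 - 1, x^2 - 2x + 1) is x - 1
print(pgcd([1, 0, -1], [1, -2, 1]))  # [Fraction(1, 1), Fraction(-1, 1)]
```

The normalization to a monic result plays the role a canonical form would play in a proof: without it, gcd is only defined up to a unit.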
Axiom has type information everywhere. It is strongly dependently typed. So given a Polynomial type, which Axiom has, over a Ring or Field, such as Polynomial(Integer) or Polynomial(Fraction(Integer)), we can use theorems from the Ring while proving properties of Polynomials.

Axiom has a deep type hierarchy, all of which can be inherited by types like Polynomial. The type hierarchy uses Group Theory as its scaffold. I'm "decorating" the inherited type hierarchy with axioms. These will be available by inheritance for Polynomial proofs.

Proving GCD in non-negative integers can assume that all of the arguments are non-negative integers (i.e. NATs).

There are quite a few GCD algorithms in Axiom. I'm trying to prove the implemented algorithms for GCD correct rather than rewrite them.

I've seen a couple of the Idris videos. It looks quite interesting. Given an Idris proof, is there a "verifier" function?

Tim

On 9/20/19, Henri Tuhola <[hidden email]> wrote:
> I doubt it does cover that. And I think there would be at least two
> approaches to implement GCD for polynomials in Idris.
[...]
I'm a fan of both Axiom and Idris. I think my ideal would be Axiom mathematics built on top of the Idris type system.

The Axiom type system was incredibly advanced for its time, but I suspect the Idris type system has finally caught up and overtaken it? Correct me if I'm wrong, but I think the Axiom type system does not have the following capabilities that Idris does:

* Enforcement of pure functions.
* Ability to flag a function as total as opposed to partial (automatic in some cases).
* Universes (a types-of-types hierarchy).

I'm no expert, but I would have guessed these things would be almost indispensable for proving Axiom correct?

Also, Idris makes it far more practical to use these things; I don't think Axiom can implement category theory constructs like monads. And although both have dependent types, Axiom does not use them for, say, preventing the addition of a 2D vector to a 3D vector. In Idris this is more likely to be a compile-time error than a runtime error. I know there are theoretical limits to this, but I think Idris has capabilities to make this practical in more cases.

I don't pretend I know how an Idris type system could be used with Axiom in practice. For instance, I think the proofs Henri is talking about are equalities in the type system (propositions as types). So how would these equations relate to equations acted on by equation solvers (which might be elements of some equation type)? Could there be some way to lift equations into the type system and back?

Sorry if I'm confusing things here, but I just have an intuition that there is something even more powerful here if all this could be put together.

Martin

On 21/09/2019 04:28, Tim Daly wrote:
> Axiom has type information everywhere. It is strongly
> dependently typed. So give a Polynomial type, which
> Axiom has, over a Ring or Field, such as
> Polynomial(Integer) or Polynomial(Fraction(Integer))
> we can use theorems from the Ring while proving
> properties of Polynomials.
Idris has a way to present equalities like this:
addition_unit : (a:Nat) -> (a + 0) = a
addition_s : (a,b:Nat) -> (a + S b) = S (a + b)
add_commutative : (a,b:Nat) -> (a + b = b + a)

They can be used to prove more things:

try_out : (x,y:Nat) -> ((x + 0) + y) = y + x
try_out x y = rewrite addition_unit x in add_commutative x y

It rewrites the left expression to the right expression, though you can easily flip the direction. For clarity I show these few dissections:

try_out x y = ?a1
a1 : plus (plus x 0) y = plus y x

try_out x y = rewrite addition_unit x in ?a2
a2 : plus x y = plus y x

Idris has a feature called "elaborator reflection". It allows you to describe automated tactics for writing proofs/programs. The "getGoal" and "getEnv" functions allow you to examine types in the context:

getGoal : Elab (TTName, TT)
getEnv : Elab (List (TTName, Binder TT))

The elaborator reflection also allows accessing the term rewriting. I suppose that's all you need in order to write a program that simplifies equations inside the type context?

-- Henri Tuhola

On Sat, 21 Sep 2019 at 11:50, Martin Baker <[hidden email]> wrote:
> I'm a fan of both Axiom and Idris. I think my ideal would be Axiom
> mathematics build on top of the Idris type system.
[...]
On 21/09/2019 13:40, Henri Tuhola wrote:
> The elaborator reflection also allows accessing the term rewriting. I
> suppose that's all you need in order to write a program that
> simplifies equations inside the type context?

I am trying to understand if these equations could be solved in this way? I think Axiom equation solving tends to work in terms of reals and complex numbers. I suspect that a type that depends on a floating-point literal would be problematic, in that the equality could fail due to a rounding error.

Also, although I never understood it, I get the impression that Axiom equation solving is extremely complicated: first representing it as functions within polynomials within functions and so on, then expressing the multivariate polynomials in terms of a single variable. I've probably got this all wrong, but the point is: could elaborator reflection handle this level of complexity?

If these kinds of things can't be done in the type system then I guess they would have to be handled differently from equations in proofs.

Martin
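The rounding-error worry is easy to demonstrate concretely. The sketch below (plain Python, just to illustrate the numeric point) shows why an equality involving floating-point literals can fail where the exact rational version holds:

```python
from fractions import Fraction

# With binary floating point an "obvious" equation fails to hold,
# so a proof obligation like (0.1 + 0.2 = 0.3) would be unprovable:
assert 0.1 + 0.2 != 0.3
print(0.1 + 0.2)  # 0.30000000000000004

# With exact rational arithmetic the equality holds on the nose:
assert Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)
```

This is one reason proof developments tend to index types by exact objects (naturals, integers, rationals) rather than by floating-point literals.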
Hmm. The problem to be solved involves several parts.
Idris is of interest in PARTs 6, 7, and 8 below.

PART 1: We have the domain

We have GCD in NAT (Axiom: NonNegativeInteger or NNI).

NonNegativeInteger is what Axiom calls a "Domain", which means that it contains signatures, such as

  quo : (%,%) -> %
  rem : (%,%) -> %
  gcd : (%,%) -> %

which say that gcd takes 2 NonNegativeIntegers (NATs) and returns a NonNegativeInteger (NAT).

The NonNegativeInteger domain also includes information about how its elements are represented.

PART 2: We have an implementation of gcd in the domain

The NNI domain contains an implementation of gcd:

  gcd(x,y) ==
    zero? x => y
    gcd(y rem x,x)

PART 3: We have a way to inherit things for the domain

The NNI domain inherits properties from what Axiom (unfortunately) calls Categories. Categories provide additional signatures and default implementations.

PART 4: We have the FUNDAMENTAL PROBLEM

The PROBLEM to be solved is that we want to prove that the above code for gcd is correct. Of course, the first question is "correct with respect to..."

PART 5: We need a specification language

There needs to be a specification of the gcd function. What are the properties it should fulfill? What are the invariants? What are the preconditions? What are the postconditions?

Some parts of the specification will be inherited.

Which means we need a language for specification.

PART 6: We need a theorem language

Given a specification, what theorems are available? Some theorems are inherited from the categories, usually as axioms. Some theorems and axioms are directly stated in the NNI domain. Some lemmas need to be added to the domain to help the proof process.

Which means we need a language for theorems.

PART 7: We need a proof engine

Now that we have an implementation, a specification, a collection of theorems and pre- and post-conditions, lemmas, and invariants, we need a proof.

Which engine will we use for the proof? What syntax does it require? Does it provide a verifier to re-check proofs?
PART 8: We need to prove many GCD algorithms

Axiom contains 22 signatures for gcd. For example, it contains a gcd for polynomials. The above machinery needs to support proofs in those domains also.

PART 9: LOOP

GOTO PART 4 above, pick a new function, and repeat.

PART 10: ISSUES

PART 10a: "Down to the metal"

There is a pile of "side issues". I'm re-implementing Axiom using Common Lisp CLOS. The defclass macro in CLOS creates new Common Lisp types. This allows using the types for type-checking (currently looking at bidirectional checking algorithms).

Axiom sits on Common Lisp. There is a question of using a "trusted core". I'm looking into Milawa (https://www.cl.cam.ac.uk/~mom22/soundness.pdf) with its deeply layered design.

I'm also looking at seL4 on ARM (https://ts.data61.csiro.au/publications/nicta_full_text/3783.pdf), which is a trustworthy operating system.

I wrote a paper on the semantics of the Intel instruction set:

  Daly, Timothy, "Intel Instruction Semantics Generator", SEI/CERT Research Report, March 2012
  http://daly.axiom-developer.org/TimothyDaly_files/publications/sei/intel/intel.pdf

so seL4 on Intel is interesting.

PART 10b: Dependent type theory

Type checking with dependent types is undecidable in general. Axiom contains several heuristics to resolve types at runtime. The heuristic type algorithm needs to be made explicit and declarative.

PART 10c: Size

Axiom contains about 10,000 algorithms in 1,100 categories, domains, and packages. This is going to take a while.

PART 10d: Mathematics

Many of the algorithms are partial. Many are PhD thesis work (and hard to understand). Many are ad hoc and have no mathematical specification.

PART 10e: Time

The target delivery date is April 2023. There is much to do.
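To make PART 5 concrete, here is a hedged sketch, in Python rather than any actual specification language, of the NNI gcd from PART 2 with its precondition, a (partially checked) postcondition, and the invariant that justifies the recursion. The contract predicates are my own names, not Axiom's:

```python
def divides(d, n):
    return d != 0 and n % d == 0

def gcd(x, y):
    # Precondition: x and y are NATs (non-negative integers).
    assert x >= 0 and y >= 0
    # Mirrors the NNI implementation:
    #   gcd(x,y) ==
    #     zero? x => y
    #     gcd(y rem x, x)
    # Invariant: the set of common divisors of (x, y) is preserved,
    # because d | x and d | y  iff  d | (y rem x) and d | x.
    if x == 0:
        result = y
    else:
        result = gcd(y % x, x)
    # Postcondition, partially checked here: the result divides both
    # arguments.  The full spec also requires that every common
    # divisor divides the result (not checked at runtime).
    if result != 0:
        assert divides(result, x) and divides(result, y)
    return result
```

A specification language would state these conditions once, and a proof engine would discharge them for all inputs instead of checking them per call.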
Tim

On 9/21/19, Henri Tuhola <[hidden email]> wrote:
> Idris has a way to present equalities like this:
>
> addition_unit : (a:Nat) -> (a + 0) = a
[...]
Tim,
I can see how you can prove individual algorithms correct, and I can see how you can use a proven Lisp, but if you want to prove "down to the metal" it looks to me like there is an enormous gap in between, which is SPAD. Can you really prove SPAD correct?

You mention a specification language (PART 5) and a theorem language (PART 6), and presumably these will be mapped into SPAD somehow, but does that really get to the root of a lot of the bugs in Axiom?

Theoretically the SPAD compiler is deterministic, but for all practical purposes it doesn't seem to be. I suspect that, even for experts like yourself, it would be virtually impossible to predict what it will do.

Is it possible to write a formal specification of the SPAD language syntax, its semantics, and its type system? Can it contain contradictions, such as a type of all types?

I suspect that it would also be massively difficult (bordering on impossible without massive resources) to prove these sorts of things in Idris, but I still think it would be orders of magnitude easier than in SPAD.

Martin

On 22/09/2019 01:22, Tim Daly wrote:
> Hmm. The problem to be solved involves several parts.
> Idris is of interest in PART 6, 7, and 8 below.
[...]
Of particular interest is clarity.
I've been working with LEAN. The code is in C++ and is very clever. For instance, there is a beautiful macro embedded in data structures to perform reference counting. Unfortunately, I can't reverse-engineer the logic rules that are embedded in the C++ code. HOL, on the other hand, seems to have a very clear connection betwen the code and the logic rules. In a proof system it is vital that the logic rules and their implementation is "obviously correct" and transparent. I have not yet looked at Idris so I can't comment on that. Tim On 9/21/19, Tim Daly <[hidden email]> wrote: > Hmm. The problem to be solved involves several parts. > Idris is of interest in PART 6, 7, and 8 below. > > PART 1: We have the domain > > We have GCD in NAT (axiom: NonNegativeInteger or NNI) > > NonNegativeInteger is what Axiom calls a "Domain", which means > that it contains signatures, such as > > quo : (%,%) -> % > rem : (%,%) -> % > gcd : (%,%) -> % > > which says that gcd takes 2 NonNegativeIntegers (NATs) and > returns a NonNegativeInteger (NAT). > > The NonNegativeInteger domain also includes information about > how its elements are represented. > > PART 2: We have an implementation of gcd in the domain > > The NNI domain contains an implementation of gcd: > > gcd(x,y) == > zero? x => y > gcd(y rem x,x) > > PART 3: We have a way to inherit things for the domain > > The NNI domain inherits properties from what Axiom > (unfortunately) calls Categories. Categories provide > additional signatures and default implementations. > > PART 4: We have the FUNDAMENTAL PROBLEM > > The PROBLEM to be solved is that we want to prove > that the above code for gcd is correct. > > Of course, the first question is "correct with respect to..." > > PART 5: We need a specification language > > There needs to be a specification of the gcd function. > What are the properties it should fulfill? > What are the invariants? > What are the preconditions? > What are the postconditions? 
> > Some parts of the specification will be inherited. > > Which means we need a language for specification. > > PART 6: We need a theorem language > > Given a specification, what theorems are available? > Some theorems are inherited from the categories, > usually as axioms. > > Some theorems and axioms are directly stated in > the NNI domain. > > Some lemmas need to be added to the domain to help > the proof process. > > Which means we need a language for theorems. > > PART 7: We need a proof engine > > Now that we have an implementation, a specification, > a collection of theorems and pre- and post-conditions, > lemmas, and invariants we need a proof. > > Which engine will we use for the proof? > What syntax does it require? > Does it provide a verifier to re-check proofs? > > PART 8: We need to prove many GCD algorithms > > Axiom contains 22 signatures for gcd. For example, > it contains a gcd for polynomials. The above machinery > needs to support proofs in those domains also. > > PART 9: LOOP > > GOTO part 4 above, pick a new function, and repeat. > > > PART 10: ISSUES > > PART 10a: "Down to the metal" > > THere are a pile of "side issues". I'm re-implementing Axiom > using Common Lisp CLOS. THe defclass macro in CLOS > creates new Common Lisp types. This allows using the types > for type-checking (currently looking at bi-directional checking > algorithms) > > Axiom sits on Common Lisp. There is a question of using a > "trusted core". I'm looking into Milawa > https://www.cl.cam.ac.uk/~mom22/soundness.pdf > with a deeply layered design. > > I'm also looking at SEL4 on ARM > https://ts.data61.csiro.au/publications/nicta_full_text/3783.pdf > which is a trustworthy operating system. > > I wrote a paper on the semantics of the Intel instruction set: > Daly, Timothy Intel Instruction Semantics Generator SEI/CERT Research > Report, March 2012 > http://daly.axiom-developer.org/TimothyDaly_files/publications/sei/intel/intel.pdf > so SEL4 on Intel is interesting. 
>
> PART 10b: Dependent type theory
>
> Dependent types are undecidable. Axiom contains several
> heuristics to resolve types at runtime. The heuristic type
> algorithm needs to be explicit and declarative.
>
> PART 10c: Size
>
> Axiom contains about 10,000 algorithms in 1,100 categories,
> domains, and packages. This is going to take a while.
>
> PART 10d: Mathematics
>
> Many of the algorithms are partial. Many are PhD thesis
> work (and hard to understand). Many are ad hoc and have
> no mathematical specification.
>
> PART 10e: Time
>
> The target delivery date is April 2023.
> There is much to do.
>
> Tim
>
> On 9/21/19, Henri Tuhola <[hidden email]> wrote:
>> Idris has a way to present equalities like this:
>>
>>     addition_unit : (a:Nat) -> (a + 0) = a
>>     addition_s : (a,b:Nat) -> (a + S b) = S (a + b)
>>     add_commutative : (a,b:Nat) -> (a + b = b + a)
>>
>> They can be used to prove more things:
>>
>>     try_out : (x,y:Nat) -> ((x + 0) + y) = y + x
>>     try_out x y = rewrite addition_unit x in add_commutative x y
>>
>> It rewrites the left expression to the right expression, though you
>> can easily flip the direction. For clarity I show these few dissections:
>>
>>     try_out x y = ?a1
>>     a1 : plus (plus x 0) y = plus y x
>>
>>     try_out x y = rewrite addition_unit x in ?a2
>>     a2 : plus x y = plus y x
>>
>> Idris has this feature called "Elaborator reflection". It allows you
>> to describe automated tactics for writing proofs/programs.
>> The "getGoal" and "getEnv" allow you to examine types in the context:
>>
>>     getGoal : Elab (TTName, TT)
>>     getEnv : Elab (List (TTName, Binder TT))
>>
>> The elaborator reflection also allows accessing the term rewriting. I
>> suppose that's all you need in order to write a program that
>> simplifies equations inside the type context?
>>
>> -- Henri Tuhola
>>
>> On Sat, 21 Sep 2019 at 11:50, Martin Baker <[hidden email]> wrote:
>>>
>>> I'm a fan of both Axiom and Idris. I think my ideal would be Axiom
>>> mathematics built on top of the Idris type system.
>>>
>>> The Axiom type system was incredibly advanced for its time, but I
>>> suspect the Idris type system has finally caught up and overtaken it?
>>> Correct me if I'm wrong, but I think the Axiom type system does not
>>> have the following capabilities that Idris does:
>>>
>>> * Enforcement of pure functions.
>>> * Ability to flag a function as total as opposed to partial
>>>   (automatic in some cases).
>>> * Universes (a types-of-types hierarchy).
>>>
>>> I'm no expert, but I would have guessed these things would be almost
>>> indispensable for proving Axiom correct?
>>>
>>> Also, Idris makes it far more practical to use these things; I don't
>>> think Axiom can implement category theory constructs like monads.
>>> Also, although both have dependent types, Axiom does not use them
>>> for, say, preventing the addition of a 2D vector to a 3D vector. In
>>> Idris this is more likely to be a compile-time error than a runtime
>>> error. I know there are theoretical limits to this, but I think Idris
>>> has capabilities to make this practical in more cases.
>>>
>>> I don't pretend I know how an Idris type system could be used with
>>> Axiom in practice. For instance, I think the proofs Henri is talking
>>> about are equalities in the type system (propositions as types). So
>>> how would these equations relate to equations acted on by equation
>>> solvers (which might be elements of some equation type)? Could there
>>> be some way to lift equations into the type system and back?
>>>
>>> Sorry if I'm confusing things here, but I just have an intuition that
>>> there is something even more powerful here if all this could be put
>>> together.
>>>
>>> Martin
>>>
>>> On 21/09/2019 04:28, Tim Daly wrote:
>>>> Axiom has type information everywhere. It is strongly
>>>> dependently typed. So given a Polynomial type, which
>>>> Axiom has, over a Ring or Field, such as
>>>> Polynomial(Integer) or Polynomial(Fraction(Integer)),
>>>> we can use theorems from the Ring while proving
>>>> properties of Polynomials.
>>>
>>> _______________________________________________
>>> Axiom-developer mailing list
>>> [hidden email]
>>> https://lists.nongnu.org/mailman/listinfo/axiom-developer

_______________________________________________ Axiom-developer mailing list [hidden email] https://lists.nongnu.org/mailman/listinfo/axiom-developer
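The specification question raised in PARTs 4 and 5 above can be made concrete with a small sketch. The following Python code (an illustration only, not Axiom's SPAD and not a proof) mirrors the NNI gcd implementation and checks the usual gcd postconditions at runtime: the result divides both arguments, and every common divisor divides the result. A proof engine would establish these properties for all inputs rather than testing them.

```python
def gcd(x: int, y: int) -> int:
    """Mirror of the NNI code: gcd(x,y) == if zero? x then y else gcd(y rem x, x)."""
    if x == 0:
        return y
    return gcd(y % x, x)

def divides(d: int, n: int) -> bool:
    """d divides n; by convention 0 divides only 0."""
    return n == 0 if d == 0 else n % d == 0

def check_gcd_spec(x: int, y: int) -> bool:
    """Postconditions: gcd(x,y) divides both arguments, and any
    common divisor of x and y divides gcd(x,y)."""
    g = gcd(x, y)
    if not (divides(g, x) and divides(g, y)):
        return False
    return all(divides(d, g)
               for d in range(1, max(x, y) + 1)
               if divides(d, x) and divides(d, y))
```

This only spot-checks the specification for particular inputs; the point of PARTs 6 and 7 is to replace such runtime checks with a once-and-for-all proof.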
If you have looked at ATS lang and rejected it for your purpose, then just ignore the rest of this mail.

ATS programming language: http://www.ats-lang.org/
For more information: https://github.com/githwxi/ATS-Postiats/wiki

I think ATS is worth a look at least once if you are looking for a lang/system that produces efficient code, can encode and prove theorems, can do programming with theorem proving, and has dependent types and many more rich types, for example viewtypes.

ATS compiles to C. It also compiles to Clojure, JavaScript, and maybe some more targets, with some restrictions.

In ATS one can return a proof along with the computation result. The proof is consumed/checked only during compilation, and the actual result is produced when the code is executed. For example:

    val x : int = 2           // normal binding
    val (pf | x) = afun (...) // binding with a proof

The first case is an example of the normal binding found in most languages. In the second case "pf" is bound to the proof returned by "afun" and "x" is bound to the result of the computation, where "|" separates them. (During runtime there is no proof object; it is used only during compilation.)

To see the power of ATS, a small part of the factorial example taken from the ATS book is described below.
The factorial of a natural number is first "encoded" as a relation in ATS using "dataprop":

    //---------------------------------------------------------------
    dataprop FACT (int,int) =
      // Base case
      | FACT_bas (0,1) of ()
      // Inductive case
      | {n:nat}{r1,r:int}
        FACT_ind (n,r) of (FACT (n-1,r1), MUL (n,r1,r))
    //---------------------------------------------------------------

SOME COMMENTS:

The two proof constructors are FACT_bas and FACT_ind.

    {n:nat}     means n is a natural number
    {r1,r:int}  means r1 and r are integers

The type of FACT is (int,int) -> prop. MUL (int,int,int) is also defined via dataprop, and MUL (a,b,c) encodes a * b = c.

The function to compute the factorial is defined as follows:

    //---------------------------------------------------------------
    fun fact {n: nat} .<n>.
      (m: int (n)):<> [r: int] (FACT (n,r) | int (r) ) =
      if m = 0
      then (FACT_bas () | 1 )                // base case
      else let
        val (pf1 | r1) = fact (m - 1)        // pf1: FACT(m-1,r1)
        val (pfmul | r ) = imul2 (m,r1)      // pfmul: MUL(m,r1,r)
      in (FACT_ind (pf1,pfmul) | r) end      // inductive case
    //---------------------------------------------------------------

SOME COMMENTS:

    .<n>.        is a termination metric
    {n: nat}     means for all natural numbers n
    (m: int (n)) means m is a value of type int (n)
    imul2        is a function which also returns a proof of MUL with its result

[r: int] (FACT (n,r) | int (r)) means there exists a value of integer type, r, such that the proof FACT (n,r) holds.

The type signature of "fact" is:

    {n:nat} (int (n)) -> [r: int] (FACT (n,r) | int (r))

Finally, I can read the above as: for any natural number n, fact (n) produces the factorial of n, say r, such that the prop FACT (n,r) holds.
On Sun, Sep 22, 2019 at 11:30 PM Tim Daly <[hidden email]> wrote:
> Of particular interest is clarity.

_______________________________________________ Axiom-developer mailing list [hidden email] https://lists.nongnu.org/mailman/listinfo/axiom-developer
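The proof-carrying style of Veer's ATS factorial can be imitated, very loosely, in an ordinary language. The sketch below (Python, purely illustrative; names modeled on the ATS constructors FACT_bas and FACT_ind) builds an explicit proof object alongside the result and re-checks it with a separate verifier. The crucial difference is that in ATS the proof is checked and erased at compile time, whereas here it is an ordinary runtime value.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FactProof:
    """Witness for FACT(n, r): prev is None for the base case FACT_bas(0, 1),
    otherwise prev witnesses FACT(n-1, r1) and r = n * r1 (the MUL obligation)."""
    n: int
    r: int
    prev: Optional["FactProof"]

def fact(n: int) -> tuple[FactProof, int]:
    """Return (proof, result), mirroring ATS's  val (pf | r) = fact m."""
    if n == 0:
        return FactProof(0, 1, None), 1          # base case: FACT_bas
    pf1, r1 = fact(n - 1)                        # pf1 : FACT(n-1, r1)
    r = n * r1                                   # MUL(n, r1, r)
    return FactProof(n, r, pf1), r               # inductive case: FACT_ind

def check(pf: FactProof) -> bool:
    """Independently re-derive the obligations the ATS type checker
    would discharge statically."""
    if pf.prev is None:
        return pf.n == 0 and pf.r == 1
    return (pf.prev.n == pf.n - 1
            and pf.r == pf.n * pf.prev.r
            and check(pf.prev))
```

Usage: `pf, r = fact(5)` yields the result together with a checkable derivation, so a skeptical caller can run `check(pf)` without trusting `fact` itself.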
ATS looks very interesting and, no, I haven't seen this before.
I will look into it.

Axiom will be implemented in Common Lisp for a lot of reasons, not the least of which is that it depends on certain Lisp features to implement some algorithms. That said, I'm studying and implementing ideas from other languages as domain-specific languages (DSLs) within this effort. Lisp is very good at DSLs; indeed, Axiom could be considered one.

Thanks for the pointer.

Tim

On 9/24/19, Veer Singh <[hidden email]> wrote:
> If you have looked at ATS lang and rejected it for your purpose
> then just ignore rest of mail.
> [...]

_______________________________________________ Axiom-developer mailing list [hidden email] https://lists.nongnu.org/mailman/listinfo/axiom-developer