Axiom's Sane redesign musings


Axiom's Sane redesign musings

Tim Daly
Sane: "rational, coherent, judicious, sound"

Axiom is computational mathematics, of course. The effort to
Prove Axiom Sane involves merging computer algebra and proof
technology so Axiom's algorithms have associated proofs.

The union of these two fields has made it obvious how far behind
Axiom has fallen. Mathematicians and some computer languages
(e.g. Haskell) have started to converge on some unifying ideas.

As a result, the new Sane compiler and interpreter needs to
consider the current environment that mathematicians expect.
This falls into several areas.

1) SYNTAX

It is clear that Axiom's syntax and user interface have not kept
up with the proof technology. A trivial example is a signature.
Axiom likes:

   foo : (a,b) -> c

whereas Lean / Coq / or even Haskell prefers

   foo: a -> b -> c

Given that many systems (e.g. Haskell) use the second syntax
we need to consider adopting this more widely accepted syntax.
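The practical payoff of the curried form is partial application. A Python sketch (function names invented for the example, not Axiom code) of the two styles:

```python
# Tupled style, as in Axiom's   foo : (a, b) -> c
def foo_tupled(a, b):
    return a + b

# Curried style, as in   foo : a -> b -> c   (Lean / Coq / Haskell)
def foo_curried(a):
    def apply_b(b):
        return a + b
    return apply_b

# Partial application falls out of the curried form for free:
add2 = foo_curried(2)     # a function still waiting for its second argument
print(foo_tupled(2, 3))   # 5
print(add2(3))            # 5
```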

2) PROOF and SPECIFICATIONS

And, obviously, Axiom does not support 'axiom', 'lemma', or
'corollary', nor does it have a specification language or any
language for proof support (e.g. 'invariant').

Mathematicians have these tools readily available in many
systems today.
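For contrast, a minimal sketch of what that vocabulary looks like in Lean 4 (the declaration names here are invented for illustration):

```lean
-- An assumed fact, taken without proof:
axiom myComm : ∀ (a b : Nat), a + b = b + a

-- A proved statement; `n + 0` reduces to `n` by definition, so
-- reflexivity (`rfl`) closes the goal:
theorem addZeroRight (n : Nat) : n + 0 = n := rfl
```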

3) USER INTERFACE

Axiom's command line interface and hyperdoc were fine for their
time, but Lean and Coq use a browser. For example, in Lean:

https://leanprover.github.io/tutorial/

or in Coq:

https://jscoq.github.io/

The combination of a browser, interactive development, and
"immediate documentation" merges a lot of Axiom's goals.

4) CATEGORY STRUCTURE

William Farmer and Jacques Carette (McMaster University)
have the notion of "tiny theories". The basic idea (in Axiom
speak) is that each Category should only introduce a single
signature or a single axiom (e.g. commutativity). It seems
possible to restructure the Category hierarchy to use this model
without changing the end result.
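In Lean-style notation, a tiny-theories tower might look like this (a sketch with invented names; each layer introduces exactly one signature or one axiom):

```lean
-- Layer 1: a single signature.
class Magma (α : Type) where
  op : α → α → α

-- Layer 2: a single axiom, inheriting the signature from layer 1.
class CommMagma (α : Type) extends Magma α where
  comm : ∀ a b : α, op a b = op b a
```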

5) BIFORM THEORIES

Farmer and Carette also have the notion of a 'biform theory'
which is a combination of code and proof. This is obviously
the same idea behind the Sane effort.

6) INTERACTIVITY

As mentioned in (3) above, the user interface needs to support
much more interactive program development. Error messages
ought to point at the suspected code. This involves rewriting
the compiler and interpreter to better interface with these tools.

Axiom already talks to a browser front end (Volume 11)
interactively but the newer combination of documentation
and interaction needs to be supported.

Sage provides interactivity for some things but does not
(as far as I know) support the Lean-style browser interface.

7) ALGORITHMS

Axiom's key strength and its key contribution is its collection
of algorithms. The proof systems strive for proofs but can't
do simple computations fast.

The proof systems also cannot trust "Oracle" systems, since
they remain unproven.

When Axiom's algorithms are proven, they provide the Oracle.

8) THE Sane EFFORT

Some of the changes above are cosmetic (e.g. syntax), some
are "additive" (e.g. lemma, corollary), some are just a restructure
(e.g. "tiny theory" categories).

Some of the changes are 'just programming', such as using
the browser to merge the command line and hyperdoc. This
involves changing the interpreter and compiler to deliver better
and more focused feedback. All of that is "just a matter of
programming".

The fundamental changes like merging a specification
language and underlying proof machinery are a real challenge.
"Layering Axiom" onto a trusted core is a real challenge but
(glacial) progress is being made.

The proof systems are getting better. They lack long-term
stability (so far) and they lack Axiom's programming ability
(so far). But merging computer algebra and proof systems
seems to me to be possible with a focused effort.

Due to its design and structure, Axiom seems the perfect
platform for this merger.

Work on each of these areas is "in process" (albeit slowly).
The new Sane compiler is "on the path" to support all of these
goals.

9) THE "30 Year Horizon"

Axiom either converges with the future directions by becoming
a proven and trusted mathematical tool ... or it dies.

Axiom dies? ... As they say in the Navy "not on my watch".

Tim

_______________________________________________
Axiom-developer mailing list
[hidden email]
https://lists.nongnu.org/mailman/listinfo/axiom-developer

Re: Axiom's Sane redesign musings

Martin Baker-3
On 19/06/2019 12:18, Tim Daly wrote:
> The effort to
> Prove Axiom Sane involves merging computer algebra and proof
> technology so Axiom's algorithms have associated proofs.

Tim,

Are there some fundamental compromises that have to be made when
combining computer algebra and proof technology?

For instance, in proof assistants like Coq, equations are types:

x = x + 0

is a proposition which, by Curry-Howard, can be represented as a type.
To prove the proposition we have to find an inhabitant of the type (Refl).
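Concretely, in Lean 4 the proposition-as-type can be stated and inhabited like this (an illustrative sketch; `x + 0` reduces to `x` by definition, so reflexivity closes it):

```lean
-- "x = x + 0" as a type; rfl is the inhabitant that proves it:
example (x : Nat) : x = x + 0 := rfl
```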

But in computer algebra I would have thought this equation would be an
element of some Equation type (I'm skipping over a lot of details).

Do you think both of these approaches could be brought together in a
consistent and elegant way?

Also, my (limited) understanding is that Coq prevents inconsistencies,
which makes it not Turing complete, and therefore it can't do
everything Axiom can do?

Martin


Re: Axiom's Sane redesign musings

Tim Daly

> Are there some fundamental compromises that have to be made when combining
> computer algebra and proof technology?

> For instance in proof assistants, like Coq, equations are types for instance:

> x = x + 0

> is a proposition which, by Curry-Howard, can be represented as a type. To prove the
> proposition we have to find an inhabitant of the type (Refl).

> But in computer algebra I would have thought this equation would be an element of
> some Equation type (I'm skipping over a lot of details).

> Do you think both of these approaches could be brought together in a
> consistent and elegant way?

> Also my (limited) understanding is that Coq prevents inconsistencies which
> makes it not Turing complete and therefore cant do everything Axiom can do?

Martin,

There are many problems with the marriage of logic and algebra. For instance,
the logic systems want proof of termination, which is usually not possible.
Often, though, a partial proof that does not guarantee termination is
interesting and sufficient. TLA provides some possible ideas in this direction.

Another problem is that in the generality of dependent types, full type
resolution is undecidable (hence the many heuristics built into Axiom's
interpreter). Still, there are interesting discussions of programs as proof
objects ongoing at the moment on the Lean discussion thread:
https://leanprover.zulipchat.com/#

Still other problems arise in equational reasoning or probabilistic algorithms.
I have looked at (and not really understood well) logics for these.

On the other hand, things like the Groebner Basis algorithm have been proven
in Coq. The technology, and the basic axioms, are gathering greater power.

In particular, since Axiom has a strong scaffolding of abstract algebra, it is
better suited for proofs than other systems. The fact that Axiom's compiler
insists on type information makes it somewhat more likely to be provable.

I have a very limited project goal of proving just the GCD algorithms in Axiom.

The idea is to create a "thin thread" through all of the issues so that they can
be attacked in more detail by later work. I've been gathering bits and pieces
of GCD related items from the literature. See Volume 13 for the current state
(mostly research notes and snippets to be reviewed):
https://github.com/daly/PDFS/blob/master/bookvol13.pdf

The idea is to decorate the categories with their respective axioms, then to
decorate the domains with their axioms, next to write specifications for the
function implementations, then to find invariants, pre- and post-conditions,
etc., and finally to generate a proof that can be automatically checked. Each
function thus has a deep pile of inherited information to help the proof.

For example, a GCD algorithm in NonNegativeInteger already has the
ability to assume non-negative types. And it inherits all the axioms that
come with being a GCDDomain.
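As a rough illustration of what such decoration could mean in executable form, here is a Python stand-in (not Axiom code; the runtime contract checks are invented for the sketch) of a Euclidean GCD carrying its pre-condition, loop invariant, and post-condition:

```python
def gcd(a: int, b: int) -> int:
    """Euclidean GCD with a NonNegativeInteger-style pre-condition
    and a divisibility post-condition checked at runtime."""
    assert a >= 0 and b >= 0, "pre-condition: inputs are non-negative"
    x, y = a, b
    while y != 0:
        # loop invariant: gcd(x, y) == gcd(a, b)
        x, y = y, x % y
    # post-condition: the result divides both inputs (with gcd(0, 0) = 0)
    assert (x == 0 and a == 0 and b == 0) or (a % x == 0 and b % x == 0)
    return x
```

In the Sane design these obligations would be inherited from the category (GCDDomain) and domain (NonNegativeInteger) rather than restated per function.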

The fact that computer algebra really is a form of mathematics (as opposed
to, say proving a compiler) gives me hope that progress is possible, even if
difficult.

Do I know how to cross the "x=x+0" type vs equation form? No, but
this is research so I won't know until I know :-)

Tim


Re: [EXTERNAL] Re: Axiom's Sane redesign musings

William Sit-3

Dear Martin and Tim:

The expression x = x + 0, whether treated as a type or an equation, can only make sense when x, =, + and 0 are clearly typed and defined. It makes little sense to me that this, as an equation, can be "proved" to be valid universally (that is, without the definition of, say, +). For example, if we are in a domain where + is defined as max, or as min, or as the binary logic operator "and", then the expression is not an identity for all values x in the domain. If + is the usual addition, or the logic operator "or", then the expression is a true identity and can be proved (more or less from definitions). In Axiom, if valid, these are simply stated as properties of the domain or category, as in "has ...".

I assume this is well-known and perhaps that is what Martin meant by "(I'm skipping over a lot of details)". I am ignorant of type theory and often confused by terms like type, equation (equation type?), identity, and even proposition (like x = x + 0 is an identity) as used in proof theories. Please ignore this message if the first paragraph is irrelevant to this thread.

William Sit
Professor Emeritus
Department of Mathematics
The City College of The City University of New York
New York, NY 10031
homepage: wsit.ccny.cuny.edu


Re: [EXTERNAL] Re: Axiom's Sane redesign musings

Tim Daly
It is all rather confusing. There are well over 100 "logics".

Equational logic, or equational reasoning, is a very restricted
logic having only equality as an operation.

An alternative interpretation would be a substitution-style
logic where equals can be substituted for equals as one of
the operations.

The expression x = x + 0 isn't meaningful in isolation. In
Axiom (in the coming Sane version) it would have surrounding
axioms, one of which is that, in some cases, the zero is a unit.

Note that this is not an assignment operation but one whose
target type is a Boolean.

But "x = x + 0" has the type EQUATION(POLY(INT)), not
Boolean. Axiom says:

   x = x + 0
                  Type: Equation(Polynomial(Integer))
   %::Boolean
      True
                  Type: Boolean
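The distinction can be mimicked outside Axiom; here is a Python toy (class and method names invented; a stand-in for Equation(Polynomial(Integer)), not Axiom's implementation) showing an equation object that is not itself a Boolean until explicitly coerced:

```python
from dataclasses import dataclass

@dataclass
class Equation:
    """Toy stand-in for Axiom's Equation domain."""
    lhs: object
    rhs: object

    def as_boolean(self) -> bool:
        # analogue of Axiom's  %::Boolean  coercion
        return self.lhs == self.rhs

eq = Equation(lhs=3, rhs=3 + 0)   # "x = x + 0" at a sample point
print(type(eq).__name__)          # Equation, not bool
print(eq.as_boolean())            # True
```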

The "elegant way" that Martin is questioning is the problem
of combining a certain kind of logic operation (Refl, i.e.
reflexivity, where both sides are equal) with the notion of
a mathematical unit.

This particular issue is simple compared with the many other
issues. In Axiom, the question would be what is the type of
x, of 0, of +, and of the whole equation. If x is a matrix, for
example, the 0 is coerced to a matrix. And now the question
is "Is this equality true for matrix operations?"

Tim

On 6/25/19, William Sit <[hidden email]> wrote:

>
> Dear Martin and Tim:
>
> The expression  x = x + 0, whether treated as a type or an equation, can
> only make sense when x, =, + and 0 are clearly typed and defined. It makes
> little sense to me that this, as an equation, can be "proved" to be valid
> universally (that is, without the definition of, say +). For example, if we
> are in a domain where + is defined as max, or as min, or as the binary logic
> operator "and", then the expression is not an identity for all values x in
> the domain. If + is the usual addition, or the logic operatior "or", then
> the expression is a true identity and can be proved (more or less from
> definitions). In Axiom, if valid, these are simply stated as properties of
> the domain or category as in "has ...".
>
> I assume this is well-known and perhaps that is what Martin meant by "(I'm
> skipping over a lot of details)". I am ignorant of type theory and often
> confused by terms like type, equation (equation type?), identity, and even
> proposition (like x = x + 0 is an identity) as used in proof theories.
> Please ignore this message if the first paragraph is irrelevant to this
> thread.
>
> William Sit
> Professor Emeritus
> Department of Mathematics
> The City College of The City University of New York
> New York, NY 10031
> homepage: wsit.ccny.cuny.edu
>
> ________________________________________
> From: Axiom-developer
> <axiom-developer-bounces+wyscc=[hidden email]> on behalf of
> Tim Daly <[hidden email]>
> Sent: Wednesday, June 19, 2019 1:02 PM
> To: axiom-dev; Tim Daly
> Subject: [EXTERNAL] Re: [Axiom-developer] Axiom's Sane redesign musings
>
> Martin,
>
>> Are there some fundamental compromises that have to be made when combining
>> computer > algebra and proof technology?
>
>> For instance in proof assistants, like Coq, equations are types for
>> instance:
>
>> x = x + 0
>
>> is a proposition which, by Curry-Howard, can be represented as a type. To
>> prove the
>> proposition we have to find an inhabitant of the type (Refl).
>
>> But in computer algebra I would have thought this equation would be an
>> element of
>> some Equation type (I'm skipping over a lot of details).
>
>> Do you think both of these approaches could be brought together in a
>> consistent and elegant way?
>
>> Also my (limited) understanding is that Coq prevents inconsistencies which
>> makes it not Turing complete and therefore cant do everything Axiom can
>> do?
>
> Martin,
>
> There are many problems with the marriage of logic and algebra. For
> instance,
> the logic systems want proof of termination which is usually not
> possible. Often,
> though, a partial proof that does not guarantee termination is interesting
> and
> sufficient. TLA provides some possible ideas in this direction.
>
> Another problem is that in the generality of dependent types, full
> type resolution
> is undecidable (hence, a lot of the heuristics built into Axiom's
> interpreter).
> Still there are interesting discussions of programs as proof objects ongoing
> at
> the moment on the Lean discussion thread
> https://urldefense.proofpoint.com/v2/url?u=https-3A__leanprover.zulipchat.com_-23&d=DwIGaQ&c=4NmamNZG3KTnUCoC6InoLJ6KV1tbVKrkZXHRwtIMGmo&r=qW9SUYRDo6sWEVPpx7wwWYZ79PdSWMRxNZvTih0Bkxc&m=Bh77WgNdFtJTZCN46189mNhezS-_JXcBwUWxnNPRF_8&s=u7sojPwgc1fuWoQpKPwsjxjH98p64z3RQNZlwxBTLPQ&e=
>
> Still other problems arise in equational reasoning or probablistic
> algorithms.
> I have looked at (and not really understood well) logics for these.
>
> On the other hand, things like the Groebner Basis algorithm have been proven
> in Coq. The technology, and the basic axioms, are gathering greater power.
>
> In particular, since Axiom has a strong scaffolding of abstract algebra, it
> is
> better suited for proofs than other systems. The fact that Axiom's compiler
> insists on type information makes it somewhat more likely to be provable.
>
> I have a very limited project goal of proving just the GCD algorithms in
> Axiom.
>
> The idea is to create a "thin thread" through all of the issues so that they
> can
> be attacked in more detail by later work. I've been gathering bits and
> pieces
> of GCD related items from the literature. See Volume 13 for the current
> state
> (mostly research notes and snippets to be reviewed)
> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_daly_PDFS_blob_master_bookvol13.pdf&d=DwIGaQ&c=4NmamNZG3KTnUCoC6InoLJ6KV1tbVKrkZXHRwtIMGmo&r=qW9SUYRDo6sWEVPpx7wwWYZ79PdSWMRxNZvTih0Bkxc&m=Bh77WgNdFtJTZCN46189mNhezS-_JXcBwUWxnNPRF_8&s=maYsCHxZ4fjQEYua_1NfaxuO785JXAJlC4YRps-YFwQ&e=
>
> The idea is to decorate the categories with their respective axioms. Then to
> decorate the domains with their axioms. Next to write specifications for the
> function implementations. Then finding invariants, pre- and post-conditions,
> etc. Finally, to generate a proof that can be automatically checked. So each
> function has a deep pile of inheritated informaion to help the proof.
>
> For example, a GCD algorithm in NonNegativeInteger already has the
> ability to assume non-negative types. And it inherits all the axioms that
> come with being a GCDDomain.
>
> The fact that computer algebra really is a form of mathematics (as opposed
> to, say proving a compiler) gives me hope that progress is possible, even if
> difficult.
>
> Do I know how to cross the "x=x+0" type vs equation form? No, but
> this is research so I won't know until I know :-)
>
> Tim
>
> On 6/19/19, Tim Daly <[hidden email]> wrote:
>> Sane: "rational, coherent, judicious, sound"
>>
>> Axiom is computational mathematics, of course. The effort to
>> Prove Axiom Sane involves merging computer algebra and proof
>> technology so Axiom's algorithms have associated proofs.
>>
>> The union of these two fields has made it obvious how far behind
>> Axiom has fallen. Mathematicians and some computer languages
>> (e.g. Haskell) have started to converge on some unifying ideas.
>>
>> As a result, the new Sane compiler and interpreter needs to
>> consider the current environment that mathematicians expect.
>> This falls into several areas.
>>
>> 1) SYNTAX
>>
>> It is clear that Axiom's syntax and user interface has not kept
>> up with the proof technology. A trivial example is a signature.
>> Axiom likes:
>>
>>    foo : (a,b) -> c
>>
>> whereas Lean / Coq / or even Haskell prefers
>>
>>    foo: a -> b -> c
>>
>> Given that many systems (e.g. Haskell) use the second syntax
>> we need to consider adopting this more widely accepted syntax.
>>
>> 2) PROOF and SPECIFICATIONS
>>
>> And, obviously, Axiom does not support 'axiom', 'lemma',
>> or 'corollary', nor any specification language or language
>> for proof support (e.g. 'invariant').
>>
>> Mathematicians have these tools readily available in many
>> systems today.
>>
>> 3) USER INTERFACE
>>
>> Axiom's command line interface and hyperdoc were fine for its
>> time but Lean and Coq use a browser. For example, in Lean:
>>
>> https://leanprover.github.io/tutorial/
>>
>> or in Coq:
>>
>> https://jscoq.github.io/
>>
>> The combination of a browser, interactive development, and
>> "immediate documentation" merges a lot of Axiom's goals.
>>
>> 4) CATEGORY STRUCTURE
>>
>> William Farmer and Jacques Carette (McMaster University)
>> have the notion of "tiny theories". The basic idea (in Axiom
>> speak) is that each Category should only introduce a single
>> signature or a single axiom (e.g. commutativity). It seems
>> possible to restructure the Category hierarchy to use this model
>> without changing the end result.
>>
>> 5) BIFORM THEORIES
>>
>> Farmer and Carette also have the notion of a 'biform theory'
>> which is a combination of code and proof. This is obviously
>> the same idea behind the Sane effort.
>>
>> 6) INTERACTIVITY
>>
>> As mentioned in (3) above, the user interface needs to support
>> much more interactive program development. Error messages
>> ought to point at the suspected code. This involves rewriting
>> the compiler and interpreter to better interface with these tools.
>>
>> Axiom already talks to a browser front end (Volume 11)
>> interactively but the newer combination of documentation
>> and interaction needs to be supported.
>>
>> Sage provides interactivity for some things but does not
>> (as far as I know) support the Lean-style browser interface.
>>
>> 7) ALGORITHMS
>>
>> Axiom's key strength and its key contribution is its collection
>> of algorithms. The proof systems strive for proofs but can't
>> do simple computations fast.
>>
>> The proof systems also cannot trust "Oracle" systems, since
>> they remain unproven.
>>
>> When Axiom's algorithms are proven, they provide the Oracle.
>>
>> 8) THE Sane EFFORT
>>
>> Some of the changes above are cosmetic (e.g. syntax), some
>> are "additive" (e.g. lemma, corollary), some are just a restructure
>> (e.g. "tiny theory" categories).
>>
>> Some of the changes are 'just programming', such as using
>> the browser to merge the command line and hyperdoc. This
>> involves changing the interpreter and compiler to deliver better
>> and more focused feedback. All of that is "just a matter of
>> programming".
>>
>> The fundamental changes like merging a specification
>> language and underlying proof machinery are a real challenge.
>> "Layering Axiom" onto a trusted core is a real challenge but
>> (glacial) progress is being made.
>>
>> The proof systems are getting better. They lack long-term
>> stability (so far) and they lack Axiom's programming ability
>> (so far). But merging computer algebra and proof systems
>> seems to me to be possible with a focused effort.
>>
>> Due to its design and structure, Axiom seems the perfect
>> platform for this merger.
>>
>> Work on each of these areas is "in process" (albeit slowly).
>> The new Sane compiler is "on the path" to support all of these
>> goals.
>>
>> 9) THE "30 Year Horizon"
>>
>> Axiom either converges with the future directions by becoming
>> a proven and trusted mathematical tool ... or it dies.
>>
>> Axiom dies? ... As they say in the Navy "not on my watch".
>>
>> Tim
>>

_______________________________________________
Axiom-developer mailing list
[hidden email]
https://lists.nongnu.org/mailman/listinfo/axiom-developer

Re: [EXTERNAL] Re: Axiom's Sane redesign musings

Martin Baker-3
On 6/25/19, William Sit <[hidden email]> wrote:
 > The expression  x = x + 0, whether treated as a type or an equation,
 > can only make sense when x, =, + and 0 are clearly typed and defined.
 > It makes little sense to me that this, as an equation, can be "proved"
 > to be valid universally (that is, without the definition of, say +).

If x is a natural number defined like this in Coq:

Inductive nat : Set := O : nat | S : nat -> nat.

then x = x + 0 is not an axiom but is derivable.
Of course this all depends on the structures and definitions, I didn't
mean to imply that it is valid universally.
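
To make the derivation concrete, here is a minimal Lean sketch of the same
point (`mynat` and `add` are local illustrative names, not the library's).
With `+` recursing on its first argument, as in the Coq definition above,
`x + 0 = x` genuinely needs induction rather than holding by definition:

```lean
inductive mynat : Type
| zero : mynat
| succ : mynat → mynat

namespace mynat

-- addition by recursion on the first argument
def add : mynat → mynat → mynat
| zero     n := n
| (succ m) n := succ (add m n)

-- zero + x = x holds by computation, but x + zero = x does not;
-- it is derivable by induction on x
theorem add_zero : ∀ x : mynat, add x zero = x
| zero     := rfl
| (succ m) := congr_arg succ (add_zero m)

end mynat
```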

On 25/06/2019 19:28, Tim Daly wrote:
> The "elegant way" that Martin is questioning is the problem
> of combining a certain kind of logic operation (refl aka
> "reflection" where both sides are equal) with the notion of
> a mathematical unit.

I think that refl (short for "reflexivity" of the = relation) is the usual
name for the only constructor of the equality type in Martin-Löf
type theory.

I get the impression that this theory is very powerful in proof
assistants, and I am asking whether you are proposing to build this
into Axiom and, if so, how?

Martin

_______________________________________________
Axiom-developer mailing list
[hidden email]
https://lists.nongnu.org/mailman/listinfo/axiom-developer

Re: [EXTERNAL] Re: Axiom's Sane redesign musings

Tim Daly
Martin,

My current "line of attack" on this research is to try to prove the
GCD algorithm in NonNegativeInteger.

While this seems trivial in proof systems, it is expensive to
compute from the inductive definition. While this seems
trivial in computer algebra, the implementation code lacks
proof.

There are several steps I'm taking. I'm creating a new compiler
that handles Spad code with several new additions. The new
language (and the effort in general) is called "Sane".

One step is to make sure the new compiler generates code
that runs in Axiom. This is challenging as there is almost no
documentation about Axiom internals so it all has to be
reverse-engineered.

Next is the addition of "axioms" to the categories, such as
adding axioms for equality to BasicType, where equality is
defined to include reflexive and transitive properties.
(Equality, by the way, is probably THE most controversial
topic in logic, cf. the univalence axiom in HoTT). These
axioms decorate the category signatures and are inherited.

Next is the addition of axioms to domains, also decorating
the signatures with axioms.

Next is the logical specification of properties of the data
type implementation of the domain, called the REP in
Axiom and the "carrier" in logic. For example, think of a
binary tree REP and what properties you can guarantee.

Next is adding a specification for the operations that
implement the signatures. These are specific to each
function that a domain implements.

Next is decorating code with pre- and post-conditions
as well as loop invariants and storage invariants.

Next the collection of all of the above is cast into a form
used by a proof system (currently Lean) that implements
dependent types (the Calculus of Inductive Constructions).

Next a proof is constructed. The resulting proof is attached
to the Axiom code. Proof checking is wildly cheaper than
proof construction and the new Sane compiler would
perform proof checking at compile time for each function.

So if there is a breaking change somewhere in the tower
the proof would fail.
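
As a taste of what such a compile-time-checkable obligation could look like
on the Lean side, here is a hedged sketch using Lean 3's library `nat.gcd`
as a stand-in for the NonNegativeInteger implementation (the lemma names
are the library's; the Sane tower would generate and check analogous
statements for each function):

```lean
-- post-condition: the result divides both inputs
example (a b : ℕ) : nat.gcd a b ∣ a ∧ nat.gcd a b ∣ b :=
⟨nat.gcd_dvd_left a b, nat.gcd_dvd_right a b⟩

-- "greatest": any common divisor divides the gcd
example (a b c : ℕ) (ha : c ∣ a) (hb : c ∣ b) : c ∣ nat.gcd a b :=
nat.dvd_gcd ha hb
```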

Challenges along the way, besides reverse-engineering
the Spad compiler, include adding languages for stating
axioms, for stating REP properties, for stating function
specifications, for stating program properties, and for
stating proof certificates. The pieces all exist somewhere
but they are not necessarily compatible, nor well defined.

Is this all possible to do? Well, of course, as this is all
"just mathematics". Do *I* know how to do this? Well,
of course not, which is what makes this a research effort.

Ultimately I'm trying to build an instance of merging proof
and computer algebra at a very deep, proven level. Think
of it as a PhD thesis project without the degree incentive :-)

Tim




Re: [EXTERNAL] Re: Axiom's Sane redesign musings

Martin Baker-3
Tim,

When I read this I am picking up a slightly different flavor than what I
got from your first message in the thread.

Your first message seemed to me to be about "merging computer algebra and
proof technology" but this message seems to be about running the two in
parallel.

I really like the idea of merging the two because Axiom has a steep
learning curve and proof assistants have a steep learning curve. If they
were merged the learning curve would still be steep, but at least there
would only be one.

This seems to be more about:
* specifying axioms (and other identities?) in the category.
* decorating the domain with some denotational semantics stuff.
then checking it, at compile time, by running a separate proof assistant
program.

Is this what you are suggesting?

Martin



Re: [EXTERNAL] Re: Axiom's Sane redesign musings

Tim Daly
Yes, but even more than that.

The goal is to make Axiom a "trusted system" for the whole of
computational mathematics.

To do this, consider this (not entirely random) collection of systems
and facts:

1) Trusted systems are built using the de Bruijn principle. See, for example,
https://pdfs.semanticscholar.org/2498/8797cf3e98c3a4a7c3dc775f8c02ce252c33.pdf

The basic idea is to have a trusted kernel that is composed of
a small number of logic operations known to be correct. All other
operations get "compiled" into these trusted operations. This is
how the proof systems operate, by choosing a logic such as CIC
and then constructing a CIC kernel.

2) Axiom is really nothing more than a "Domain Specific Language"
on top of common lisp. For Axiom to be trusted, the lisp must be
trusted, which the whole of common lisp is not.

3) We would like a common lisp that is proven "down to the
metal". There is a lisp (not common lisp) kernel called Milawa
https://link.springer.com/article/10.1007/s10817-015-9324-6
that has this property. It starts with a proven kernel and builds
up layers (currently 11) towards ACL2, a common lisp theorem
prover.

4) There is a theorem prover called Matita which appears to be
of interest as another "intermediate step". See
http://matita.cs.unibo.it/index.shtml

5) There is also a newly forming Lean prover and I am trying
to follow along and get up to speed on it. See:
https://leanprover.github.io/

6) There is the mistaken belief that common lisp is a typeless
language. However, CLOS has defclass and every defclass
constructs a new type. So by careful design it is possible to
construct a "fully typed" domain specific language and compiler
in common lisp.
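
A minimal CLOS sketch of that point (all names here are illustrative, not
Sane's actual classes): defclass introduces a genuine type, so check-type
and generic-function dispatch can enforce a typing discipline on the DSL's
own data structures:

```lisp
;; defclass creates a new type: SIGNATURE is usable in
;; check-type, typecase, and generic-function dispatch
(defclass signature ()
  ((name   :initarg :name   :type symbol :reader sig-name)
   (source :initarg :source :type list   :reader sig-source)
   (target :initarg :target :type symbol :reader sig-target)))

(defgeneric arity (sig)
  (:documentation "Number of argument types in a signature."))

(defmethod arity ((sig signature))
  (length (sig-source sig)))

;; a hypothetical signature gcd : (NNI, NNI) -> NNI
(let ((gcd-sig (make-instance 'signature
                              :name 'gcd
                              :source '(nni nni)
                              :target 'nni)))
  (check-type gcd-sig signature)  ; passes: defclass made a type
  (arity gcd-sig))                ; evaluates to 2
```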

So what does this mean for a Sane Axiom system?

The game is to build the Sane compiler using CLOS so it is
strongly typed. The output of the compiler generates lisp code
that conforms to some (possibly higher, Axiom-specific) layer
of the Milawa lisp. So compiler output would be in a provably
type-safe subset of lisp.

In addition, the Sane compiler is being constructed so that
the compiler itself can be proven correct. Everything in the
compiler is strongly typed as is its output.

Ultimately the goal is a proven Sane compiler that accepts
a provable Sane language for computer algebra which generates
proven computer algebra code running on a lisp that is
proven "down to the metal".

The end result is a trusted system for computational mathematics.

Like the blind philosophers, I can grab any part of this
elephantine system and describe it in great detail. My struggle
is to "envision the whole", make it sensible, and then construct
it in a provable way.

I've been at this for about 7 years now. I took 10 classes at
CMU, read several hundred papers (see the Axiom bibliography
volume, https://github.com/daly/PDFS/blob/master/bookvolbib.pdf)
and spent a few years "coming up to speed on the
proof theory" side of computational mathematics. Now I
finally have (poor) command of computer algebra, proof
theory, and provable programming. Having all the parts,
the question is "Can I construct the elephant?". I don't know.
But I am trying.

Tim




Re: [EXTERNAL] Re: Axiom's Sane redesign musings

Tim Daly
Another thought....

There has been a "step change" in computer science in the last few years.

Guy Steele did a survey of the use of logic notation in conference papers.
More than 50% of the papers in some conferences use logic notation
(from one of the many logics).

CMU teaches its CS courses based on, and requiring the use of,
logic and the associated notation. My college mathematics covered
the use of truth tables. The graduate course covered the use of
Karnaugh maps.

Reading current papers, I have found several papers with multiple
pages containing nothing but "judgements", pages of greek notation.
If you think Axiom's learning curve is steep, you should look at
Homotopy Type Theory (HoTT).

I taught a compiler course at Vassar in the previous century but
the Dragon book didn't cover CIC in any detail, and I would not
have understood it if it had.

The original Axiom software predates most of the published logic
papers, at least as they applied to software. Haskell Curry wrote
from the logic side in 1934 and William Howard published in 1969
but I only heard of the Curry-Howard correspondence in 1998.

Writing a compiler these days requires the use of this approach.
In all, that's a good thing as it makes it clear how to handle types
and how to construct software that is marginally more correct.

The new Sane compiler is building on these logic foundations,
based on the Calculus of Inductive Constructions and dependent
type theory. The compiler itself is strongly typed, as is the
language it supports.

Since dependent types are not decidable, there will always be
heuristics at runtime to try to disambiguate types, only now we
will have to write the code in greek :-)

Tim



On 6/26/19, Tim Daly <[hidden email]> wrote:

> Yes, but even more than that.
>
> The goal is to make Axiom a "trusted system" for the whole of
> computational mathematics.
>
> To do this, consider this (not entirely random) collection of systems
> and facts:
>
> 1) Trusted systems are built using the de Bruijn principle. See, for
> example,
> https://pdfs.semanticscholar.org/2498/8797cf3e98c3a4a7c3dc775f8c02ce252c33.pdf
>
> The basic idea is to have a trusted kernel that is composed of
> a small number of logic operations known to be correct. All other
> operations get "compiled" into these trusted operations. This is
> how the proof systems operate, by choosing a logic such as CIC
> and then constructing a CIC kernel.
>
> 2) Axiom is really nothing more that a "Domain Specific Language"
> on top of common lisp. For Axiom to be trusted, the lisp must be
> trusted, which the whole of common lisp is not.
>
> 3) We would like a common lisp that is proven "down to the
> metal". There is a lisp (not common lisp) kernel called Milawa
> https://link.springer.com/article/10.1007/s10817-015-9324-6
> that has this property. It starts with a proven kernel and builds
> up layers (currently 11) towards ACL2, a common lisp theorem
> prover.
>
> 4) There is a theorem prover called Matita which appears to be
> of interest as another "intermediate step". See
> http://matita.cs.unibo.it/index.shtml
>
> 5) There is also a newly forming Lean prover and I am trying
> to follow along and get up to speed on it: See:
> https://leanprover.github.io/
>
> 5) There is the mistaken belief that common lisp is a typeless
> language. However, CLOS has defclass and every defclass
> constructs a new type. So by careful design it is possible to
> construct a "fully typed" domain specific language and compiler
> in common lisp.
>
> So what does this mean for a Sane Axiom system?
>
> The game is to build the Sane compiler using CLOS so it is
> strongly typed. The output of the compiler generates lisp code
> that conforms to some (possibly higher, Axiom-specific) layer
> of the Matita lisp. So compiler output would be in a provably
> type-safe subset of lisp.
>
> In addition, the Sane compiler is being constructed so that
> the compiler itself can be proven correct. Everything in the
> compiler is strongly typed as is its output.
>
> Ultimately the goal is a proven Sane compiler that accepts
> a provable Sane language for computer algebra which generates
> proven computer algebra code running on a lisp that is
> proven "down to the metal".
>
> The end result is a trusted system for computational mathematics.
>
> Like the blind philosophers, I can grab any part of this
> elephantine system and describe it in great detail. My struggle
> is to "envision the whole", make it sensible, and then construct
> it in a provable way.
>
> I've been at this for about 7 years now. I took 10 classes at
> CMU. read several hundred papers (see the Axiom bibliography
> volume, https://github.com/daly/PDFS/blob/master/bookvolbib.pdf)
> and spent a few years "coming up to speed on the
> proof theory" side of computational mathematics. Now I
> finally have (poor) command of computer algebra, proof
> theory, and provable programming. Having all the parts,
> the question is "Can I construct the elephant?". I don't know.
> But I am trying.
>
> Tim
>
>
> On 6/26/19, Martin Baker <[hidden email]> wrote:
>> Tim,
>>
>> When I read this I am picking up a slightly different flavor than what I
>> got from your first message in the thread.
>>
>> You first message seemed to me to be about "merging computer algebra and
>> proof technology" but this message seems to be about running the two in
>> parallel.
>>
>> I really like the idea of merging the two because Axiom has a steep
>> learning curve and proof assistants have a steep learning curve. If they
>> were merged the leaning curve would still be steep but at least there
>> would only be one.
>>
>> This seems to be more about:
>> * specifying axioms (and other identities?) in the category.
>> * decorating the domain with some denotational semantics stuff.
>> then checking it, at compile time, by running a separate proof assistant
>> program.
>>
>> Is this what you are suggesting.
>>
>> Martin
>>
>> On 26/06/2019 01:16, Tim Daly wrote:
>>> Martin,
>>>
>>> My current "line of attack" on this research is to try to prove the
>>> GCD algorithm in NonNegativeInteger.
>>>
>>> While this seems trivial in proof systems it is expensive to
>>> compute from the inductive definition. While this seems
>>> trivial in computer algebra, the implementation code lacks
>>> proof.
>>>
>>> There are several steps I'm taking. I'm creating a new compiler
>>> that handles Spad code with several new additions. The new
>>> language (and the effort in general) is called "Sane".
>>>
>>> One step is to make sure the new compiler generates code
>>> that runs in Axiom. This is challenging as there is almost no
>>> documentation about Axiom internals so it all has to be
>>> reverse-engineered.
>>>
>>> Next is the addition of "axioms" to the categories, such as
>>> adding axioms for equality to BasicType, where equality is
>>> defined to include reflexive and transitive properties.
>>> (Equality, by the way, is probably THE most controversial
>>> topics in logic, c.f. the univalence axiom in HoTT). These
>>> axioms decorate the category signatures and are inherited.
>>>
>>> Next is the addition of axioms to domains, also decorating
>>> the signatures with axioms.
>>>
>>> Next is the logical specification of properties of the data
>>> type implementation of the domain, called the REP in
>>> Axiom and the "carrier" in logic. For example, think of a
>>> binary tree REP and what properties you can guarantee.
>>>
>>> Next is adding a specification for the operations that
>>> implement the signatures. These are specific to each
>>> function that a domain implements.
>>>
>>> Next is decorating code with pre- and post-conditions
>>> as well as loop invariants and storage invariants.
>>>
>>> Next the collection of all of the above is cast into a form
>>> used by a proof system (currently Lean) that implements
>>> dependent types (Calculus of Inductive Construction).
>>>
>>> Next a proof is constructed. The resulting proof is attached
>>> to the Axiom code. Proof checking is wildly cheaper than
>>> proof construction and the new Sane compiler would
>>> perform proof checking at compile time for each function.
>>>
>>> So if there is a breaking change somewhere in the tower
>>> the proof would fail.
>>>
>>> Challenges along the way, besides reverse-engineering
>>> the Spad compiler, include adding languages for stating
>>> axioms, for stating REP properties, for stating function
>>> specifications, for stating program properties, and for
>>> stating proof certificates. The pieces all exist somewhere
>>> but they are not necessarily compatible, nor well defined.
>>>
>>> Is this all possible to do? Well, of course, as this is all
>>> "just mathematics". Do *I* know how to do this? Well,
>>> of course not, which is what makes this a research effort.
>>>
>>> Ultimately I'm trying to build an instance of merging proof
>>> and computer algebra at a very deep, proven level. Think
>>> of it as a PhD thesis project without the degree incentive :-)
>>>
>>> Tim
>>>
>>>
>>> On 6/25/19, Martin Baker <[hidden email]> wrote:
>>>> On 6/25/19, William Sit <[hidden email]> wrote:
>>>>  > The expression x = x + 0, whether treated as a type or an equation,
>>>>  > can only make sense when x, =, + and 0 are clearly typed and defined.
>>>>  > It makes little sense to me that this, as an equation, can be "proved"
>>>>  > to be valid universally (that is, without the definition of, say, +).
>>>>
>>>> If x is a natural number defined like this in Coq:
>>>>
>>>> Inductive nat : Set := O : nat | S : nat -> nat
>>>>
>>>> then x = x + 0 is not an axiom but is derivable.
>>>> Of course this all depends on the structures and definitions, I didn't
>>>> mean to imply that it is valid universally.
>>>>
>>>> On 25/06/2019 19:28, Tim Daly wrote:
>>>>> The "elegant way" that Martin is questioning is the problem
>>>>> of combining a certain kind of logic operation (refl aka
>>>>> "reflection" where both sides are equal) with the notion of
>>>>> a mathematical unit.
>>>>
>>>> I think that refl (short for "reflexivity" of the = relation) is the
>>>> usual abbreviation for the only constructor of an equality type in
>>>> Martin-Lof type theory.
>>>>
>>>> I get the impression that this theory is very powerful in proof
>>>> assistants and I am questioning if you are proposing to build this into
>>>> Axiom and how?
>>>>
>>>> Martin
>>>>
>>>
>>
>

_______________________________________________
Axiom-developer mailing list
[hidden email]
https://lists.nongnu.org/mailman/listinfo/axiom-developer
Reply | Threaded
Open this post in threaded view
|

Re: [EXTERNAL] Re: Axiom's Sane redesign musings

Tim Daly
One major Sane design decision is the use of CLOS,
the Common Lisp Object System.

First, since each CLOS object is a type it is possible to
create strong types everywhere. This helps with the ultimate
need to typecheck the compiler and the generated code.

Second, CLOS is an integral part of common lisp. One of
the Sane design goals is to make it possible to use Axiom's
domains in ordinary lisp programs. Since Axiom is nothing
more than a domain specific language on common lisp it
makes sense to construct it so users can freely intermix
polynomials with non-algebraic code.
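
To make that concrete, here is a minimal hypothetical sketch (the class
and function names are illustrative, not the actual Sane API) of a
domain exposed as a CLOS class and used from ordinary lisp:

```lisp
;; Hypothetical sketch: an Axiom-style domain as a CLOS class,
;; freely intermixed with ordinary Common Lisp code.
(defclass polynomial ()
  ((coefficients :initarg :coefficients :reader coefficients)))

(defmethod domain-add ((p polynomial) (q polynomial))
  ;; coefficient-wise addition of two dense polynomials
  (make-instance 'polynomial
                 :coefficients (mapcar #'+ (coefficients p)
                                           (coefficients q))))

;; plain lisp code using the domain object directly
(let ((p (make-instance 'polynomial :coefficients '(1 2 3)))
      (q (make-instance 'polynomial :coefficients '(4 5 6))))
  (coefficients (domain-add p q)))    ; => (5 7 9)
```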

Third, CLOS is designed for large program development,
hiding most of the implementation details and exposing
a well-defined API. This will make future maintenance and
documentation of Axiom easier, contributing to its longer
intended life.

So for a traditional Axiom user nothing seems to have
changed. But for future users it will be easy to compute
an integral in the middle of regular programs.

Tim

On 6/27/19, Tim Daly <[hidden email]> wrote:

> Another thought....
>
> There has been a "step change" in computer science in the last few years.
>
> Guy Steele did a survey of the use of logic notation in conference papers.
> More than 50% of the papers in some conferences use logic notation
> (from one of the many logics).
>
> CMU teaches their CS courses all based on and requiring the use of
> logic and the associated notation. My college mathematics covered
> the use of truth tables. The graduate course covered the use of
> Karnaugh maps.
>
> Reading current papers, I have found several papers with multiple
> pages containing nothing but "judgements", pages of greek notation.
> If you think Axiom's learning curve is steep, you should look at
> Homotopy Type Theory (HoTT).
>
> I taught a compiler course at Vassar in the previous century but
> the Dragon book didn't cover CIC in any detail and I would not
> have understood it if it did.
>
> The original Axiom software predates most of the published logic
> papers, at least as they applied to software. Haskell Curry wrote
> from the logic side in 1934 and William Howard published in 1969
> but I only heard of the Curry-Howard correspondence in 1998.
>
> Writing a compiler these days requires the use of this approach.
> In all, that's a good thing as it makes it clear how to handle types
> and how to construct software that is marginally more correct.
>
> The new Sane compiler is building on these logic foundations,
> based on the Calculus of Inductive Construction and Dependent
> Type theory. The compiler itself is strongly typed as is the
> language it supports.
>
> Since dependent types are not decidable there will always be
> heuristics at runtime to try to disambiguate types, only now we
> will have to write the code in greek :-)
>
> Tim
>
>
>
> On 6/26/19, Tim Daly <[hidden email]> wrote:
>> Yes, but even more than that.
>>
>> The goal is to make Axiom a "trusted system" for the whole of
>> computational mathematics.
>>
>> To do this, consider this (not entirely random) collection of systems
>> and facts:
>>
>> 1) Trusted systems are built using the de Bruijn principle. See, for
>> example,
>> https://pdfs.semanticscholar.org/2498/8797cf3e98c3a4a7c3dc775f8c02ce252c33.pdf
>>
>> The basic idea is to have a trusted kernel that is composed of
>> a small number of logic operations known to be correct. All other
>> operations get "compiled" into these trusted operations. This is
>> how the proof systems operate, by choosing a logic such as CIC
>> and then constructing a CIC kernel.
>>
>> 2) Axiom is really nothing more than a "Domain Specific Language"
>> on top of common lisp. For Axiom to be trusted, the lisp must be
>> trusted, which the whole of common lisp is not.
>>
>> 3) We would like a common lisp that is proven "down to the
>> metal". There is a lisp (not common lisp) kernel called Milawa
>> https://link.springer.com/article/10.1007/s10817-015-9324-6
>> that has this property. It starts with a proven kernel and builds
>> up layers (currently 11) towards ACL2, a common lisp theorem
>> prover.
>>
>> 4) There is a theorem prover called Matita which appears to be
>> of interest as another "intermediate step". See
>> http://matita.cs.unibo.it/index.shtml
>>
>> 5) There is also a newly forming Lean prover and I am trying
>> to follow along and get up to speed on it: See:
>> https://leanprover.github.io/
>>
>> 6) There is the mistaken belief that common lisp is a typeless
>> language. However, CLOS has defclass and every defclass
>> constructs a new type. So by careful design it is possible to
>> construct a "fully typed" domain specific language and compiler
>> in common lisp.
>>
>> So what does this mean for a Sane Axiom system?
>>
>> The game is to build the Sane compiler using CLOS so it is
>> strongly typed. The output of the compiler generates lisp code
>> that conforms to some (possibly higher, Axiom-specific) layer
>> of the Matita lisp. So compiler output would be in a provably
>> type-safe subset of lisp.
>>
>> In addition, the Sane compiler is being constructed so that
>> the compiler itself can be proven correct. Everything in the
>> compiler is strongly typed as is its output.
>>
>> Ultimately the goal is a proven Sane compiler that accepts
>> a provable Sane language for computer algebra which generates
>> proven computer algebra code running on a lisp that is
>> proven "down to the metal".
>>
>> The end result is a trusted system for computational mathematics.
>>
>> Like the blind philosophers, I can grab any part of this
>> elephantine system and describe it in great detail. My struggle
>> is to "envision the whole", make it sensible, and then construct
>> it in a provable way.
>>
>> I've been at this for about 7 years now. I took 10 classes at
>> CMU, read several hundred papers (see the Axiom bibliography
>> volume, https://github.com/daly/PDFS/blob/master/bookvolbib.pdf)
>> and spent a few years "coming up to speed on the
>> proof theory" side of computational mathematics. Now I
>> finally have (poor) command of computer algebra, proof
>> theory, and provable programming. Having all the parts,
>> the question is "Can I construct the elephant?". I don't know.
>> But I am trying.
>>
>> Tim
>>
>>
>> On 6/26/19, Martin Baker <[hidden email]> wrote:
>>> Tim,
>>>
>>> When I read this I am picking up a slightly different flavor than what I
>>> got from your first message in the thread.
>>>
>>> Your first message seemed to me to be about "merging computer algebra and
>>> proof technology" but this message seems to be about running the two in
>>> parallel.
>>>
>>> I really like the idea of merging the two because Axiom has a steep
>>> learning curve and proof assistants have a steep learning curve. If they
>>> were merged the learning curve would still be steep but at least there
>>> would only be one.
>>>
>>> This seems to be more about:
>>> * specifying axioms (and other identities?) in the category.
>>> * decorating the domain with some denotational semantics stuff.
>>> then checking it, at compile time, by running a separate proof assistant
>>> program.
>>>
>>> Is this what you are suggesting?
>>>
>>> Martin
>>>
>>> On 26/06/2019 01:16, Tim Daly wrote:
>>>> Martin,
>>>>
>>>> My current "line of attack" on this research is to try to prove the
>>>> GCD algorithm in NonNegativeInteger.
>>>>
>>>> While this seems trivial in proof systems it is expensive to
>>>> compute from the inductive definition. While this seems
>>>> trivial in computer algebra, the implementation code lacks
>>>> proof.
>>>>
>>>> There are several steps I'm taking. I'm creating a new compiler
>>>> that handles Spad code with several new additions. The new
>>>> language (and the effort in general) is called "Sane".
>>>>
>>>> One step is to make sure the new compiler generates code
>>>> that runs in Axiom. This is challenging as there is almost no
>>>> documentation about Axiom internals so it all has to be
>>>> reverse-engineered.
>>>>
>>>> Next is the addition of "axioms" to the categories, such as
>>>> adding axioms for equality to BasicType, where equality is
>>>> defined to include reflexive and transitive properties.
>>>> (Equality, by the way, is probably THE most controversial
>>>> topic in logic, cf. the univalence axiom in HoTT). These
>>>> axioms decorate the category signatures and are inherited.
>>>>
>>>> Next is the addition of axioms to domains, also decorating
>>>> the signatures with axioms.
>>>>
>>>> Next is the logical specification of properties of the data
>>>> type implementation of the domain, called the REP in
>>>> Axiom and the "carrier" in logic. For example, think of a
>>>> binary tree REP and what properties you can guarantee.
>>>>
>>>> Next is adding a specification for the operations that
>>>> implement the signatures. These are specific to each
>>>> function that a domain implements.
>>>>
>>>> Next is decorating code with pre- and post-conditions
>>>> as well as loop invariants and storage invariants.
>>>>
>>>> Next the collection of all of the above is cast into a form
>>>> used by a proof system (currently Lean) that implements
>>>> dependent types (Calculus of Inductive Construction).
>>>>
>>>> Next a proof is constructed. The resulting proof is attached
>>>> to the Axiom code. Proof checking is wildly cheaper than
>>>> proof construction and the new Sane compiler would
>>>> perform proof checking at compile time for each function.
>>>>
>>>> So if there is a breaking change somewhere in the tower
>>>> the proof would fail.
>>>>
>>>> Challenges along the way, besides reverse-engineering
>>>> the Spad compiler, include adding languages for stating
>>>> axioms, for stating REP properties, for stating function
>>>> specifications, for stating program properties, and for
>>>> stating proof certificates. The pieces all exist somewhere
>>>> but they are not necessarily compatible, nor well defined.
>>>>
>>>> Is this all possible to do? Well, of course, as this is all
>>>> "just mathematics". Do *I* know how to do this? Well,
>>>> of course not, which is what makes this a research effort.
>>>>
>>>> Ultimately I'm trying to build an instance of merging proof
>>>> and computer algebra at a very deep, proven level. Think
>>>> of it as a PhD thesis project without the degree incentive :-)
>>>>
>>>> Tim
>>>>
>>>>
>>>> On 6/25/19, Martin Baker <[hidden email]> wrote:
>>>>> On 6/25/19, William Sit <[hidden email]> wrote:
>>>>>  > The expression x = x + 0, whether treated as a type or an equation,
>>>>>  > can only make sense when x, =, + and 0 are clearly typed and defined.
>>>>>  > It makes little sense to me that this, as an equation, can be "proved"
>>>>>  > to be valid universally (that is, without the definition of, say, +).
>>>>>
>>>>> If x is a natural number defined like this in Coq:
>>>>>
>>>>> Inductive nat : Set := O : nat | S : nat -> nat
>>>>>
>>>>> then x = x + 0 is not an axiom but is derivable.
>>>>> Of course this all depends on the structures and definitions, I didn't
>>>>> mean to imply that it is valid universally.
>>>>>
>>>>> On 25/06/2019 19:28, Tim Daly wrote:
>>>>>> The "elegant way" that Martin is questioning is the problem
>>>>>> of combining a certain kind of logic operation (refl aka
>>>>>> "reflection" where both sides are equal) with the notion of
>>>>>> a mathematical unit.
>>>>>
>>>>> I think that refl (short for "reflexivity" of the = relation) is the
>>>>> usual abbreviation for the only constructor of an equality type in
>>>>> Martin-Lof type theory.
>>>>>
>>>>> I get the impression that this theory is very powerful in proof
>>>>> assistants and I am questioning if you are proposing to build this into
>>>>> Axiom and how?
>>>>>
>>>>> Martin
>>>>>
>>>>
>>>
>>
>

Reply | Threaded
Open this post in threaded view
|

Re: [EXTERNAL] Re: Axiom's Sane redesign musings

Martin Baker-3
Tim,

This all seems to be about the lisp layer, obviously that's what
interests you.

It seems to me that if SPAD code is complicated and not aligned to type
theory then, when SPAD is compiled to Lisp, the Lisp code will be
complicated and hard to work with. Your previous remarks, about not
seeing the whole elephant, suggest to me that you are drowning in
complexity. Most languages, in their lifetime, acquire some messiness
that needs to be cleared out occasionally.

Don't you think it's time for SPAD 2.0?

Martin

On 30/06/2019 02:17, Tim Daly wrote:

> One major Sane design decision is the use of CLOS,
> the Common Lisp Object System.
>
> First, since each CLOS object is a type it is possible to
> create strong types everywhere. This helps with the ultimate
> need to typecheck the compiler and the generated code.
>
> Second, CLOS is an integral part of common lisp. One of
> the Sane design goals is to make it possible to use Axiom's
> domains in ordinary lisp programs. Since Axiom is nothing
> more than a domain specific language on common lisp it
> makes sense to construct it so users can freely intermix
> polynomials with non-algebraic code.
>
> Third, CLOS is designed for large program development,
> hiding most of the implementation details and exposing
> a well-defined API. This will make future maintenance and
> documentation of Axiom easier, contributing to its longer
> intended life.
>
> So for a traditional Axiom user nothing seems to have
> changed. But for future users it will be easy to compute
> an integral in the middle of regular programs.
>
> Tim
>
> On 6/27/19, Tim Daly <[hidden email]> wrote:
>> Another thought....
>>
>> There has been a "step change" in computer science in the last few years.
>>
>> Guy Steele did a survey of the use of logic notation in conference papers.
>> More than 50% of the papers in some conferences use logic notation
>> (from one of the many logics).
>>
>> CMU teaches their CS courses all based on and requiring the use of
>> logic and the associated notation. My college mathematics covered
>> the use of truth tables. The graduate course covered the use of
>> Karnaugh maps.
>>
>> Reading current papers, I have found several papers with multiple
>> pages containing nothing but "judgements", pages of greek notation.
>> If you think Axiom's learning curve is steep, you should look at
>> Homotopy Type Theory (HoTT).
>>
>> I taught a compiler course at Vassar in the previous century but
>> the Dragon book didn't cover CIC in any detail and I would not
>> have understood it if it did.
>>
>> The original Axiom software predates most of the published logic
>> papers, at least as they applied to software. Haskell Curry wrote
>> from the logic side in 1934 and William Howard published in 1969
>> but I only heard of the Curry-Howard correspondence in 1998.
>>
>> Writing a compiler these days requires the use of this approach.
>> In all, that's a good thing as it makes it clear how to handle types
>> and how to construct software that is marginally more correct.
>>
>> The new Sane compiler is building on these logic foundations,
>> based on the Calculus of Inductive Construction and Dependent
>> Type theory. The compiler itself is strongly typed as is the
>> language it supports.
>>
>> Since dependent types are not decidable there will always be
>> heuristics at runtime to try to disambiguate types, only now we
>> will have to write the code in greek :-)
>>
>> Tim

Reply | Threaded
Open this post in threaded view
|

Re: [EXTERNAL] Re: Axiom's Sane redesign musings

Tim Daly
There are thousands of hours of expertise and thousands of
functions embedded in Spad code. An important design goal
is to ensure that code continues to function. The Sane compiler
will output code that runs in the interpreter. It is important that
"nothing breaks".

That said, the Sane compiler is "layered". The core of the design
is that Axiom categories, domains, and packages are represented
as lisp classes and instances. This "core" is essentially what the
compiler people call an Abstract Syntax Tree (AST). But in this
case it is much more than syntax.
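
A minimal sketch of that core representation (hypothetical names; not
the actual Sane class layout) might look like:

```lisp
;; Hypothetical sketch: categories as CLOS classes carrying their
;; signatures; a domain inherits the signatures of its category.
(defclass category ()
  ((signatures :initarg :signatures :reader signatures
               :documentation "Exported operation signatures.")))

(defclass basic-type (category) ()
  (:default-initargs :signatures '((= (% %) boolean))))

(defclass integer-domain (basic-type)
  ((rep :initarg :rep :reader rep
        :documentation "The REP (carrier) of the domain.")))

;; the domain instance inherits its category's signature list
(signatures (make-instance 'integer-domain :rep 'fixnum))
;; => ((= (% %) BOOLEAN))
```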

Given this "core" there are several tasks.

1) compile spad code to the core. The "front end" should
accept and understand current Spad code, unwinding it
into the core class structure.

2) compile core classes to Axiom data structures. This
"back end" generates current Axiom data structures from
the core data structures.

Now the compiler generates working code yet is in a state
to accept enhancements, essentially by extending the core
class objects.

3) In the interpreter, modify the getdatabase function to
extract data from the core rather than the databases. So
the databases go away but the interpreter continues to work.

So now the interpreter has been "lifted" onto the core classes
but continues to function.

4) enhance the core to support the axioms / specifications /
proofs /etc. This involves adding fields to the lisp classes.
This core work gives additional fields to hold information.

5) extend the Spad language (Spad 2.0?) to handle the
additional axioms / specifications / proofs / etc. This
involves adding syntax to the Spad language to support
the new fields.

6) build back-end targets to proof systems, spec systems,
etc. Compilers like GCC have multiple back ends. The Sane
compiler targets the interpreter, a specification checker, a
proof system, etc. as separate back ends, all from the same
core structures.

The Compiler Design

A separate but parallel design goal is to build the compiler so
it can be type-checked. Each function has typed input and
typed output and is, for the most part, purely functional. So,
for example, a "Filename" is an instance of a "File" object.
A "Line" is an instance of "String". The "FileReader" is

FileReader : Filename -> List Line

Careful design of the language used to construct the compiler
(as opposed to the Spad language it accepts) makes it easier
to type check the compiler itself.
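
Sketched in CLOS terms (hypothetical code, with the signature declared
via declaim so it can be checked), the FileReader example might be:

```lisp
;; Hypothetical sketch: a compiler function with a declared,
;; checkable signature, FileReader : Filename -> List Line.
(deftype filename () 'string)
(deftype line () 'string)

(declaim (ftype (function (filename) list) file-reader))
(defun file-reader (name)
  "Read NAME and return its contents as a list of Line strings."
  (with-open-file (in name)
    (loop for l = (read-line in nil)
          while l
          collect l)))
```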

By REALLY careful design, the types are built on a layered
subset of lisp, like Milawa
https://www.cl.cam.ac.uk/~mom22/soundness.pdf
which is sound all the way down to the metal.

It all goes in stages. Build the new core class structure in a
strongly typed fashion. Accept the Spad language. Generate
Interpreter code. Enhance the core to support proofs / specs.
Enhance the language to support proofs / specs. Accept the
new language. Generate back ends to target the interpreter
and a proof / spec system. Build it all on a sound base so
the compiler can be checked.

To the initial end user, the Sane version is the same as the
current system. But in the long term all of the Axiom code
could be called from any Lisp function. The Sane version
can also be used as an "Oracle" for proof systems, since
the code has been proven correct.

This is a huge project but it can be done in small steps.
In particular, the goal is to build a "thin thread" all the way
through the system to handle only the GCD algorithms.
Once that proof happens "the real work begins".
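
For a sense of what the GCD thin thread would ultimately certify, here
is the flavor of statement in Lean (Lean 4 syntax; Nat.gcd_zero_right
and Nat.gcd_self are assumed from Lean's standard library):

```lean
-- The kind of checked facts a proven GCD implementation would carry.
example (n : Nat) : Nat.gcd n 0 = n := Nat.gcd_zero_right n
example (n : Nat) : Nat.gcd n n = n := Nat.gcd_self n
```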

Tim




On 6/30/19, Martin Baker <[hidden email]> wrote:

> Tim,
>
> This all seems to be about the lisp layer, obviously that's what
> interests you.
>
> It seems to me that if SPAD code is complicated and not aligned to type
> theory then, when SPAD is compiled to Lisp, the Lisp code will be
> complicated and hard to work with. Your previous remarks, about not
> seeing the whole elephant, suggest to me that you are drowning in
> complexity. Most languages, in their lifetime, acquire some messiness
> that needs to be cleared out occasionally.
>
> Don't you think it's time for SPAD 2.0?
>
> Martin
>
> On 30/06/2019 02:17, Tim Daly wrote:
>> One major Sane design decision is the use of CLOS,
>> the Common Lisp Object System.
>>
>> First, since each CLOS object is a type it is possible to
>> create strong types everywhere. This helps with the ultimate
>> need to typecheck the compiler and the generated code.
>>
>> Second, CLOS is an integral part of common lisp. One of
>> the Sane design goals is to make it possible to use Axiom's
>> domains in ordinary lisp programs. Since Axiom is nothing
>> more than a domain specific language on common lisp it
>> makes sense to construct it so users can freely intermix
>> polynomials with non-algebraic code.
>>
>> Third, CLOS is designed for large program development,
>> hiding most of the implementation details and exposing
>> a well-defined API. This will make future maintenance and
>> documentation of Axiom easier, contributing to its longer
>> intended life.
>>
>> So for a traditional Axiom user nothing seems to have
>> changed. But for future users it will be easy to compute
>> an integral in the middle of regular programs.
>>
>> Tim
>>
>> On 6/27/19, Tim Daly <[hidden email]> wrote:
>>> Another thought....
>>>
>>> There has been a "step change" in computer science in the last few
>>> years.
>>>
>>> Guy Steele did a survey of the use of logic notation in conference
>>> papers.
>>> More than 50% of the papers in some conferences use logic notation
>>> (from one of the many logics).
>>>
>>> CMU teaches their CS courses all based on and requiring the use of
>>> logic and the associated notation. My college mathematics covered
>>> the use of truth tables. The graduate course covered the use of
>>> Karnaugh maps.
>>>
>>> Reading current papers, I have found several papers with multiple
>>> pages containing nothing but "judgements", pages of greek notation.
>>> If you think Axiom's learning curve is steep, you should look at
>>> Homotopy Type Theory (HoTT).
>>>
>>> I taught a compiler course at Vassar in the previous century but
>>> the Dragon book didn't cover CIC in any detail and I would not
>>> have understood it if it did.
>>>
>>> The original Axiom software predates most of the published logic
>>> papers, at least as they applied to software. Haskell Curry wrote
>>> from the logic side in 1934 and William Howard published in 1969
>>> but I only heard of the Curry-Howard correspondence in 1998.
>>>
>>> Writing a compiler these days requires the use of this approach.
>>> In all, that's a good thing as it makes it clear how to handle types
>>> and how to construct software that is marginally more correct.
>>>
>>> The new Sane compiler is building on these logic foundations,
>>> based on the Calculus of Inductive Construction and Dependent
>>> Type theory. The compiler itself is strongly typed as is the
>>> language it supports.
>>>
>>> Since dependent types are not decidable there will always be
>>> heuristics at runtime to try to disambiguate types, only now we
>>> will have to write the code in greek :-)
>>>
>>> Tim
>

Reply | Threaded
Open this post in threaded view
|

Re: [EXTERNAL] Re: Axiom's Sane redesign musings

Clifford Yapp
On Sun, Jun 30, 2019 at 11:40 AM Tim Daly <[hidden email]> wrote:

> By REALLY careful design, the types are built on a layered
> subset of lisp, like Milawa
> https://www.cl.cam.ac.uk/~mom22/soundness.pdf
> which is sound all the way down to the metal.
Milawa and Jitawa... Wow.  That is a really interesting development.

Tim, I'm nowhere near current on recent developments so apologies if I'm not interpreting this correctly, but will the Sane core be able to identify any existing bugs in the Spad code?  I.e., will any successful compilation of existing Spad code on the Sane stack indicate verified correctness of the Spad algorithm?  Or (almost as useful for applied problems) will it instead mean that for any particular application of a Sane-compiled Spad algorithm to a specific numerical problem, the system will be able to generate a specific proof for the correctness of that answer that is in turn tractably verifiable by simple proof checkers?

(Either way, what I'm hoping is that whether or not proof techniques are powerful enough to verify general properties of symbolic expressions, this approach will allow an applied user (say a physicist) to at the end of the day be very confident that the computer-algebra-system-supplied answer for any specific problem is correct...)

Cheers,
CY

Reply | Threaded
Open this post in threaded view
|

Fwd: [EXTERNAL] Re: Axiom's Sane redesign musings

Tim Daly
In reply to this post by Tim Daly
---------- Forwarded message ----------
From: Tim Daly <[hidden email]>
Date: Tue, 9 Jul 2019 22:56:24 -0400
Subject: Re: [EXTERNAL] Re: [Axiom-developer] Axiom's Sane redesign musings
To: Martin Baker <[hidden email]>

Progress is being made. Axiom was written long before Common
Lisp existed. A lot of the code is due to translations from Maclisp
and LispVM.

I'm moving the Axiom hierarchy to Common Lisp CLOS code.
This has several advantages.

It eliminates the databases. They were created because there
was not enough memory (at 16 megabytes).

It uses native code to do dispatch.

CLOS defclass creates Common Lisp types so all of the code
is well-typed at the lisp level.

CLOS is lisp so ordinary lisp code can use Axiom functions
directly. This makes Axiom into an "embeddable library".

CLOS is the target language for the Axiom compiler. The
compiler is being re-architected to use nano passes [0] for
compiling. This will make it much easier to maintain in
the long term and much easier to extend.

This technique allows building the compiler in stages from
running code to running code at every step.
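
As a toy illustration of the nano-pass idea (hypothetical; not one of
the actual Sane passes), each pass is a small transformation from one
well-defined intermediate form to the next:

```lisp
;; Hypothetical toy nano pass: fold (+ n m) when both arguments
;; are numeric literals, leaving everything else untouched.
(defun pass-constant-fold (expr)
  (if (consp expr)
      (let ((e (mapcar #'pass-constant-fold expr)))
        (if (and (eq (first e) '+)
                 (numberp (second e))
                 (numberp (third e)))
            (+ (second e) (third e))
            e))
      expr))

(pass-constant-fold '(* x (+ 1 2)))   ; => (* X 3)
```

A real nano-pass compiler chains dozens of such passes, each small
enough to audit (and eventually prove) on its own.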

The new interpreter is much simpler as it just runs the CLOS
code directly.

The choice of logic and specification languages is still unclear.
These will also be created first at the lisp level and then covered
by the compiler with additional nano-passes.

Anyway, coding has begun.

Tim

[0]  Sarkar, Dipanwita and Waddell, Oscar and Dybvig, R. Kent
"A Nanopass Framework for Compiler Education" (2004)


On 6/30/19, Tim Daly <[hidden email]> wrote:

> There are thousands of hours of expertise and thousands of
> functions embedded in Spad code. An important design goal
> is to ensure that code continues to function. The Sane compiler
> will output code that runs in the interpreter. It is important that
> "nothing breaks".
>
> That said, the Sane compiler is "layered". The core of the design
> is that Axiom categories, domains, and packages are represented
> as lisp classes and instances. This "core" is essentially what the
> compiler people call an Abstract Syntax Tree (AST). But in this
> case it is much more than syntax.
>
> Given this "core" there are several tasks.
>
> 1) compile spad code to the core. The "front end" should
> accept and understand current Spad code, unwinding it
> into the core class structure.
>
> 2) compile core classes to Axiom data structures. This
> "back end" generates current Axiom data structures from
> the core data structures.
>
> Now the compiler generates working code yet is in a state
> to accept enhancements, essentially by extending the core
> class objects.
>
> 3) In the interpreter, modify the getdatabase function to
> extract data from the core rather than the databases. So
> the databases go away but the interpreter continues to work.
>
> So now the interpreter has been "lifted" onto the core classes
> but continues to function.
>
> 4) enhance the core to support the axioms / specifications /
> proofs /etc. This involves adding fields to the lisp classes.
> This core work gives additional fields to hold information.
>
> 5) extend the Spad language (Spad 2.0?) to handle the
> additional axioms / specifications / proofs / etc. This
> involves adding syntax to the Spad language to support
> the new fields.
>
> 6) build back-end targets to proof systems, spec systems,
> etc. Compilers like GCC have multiple back ends. The Sane
> compiler targets the interpreter, a specification checker, a
> proof system, etc. as separate back ends, all from the same
> core structures.
>
> The Compiler Design
>
> A separate but parallel design goal is to build the compiler so
> it can be type-checked. Each function has typed input and
> typed output and is, for the most part, purely functional. So,
> for example, a "Filename" is an instance of a "File" object.
> A "Line" is an instance of "String". The "FileReader" is
>
> FileReader : Filename -> List Line
>
> Careful design of the language used to construct the compiler
> (as opposed to the Spad language it accepts) makes it easier
> to type check the compiler itself.
>
> By REALLY careful design, the types are built on a layered
> subset of lisp, like Milawa
> https://www.cl.cam.ac.uk/~mom22/soundness.pdf
> which is sound all the way down to the metal.
>
> It all goes in stages. Build the new core class structure in a
> strongly typed fashion. Accept the Spad language. Generate
> Interpreter code. Enhance the core to support proofs / specs.
> Enhance the language to support proofs / specs. Accept the
> new language. Generate back ends to target the interpreter
> and a proof / spec system. Build it all on a sound base so
> the compiler can be checked.
>
> To the initial end user, the Sane version is the same as the
> current system. But in the long term all of the Axiom code
> could be called from any Lisp function. The Sane version
> can also be used as an "Oracle" for proof systems, since
> the code has been proven correct.
>
> This is a huge project but it can be done in small steps.
> In particular, the goal is to build a "thin thread" all the way
> through the system to handle only the GCD algorithms.
> Once that proof happens "the real work begins".
>
> Tim
>
>
>
>
> On 6/30/19, Martin Baker <[hidden email]> wrote:
>> Tim,
>>
>> This all seems to be about the lisp layer, obviously that's what
>> interests you.
>>
>> It seems to me that if SPAD code is complicated and not aligned to type
>> theory then, when SPAD is compiled to Lisp, the Lisp code will be
>> complicated and hard to work with. Your previous remarks, about not
>> seeing the whole elephant, suggest to me that you are drowning in
>> complexity. Most languages, in their lifetime, acquire some messiness
>> that needs to be cleared out occasionally.
>>
>> Don't you think it's time for SPAD 2.0 ?
>>
>> Martin
>>
>> On 30/06/2019 02:17, Tim Daly wrote:
>>> One major Sane design decision is the use of CLOS,
>>> the Common Lisp Object System.
>>>
>>> First, since each CLOS object is a type it is possible to
>>> create strong types everywhere. This helps with the ultimate
>>> need to typecheck the compiler and the generated code.
>>>
>>> Second, CLOS is an integral part of common lisp. One of
>>> the Sane design goals is to make it possible to use Axiom's
>>> domains in ordinary lisp programs. Since Axiom is nothing
>>> more than a domain specific language on common lisp it
>>> makes sense to construct it so users can freely intermix
>>> polynomials with non-algebraic code.
>>>
>>> Third, CLOS is designed for large program development,
>>> hiding most of the implementation details and exposing
>>> a well-defined API. This will make future maintenance and
>>> documentation of Axiom easier, contributing to its longer
>>> intended life.
>>>
>>> So for a traditional Axiom user nothing seems to have
>>> changed. But for future users it will be easy to compute
>>> an integral in the middle of regular programs.
>>>
>>> Tim
>>>
>>> On 6/27/19, Tim Daly <[hidden email]> wrote:
>>>> Another thought....
>>>>
>>>> There has been a "step change" in computer science in the last few
>>>> years.
>>>>
>>>> Guy Steele did a survey of the use of logic notation in conference
>>>> papers.
>>>> More than 50% of the papers in some conferences use logic notation
>>>> (from one of the many logics).
>>>>
>>>> CMU teaches their CS courses all based on and requiring the use of
>>>> logic and the associated notation. My college mathematics covered
>>>> the use of truth tables. The graduate course covered the use of
>>>> Karnaugh maps.
>>>>
>>>> Reading current papers, I have found several papers with multiple
>>>> pages containing nothing but "judgements", pages of greek notation.
>>>> If you think Axiom's learning curve is steep, you should look at
>>>> Homotopy Type Theory (HoTT).
>>>>
>>>> I taught a compiler course at Vassar in the previous century but
>>>> the Dragon book didn't cover CIC in any detail and I would not
>>>> have understood it if it did.
>>>>
>>>> The original Axiom software predates most of the published logic
>>>> papers, at least as they applied to software. Haskell Curry wrote
>>>> from the logic side in 1934 and William Howard published in 1969
>>>> but I only heard of the Curry-Howard correspondence in 1998.
>>>>
>>>> Writing a compiler these days requires the use of this approach.
>>>> In all, that's a good thing as it makes it clear how to handle types
>>>> and how to construct software that is marginally more correct.
>>>>
>>>> The new Sane compiler is building on these logic foundations,
>>>> based on the Calculus of Inductive Construction and Dependent
>>>> Type theory. The compiler itself is strongly typed as is the
>>>> language it supports.
>>>>
>>>> Since dependent types are not decidable there will always be
>>>> heuristics at runtime to try to disambiguate types, only now we
>>>> will have to write the code in greek :-)
>>>>
>>>> Tim
>>
>

_______________________________________________
Axiom-developer mailing list
[hidden email]
https://lists.nongnu.org/mailman/listinfo/axiom-developer

Re: [EXTERNAL] Re: Axiom's Sane redesign musings

Tim Daly
Temptation...

I would like to remain faithful to Axiom's syntax for signatures

foo: (%,%) -> %

but the world seems to have converged on

foo: % -> % -> %

This shows up everywhere in logic, in haskell, etc.
It allows for a primitive kind of currying, the "right curry"
(Some forms of Scheme allow currying anywhere in the
argument list, including multiple currying.)
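A minimal Common Lisp sketch of what the "right curry" reading of a signature like foo: % -> % -> % means operationally (the names and the use of numbers in place of % are invented for illustration):

```lisp
;; Illustration only: a two-argument operation written as nested
;; one-argument closures, so applying one argument yields a function.
(defun foo (a)
  "Curried by hand: the signature reads foo : % -> % -> %."
  (lambda (b) (* a b)))

;; Partial application falls out for free: supply the first
;; argument now, the second later.
(defvar *double* (foo 2))

;; (funcall *double* 21) => 42
```

This is the "primitive kind of currying" the curried syntax buys: any prefix of the argument list can be fixed without extra machinery.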

I need logic-style signatures for the axioms, so the
temptation is to rewrite the Axiom signatures using the
logic notation.

I can do both, but it might be clearer to follow the world
rather than our own convention, given the user base.

Tim


On 7/9/19, Tim Daly <[hidden email]> wrote:

> ---------- Forwarded message ----------
> From: Tim Daly <[hidden email]>
> Date: Tue, 9 Jul 2019 22:56:24 -0400
> Subject: Re: [EXTERNAL] Re: [Axiom-developer] Axiom's Sane redesign musings
> To: Martin Baker <[hidden email]>
>
> Progress is being made. Axiom was written long before Common
> Lisp existed. A lot of the code is due to translations from Maclisp
> and LispVM.
>
> I'm moving the Axiom hierarchy to Common Lisp CLOS code.
> This has several advantages.
>
> It eliminates the databases. They were created because there
> was not enough memory (at 16 megabytes).
>
> It uses native code to do dispatch.
>
> CLOS defclass creates Common Lisp types so all of the code
> is well-typed at the lisp level.
>
> CLOS is lisp so ordinary lisp code can use Axiom functions
> directly. This makes Axiom into an "embeddable library".
>
> CLOS is the target language for the Axiom compiler. The
> compiler is being re-architected to use nano passes [0] for
> compiling. This will make it much easier to maintain in
> the long term and much easier to extend.
>
> This technique allows building the compiler in stages from
> running code to running code at every step.
>
> The new interpreter is much simpler as it just runs the CLOS
> code directly.
>
> The choice of logic and specification languages is still unclear.
> These will also be created first at the lisp level and then covered
> by the compiler with additional nano-passes.
>
> Anyway, coding has begun.
>
> Tim
>
> [0]  Sarkar, Dipanwita and Waddell, Oscar and Dybvig, R. Kent
> "A Nanopass Framework for Compiler Education" (2004)
>
>
> On 6/30/19, Tim Daly <[hidden email]> wrote:
>> There are thousands of hours of expertise and thousands of
>> functions embedded in Spad code. An important design goal
>> is to ensure that code continues to function. The Sane compiler
>> will output code that runs in the interpreter. It is important that
>> "nothing breaks".


Re: [EXTERNAL] Re: Axiom's Sane redesign musings

William Sit-3
Dear Tim:

According to the Wikipedia article on currying (https://en.wikipedia.org/wiki/Currying, paragraph on lambda calculi):

"Curried functions may be used in any programming language that supports closures; however, uncurried functions are generally preferred for efficiency reasons, since the overhead of partial application and closure creation can then be avoided for most function calls."
See also the paragraph on "Contrast with partial function application."

Moreover, under the paragraph on "Category theory,"

"Currying can break down in one of two ways. One is if a category is not closed, and thus lacks an internal hom functor (possibly because there is more than one choice for such a functor). Another way is if it is not monoidal, and thus lacks a product (that is, lacks a way of writing down pairs of objects). Categories that do have both products and internal homs are exactly the closed monoidal categories."

The rest of that paragraph is also worth reading; its examples show perhaps why the logic notation is preferred, because closed monoidal categories are very common. In particular:

"By contrast, the product for monoidal categories (such as Hilbert space and the vector spaces of functional analysis) is the tensor product. The internal language of such categories is linear logic, a form of quantum logic; the corresponding type system is the linear type system. Such categories are suitable for describing entangled quantum states, and, more generally, allow a vast generalization of the Curry–Howard correspondence to quantum mechanics, to cobordisms in algebraic topology, and to string theory.[1] The linear type system, and linear logic are useful for describing synchronization primitives, such as mutual exclusion locks, and the operation of vending machines."

Even if the goal is to have compatibility with other systems that use currying or the logic convention, it may not make sense to change the Axiom signature syntax for multi-input functions (which requires changes to the compiler and other components). Perhaps a compromise would be to create a bidirectional translation Axiom package to facilitate the interfaces (that is, an implementation of currying and uncurrying in Axiom to the targeted CAS using the currying convention). This should not be too hard in Lisp (for you :-)). This has the advantage that if the computation is from a currying system but to be done in Axiom, there is only one overhead call (uncurrying) for input, and perhaps another for output (currying, if the output is a function type); and if the computation is from Axiom but to be done in a currying system, there is also just one overhead call (currying) for input (here, I assume the currying system has optimized its implementation), and perhaps another (uncurrying, if the output is a function).

There is no need to reinvent the wheel for either system.
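A rough sketch of those two adapter calls in Lisp (hypothetical code, not an existing Axiom package; the explicit arity argument is an assumption of this sketch):

```lisp
;; Sketch of a bidirectional translation layer: one call on the
;; way into a currying system, one call on the way out.
(defun curry (f arity)
  "Wrap the ARITY-argument function F as nested one-argument closures."
  (labels ((absorb (acc n)
             (if (zerop n)
                 (apply f (reverse acc))          ; all arguments seen
                 (lambda (x) (absorb (cons x acc) (1- n))))))
    (absorb '() arity)))

(defun uncurry (f)
  "Inverse direction: accept all the arguments at once and feed
them to the curried F one at a time."
  (lambda (&rest args)
    (let ((g f))
      (dolist (x args g)
        (setf g (funcall g x))))))

;; Round trip:
;; (funcall (funcall (funcall (curry #'+ 3) 1) 2) 3) => 6
;; (funcall (uncurry (curry #'+ 3)) 1 2 3)           => 6
```

So each direction costs exactly one wrapper, as described above.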

William

William Sit
Professor Emeritus
Department of Mathematics
The City College of The City University of New York
New York, NY 10031
homepage: wsit.ccny.cuny.edu

________________________________________
From: Axiom-developer <axiom-developer-bounces+wyscc=[hidden email]> on behalf of Tim Daly <[hidden email]>
Sent: Friday, July 12, 2019 5:11 AM
To: axiom-dev
Subject: Re: [Axiom-developer] [EXTERNAL] Re: Axiom's Sane redesign musings



Re: [EXTERNAL] Re: Axiom's Sane redesign musings

Tim Daly
>"Curried functions may be used in any programming language that supports closures;
>however, uncurried functions are generally preferred for efficiency reasons, since the
>overhead of partial application and closure creation can then be avoided for most function
>calls." See also the paragraph on "Contrast with partial function application."

Overhead of creating a curried lisp function is almost nothing.
Lisp handles closures and function creation all the time.
A macro to create a curried lisp function is a small
classroom exercise.
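That classroom exercise might look something like this (a sketch, not Sane code; the macro name is invented):

```lisp
;; Sketch: expand a multi-argument definition into a chain of
;; one-argument closures, one per argument.
(defmacro defcurried (name (&rest args) &body body)
  (let ((inner `(progn ,@body)))
    ;; Wrap from the innermost argument outward.
    (dolist (a (reverse args))
      (setf inner `(lambda (,a) ,inner)))
    `(defparameter ,name ,inner)))

;; Expands to (defparameter cfoo (lambda (a) (lambda (b) (+ a b))))
(defcurried cfoo (a b) (+ a b))

;; (funcall (funcall cfoo 2) 3) => 5
```

The whole cost at call time is one closure invocation per argument, which supports the point: the overhead is negligible in Lisp.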

>Moreover, under the paragraph on "Category theory,"

>"Currying can break down in one of two ways. One is if a category is not closed, and
>thus lacks an internal hom functor (possibly because there is more than one choice
>for such a functor). Another way is if it is not monoidal, and thus lacks a product
>(that is, lacks a way of writing down pairs of objects). Categories that do have both
>products and internal homs are exactly the closed monoidal categories."

Indeed. I suspect, though I don't know, that most of the Axiom functions based
on group theory are closed monoidal. The same is likely true for the data
structure manipulating code.

>The rest of that paragraph is also worth reading and by examples, it shows
>perhaps why the logic notation is preferred because closed monoidal categories
>are very common.

>"By contrast, the product for monoidal categories (such as Hilbert space and the
>vector spaces of functional analysis) is the tensor product. The internal language
>of such categories is linear logic, a form of quantum logic; the corresponding type
>system is the linear type system.

Frank Pfenning http://www.cs.cmu.edu/~fp/
taught me everything I know about linear logic. He has pointed
me at papers and talks (especially his Oregon lectures,
https://www.cs.uoregon.edu/research/summerschool/summer18/topics.php
as well as Stephanie Balzer's lectures). I admit I do not yet have a handle
on where I can use linear logic. It applies really well in communication but
I'm not sure how to apply it in the Axiom context, except in the context of
parallel computation. However, I'm still climbing the (near-vertically) steep
hill of logic and proofs. The amount I don't know currently fills three
shelves of my bookcase.

>Even if the goal is to have compatibility with other systems that uses currying
>or logic convention, it may not make sense to change the Axiom signature
>syntax for multi-input functions (which requires changes to the compiler and
>other components).

I am rewriting the compiler / interpreter in Common Lisp CLOS. CLOS can
handle most of Axiom's inheritance automatically. I have most of the categories
built so far. For non-standard lookups it provides an override.
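As a rough sketch of the idea (class, slot, and generic-function names
here are illustrative, not the actual Sane code), an Axiom category
chain maps naturally onto defclass, with dispatch handled by CLOS:

```lisp
;; Illustrative only: a SetCategory <- Monoid <- Group chain expressed
;; as CLOS classes; inheritance and lookup are native CLOS.
(defclass set-category () ())
(defclass monoid (set-category) ())
(defclass group (monoid) ())

(defgeneric op (domain x y)
  (:documentation "The binary operation of a monoid domain."))

;; A concrete domain is just an instance of its class.
(defclass integer-add-group (group) ())
(defmethod op ((d integer-add-group) x y) (+ x y))

;; (op (make-instance 'integer-add-group) 2 3) => 5
```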

Since the new (Sane) compiler is going to handle logic syntax for axioms,
theorems, and specifications, it seems reasonable to use syntax and semantics
from the existing proof systems (Coq, Lean, etc.). I will have to export proofs
to those systems, both because they need to trust Axiom (I hope Axiom can be
the "Oracle" for proof systems doing computation) and because I'm not
smart enough to do otherwise.

>Perhaps a compromise would be to create a bidirectional
>translation Axiom package to facilitate the interfaces (that is, an implementation
>of currying and uncurrying in Axiom to the targeted CAS using the currying
>convention). This should not be too hard in Lisp (for you :-)). This has the
>advantage that if the computation is from a currying system but to be done
>in Axiom, there is only one overhead call (uncurrying) for input, and perhaps
>another for output (currying, if the output is a function type); and if the
>computation is from Axiom but to be done in a currying system, there is
>also just one overhead call (currying) for input (here, I assume the currying
>system has optimized its implementation), and perhaps another (decurrying,
>if the output is a function).

Currying isn't really that much of a motivation for adopting logic syntax.

The main reason is that most people in computer science these days are
generating papers using rules and judgements (witness the talk given at
CCNY by Andre Platzer combining logic and differential equations).
https://www.youtube.com/watch?v=4QDeZnw7_m0

Guy Steele gave a talk where he tracked the trend in the use of logical
notation in published research papers. It is now well above 50 percent.
https://www.youtube.com/watch?v=dCuZkaaou0Q

I could easily accept both Axiom's input forms and Logic input forms
but the world seems to use the logic forms, even in code (e.g. Haskell).
Given that about a dozen people use the Axiom notation, why would it
make sense to use it for Spad and a different notation for the axioms?

Axiom's syntax is really showing its age.

The more interesting (and not in a good way) change is my current
struggle over using unicode in the input/output syntax. Should the 'sum'
function be 'sum' or '\sum' (i.e. a sigma)? Is subscript I/O unicode?
Can you write an integral sign in a signature as a function name?
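For what it's worth, Common Lisp itself already has no trouble with
unicode in symbol names, so the question is purely one of surface
syntax. A hypothetical sketch (this is not proposed Sane syntax):

```lisp
;; Common Lisp symbols may contain arbitrary characters, so a sigma
;; can name a function directly (illustrative only).
(defun ∑ (lst) (reduce #'+ lst :initial-value 0))

;; (∑ '(1 2 3 4)) => 10
```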

The unicode issue is further highlighted by the fact that X11 support
is going into "maintenance mode". That means I have to move the
documentation and graphics to a browser or Jupyter front end. I have
previously worked out the Axiom browser notebook (Volume 11) but
not the graphics.
https://en.wikipedia.org/wiki/Axiom_(computer_algebra_system)

More interesting is the issue of moving from the 1960s CHARYBDIS
I/O to in-browser LaTeX I/O. The output equations are much prettier.
And the current trend for input seems to be a form of in-browser
literate programs, witness:
https://leanprover.github.io/tutorial/
and
https://x80.org/rhino-coq/

>There is no need to reinvent the wheel for either system

Axiom is an amazing collection of algorithms but it is not keeping
up with current trends. Nobody wants to work in a system that will
die of old age so we need to adapt and grow. "More of the same"
is not enough. By combining computer algebra and proof systems
with a more current front end and better literate documentation
we are breaking new ground, which should keep us relevant for at
least a few weeks at the current world speed.

I thought "The 30 Year Horizon" was further away...

Tim







On 7/12/19, William Sit <[hidden email]> wrote:

> Dear Tim:
>
> According to Wiki on currying (https://en.wikipedia.org/wiki/Currying,
> paragraph on lambda calculi):
>
> "Curried functions may be used in any programming language that supports
> closures; however, uncurried functions are generally preferred for
> efficiency reasons, since the overhead of partial application and closure
> creation can then be avoided for most function calls."
> See also the paragraph on "Contrast with partial function application."
>
> Moreover, under the paragraph on "Category theory,"
>
> "Currying can break down in one of two ways. One is if a category is not
> closed, and thus lacks an internal hom functor (possibly because there is
> more than one choice for such a functor). Another ways is if it is not
> monoidal, and thus lacks a product (that is, lacks a way of writing down
> pairs of objects). Categories that do have both products and internal homs
> are exactly the closed monoidal categories."
>
> The rest of that paragraph is also worth reading and by examples, it shows
> perhaps why the logic notation is preferred because closed monoidal
> categories are very common. In particular:
>
> "By contrast, the product for monoidal categories (such as Hilbert space and
> the vector spaces of functional analysis) is the tensor product. The
> internal language of such categories is linear logic, a form of quantum
> logic; the corresponding type system is the linear type system. Such
> categories are suitable for describing entangled quantum states, and, more
> generally, allow a vast generalization of the Curry–Howard correspondence to
> quantum mechanics, to cobordisms in algebraic topology, and to string
> theory.[1] The linear type system, and linear logic are useful for
> describing synchronization primitives, such as mutual exclusion locks, and
> the operation of vending machines."
>
> Even if the goal is to have compatibility with other systems that uses
> currying or logic convention, it may not make sense to change the Axiom
> signature syntax for multi-input functions (which requires changes to the
> compiler and other components). Perhaps a compromise would be to create a
> bidirectional translation Axiom package to facilitate the interfaces (that
> is, an implementation of currying and uncurrying in Axiom to the targeted
> CAS using the currying convention). This should not be too hard in Lisp (for
> you :-)). This has the advantage that if the computation is from a currying
> system but to be done in Axiom, there is only one overhead call (uncurrying)
> for input, and perhaps another for output (currying, if the output is a
> function type); and if the computation is from Axiom but to be done in a
> currying system, there is also just one overhead call (currying) for input
> (here, I assume the currying system has optimized its implementation), and
> perhaps another (decurrying, if the output is a function).
>
> There is no need to reinvent the wheel for either system.
>
> William
>
> William Sit
> Professor Emeritus
> Department of Mathematics
> The City College of The City University of New York
> New York, NY 10031
> homepage: wsit.ccny.cuny.edu
>
> ________________________________________
> From: Axiom-developer
> <axiom-developer-bounces+wyscc=[hidden email]> on behalf of
> Tim Daly <[hidden email]>
> Sent: Friday, July 12, 2019 5:11 AM
> To: axiom-dev
> Subject: Re: [Axiom-developer] [EXTERNAL] Re: Axiom's Sane redesign musings
>
> Temptation...
>
> I would like to remain faithful to Axiom's syntax for signatures
>
> foo: (%,%) -> %
>
> but the world seems to have converged on
>
> foo: % -> % -> %
>
> This shows up everywhere in logic, in haskell, etc.
> It allows for a primitive kind of currying, the "right curry"
> (Some form of scheme allows currying anywhere in the
> argument list, including multiple currying)
>
> I need logic-style signatures for the axioms so the
> temptation is to rewrite the Axiom signatures using the
> logic notation.
>
> I can do both but it might be clearer to follow the world
> rather than follow our own, given the user base.
>
> Tim
>
>
> On 7/9/19, Tim Daly <[hidden email]> wrote:
>> ---------- Forwarded message ----------
>> From: Tim Daly <[hidden email]>
>> Date: Tue, 9 Jul 2019 22:56:24 -0400
>> Subject: Re: [EXTERNAL] Re: [Axiom-developer] Axiom's Sane redesign
>> musings
>> To: Martin Baker <[hidden email]>
>>
>> Progress is being made. Axiom was written long before Common
>> Lisp existed. A lot of the code is due to translations from Maclisp
>> and LispVM.
>>
>> I'm moving the Axiom hierarchy to Common Lisp CLOS code.
>> This has several advantages.
>>
>> It eliminates the databases. They were created because there
>> was not enough memory (at 16 megabytes).
>>
>> It uses native code to do dispatch.
>>
>> CLOS defclass creates Common Lisp types so all of the code
>> is well-typed at the lisp level.
>>
>> CLOS is lisp so ordinary lisp code can use Axiom functions
>> directly. This makes Axiom into an "embeddable library".
>>
>> CLOS is the target language for the Axiom compiler. The
>> compiler is being re-architected to use nano passes [0] for
>> compiling. This will make it much easier to maintain in
>> the long term and much easier to extend.
>>
>> This technique allows building the compiler in stages from
>> running code to running code at every step.
>>
>> The new interpreter is much simpler as it just runs the CLOS
>> code directly.
>>
>> The choice of logic and specification languages are still unclear.
>> These will also be created first at the lisp level and then covered
>> by the compiler with additional nano-passes.
>>
>> Anyway, coding has begun.
>>
>> Tim
>>
>> [0]  Sarkar, Dipanwita and Waddell, Oscar and Dybvig, R. Kent
>> "A Nanopass Infrastructure for Compiler Education" (2004)
>>
>>
>> On 6/30/19, Tim Daly <[hidden email]> wrote:
>>> There are thousands of hours of expertise and thousands of
>>> functions embedded in Spad code. An important design goal
>>> is to ensure that code continues to function. The Sane compiler
>>> will output code that runs in the interpreter. It is important that
>>> "nothing breaks".
>>>
>>> That said, the Sane compiler is "layered". The core of the design
>>> is that Axiom categories, domains, and packages are represented
>>> as lisp classes and instances. This "core" is essentially what the
>>> compiler people call an Abstract Syntax Tree (AST). But in this
>>> case it is much more than syntax.
>>>
>>> Given this "core" there are several tasks.
>>>
>>> 1) compile spad code to the core. The "front end" should
>>> accept and understand current Spad code, unwinding it
>>> into the core class structure.
>>>
>>> 2) compile core classes to Axiom data structures. This
>>> "back end" generates current Axiom data structures from
>>> the core data structures.
>>>
>>> Now the compiler generates working code yet is in a state
>>> to accept enhancements, essentially by extending the core
>>> class objects.
>>>
>>> 3) In the interpreter, modify the getdatabase function to
>>> extract data from the core rather than the databases. So
>>> the databases go away but the interpreter continues to work.
>>>
>>> So now the interpreter has been "lifted" onto the core classes
>>> but continues to function.
>>>
>>> 4) enhance the core to support the axioms / specifications /
>>> proofs /etc. This involves adding fields to the lisp classes.
>>> This core work gives additional fields to hold information.
>>>
>>> 5) extend the Spad language (Spad 2.0?) to handle the
>>> additional axioms / specifications / proofs / etc. This
>>> involves adding syntax to the Spad language to support
>>> the new fields.
>>>
>>> 6) build back-end targets to proof systems, spec systems,
>>> etc. Compilers like GCC have multiple back ends. The Sane
>>> compiler targets the interpreter, a specification checker, a
>>> proof system, etc. as separate back ends, all from the same
>>> core structures.
>>>
>>> The Compiler Design
>>>
>>> A separate but parallel design goal is to build the compiler so
>>> it can be type-checked. Each function has typed input and
>>> typed output and is, for the most part, purely functional. So,
>>> for example, a "Filename" is an instance of a "File" object.
>>> A "Line" is an instance of "String". The "FileReader" is
>>>
>>> FileReader : Filename -> List Line
>>>
>>> Careful design of the language used to construct the compiler
>>> (as opposed to the Spad language it accepts) makes it easier
>>> to type check the compiler itself.
>>>
>>> By REALLY careful design, the types are built on a layered
>>> subset of lisp, like Milawa
>>> https://www.cl.cam.ac.uk/~mom22/soundness.pdf
>>> which is sound all the way down to the metal.
>>>
>>> It all goes in stages. Build the new core class structure in a
>>> strongly typed fashion. Accept the Spad language. Generate
>>> Interpreter code. Enhance the core to support proofs / specs.
>>> Enhance the language to support proofs / specs. Accept the
>>> new language. Generate back ends to target the interpreter
>>> and a proof / spec system. Build it all on a sound base so
>>> the compiler can be checked.
>>>
>>> To the initial end user, the Sane version is the same as the
>>> current system. But in the long term all of the Axiom code
>>> could be called from any Lisp function. The Sane version
>>> can also be used as an "Oracle" for proof systems, since
>>> the code has been proven correct.
>>>
>>> This is a huge project but it can be done in small steps.
>>> In particular, the goal is to build a "thin thread" all the way
>>> through the system to handle only the GCD algorithms.
>>> Once that proof happens "the real work begins".
>>>
>>> Tim
>>>
>>>
>>>
>>>
>>> On 6/30/19, Martin Baker <[hidden email]> wrote:
>>>> Tim,
>>>>
>>>> This all seems to be about the lisp layer, obviously thats what
>>>> interests you.
>>>>
>>>> It seems to me that if SPAD code is complicated and not aligned to type
>>>> theory then, when SPAD is compiled to Lisp, the Lisp code will be
>>>> complicated and hard to work with. Your previous remarks, about not
>>>> seeing the whole elephant, suggest to me that you are drowning in
>>>> complexity. Most languages, in their lifetime, acquire some messiness
>>>> that needs to be cleared out occasionally.
>>>>
>>>> Don't you think it's time for SPAD 2.0?
>>>>
>>>> Martin
>>>>
>>>> On 30/06/2019 02:17, Tim Daly wrote:
>>>>> One major Sane design decision is the use of CLOS,
>>>>> the Common Lisp Object System.
>>>>>
>>>>> First, since each CLOS object is a type it is possible to
>>>>> create strong types everywhere. This helps with the ultimate
>>>>> need to typecheck the compiler and the generated code.
>>>>>
>>>>> Second, CLOS is an integral part of common lisp. One of
>>>>> the Sane design goals is to make it possible to use Axiom's
>>>>> domains in ordinary lisp programs. Since Axiom is nothing
>>>>> more than a domain specific language on common lisp it
>>>>> makes sense to construct it so users can freely intermix
>>>>> polynomials with non-algebraic code.
>>>>>
>>>>> Third, CLOS is designed for large program development,
>>>>> hiding most of the implementation details and exposing
>>>>> a well-defined API. This will make future maintenance and
>>>>> documentation of Axiom easier, contributing to its longer
>>>>> intended life.
>>>>>
>>>>> So for a traditional Axiom user nothing seems to have
>>>>> changed. But for future users it will be easy to compute
>>>>> an integral in the middle of regular programs.
>>>>>
>>>>> Tim
>>>>>
>>>>> On 6/27/19, Tim Daly <[hidden email]> wrote:
>>>>>> Another thought....
>>>>>>
>>>>>> There has been a "step change" in computer science in the last few
>>>>>> years.
>>>>>>
>>>>>> Guy Steele did a survey of the use of logic notation in conference
>>>>>> papers.
>>>>>> More than 50% of the papers in some conferences use logic notation
>>>>>> (from one of the many logics).
>>>>>>
>>>>>> CMU teaches their CS courses all based on and requiring the use of
>>>>>> logic and the associated notation. My college mathematics covered
>>>>>> the use of truth tables. The graduate course covered the use of
>>>>>> Karnaugh maps.
>>>>>>
>>>>>> Reading current papers, I have found several papers with multiple
>>>>>> pages containing nothing but "judgements", pages of greek notation.
>>>>>> If you think Axiom's learning curve is steep, you should look at
>>>>>> Homotopy Type Theory (HoTT).
>>>>>>
>>>>>> I taught a compiler course at Vassar in the previous century but
>>>>>> the Dragon book didn't cover CIC in any detail and I would not
>>>>>> have understood it if it did.
>>>>>>
>>>>>> The original Axiom software predates most of the published logic
>>>>>> papers, at least as they applied to software. Haskell Curry wrote
>>>>>> from the logic side in 1934 and William Howard published in 1969
>>>>>> but I only heard of the Curry-Howard correspondence in 1998.
>>>>>>
>>>>>> Writing a compiler these days requires the use of this approach.
>>>>>> In all, that's a good thing as it makes it clear how to handle types
>>>>>> and how to construct software that is marginally more correct.
>>>>>>
>>>>>> The new Sane compiler is building on these logic foundations,
>>>>>> based on the Calculus of Inductive Construction and Dependent
>>>>>> Type theory. The compiler itself is strongly typed as is the
>>>>>> language it supports.
>>>>>>
>>>>>> Since dependent types are not decidable there will always be
>>>>>> heuristics at runtime to try to disambiguate types, only now we
>>>>>> will have to write the code in greek :-)
>>>>>>
>>>>>> Tim
>>>>
>>>
>>
>
> _______________________________________________
> Axiom-developer mailing list
> [hidden email]
> https://lists.nongnu.org/mailman/listinfo/axiom-developer

_______________________________________________
Axiom-developer mailing list
[hidden email]
https://lists.nongnu.org/mailman/listinfo/axiom-developer

Axiom's Sane redesign musings

Tim Daly
In reply to this post by Tim Daly
When injured, I often use the phrase "Well, learning occurred".

I just spent the last day chasing a circular reference problem
in the meta-object protocol (MOP) which is the basis of the
Common Lisp Object System (CLOS). It turns out that the
"circular reference" is a REALLY bad error message. I won't
bore you with the details but it's the usual "Wow, I found a bug
in the system" thing.

But I'm too experienced ("old") to believe that. After a day of
creating ever-smaller versions of the input I finally cornered it.

That's when I found a one-line paragraph in the MOP that
tells me when "finalize-inheritance" will be called. After much
Omphaloskepsis I translated that to "that's obvious, so it must
be my bug"... (spoiler alert) ... which it was.

Well, learning occurred.

So I'm going to concentrate REALLY hard on making Sane
give, not only an error message, but a likely thing to check.
The error message should also include references into the
literate documentation for a deeper explanation.

Tim


Re: Axiom's Sane redesign musings

Tim Daly
I've discovered something pleasant about CLOS development.

Normal compiler development is top-down, from the syntax to
the intermediate representation to the target machine. By its
nature the intermediate representation is usually a hairy data
structure.

However, CLOS is both a surface syntax and an intermediate
representation. I can write the categories in CLOS directly,
compile them, and execute them. The surface syntax is just
a "cover" over the CLOS.

Working from the "middle out" like this makes it easy to
construct the categories in the REPL. And since CLOS is
dynamic I can experiment and fix issues "on the fly" and
port the solution back to the file.

Since printing (print-object) can completely control the way an
object is printed, it turns out to be nearly trivial to construct the
spad source file from the CLOS representation.
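A rough sketch of that trick (class and slot names are mine, not the
actual Sane code): specialize print-object so that printing a category
object emits Spad source.

```lisp
;; Illustrative: a category object that prints itself as Spad source.
(defclass axiom-category ()
  ((name :initarg :name :reader cat-name)
   (sigs :initarg :sigs :reader cat-sigs)))  ; list of (op . signature)

(defmethod print-object ((c axiom-category) stream)
  (format stream "~a(): Category == with~%" (cat-name c))
  (dolist (sig (cat-sigs c))
    (format stream "    ~a : ~a~%" (car sig) (cdr sig))))

;; (princ (make-instance 'axiom-category
;;          :name "Monoid"
;;          :sigs '(("*" . "(%,%) -> %") ("1" . "() -> %"))))
;; prints:
;;   Monoid(): Category == with
;;       * : (%,%) -> %
;;       1 : () -> %
```

Switching between Spad and Logic output, or between comment styles,
would then just be an extra parameter consulted inside print-object.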

No databases are necessary. Generating Spad vs Logic syntax
is a simple parameter. Comment syntax can be changed by a
single parameter (hyperdoc, latex, user-level, etc).

I taught compilers years before CLOS was created. I wish I knew
then what I know now. It is so much easier "middle-out" with the
right tools.

Tim


On 7/20/19, Tim Daly <[hidden email]> wrote:

> When injured, I often use the phrase "Well, learning occurred".
>
> I just spent the last day chasing a circular reference problem
> in the meta-object protocol (MOP) which is the basis of the
> Common Lisp Object System (CLOS). It turns out that the
> "circular reference" is a REALLY bad error message. I won't
> bore you with the details but it's the usual "Wow, I found a bug
> in the system" thing.
>
> But I'm too experienced ("old") to believe that. After a day of
> creating ever-smaller versions of the input I finally cornered it.
>
> That's when I found a one-line paragraph in the MOP that
> tells me when "finalize-inheritance" will be called. After much
> Omphaloskepsis I translated that to "that's obvious, so it must
> be my bug"... (spoiler alert) ... which it was.
>
> Well, learning occurred.
>
> So I'm going to concentrate REALLY hard on making Sane
> give, not only an error message, but a likely thing to check.
> The error message should also include references into the
> literate documentation for a deeper explanation.
>
> Tim
>
