Axiom musings...


Axiom musings...

Tim Daly
Jason Gross and Adam Chlipala ("Parsing Parses") developed
a dependently typed general parser for context-free grammars
in Coq.

They used the parser to prove its own completeness.

Unfortunately Spad's grammar is not context-free.
But it is an interesting thought exercise to consider
an "Axiom on Coq" implementation.

Tim


Re: Axiom musings...

Tim Daly
The current design and code base (in bookvol15) supports
multiple back ends. One will clearly be Common Lisp.

Another possible design choice is to target the GNU
GCC intermediate representation, making Axiom "just
another front-end language" supported by GCC.

The current intermediate representation does not (yet)
make any decision about the runtime implementation.
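
As a rough sketch of how back-end independence might look at
the lisp level (hypothetical names, not the actual bookvol15
code), emission could be a generic function dispatched on a
back-end tag:

  ;; One generic function; each back end is a method.
  (defgeneric emit (backend node)
    (:documentation "Render an IR node for the given back end."))

  ;; Common Lisp back end: the IR is already close to lisp.
  (defmethod emit ((backend (eql :lisp)) node)
    (destructuring-bind (op a b) node
      `(,op ,a ,b)))

  ;; A GCC-style back end could render C-like text instead.
  (defmethod emit ((backend (eql :gcc)) node)
    (destructuring-bind (op a b) node
      (format nil "~(~a ~a ~a~)" a op b)))

  ;; (emit :lisp '(+ x 1)) => (+ X 1)
  ;; (emit :gcc  '(+ x 1)) => "x + 1"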

Tim



Re: Axiom musings...

Tim Daly
The new Sane compiler is also being tested with the FriCAS
algebra code. The compiler knows about the language but
does not depend on the algebra library (so far). It should be
possible, by design, to load different algebra towers.

In particular, one idea is to support the "tiny theories"
algebra from Carette and Farmer. This would allow a much
finer-grained separation of algebra and axioms.

This "flexible algebra" design would allow things like the
Lean theorem prover effort in a more natural style.
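
In CLOS terms (a hedged sketch, not the actual Sane design),
"tiny theories" amounts to one class per inheritable property,
with larger theories mixed together by inheritance:

  ;; One inheritable "category" per property (hypothetical names).
  (defclass has-associativity () ())
  (defclass has-identity () ())
  (defclass has-inverses () ())

  ;; Bigger theories are just mixtures of the tiny ones.
  (defclass monoid (has-associativity has-identity) ())
  (defclass group (monoid has-inverses) ())

  ;; An algorithm can demand exactly the axioms it needs:
  (defgeneric double (structure x))
  (defmethod double ((s has-associativity) x)
    ;; x + x, needing only associativity
    (list '+ x x))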

Tim



Re: Axiom musings...

Tim Daly
I'm still undecided about accepting unicode on input as opposed
to latex-style input (e.g. \pi vs the unicode pi character).

The logic syntax would really benefit from using things like
\forall as a unicode character on input.

It makes the math I/O much prettier but it makes parsing
much harder. What does one do with a function
  (code-char 958)(x:INT) ==

that is, a function whose name is the greek small letter xi....
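
One experiment would be to accept only the latex-style names
on input and normalize them to unicode internally, something
like (a hypothetical table, not the actual lexer):

  (defparameter *tex-names*
    ;; latex-style name -> unicode code point
    '(("pi"     . 960)     ; greek small letter pi
      ("xi"     . 958)     ; greek small letter xi
      ("forall" . 8704)))  ; for-all sign

  (defun normalize-escape (name)
    "Map a \\name escape to its unicode character, or NIL."
    (let ((entry (assoc name *tex-names* :test #'string=)))
      (and entry (code-char (cdr entry)))))

  ;; (normalize-escape "pi") => pi character (code-char 960)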

Some experiments are "in the thought pipeline"....

Tim



Re: Axiom musings...

Jacques Carette
In reply to this post by Tim Daly
The underlying technology to use for building such an algebra library is
documented in the paper "Building on the Diamonds between Theories:
Theory Presentation Combinators"
http://www.cas.mcmaster.ca/~carette/publications/tpcj.pdf [which will
also be on the arXiv by Monday, and has been submitted to a journal].

There is a rather full-fledged prototype, very well documented at
https://alhassy.github.io/next-700-module-systems/prototype/package-former.html 
(source at https://github.com/alhassy/next-700-module-systems). It is
literate source.

The old prototype was hard to find - it is now at
https://github.com/JacquesCarette/MathScheme.

There is also a third prototype in the MMT system, but it does not quite
function properly today; it is under repair.

The paper "A Language Feature to Unbundle Data at Will"
(https://alhassy.github.io/next-700-module-systems/papers/gpce19_a_language_feature_to_unbundle_data_at_will.pdf)
is also relevant, as it solves a problem with parametrized theories
(parametrized Categories in Axiom terminology) that all current systems
suffer from.

Jacques


Re: Axiom musings...

Tim Daly
The Axiom Sane compiler is being "shaped by the hammer
blows of reality", to coin a phrase.

There are many goals. One of the primary goals is creating a
compiler that can be understood, maintained, and modified.

So the latest changes involved adding multiple index files.
These are documentation (links to where terms are mentioned
in the text), code (links to the implementation of things),
error (links to where errors are defined), signatures (links to
the signatures of lisp functions), figures (links to figures),
and separate category, domain, and package indexes.

The tikz package is now used to create "railroad diagrams"
of syntax (a la the Pascal report). The implementation of
those diagrams follows immediately. Collectively these will
eventually define at least the syntax of the language. Ideally,
changing the diagram would change the code, but I'm
not that clever.

Reality shows up with the current constraint that the
compiler should accept the current Spad language as
closely as possible. Of course, plans are to include new
constructs (e.g. hypothesis, axiom, specification, etc.)
but these are being postponed until "syntax complete".

All parse information is stored in a parse object, which
is a CLOS object (and therefore a Common Lisp type).
Fields within the parse object, e.g. variables, are also
CLOS objects (and therefore Common Lisp types).
It's types all the way down.

These types are being used as 'signatures' for the
lisp functions. The goal is to be able to type-check the
compiler implementation as well as the Sane language.
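
Both halves of that are ordinary Common Lisp. A small
illustration (hypothetical classes, not the real parse objects):

  ;; Parse results are CLOS objects, and so are their fields.
  (defclass parse-variable ()
    ((name :initarg :name :type symbol :reader var-name)))

  (defclass parse-assignment ()
    ((lhs :initarg :lhs :type parse-variable :reader lhs)
     (rhs :initarg :rhs :reader rhs)))

  ;; A 'signature' for a lisp function, checkable by the compiler.
  (declaim (ftype (function (parse-assignment) parse-variable)
                  assignment-target))
  (defun assignment-target (node)
    "Return the variable being assigned to."
    (lhs node))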

The parser is designed to "wrap around" so that the
user-level output of a parse should be the user-level
input (albeit in a "canonical" form). This "mirror effect"
should make it easy to see that the parser properly
parsed the user input.
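
The "mirror effect" amounts to a one-line test (parse, unparse,
and canonicalize being hypothetical names here):

  (defun mirror-check (source)
    "T if printing the parse reproduces the canonicalized input."
    (string= (unparse (parse source))
             (canonicalize source)))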

The parser is "first class" so it will be available at
runtime as a domain allowing Spad code to construct
Spad code.

One plan, not near implementation, is to "unify" some
CLOS types with the Axiom types (e.g. String). How
this will happen is still in the land of design. This would
"ground" Spad in lisp, making them co-equal.

Making lisp "co-equal" is a feature, especially as Spad is
really just a domain-specific language in lisp. Lisp
functions (with CLOS types as signatures) would be
available for implementing Spad functions. This not
only improves the efficiency, it would make the
BLAS/LAPACK (see volume 10.5) code "native" to Axiom.
On the theory front I plan to attend the Formal Methods
in Mathematics / Lean Together conference, mostly to
learn how little I know, and especially what I need to know.
http://www.andrew.cmu.edu/user/avigad/meetings/fomm2020/

Tim




Re: Axiom musings...

Tim Daly
I've been reading Stephen Kell's (Univ of Kent,
https://www.cs.kent.ac.uk/people/staff/srk21/) essay
"Seven deadly sins of talking about 'types'":
https://www.cs.kent.ac.uk/people/staff/srk21//blog/2014/10/07/

He raised an interesting idea toward the end of the essay
that type-checking could be done outside the compiler.

I can see a way to do this in Axiom's Sane compiler.
It would be possible to run a program over the source code
to collect the information and write a stand-alone type
checker. This "unbundles" type checking and compiling.
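
A stand-alone checker would then just be a separate program
over the same source. A minimal skeleton, assuming hypothetical
collect-signatures and check-calls functions:

  (defun check-file (path)
    "Unbundled type checking, outside the compiler proper."
    (let* ((forms (with-open-file (in path)
                    (loop for form = (read in nil nil)
                          while form
                          collect form)))
           (signatures (collect-signatures forms)))
      ;; report every call site that violates a signature
      (loop for form in forms
            append (check-calls form signatures))))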

Taken further I can think of several other kinds of checkers
(aka 'linters') that could be unbundled.

It is certainly something to explore.

Tim



Re: Axiom musings...

Tim Daly
Progress is happening on the new Sane Axiom compiler.

Recently I've been musing about methods to insert axioms
into categories so they could be inherited like signatures.
At the moment I've been thinking about adding axioms in
the same way that signatures are written, adding them to
the appropriate categories.

But this is an interesting design question.

Axiom already has a mechanism for inheriting signatures
from categories. That is, we can get a plus signature from,
say, the Integer category.

Suppose we follow the same pattern. Currently Axiom
inherits certain so-called "attributes", such as ApproximateAttribute,
which implies that the results are only approximate.

We could adapt the same mechanism to inherit the Transitive
property by defining it in its own category. In fact, if we
follow Carette and Farmer's "tiny theories" architecture,
where each property has its own inheritable category,
we can "mix and match" the axioms at will.

An "axiom" category would also export a function. This function
would essentially be a "tactic" used in a proof. It would modify
the proof step by applying the function to the step.
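
Sketching that in the compiler's lisp (the proof-step
representation here is hypothetical): an axiom carries both its
statement and the exported "tactic", a function from proof step
to proof step.

  (defclass axiom ()
    ((statement :initarg :statement :reader statement)
     (tactic    :initarg :tactic    :reader tactic)))

  (defparameter *transitive*
    (make-instance 'axiom
      :statement '(implies (and (= a b) (= b c)) (= a c))
      ;; the exported tactic: rewrite a step using the axiom
      :tactic (lambda (step) (list :by-transitivity step))))

  (defun apply-axiom (ax step)
    "Modify a proof step by applying the axiom's tactic."
    (funcall (tactic ax) step))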

Theorems would have the same structure.

This allows theorems to be constructed at run time (since
Axiom supports "First Class Dynamic Types").

In addition, this design can be "pushed down" into the Spad
language so that Spad statements (e.g. assignment) have
proof-related properties. A range such as [1..10] would
provide explicit bounds in a proof "by language definition".
Defining the logical properties of language statements in
this way would make it easier to construct proofs since the
invariants would be partially constructed already.
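
For example (hedged sketch), a range literal could construct an
object that carries its bounds as a ready-made proposition:

  (defclass range ()
    ((lo :initarg :lo :reader lo)
     (hi :initarg :hi :reader hi)))

  (defun range-invariant (r var)
    "The proof obligation a range provides 'by language definition'."
    `(and (<= ,(lo r) ,var) (<= ,var ,(hi r))))

  ;; [1..10] as a range object:
  ;; (range-invariant (make-instance 'range :lo 1 :hi 10) 'i)
  ;;   => (AND (<= 1 I) (<= I 10))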

This design merges the computer algebra inheritance
structure with the proof of algorithms structure, all under
the same mechanism.

Tim


Re: Axiom musings...

Martin Baker
Tim,

Would this also be compatible with 'propositions as types' as in Idris
and various proof assistants? That is, could you have both Curry–Howard
and Carette-Farmer?

I ask this because types (and constructors for types) could also be
inherited by categories in the way you describe. So an equation (axiom
or identity) like a = a + 0 would, if true, have a constructor, Refl,
which could be inherited.

Or do you see computer algebra/deduction/proof programs forking into
constructive or axiomatic approaches?

Martin


Re: Axiom musings...

Tim Daly
In reply to this post by Tim Daly
Trusted Kernel... all the way to the metal.

While building a trusted computer algebra system, the
SANE version of Axiom, I've been looking at questions of
trust at all levels.

One of the key tenets (the de Bruijn principle) calls for a
trusted kernel through which all other computations must
pass. Coq, Lean, and other systems do this. They base
their kernel on a logic like the Calculus of Constructions or
something similar.

Andrej Bauer has been working on a smaller kernel (a
nucleus) that separates the trust from the logic. The rules
for the logic can be specified as needed but checked by
the nucleus code.
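
As I read it, the nucleus idea is roughly this (a toy sketch,
not Bauer's actual Andromeda code): the trusted core only checks
that each step is an instance of a rule supplied as data.

  ;; A rule is (name premises conclusion). The nucleus trusts
  ;; nothing but this one checking function.
  (defun step-ok-p (rule premises conclusion)
    "Check one inference step against a rule given as data."
    (destructuring-bind (name rule-premises rule-conclusion) rule
      (declare (ignore name))
      (and (equal premises rule-premises)
           (equal conclusion rule-conclusion))))

  (defparameter *modus-ponens*
    '(mp ((implies p q) p) q))

  ;; (step-ok-p *modus-ponens* '((implies p q) p) 'q) => T

A real nucleus would instantiate rule variables rather than
compare literally, but the trust boundary is the same: the
logic is data, the checker is tiny.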

I've been studying Field Programmable Gate Arrays (FPGAs),
which allow you to create your own hardware in a C-like
language (Verilog) and to check the chip you build
all the way down to the transistor states. You can create
things as complex as a whole CPU or as simple as a trusted
nucleus (youtube: Building a CPU on an FPGA). ACL2 has a
history of verifying hardware logic.

It appears that, assuming I can understand Bauer's
Andromeda system, it would be possible and not that hard
to implement a trusted kernel on an FPGA the size and
form factor of a USB stick.

Trust "down to the metal".

Tim




Re: Axiom musings...

Tim Daly
Provisos... that is, 'formula SUCH THAT pre/post-conditions'

A computer algebra system ought to know and ought to provide
information about the domain and range of a resulting formula.
I've been pushing this effort since the 1980s (hence the
SuchThat domain).

It turns out that computing with, carrying, and combining this
information is difficult if not impossible in the current system.
The information isn't available and isn't computed. In that sense,
the original Axiom system is 'showing its age'.

In the Sane implementation the information is available. It is
part of the specification and part of the proof steps. With a
careful design it will be possible to provide provisos for each
given result that are carried with the result for use in further
computation.
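
Concretely (hypothetical names), a Sane result could be a pair
of formula and proviso, with combination conjoining the provisos:

  (defclass result ()
    ((formula :initarg :formula :reader formula)
     (proviso :initarg :proviso :reader proviso)))

  (defun combine (op r1 r2)
    "Combine two results; the provisos travel with the formula."
    (make-instance 'result
      :formula (list op (formula r1) (formula r2))
      :proviso `(and ,(proviso r1) ,(proviso r2))))

  ;; sqrt(x) carrying its domain condition:
  ;; (make-instance 'result :formula '(sqrt x) :proviso '(>= x 0))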

This raises interesting questions to be explored. For example,
if the formula is defined over an interval, how is the interval
arithmetic handled?

Exciting research ahead!

Tim




Re: Axiom musings...

Tim Daly
When Axiom Sane is paired with a proof checker (e.g. Lean)
there is a certain amount of verification involved.

Axiom will provide proofs (presumably validated by Lean) for its
algorithms. Ideally, when a computation is requested from Lean
for a GCD, the result as well as a proof of the GCD algorithm is
returned. Lean can then verify that the proof is valid. But it is
computationally more efficient if Axiom and Lean use a cryptographic
hash, such as SHA1. That way the proof doesn't need to be
're-proven'; only a hash computation over the proof text needs to
be performed. Hashes are blazingly fast. This allows proofs to be
exchanged without re-running the proof mechanism. Since a large
computation request from Lean might involve many algorithms,
there would be considerable overhead to recompute each proof.
A hash simplifies the issue yet provides proof integrity.
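
The hash side is only a few lines, e.g. with the ironclad
library (SHA1 here only because it is the example above):

  ;; (ql:quickload :ironclad)
  (defun proof-hash (proof-text)
    "Hash the proof text so it can be exchanged without re-proving."
    (ironclad:byte-array-to-hex-string
     (ironclad:digest-sequence
      :sha1 (ironclad:ascii-string-to-byte-array proof-text))))

  (defun proof-unchanged-p (proof-text known-hash)
    "Integrity check: recompute the hash and compare."
    (string= (proof-hash proof-text) known-hash))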

Tim


On 1/9/20, Tim Daly <[hidden email]> wrote:

> Provisos.... that is, 'formula SUCH pre/post-conditions'
>
> A computer algebra system ought to know and ought to provide
> information about the domain and range of a resulting formula.
> I've been pushing this effort since the 1980s (hence the
> SuchThat domain).
>
> It turns out that computing with, carrying, and combining this
> information is difficult if not impossible in the current system.
> The information isn't available and isn't computed. In that sense,
> the original Axiom system is 'showing its age'.
>
> In the Sane implementation the information is available. It is
> part of the specification and part of the proof steps. With a
> careful design it will be possible to provide provisos for each
> given result that are carried with the result for use in further
> computation.
>
> This raises interesting questions to be explored. For example,
> if the formula is defined over an interval, how is the interval
> arithmetic handled?
>
> Exciting research ahead!
>
> Tim


Re: Axiom musings...

Tim Daly
As a mathematician, it is difficult to use a system like Axiom,
mostly because it keeps muttering about Types. If you're not
familiar with type theory (most mathematicians aren't) then it
seems pointless and painful.

So Axiom has a steep learning curve.

As a mathematician with an algorithmic approach, it is difficult
to use a system like Axiom, mostly because you have to find
or create "domains" or "packages", understand categories
with their inheritance model, and learn a new language with
a painful compiler always complaining about types.

So Axiom has a steep learning curve.

The Sane version of Axiom requires knowing the mathematics.
It also assumes a background in type theory, inductive logic,
homotopy type theory, ML (meta-language, not machine
learning (yet)), interactive theorem proving, kernels, tactics,
and tacticals. Also assumed is knowledge of specification languages,
Hoare triples, proof techniques, soundness, and completeness.
Oh, and there is a whole new syntax and semantics added to
specify definitions, axioms, and theorems, not to mention whole
libraries of the same.

So Axiom Sane has a steep learning curve.

I've taken 10 courses at CMU and spent the last 4-5 years
learning to read the leading-edge literature (also known
as "Greek studies", since every paper has pages of Greek).

I'm trying to unify computer algebra and proof theory into a
"computational mathematics" framework. I suspect that the only
way this system will ever be useful is after Universities have a
"Computational Mathematics" major course of study and degree.

Creating a new department is harder than creating Axiom Sane
because, you know, ... people.

I think such a department is inevitable given the deep and wide
impact of computers, just not in my lifetime. That's ok. When I
started programming there was no computer science degree.

Somebody has to be the first lemming over the cliff.

Tim



Re: Axiom musings...

Tim Daly
I've pushed the latest version of Axiom. The plan, followed
so far, is to push once a month on the 7th.

After some chat room interactions it was brought home
again that the proof world really does not seem to like the
idea of proving programs correct. And, given that it was one
of the main Axiom goals and a point of friction during the fork,
the computer algebra world does not like the idea of proving
programs correct either.

So the idea of "computational mathematics", which includes
both disciplines (as well as type theory) seems still a long
way off.

Nevertheless, the primary change in these past and future
updates is focused on merging proof and computer algebra.

Proof systems are able to split the problem of creating a
proof from the problem of verifying it, and verification is
much cheaper. Ideally the proof checker would run on verified
hardware so the proof is checked "down to the metal".

I have a background in Field Programmable Gate Arrays
(FPGAs), having tried to do a startup using them. So now I'm
looking at creating a hardware proof checker using a
dedicated instruction set, one designed to be verified.
New CPUs used in data centers (not yet available to us
mortals) have built-in FPGAs so it would be possible to
"side-load" a proof of a program to be checked while the
program is run. I have the FPGA and am doing a gate-level
special instruction design for such a proof checker.
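
To see why checking is so much cheaper than proving, here
is a toy checker in Common Lisp for a Hilbert-style system
whose only inference rule is modus ponens. The step format
and names are invented for illustration; a real checker
would implement the Sane logic, but the shape (one linear
pass that verifies each step against the rules) is the
point, and it is that loop that belongs in hardware.

;; Toy sketch, not the Sane kernel. A proof is a list of
;; steps; a step is (:axiom P) or (:mp P i j), claiming P
;; follows from earlier steps i and j, where step j proved
;; (implies Pi P).
(defun check-proof (proof)
  (let ((derived (make-array (length proof)
                             :initial-element nil)))
    (loop for step in proof
          for n from 0
          do (setf (aref derived n)
                   (ecase (first step)
                     (:axiom (second step))
                     (:mp
                      (destructuring-bind (p i j) (rest step)
                        (unless (equal (aref derived j)
                                       (list 'implies
                                             (aref derived i)
                                             p))
                          (return-from check-proof nil))
                        p)))))
    t))

;; (check-proof '((:axiom a)
;;                (:axiom (implies a b))
;;                (:mp b 0 1)))  => T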




Re: Axiom musings...

Tim Daly
I've spent entirely too much time studying the legal issues
of free and open source software. There are copyright,
trademark, and intellectual property laws. I have read
several books, listened to lectures, and read papers on
the subject. I've spoken to lawyers about it. I've even
been required, by law, to coerce people I respect.
You would think it was all perfectly clear. It isn't.

The most entertaining and enlightening lectures were
by Robert Lefkowitz at OSCON 2004. His talk is
"The Semasiology of Open Source", which sounds
horrible, but I assure you it is a real treat.

THE THESIS

Semasiology, n. The science of meanings or
sense development (of words); the explanation
of the development and changes of the meanings
of words. Source: Webster's Revised Unabridged
Dictionary, © 1996, 1998 MICRA, Inc.

"Open source doesn't just mean access to the
source code." So begins the Open Source Definition.
What then, does access to the source code mean?
Seen through the lens of an Enterprise user, what
does open source mean? When is (or isn't) it
significant? And a catalogue of open source
related arbitrage opportunities.

http://origin.conversationsnetwork.org/Robert%20Lefkowitz%20-%20The%20Semasiology%20of%20Open%20Source.mp3

Computer source code has words and sentence
structure like actual prose or even poetry. Writing
code for the computer is like writing an essay. It
should be written for other people to read,
understand and modify. These are some of the
thoughts behind literate programming proposed
by Donald Knuth. This is also one of the ideas
behind Open Source.

THE ANTITHESIS

"Open Source" is a phrase like "Object Oriented"
- weird at first, but when it became popular, the
meaning began to depend on the context of the
speaker or listener. "Object Oriented" meant that
PERL, C++, Java, Smalltalk, Basic and the newest
version of Cobol are all "Object Oriented" - for some
specific definition of "Object Oriented". Similar is
the case of the phrase "Open Source".

In Part I, Lefkowitz talked about the shift of the
meaning of "Open Source" away from any reference
to the actual "source code," and more towards other
phases of the software development life cycle. In
Part II, he returns to the consideration of the
relationship between "open source" and the actual
"source code," and reflects upon both the way
forward and the road behind, drawing inspiration
from Charlemagne, King Louis XIV, Donald Knuth,
and others.

http://origin.conversationsnetwork.org/ITC.OSCON05-RobertLefkowitz-2005.08.03.mp3

THE SYNTHESIS

In a fascinating synthesis, Robert “r0ml” Lefkowitz
polishes up his exposition on the evolving meaning
of the term ‘open source’. This intellectual joy-ride
draws on some of the key ideas in artificial intelligence
to probe the role of language, meaning and context
in computing and the software development process.
Like Wittgenstein’s famous thought experiment, the
open source ‘beetle in a box’ can represent different
things to different people, bearing interesting fruit for
philosophers and software creators alike.

http://itc.conversationsnetwork.org/audio/download/itconversations-1502.mp3


On 3/7/20, Tim Daly <[hidden email]> wrote:

> I've pushed the lastest version of Axiom. The plan, followed
> so far, is to push once a month on the 7th.
>
> After some chat room interactions it was brought home
> again that the proof world really does not seem to like the
> idea of proving programs correct. And, given that it was is
> of the main Axiom goals and a point of friction during the fork,
> the computer algebra world does not like the idea of proving
> programs correct either.
>
> So the idea of "computational mathematics", which includes
> both disciplines (as well as type theory) seems still a long
> way off.
>
> Nevertheless, the primary change in these past and future
> updates is focused on merging proof and computer algebra.
>
> Proof systems are able to split the problem of creating a
> proof and the problem of verifying a proof, which is much
> cheaper. Ideally the proof checker would run on verified
> hardware so the proof is checked "down to the metal".
>
> I have a background in Field Programmable Gate Arrays
> (FPGAs) as I tried to do a startup using them. So now I'm
> looking at creating a hardware proof checker using a
> dedicated instruction set, one designed to be verifed.
> New CPUs used in data centers (not yet available to us
> mortals) have built-in FPGAs so it would be possible to
> "side-load" a proof of a program to be checked while the
> program is run. I have the FPGA and am doing a gate-level
> special instruction design for such a proof checker.
>
>
> On 2/7/20, Tim Daly <[hidden email]> wrote:
>> As a mathematician, it is difficult to use a system like Axiom,
>> mostly because it keeps muttering about Types. If you're not
>> familiar with type theory (most mathematicians aren't) then it
>> seems pointless and painful.
>>
>> So Axiom has a steep learning curve.
>>
>> As a mathematician with an algorithmic approach, it is difficult
>> to use a system like Axiom, mostly because you have to find
>> or create "domains" or "packages", understand categories
>> with their inheritance model, and learn a new language with
>> a painful compiler always complaining about types.
>>
>> So Axiom has a steep learning curve.
>>
>> The Sane version of Axiom requires knowing the mathematics.
>> It also assumes a background in type theory, inductive logic,
>> homotopy type theory, ML (meta-language, not machine
>> learning (yet)), interactive theorem proving, kernels, tactics,
>> and tacticals. Also assumed is knowledge of specification languages,
>> Hoare triples, proof techniques, soundness, and completeness.
>> Oh, and there is a whole new syntax and semantics added to
>> specify definitions, axioms, and theorems, not to mention whole
>> libraries of the same.
>>
>> So Axiom Sane has a steep learning curve.
>>
>> I've taken 10 courses at CMU and spent the last 4-5 years
>> learning to read the leading edge literature (also known
>> as "greek studies", since every paper has pages of greek).
>>
>> I'm trying to unify computer algebra and proof theory into a
>> "computational mathematics" framework. I suspect that the only
>> way this system will ever be useful is after Universities have a
>> "Computational Mathematics" major course of study and degree.
>>
>> Creating a new department is harder than creating Axiom Sane
>> because, you know, ... people.
>>
>> I think such a department is inevitable given the deep and wide
>> impact of computers, just not in my lifetime. That's ok. When I
>> started programming there was no computer science degree.
>>
>> Somebody has to be the first lemming over the cliff.
>>
>> Tim
>>
>> On 1/9/20, Tim Daly <[hidden email]> wrote:
>>> When Axiom Sane is paired with a proof checker (e.g. with Lean)
>>> there is a certain amount of verification that is involved.
>>>
>>> Axiom will provide proofs (presumably validated by Lean) for its
>>> algorithms. Ideally, when a computation is requested from Lean
>>> for a GCD, the result as well as a proof of the GCD algorithm is
>>> returned. Lean can the verify that the proof is valid. But it is
>>> computationally more efficient if Axiom and Lean use a cryptographic
>>> hash, such as SHA1. That way the proof doesn't need to be
>>> 'reproven', only a hash computation over the proof text needs to
>>> be performed. Hashes are blazingly fast. This allows proofs to be
>>> exchanged without re-running the proof mechanism. Since a large
>>> computation request from Lean might involve many algorithms
>>> there would be considerable overhead to recompute each proof.
>>> A hash simplifies the issue yet provides proof integrity.
>>>
>>> Tim
>>>
>>>
>>> On 1/9/20, Tim Daly <[hidden email]> wrote:
>>>> Provisos.... that is, 'formula SUCH pre/post-conditions'
>>>>
>>>> A computer algebra system ought to know and ought to provide
>>>> information about the domain and range of a resulting formula.
>>>> I've been pushing this effort since the 1980s (hence the
>>>> SuchThat domain).
>>>>
>>>> It turns out that computing with, carrying, and combining this
>>>> information is difficult if not impossible in the current system.
>>>> The information isn't available and isn't computed. In that sense,
>>>> the original Axiom system is 'showing its age'.
>>>>
>>>> In the Sane implementation the information is available. It is
>>>> part of the specification and part of the proof steps. With a
>>>> careful design it will be possible to provide provisos for each
>>>> given result that are carried with the result for use in further
>>>> computation.
>>>>
>>>> This raises interesting questions to be explored. For example,
>>>> if the formula is defined over an interval, how is the interval
>>>> arithmetic handled?
>>>>
>>>> Exciting research ahead!
>>>>
>>>> Tim
>>>>
>>>>
>>>>
>>>> On 1/3/20, Tim Daly <[hidden email]> wrote:
>>>>> Trusted Kernel... all the way to the metal.
>>>>>
>>>>> While building a trusted computer algebra system, the
>>>>> SANE version of Axiom, I've been looking at questions of
>>>>> trust at all levels.
>>>>>
>>>>> One of the key tenets (the de Bruijn principle) calls for a
>>>>> trusted kernel through which all other computations must
>>>>> pass. Coq, Lean, and other systems do this. They base
>>>>> their kernel  on logic like the Calculus of Construction or
>>>>> something similar.
>>>>>
>>>>> Andrej Bauer has been working on a smaller kernel (a
>>>>> nucleus) that separates the trust from the logic. The rules
>>>>> for the logic can be specified as needed but checked by
>>>>> the nucleus code.
>>>>>
>>>>> I've been studying Field Programmable Gate Arrays (FPGA)
>>>>> that allow you to create your own hardware in a C-like
>>>>> language (Verilog). It allows you to check the chip you build
>>>>> all the way down to the transistor states. You can create
>>>>> things as complex as a whole CPU or as simple as a trusted
>>>>> nucleus. (youtube: Building a CPU on an FPGA). ACL2 has a
>>>>> history of verifying hardware logic.
>>>>>
>>>>> It appears that, assuming I can understand Bauers
>>>>> Andromeda system, it would be possible and not that hard
>>>>> to implement a trusted kernel on an FPGA the size and
>>>>> form factor of a USB stick.
>>>>>
>>>>> Trust "down to the metal".
>>>>>
>>>>> Tim
>>>>>
>>>>>
>>>>>
>>>>> On 12/15/19, Tim Daly <[hidden email]> wrote:
>>>>>> Progress in happening on the new Sane Axiom compiler.
>>>>>>
>>>>>> Recently I've been musing about methods to insert axioms
>>>>>> into categories so they could be inherited like signatures.
>>>>>> At the moment I've been thinking about adding axioms in
>>>>>> the same way that signatures are written, adding them to
>>>>>> the appropriate categories.
>>>>>>
>>>>>> But this is an interesting design question.
>>>>>>
>>>>>> Axiom already has a mechanism for inheriting signatures
>>>>>> from categories. That is, we can bet a plus signature from,
>>>>>> say, the Integer category.
>>>>>>
>>>>>> Suppose we follow the same pattern. Currently Axiom
>>>>>> inherits certain so-called "attributes", such as
>>>>>> ApproximateAttribute,
>>>>>> which implies that the results are only approximate.
>>>>>>
>>>>>> We could adapt the same mechnaism to inherit the Transitive
>>>>>> property by defining it in its own category. In fact, if we
>>>>>> follow Carette and Farmer's "tiny theories" architecture,
>>>>>> where each property has its own inheritable category,
>>>>>> we can "mix and match" the axioms at will.
>>>>>>
>>>>>> An "axiom" category would also export a function. This function
>>>>>> would essentially be a "tactic" used in a proof. It would modify
>>>>>> the proof step by applying the function to the step.
>>>>>>
>>>>>> Theorems would have the same structure.
>>>>>>
>>>>>> This allows theorems to be constructed at run time (since
>>>>>> Axiom supports "First Class Dynamic Types".
>>>>>>
>>>>>> In addition, this design can be "pushed down" into the Spad
>>>>>> language so that Spad statements (e.g. assignment) had
>>>>>> proof-related properties. A range such as [1..10] would
>>>>>> provide explicit bounds in a proof "by language definition".
>>>>>> Defining the logical properties of language statements in
>>>>>> this way would make it easier to construct proofs since the
>>>>>> invariants would be partially constructed already.
>>>>>>
>>>>>> This design merges the computer algebra inheritance
>>>>>> structure with the proof of algorithms structure, all under
>>>>>> the same mechanism.
>>>>>>
>>>>>> Tim
>>>>>>
>>>>>> On 12/11/19, Tim Daly <[hidden email]> wrote:
>>>>>>> I've been reading Stephen Kell's (Univ of Kent
>>>>>>> https://www.cs.kent.ac.uk/people/staff/srk21/) on
>>>>>>> Seven deadly sins of talking about “types”
>>>>>>> https://www.cs.kent.ac.uk/people/staff/srk21//blog/2014/10/07/
>>>>>>>
>>>>>>> He raised an interesting idea toward the end of the essay
>>>>>>> that type-checking could be done outside the compiler.
>>>>>>>
>>>>>>> I can see a way to do this in Axiom's Sane compiler.
>>>>>>> It would be possible to run a program over the source code
>>>>>>> to collect the information and write a stand-alone type
>>>>>>> checker. This "unbundles" type checking and compiling.
>>>>>>>
>>>>>>> Taken further I can think of several other kinds of checkers
>>>>>>> (aka 'linters') that could be unbundled.
>>>>>>>
>>>>>>> It is certainly something to explore.
>>>>>>>
>>>>>>> Tim
>>>>>>>
>>>>>>>
>>>>>>> On 12/8/19, Tim Daly <[hidden email]> wrote:
>>>>>>>> The Axiom Sane compiler is being "shaped by the hammer
>>>>>>>> blows of reality", to coin a phrase.
>>>>>>>>
>>>>>>>> There are many goals. One of the primary goals is creating a
>>>>>>>> compiler that can be understood, maintained, and modified.
>>>>>>>>
>>>>>>>> So the latest changes involved adding multiple index files.
>>>>>>>> These are documentation (links to where terms are mentioned
>>>>>>>> in the text), code (links to the implementation of things),
>>>>>>>> error (links to where errors are defined), signatures (links to
>>>>>>>> the signatures of lisp functions), figures (links to figures),
>>>>>>>> and separate category, domain, and package indexes.
>>>>>>>>
>>>>>>>> The tikz package is now used to create "railroad diagrams"
>>>>>>>> of syntax (ala, the PASCAL report). The implementation of
>>>>>>>> those diagrams follows immediately. Collectively these will
>>>>>>>> eventually define at least the syntax of the language. In the
>>>>>>>> ideal, changing the diagram would change the code but I'm
>>>>>>>> not that clever.
>>>>>>>>
>>>>>>>> Reality shows up with the curent constraint that the
>>>>>>>> compiler should accept the current Spad language as
>>>>>>>> closely as possible. Of course, plans are to include new
>>>>>>>> constructs (e.g. hypothesis, axiom, specification, etc)
>>>>>>>> but these are being postponed until "syntax complete".
>>>>>>>>
>>>>>>>> All parse information is stored in a parse object, which
>>>>>>>> is a CLOS object (and therefore a Common Lisp type)
>>>>>>>> Fields within the parse object, e.g. variables are also
>>>>>>>> CLOS objects (and therefore a Common Lisp type).
>>>>>>>> It's types all the way down.
>>>>>>>>
>>>>>>>> These types are being used as 'signatures' for the
>>>>>>>> lisp functions. The goal is to be able to type-check the
>>>>>>>> compiler implementation as well as the Sane language.
>>>>>>>>
>>>>>>>> The parser is designed to "wrap around" so that the
>>>>>>>> user-level output of a parse should be the user-level
>>>>>>>> input (albeit in a 'canonical" form). This "mirror effect"
>>>>>>>> should make it easy to see that the parser properly
>>>>>>>> parsed the user input.
>>>>>>>>
>>>>>>>> The parser is "first class" so it will be available at
>>>>>>>> runtime as a domain allowing Spad code to construct
>>>>>>>> Spad code.
>>>>>>>>
>>>>>>>> One plan, not near implementation, is to "unify" some
>>>>>>>> CLOS types with the Axiom types (e.g. String). How
>>>>>>>> this will happen is still in the land of design. This would
>>>>>>>> "ground" Spad in lisp, making them co-equal.
>>>>>>>>
>>>>>>>> Making lisp "co-equal" is a feature, especially as Spad is
>>>>>>>> really just a domain-specific language in lisp. Lisp
>>>>>>>> functions (with CLOS types as signatures) would be
>>>>>>>> avaiable for implementing Spad functions. This not
>>>>>>>> only improves the efficiency, it would make the
>>>>>>>> BLAS/LAPACK (see volume 10.5) code "native" to Axiom.
>>>>>>>> .
>>>>>>>> On the theory front I plan to attend the Formal Methods
>>>>>>>> in Mathematics / Lean Together conference, mostly to
>>>>>>>> know how little I know, especially that I need to know.
>>>>>>>> http://www.andrew.cmu.edu/user/avigad/meetings/fomm2020/
>>>>>>>>
>>>>>>>> Tim
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 11/28/19, Jacques Carette <[hidden email]> wrote:
>>>>>>>>> The underlying technology to use for building such an algebra
>>>>>>>>> library
>>>>>>>>> is
>>>>>>>>> documented in the paper " Building on the Diamonds between
>>>>>>>>> Theories:
>>>>>>>>> Theory Presentation Combinators"
>>>>>>>>> http://www.cas.mcmaster.ca/~carette/publications/tpcj.pdf [which
>>>>>>>>> will
>>>>>>>>> also be on the arxiv by Monday, and has been submitted to a
>>>>>>>>> journal].
>>>>>>>>>
>>>>>>>>> There is a rather full-fledged prototype, very well documented at
>>>>>>>>> https://alhassy.github.io/next-700-module-systems/prototype/package-former.html
>>>>>>>>>
>>>>>>>>> (source at https://github.com/alhassy/next-700-module-systems). It
>>>>>>>>> is
>>>>>>>>> literate source.
>>>>>>>>>
>>>>>>>>> The old prototype was hard to find - it is now at
>>>>>>>>> https://github.com/JacquesCarette/MathScheme.
>>>>>>>>>
>>>>>>>>> There is also a third prototype in the MMT system, but it does not
>>>>>>>>> quite
>>>>>>>>> function properly today, it is under repair.
>>>>>>>>>
>>>>>>>>> The paper "A Language Feature to Unbundle Data at Will"
>>>>>>>>> (https://alhassy.github.io/next-700-module-systems/papers/gpce19_a_language_feature_to_unbundle_data_at_will.pdf)
>>>>>>>>>
>>>>>>>>> is also relevant, as it solves a problem with parametrized
>>>>>>>>> theories
>>>>>>>>> (parametrized Categories in Axiom terminology) that all current
>>>>>>>>> systems
>>>>>>>>> suffer from.
>>>>>>>>>
>>>>>>>>> Jacques
>>>>>>>>>
>>>>>>>>> On 2019-11-27 11:47 p.m., Tim Daly wrote:
>>>>>>>>>> The new Sane compiler is also being tested with the Fricas
>>>>>>>>>> algebra code. The compiler knows about the language but
>>>>>>>>>> does not depend on the algebra library (so far). It should be
>>>>>>>>>> possible, by design, to load different algebra towers.
>>>>>>>>>>
>>>>>>>>>> In particular, one idea is to support the "tiny theories"
>>>>>>>>>> algebra from Carette and Farmer. This would allow much
>>>>>>>>>> finer grain separation of algebra and axioms.
>>>>>>>>>>
>>>>>>>>>> This "flexible algebra" design would allow things like the
>>>>>>>>>> Lean theorem prover effort in a more natural style.
>>>>>>>>>>
>>>>>>>>>> Tim
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 11/26/19, Tim Daly <[hidden email]> wrote:
>>>>>>>>>>> The current design and code base (in bookvol15) supports
>>>>>>>>>>> multiple back ends. One will clearly be a common lisp.
>>>>>>>>>>>
>>>>>>>>>>> Another possible design choice is to target the GNU
>>>>>>>>>>> GCC intermediate representation, making Axiom "just
>>>>>>>>>>> another front-end language" supported by GCC.
>>>>>>>>>>>
>>>>>>>>>>> The current intermediate representation does not (yet)
>>>>>>>>>>> make any decision about the runtime implementation.
>>>>>>>>>>>
>>>>>>>>>>> Tim
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 11/26/19, Tim Daly <[hidden email]> wrote:
>>>>>>>>>>>> Jason Gross and Adam Chlipala ("Parsing Parses") developed
>>>>>>>>>>>> a dependently typed general parser for context free grammar
>>>>>>>>>>>> in Coq.
>>>>>>>>>>>>
>>>>>>>>>>>> They used the parser to prove its own completeness.
>>>>>>>>>>>>
>>>>>>>>>>>> Unfortunately Spad is not a context-free grammar.
>>>>>>>>>>>> But it is an interesting thought exercise to consider
>>>>>>>>>>>> an "Axiom on Coq" implementation.
>>>>>>>>>>>>
>>>>>>>>>>>> Tim
>>>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Reply | Threaded
Open this post in threaded view
|

Re: Axiom musings...

Tim Daly
WHY PROVE AXIOM CORRECT (SANE)?

Historically, Axiom credits CLU, the Cluster language by
Barbara Liskov, with the essential ideas behind the Spad
language. Barbara gave a talk (a partial transcript below)
that gives the rationale behind the ``where clause'' used by
Spad.

She talks about the limits of the compile-time capability.
In particular, she says:

   To go further, where we would say that T,
   in addition, has to be an equality relation, that requires
   much more sophisticated techniques that, even today, are
   beyond the capabilities of the compiler.

Showing that the ``equal'' function satisfies the equality
relation is no longer ``beyond the capabilities of the compiler''.
We have the required formalisms and mechanisms to
prove properties at compile time.

The SANE effort is essentially trying to push compile-time
checking further: for categories that use ``equal'', we
prove that the equal function implements equality.
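
Concretely, the obligation the compiler would attach to an
``equal'' signature looks something like this (a Lean 4 sketch;
the name ProvedEq and its packaging are mine, not SANE syntax):

  structure ProvedEq (T : Type) where
    eq    : T → T → Bool
    refl  : ∀ x, eq x x = true
    symm  : ∀ x y, eq x y = true → eq y x = true
    trans : ∀ x y z, eq x y = true → eq y z = true → eq x z = true

A domain exporting ``equal'' would have to supply not just the
Bool-valued function but also the three proof fields, and
instantiation would fail at compile time if it could not.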

I strongly encourage you to watch her video.

Tim

===========================================
Barbara Liskov
May 2012
MIT CSAIL
Programming the Turing Machine
https://www.youtube.com/watch?v=ibRar7sWulM

POLYMORPHISM

We don't just want a set, we want polymorphism or
generics, as they are called today. We wanted to
have a generic set which was parameterized by type
so you could instantiate it as:

Set = [T:type] create, insert,...
  % representation for Set object
  % implementation of Set operations
  Set

Set[int] s := Set[int]$create()
Set[int]$insert(s,3)

We wanted a static solution to this problem. The
problem is, not every type makes sense as a parameter
to Set of T. For sets, per se, you need an equality
relation. If it had been a sorted set we would have
some ordering relation. And a type that didn't have
one of those things would not have been a legitimate
parameter. We needed a way of expressing that in a
compile-time, checkable manner. Otherwise we would
have had to resort to runtime checking.

Our solution was

Set = [T:  ] create, insert,...
  T equal: (T,T) (bool)


Our solution, what we call the ``where clause''. So we
added this to the header. The ``where clause'' tells you
what operations the parameter type has to have.

If you have the ``where'' clause you can do the static
checking because when you instantiate, when you provide
an actual type, the compiler can check that the type has
the operations that are required. And then, when you write
the implementation of Set the compiler knows it's ok to
call those operations because you can guarantee they are
actually there when you get around to running.

Of course, you notice that there's just syntax here; there's
no semantics.

As I'm sure you all know, compile-time type checking is
basically a proof technique of a very limited sort and
this was about as far as we can push what you could get out of the
static analysis. To go further, where we would say that T,
in addition, has to be an equality relation, that requires
much more sophisticated techniques that, even today, are
beyond the capabilities of the compiler.




On 3/24/20, Tim Daly <[hidden email]> wrote:

> I've spent entirely too much time studying the legal issues
> of free and open source software. There are copyright,
> trademark, and intellectual property laws. I have read
> several books, listened to lectures, and read papers on
> the subject. I've spoken to lawyers about it. I've even
> been required, by law, to coerce people I respect.
> You would think it was all perfectly clear. It isn't.
>
> The most entertaining and enlightening lectures were
> by Robert Lefkowitz at OSCON 2004. His talk is
> "The Semasiology of Open Source", which sounds
> horrible but I assure you, this is a real treat.
>
> THE THESIS
>
> Semasiology, n. The science of meanings or
> sense development (of words); the explanation
> of the development and changes of the meanings
> of words. Source: Webster's Revised Unabridged
> Dictionary, © 1996, 1998 MICRA, Inc.
>
> "Open source doesn't just mean access to the
> source code." So begins the Open Source Definition.
> What then, does access to the source code mean?
> Seen through the lens of an Enterprise user, what
> does open source mean? When is (or isn't) it
> significant? And a catalogue of open source
> related arbitrage opportunities.
>
> http://origin.conversationsnetwork.org/Robert%20Lefkowitz%20-%20The%20Semasiology%20of%20Open%20Source.mp3
>
> Computer source code has words and sentence
> structure like actual prose or even poetry. Writing
> code for the computer is like writing an essay. It
> should be written for other people to read,
> understand and modify. These are some of the
> thoughts behind literate programming proposed
> by Donald Knuth. This is also one of the ideas
> behind Open Source.
>
>  THE ANTITHESIS
>
> "Open Source" is a phrase like "Object Oriented"
> - weird at first, but when it became popular, the
> meaning began to depend on the context of the
> speaker or listener. "Object Oriented" meant that
> PERL, C++, Java, Smalltalk, Basic and the newest
> version of Cobol are all "Object Oriented" - for some
> specific definition of "Object Oriented". Similar is
> the case of the phrase "Open Source".
>
> In Part I, Lefkowitz talked about the shift of the
> meaning of "Open Source" away from any reference
> to the actual "source code," and more towards other
> phases of the software development life cycle. In
> Part II, he returns to the consideration of the
> relationship between "open source" and the actual
> "source code," and reflects upon both the way
> forward and the road behind, drawing inspiration
> from Charlemagne, King Louis XIV, Donald Knuth,
> and others.
>
> http://origin.conversationsnetwork.org/ITC.OSCON05-RobertLefkowitz-2005.08.03.mp3
>
> THE SYNTHESIS
>
> In a fascinating synthesis, Robert “r0ml” Lefkowitz
> polishes up his exposition on the evolving meaning
> of the term ‘open source’. This intellectual joy-ride
> draws on some of the key ideas in artificial intelligence
> to probe the role of language, meaning and context
> in computing and the software development process.
> Like Wittgenstein’s famous thought experiment, the
> open source ‘beetle in a box’ can represent different
> things to different people, bearing interesting fruit for
> philosophers and software creators alike.
>
> http://itc.conversationsnetwork.org/audio/download/itconversations-1502.mp3
>
>
> On 3/7/20, Tim Daly <[hidden email]> wrote:
>> I've pushed the latest version of Axiom. The plan, followed
>> so far, is to push once a month on the 7th.
>>
>> After some chat room interactions it was brought home
>> again that the proof world really does not seem to like the
>> idea of proving programs correct. And, given that it was one
>> of the main Axiom goals and a point of friction during the fork,
>> the computer algebra world does not like the idea of proving
>> programs correct either.
>>
>> So the idea of "computational mathematics", which includes
>> both disciplines (as well as type theory) seems still a long
>> way off.
>>
>> Nevertheless, the primary change in these past and future
>> updates is focused on merging proof and computer algebra.
>>
>> Proof systems are able to split the problem of creating a
>> proof and the problem of verifying a proof, which is much
>> cheaper. Ideally the proof checker would run on verified
>> hardware so the proof is checked "down to the metal".
>>
>> I have a background in Field Programmable Gate Arrays
>> (FPGAs) as I tried to do a startup using them. So now I'm
>> looking at creating a hardware proof checker using a
>> dedicated instruction set, one designed to be verified.
>> New CPUs used in data centers (not yet available to us
>> mortals) have built-in FPGAs so it would be possible to
>> "side-load" a proof of a program to be checked while the
>> program is run. I have the FPGA and am doing a gate-level
>> special instruction design for such a proof checker.
>>
>>
>> On 2/7/20, Tim Daly <[hidden email]> wrote:
>>> As a mathematician, it is difficult to use a system like Axiom,
>>> mostly because it keeps muttering about Types. If you're not
>>> familiar with type theory (most mathematicians aren't) then it
>>> seems pointless and painful.
>>>
>>> So Axiom has a steep learning curve.
>>>
>>> As a mathematician with an algorithmic approach, it is difficult
>>> to use a system like Axiom, mostly because you have to find
>>> or create "domains" or "packages", understand categories
>>> with their inheritance model, and learn a new language with
>>> a painful compiler always complaining about types.
>>>
>>> So Axiom has a steep learning curve.
>>>
>>> The Sane version of Axiom requires knowing the mathematics.
>>> It also assumes a background in type theory, inductive logic,
>>> homotopy type theory, ML (meta-language, not machine
>>> learning (yet)), interactive theorem proving, kernels, tactics,
>>> and tacticals. Also assumed is knowledge of specification languages,
>>> Hoare triples, proof techniques, soundness, and completeness.
>>> Oh, and there is a whole new syntax and semantics added to
>>> specify definitions, axioms, and theorems, not to mention whole
>>> libraries of the same.
>>>
>>> So Axiom Sane has a steep learning curve.
>>>
>>> I've taken 10 courses at CMU and spent the last 4-5 years
>>> learning to read the leading edge literature (also known
>>> as "greek studies", since every paper has pages of greek).
>>>
>>> I'm trying to unify computer algebra and proof theory into a
>>> "computational mathematics" framework. I suspect that the only
>>> way this system will ever be useful is after Universities have a
>>> "Computational Mathematics" major course of study and degree.
>>>
>>> Creating a new department is harder than creating Axiom Sane
>>> because, you know, ... people.
>>>
>>> I think such a department is inevitable given the deep and wide
>>> impact of computers, just not in my lifetime. That's ok. When I
>>> started programming there was no computer science degree.
>>>
>>> Somebody has to be the first lemming over the cliff.
>>>
>>> Tim
>>>
>>> On 1/9/20, Tim Daly <[hidden email]> wrote:
>>>> When Axiom Sane is paired with a proof checker (e.g. with Lean)
>>>> there is a certain amount of verification that is involved.
>>>>
>>>> Axiom will provide proofs (presumably validated by Lean) for its
>>>> algorithms. Ideally, when a computation is requested from Lean
>>>> for a GCD, the result as well as a proof of the GCD algorithm is
>>>> returned. Lean can then verify that the proof is valid. But it is
>>>> computationally more efficient if Axiom and Lean use a cryptographic
>>>> hash, such as SHA1. That way the proof doesn't need to be
>>>> 'reproven', only a hash computation over the proof text needs to
>>>> be performed. Hashes are blazingly fast. This allows proofs to be
>>>> exchanged without re-running the proof mechanism. Since a large
>>>> computation request from Lean might involve many algorithms
>>>> there would be considerable overhead to recompute each proof.
>>>> A hash simplifies the issue yet provides proof integrity.
>>>>
>>>> Tim
>>>>
>>>>
>>>> On 1/9/20, Tim Daly <[hidden email]> wrote:
>>>>> Provisos.... that is, 'formula SUCH pre/post-conditions'
>>>>>
>>>>> A computer algebra system ought to know and ought to provide
>>>>> information about the domain and range of a resulting formula.
>>>>> I've been pushing this effort since the 1980s (hence the
>>>>> SuchThat domain).
>>>>>
>>>>> It turns out that computing with, carrying, and combining this
>>>>> information is difficult if not impossible in the current system.
>>>>> The information isn't available and isn't computed. In that sense,
>>>>> the original Axiom system is 'showing its age'.
>>>>>
>>>>> In the Sane implementation the information is available. It is
>>>>> part of the specification and part of the proof steps. With a
>>>>> careful design it will be possible to provide provisos for each
>>>>> given result that are carried with the result for use in further
>>>>> computation.
>>>>>
>>>>> This raises interesting questions to be explored. For example,
>>>>> if the formula is defined over an interval, how is the interval
>>>>> arithmetic handled?
>>>>>
>>>>> Exciting research ahead!
>>>>>
>>>>> Tim
>>>>>
>>>>>
>>>>>
>>>>> On 1/3/20, Tim Daly <[hidden email]> wrote:
>>>>>> Trusted Kernel... all the way to the metal.
>>>>>>
>>>>>> While building a trusted computer algebra system, the
>>>>>> SANE version of Axiom, I've been looking at questions of
>>>>>> trust at all levels.
>>>>>>
>>>>>> One of the key tenets (the de Bruijn principle) calls for a
>>>>>> trusted kernel through which all other computations must
>>>>>> pass. Coq, Lean, and other systems do this. They base
>>>>>> their kernel on logic like the Calculus of Constructions or
>>>>>> something similar.
>>>>>>
>>>>>> Andrej Bauer has been working on a smaller kernel (a
>>>>>> nucleus) that separates the trust from the logic. The rules
>>>>>> for the logic can be specified as needed but checked by
>>>>>> the nucleus code.
>>>>>>
>>>>>> I've been studying Field Programmable Gate Arrays (FPGA)
>>>>>> that allow you to create your own hardware in a C-like
>>>>>> language (Verilog). It allows you to check the chip you build
>>>>>> all the way down to the transistor states. You can create
>>>>>> things as complex as a whole CPU or as simple as a trusted
>>>>>> nucleus. (youtube: Building a CPU on an FPGA). ACL2 has a
>>>>>> history of verifying hardware logic.
>>>>>>
>>>>>> It appears that, assuming I can understand Bauer's
>>>>>> Andromeda system, it would be possible and not that hard
>>>>>> to implement a trusted kernel on an FPGA the size and
>>>>>> form factor of a USB stick.
>>>>>>
>>>>>> Trust "down to the metal".
>>>>>>
>>>>>> Tim
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 12/15/19, Tim Daly <[hidden email]> wrote:
>>>>>>> Progress is happening on the new Sane Axiom compiler.
>>>>>>>
>>>>>>> Recently I've been musing about methods to insert axioms
>>>>>>> into categories so they could be inherited like signatures.
>>>>>>> At the moment I've been thinking about adding axioms in
>>>>>>> the same way that signatures are written, adding them to
>>>>>>> the appropriate categories.
>>>>>>>
>>>>>>> But this is an interesting design question.
>>>>>>>
>>>>>>> Axiom already has a mechanism for inheriting signatures
>>>>>>> from categories. That is, we can get a plus signature from,
>>>>>>> say, the Integer category.
>>>>>>>
>>>>>>> Suppose we follow the same pattern. Currently Axiom
>>>>>>> inherits certain so-called "attributes", such as
>>>>>>> ApproximateAttribute,
>>>>>>> which implies that the results are only approximate.
>>>>>>>
>>>>>>> We could adapt the same mechanism to inherit the Transitive
>>>>>>> property by defining it in its own category. In fact, if we
>>>>>>> follow Carette and Farmer's "tiny theories" architecture,
>>>>>>> where each property has its own inheritable category,
>>>>>>> we can "mix and match" the axioms at will.
>>>>>>>
>>>>>>> An "axiom" category would also export a function. This function
>>>>>>> would essentially be a "tactic" used in a proof. It would modify
>>>>>>> the proof step by applying the function to the step.
>>>>>>>
>>>>>>> Theorems would have the same structure.
>>>>>>>
>>>>>>> This allows theorems to be constructed at run time (since
>>>>>>> Axiom supports "First Class Dynamic Types").
>>>>>>>
>>>>>>> In addition, this design can be "pushed down" into the Spad
>>>>>>> language so that Spad statements (e.g. assignment) had
>>>>>>> proof-related properties. A range such as [1..10] would
>>>>>>> provide explicit bounds in a proof "by language definition".
>>>>>>> Defining the logical properties of language statements in
>>>>>>> this way would make it easier to construct proofs since the
>>>>>>> invariants would be partially constructed already.
>>>>>>>
>>>>>>> This design merges the computer algebra inheritance
>>>>>>> structure with the proof of algorithms structure, all under
>>>>>>> the same mechanism.
>>>>>>>
>>>>>>> Tim
>>>>>>>
>>>>>>> On 12/11/19, Tim Daly <[hidden email]> wrote:
>>>>>>>> I've been reading Stephen Kell's (Univ of Kent
>>>>>>>> https://www.cs.kent.ac.uk/people/staff/srk21/) essay on
>>>>>>>> Seven deadly sins of talking about “types”
>>>>>>>> https://www.cs.kent.ac.uk/people/staff/srk21//blog/2014/10/07/
>>>>>>>>
>>>>>>>> He raised an interesting idea toward the end of the essay
>>>>>>>> that type-checking could be done outside the compiler.
>>>>>>>>
>>>>>>>> I can see a way to do this in Axiom's Sane compiler.
>>>>>>>> It would be possible to run a program over the source code
>>>>>>>> to collect the information and write a stand-alone type
>>>>>>>> checker. This "unbundles" type checking and compiling.
>>>>>>>>
>>>>>>>> Taken further I can think of several other kinds of checkers
>>>>>>>> (aka 'linters') that could be unbundled.
>>>>>>>>
>>>>>>>> It is certainly something to explore.
>>>>>>>>
>>>>>>>> Tim
>>>>>>>>
>>>>>>>>
>>>>>>>> On 12/8/19, Tim Daly <[hidden email]> wrote:
>>>>>>>>> The Axiom Sane compiler is being "shaped by the hammer
>>>>>>>>> blows of reality", to coin a phrase.
>>>>>>>>>
>>>>>>>>> There are many goals. One of the primary goals is creating a
>>>>>>>>> compiler that can be understood, maintained, and modified.
>>>>>>>>>
>>>>>>>>> So the latest changes involved adding multiple index files.
>>>>>>>>> These are documentation (links to where terms are mentioned
>>>>>>>>> in the text), code (links to the implementation of things),
>>>>>>>>> error (links to where errors are defined), signatures (links to
>>>>>>>>> the signatures of lisp functions), figures (links to figures),
>>>>>>>>> and separate category, domain, and package indexes.
>>>>>>>>>
>>>>>>>>> The tikz package is now used to create "railroad diagrams"
>>>>>>>>> of syntax (ala, the PASCAL report). The implementation of
>>>>>>>>> those diagrams follows immediately. Collectively these will
>>>>>>>>> eventually define at least the syntax of the language. In the
>>>>>>>>> ideal, changing the diagram would change the code but I'm
>>>>>>>>> not that clever.
>>>>>>>>>
>>>>>>>>> Reality shows up with the current constraint that the
>>>>>>>>> compiler should accept the current Spad language as
>>>>>>>>> closely as possible. Of course, plans are to include new
>>>>>>>>> constructs (e.g. hypothesis, axiom, specification, etc)
>>>>>>>>> but these are being postponed until "syntax complete".
>>>>>>>>>
>>>>>>>>> All parse information is stored in a parse object, which
>>>>>>>>> is a CLOS object (and therefore a Common Lisp type).
>>>>>>>>> Fields within the parse object, e.g. variables, are also
>>>>>>>>> CLOS objects (and therefore a Common Lisp type).
>>>>>>>>> It's types all the way down.
>>>>>>>>>
>>>>>>>>> These types are being used as 'signatures' for the
>>>>>>>>> lisp functions. The goal is to be able to type-check the
>>>>>>>>> compiler implementation as well as the Sane language.
>>>>>>>>>
>>>>>>>>> The parser is designed to "wrap around" so that the
>>>>>>>>> user-level output of a parse should be the user-level
>>>>>>>>> input (albeit in a 'canonical' form). This "mirror effect"
>>>>>>>>> should make it easy to see that the parser properly
>>>>>>>>> parsed the user input.
>>>>>>>>>
>>>>>>>>> The parser is "first class" so it will be available at
>>>>>>>>> runtime as a domain allowing Spad code to construct
>>>>>>>>> Spad code.
>>>>>>>>>
>>>>>>>>> One plan, not near implementation, is to "unify" some
>>>>>>>>> CLOS types with the Axiom types (e.g. String). How
>>>>>>>>> this will happen is still in the land of design. This would
>>>>>>>>> "ground" Spad in lisp, making them co-equal.
>>>>>>>>>
>>>>>>>>> Making lisp "co-equal" is a feature, especially as Spad is
>>>>>>>>> really just a domain-specific language in lisp. Lisp
>>>>>>>>> functions (with CLOS types as signatures) would be
>>>>>>>>> available for implementing Spad functions. This not
>>>>>>>>> only improves the efficiency, it would make the
>>>>>>>>> BLAS/LAPACK (see volume 10.5) code "native" to Axiom.
>>>>>>>>>
>>>>>>>>> On the theory front I plan to attend the Formal Methods
>>>>>>>>> in Mathematics / Lean Together conference, mostly to
>>>>>>>>> know how little I know, especially that I need to know.
>>>>>>>>> http://www.andrew.cmu.edu/user/avigad/meetings/fomm2020/
>>>>>>>>>
>>>>>>>>> Tim
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 11/28/19, Jacques Carette <[hidden email]> wrote:
>>>>>>>>>> The underlying technology to use for building such an algebra
>>>>>>>>>> library
>>>>>>>>>> is
>>>>>>>>>> documented in the paper "Building on the Diamonds between
>>>>>>>>>> Theories:
>>>>>>>>>> Theory Presentation Combinators"
>>>>>>>>>> http://www.cas.mcmaster.ca/~carette/publications/tpcj.pdf [which
>>>>>>>>>> will
>>>>>>>>>> also be on the arxiv by Monday, and has been submitted to a
>>>>>>>>>> journal].
>>>>>>>>>>
>>>>>>>>>> There is a rather full-fledged prototype, very well documented at
>>>>>>>>>> https://alhassy.github.io/next-700-module-systems/prototype/package-former.html
>>>>>>>>>>
>>>>>>>>>> (source at https://github.com/alhassy/next-700-module-systems).
>>>>>>>>>> It
>>>>>>>>>> is
>>>>>>>>>> literate source.
>>>>>>>>>>
>>>>>>>>>> The old prototype was hard to find - it is now at
>>>>>>>>>> https://github.com/JacquesCarette/MathScheme.
>>>>>>>>>>
>>>>>>>>>> There is also a third prototype in the MMT system, but it does
>>>>>>>>>> not
>>>>>>>>>> quite
>>>>>>>>>> function properly today; it is under repair.
>>>>>>>>>>
>>>>>>>>>> The paper "A Language Feature to Unbundle Data at Will"
>>>>>>>>>> (https://alhassy.github.io/next-700-module-systems/papers/gpce19_a_language_feature_to_unbundle_data_at_will.pdf)
>>>>>>>>>>
>>>>>>>>>> is also relevant, as it solves a problem with parametrized
>>>>>>>>>> theories
>>>>>>>>>> (parametrized Categories in Axiom terminology) that all current
>>>>>>>>>> systems
>>>>>>>>>> suffer from.
>>>>>>>>>>
>>>>>>>>>> Jacques
>>>>>>>>>>
>>>>>>>>>> On 2019-11-27 11:47 p.m., Tim Daly wrote:
>>>>>>>>>>> The new Sane compiler is also being tested with the Fricas
>>>>>>>>>>> algebra code. The compiler knows about the language but
>>>>>>>>>>> does not depend on the algebra library (so far). It should be
>>>>>>>>>>> possible, by design, to load different algebra towers.
>>>>>>>>>>>
>>>>>>>>>>> In particular, one idea is to support the "tiny theories"
>>>>>>>>>>> algebra from Carette and Farmer. This would allow much
>>>>>>>>>>> finer grain separation of algebra and axioms.
>>>>>>>>>>>
>>>>>>>>>>> This "flexible algebra" design would allow things like the
>>>>>>>>>>> Lean theorem prover effort in a more natural style.
>>>>>>>>>>>
>>>>>>>>>>> Tim
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 11/26/19, Tim Daly <[hidden email]> wrote:
>>>>>>>>>>>> The current design and code base (in bookvol15) supports
>>>>>>>>>>>> multiple back ends. One will clearly be a common lisp.
>>>>>>>>>>>>
>>>>>>>>>>>> Another possible design choice is to target the GNU
>>>>>>>>>>>> GCC intermediate representation, making Axiom "just
>>>>>>>>>>>> another front-end language" supported by GCC.
>>>>>>>>>>>>
>>>>>>>>>>>> The current intermediate representation does not (yet)
>>>>>>>>>>>> make any decision about the runtime implementation.
>>>>>>>>>>>>
>>>>>>>>>>>> Tim
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On 11/26/19, Tim Daly <[hidden email]> wrote:
>>>>>>>>>>>>> Jason Gross and Adam Chlipala ("Parsing Parses") developed
>>>>>>>>>>>>> a dependently typed general parser for context free grammar
>>>>>>>>>>>>> in Coq.
>>>>>>>>>>>>>
>>>>>>>>>>>>> They used the parser to prove its own completeness.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Unfortunately Spad is not a context-free grammar.
>>>>>>>>>>>>> But it is an interesting thought exercise to consider
>>>>>>>>>>>>> an "Axiom on Coq" implementation.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Reply | Threaded
Open this post in threaded view
|

Re: Axiom musings...

Tim Daly
Time for another update.

The latest Intel processors, available only to data centers
so far, have a built-in FPGA. This allows you to design
your own circuits and have them loaded "on the fly",
running in parallel with the CPU.

I bought a Lattice ICEstick FPGA development board. For
the first time there are open source tools that support it so
it is a great test bench for ideas and development. It has a
USB-stick form factor so it can easily be moved to any PC.
(https://www.latticesemi.com/products/developmentboardsandkits/icestick)

I also bought a large Intel Cyclone FPGA development board.
(http://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&No=836)
which has 2 embedded ARM processors. Unfortunately
the tools (which are freely available) are not open source.
It has sufficient size and power to do anything.


I've got 2 threads of work in progress, both of which
involve FPGAs (Field Programmable Gate Arrays).

Thread 1

The first thread involves proving programs correct. Once
a proof has been made, checking it is far easier than
creating it.
If code is shipped with a proof, the proof can be loaded into
an FPGA running a proof-checker which verifies the program
in parallel with running the code on the CPU.

I am researching the question of writing a proof checker that
runs on an FPGA, thus verifying the code "down to the metal".
The Lean proof checker is the current target.

The idea is to make "Oracle" algorithms that, because they
are proven correct and verified at runtime, can be trusted
by other mathematical software (e.g. Lean, Coq, Agda)
when used in proofs.
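
A minimal Common Lisp sketch of that runtime contract, reusing
the proof-hash idea from the 1/9/20 note quoted earlier (the
structure and function names here are hypothetical, and the
ironclad library is assumed for SHA1):

  ;; (ql:quickload "ironclad")   ; hashing library, assumed
  (defun proof-hash (text)
    "SHA1 of the proof text, as a hex string."
    (ironclad:byte-array-to-hex-string
     (ironclad:digest-sequence :sha1
       (ironclad:ascii-string-to-byte-array text))))

  (defstruct oracle
    code          ; the compiled algorithm (a function)
    proof-text    ; the proof shipped alongside it
    proof-hash)   ; hash recorded when the proof was checked

  (defun trusted-call (o &rest args)
    ;; re-hash instead of re-prove; reject a tampered proof
    (assert (string= (proof-hash (oracle-proof-text o))
                     (oracle-proof-hash o)))
    (apply (oracle-code o) args))

The FPGA checker replaces the assert with hardware that
validates the proof itself, not merely its hash.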

Thread 2


The second thread involves arithmetic. Axiom currently ships
with numeric routines (BLAS and LAPACK, see bookvol10.5).
These routines have a known set of numeric failures such as
cancellation, underflow, and scaling.
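
Absorption and cancellation are easy to demonstrate; a one-line
Common Lisp example (IEEE doubles near 1d16 are spaced 2 apart,
so the added 1 is absorbed, then cancelled away entirely):

  (- (+ 1d16 1d0) 1d16)   ; => 0.0d0, but the exact answer is 1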

John Gustafson has designed a 'unum' numeric format that can
eliminate many of these errors. (See
Gustafson, John "The End of Error" CRC Press 2015
https://www.amazon.com/End-Error-Computing-Chapman-Computational/dp/1482239868/ref=sr_1_1?dchild=1&keywords=gustafson+the+end+of+error&qid=1593685423&sr=8-1)

The research goal is to implement Axiom's floating-point
arithmetic that can be offloaded onto an FPGA implementing
the unum format. Such a system would radically simplify
the implementation of BLAS and LAPACK as most of the
errors can't occur. The impact would be similar to using
multi-precision integer arithmetic, only now it's floating-point.
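
For reference, a minimal Common Lisp sketch of the fields a
unum carries, per Gustafson's book (a real implementation packs
them into variable-width bit fields, not a struct):

  (defstruct unum
    sign       ; sign bit
    exponent   ; exponent bits
    fraction   ; fraction bits
    ubit       ; 1 = inexact: the true value lies strictly between
               ;     this representable value and the next one
    esize      ; self-descriptive width of the exponent field
    fsize)     ; self-descriptive width of the fraction field

The ubit is the key: arithmetic can report that a result is an
interval rather than silently committing to a rounded point value.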

SANE, the greater goal.

The Axiom SANE compiler / interpreter can use both of
these tools to implement trusted mathematical software.
It's a long, ambitious research effort but even if only pieces
of it succeed, it changes computational mathematics.

Tim

"A person's reach should exceed their grasp,
or what's a computer for?"  (misquoting Robert Browning)

(https://www.quotetab.com/quote/by-robert-browning/ah-but-a-mans-reach-should-exceed-his-grasp-or-whats-a-heaven-for)





Reply | Threaded
Open this post in threaded view
|

Re: Axiom musings...

Tim Daly
Richard Hamming gave a great talk, "You and Your Research":
https://www.youtube.com/watch?v=a1zDuOPkMSw

His big question is:

"What is the most important problem in your field
and why aren't you working on it?"

To my mind, the most important problem in the field of
computational mathematics is grounding computer
algebra in proofs.

Computer mathematical algorithms that "maybe,
possibly, give correct answers sometimes" are a problem.
Indeed, for computer algebra, it is the most important
problem. We need proven algorithms.
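
Concretely, "proven" means an algorithm ships with machine-checked
statements of its specification, e.g. for gcd (a Lean 4 sketch;
the two lemmas come from Mathlib):

  example (m n : Nat) : Nat.gcd m n ∣ m ∧ Nat.gcd m n ∣ n :=
    ⟨Nat.gcd_dvd_left m n, Nat.gcd_dvd_right m n⟩

Axiom's gcd would carry the analogous certificate.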

New algorithms, better graphics, and better documentation
are all "nice to have" but, as Hamming would say,
they are not "the most important problem".

Tim



On 7/2/20, Tim Daly <[hidden email]> wrote:

> Time for another update.
>
> The latest Intel processors, available only to data centers
> so far, have a built-in FPGA. This allows you to design
> your own circuits and have them loaded "on the fly",
> running in parallel with the CPU.
>
> I bought a Lattice ICEstick FPGA development board. For
> the first time there are open source tools that support it so
> it is a great test bench for ideas and development. It has a
> USB-stick form factor so it can easily be moved to any PC.
> (https://www.latticesemi.com/products/developmentboardsandkits/icestick)
>
> I also bought a large Intel Cyclone FPGA development board.
> (http://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&No=836)
> which has 2 embedded ARM processors. Unfortunately
> the tools (which are freely available) are not open source.
> It has sufficient size and power to do anything.
>
>
> I've got 2 threads of work in progress, both of which
> involve FPGAs (Field Programmable Gate Arrays).
>
> Thread 1
>
> The first thread involves proving programs correct. Once
> a proof has been made, checking it is far easier than
> creating it.
> If code is shipped with a proof, the proof can be loaded into
> an FPGA running a proof-checker which verifies the program
> in parallel with running the code on the CPU.
>
> I am researching the question of writing a proof checker that
> runs on an FPGA, thus verifying the code "down to the metal".
> The Lean proof checker is the current target.
>
> The idea is to make "Oracle" algorithms that, because they
> are proven correct and verified at runtime, can be trusted
> by other mathematical software (e.g. Lean, Coq, Agda)
> when used in proofs.
>
> Thread 2
>
>
> The second thread involves arithmetic. Axiom currently ships
> with numeric routines (BLAS and LAPACK, see bookvol10.5).
> These routines have a known set of numeric failures such as
> cancellation, underflow, and scaling.
>
> John Gustafson has designed a 'unum' numeric format that can
> eliminate many of these errors. (See
> Gustafson, John "The End of Error" CRC Press 2015
> https://www.amazon.com/End-Error-Computing-Chapman-Computational/dp/1482239868/ref=sr_1_1?dchild=1&keywords=gustafson+the+end+of+error&qid=1593685423&sr=8-1)
>
> The research goal is to implement Axiom's floating-point
> arithmetic that can be offloaded onto an FPGA implementing
> the unum format. Such a system would radically simplify
> the implementation of BLAS and LAPACK as most of the
> errors can't occur. The impact would be similar to using
> multi-precision integer arithmetic, only now it's floating-point.
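>
> As a minimal illustration of the failure mode (absorption,
> which then shows up as catastrophic cancellation), a sketch
> in Common Lisp; it demonstrates the error, not the unum fix:
>
>   (defun absorption-demo ()
>     "The ulp of a single-float near 1.0e8 is 8.0, so adding
>      1.0 is absorbed and the subtraction cannot recover it."
>     (let ((big 1.0e8))
>       (- (+ big 1.0) big)))
>
>   ;; (absorption-demo) => 0.0, where exact arithmetic gives 1.0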
>
> SANE, the greater goal.
>
> The Axiom SANE compiler / interpreter can use both of
> these tools to implement trusted mathematical software.
> It's a long, ambitious research effort but even if only pieces
> of it succeed, it changes computational mathematics.
>
> Tim
>
> "A person's reach should exceed their grasp,
> or what's a computer for?"  (misquoting Robert Browning)
>
> (https://www.quotetab.com/quote/by-robert-browning/ah-but-a-mans-reach-should-exceed-his-grasp-or-whats-a-heaven-for)
>
>
>
>
> On 6/16/20, Tim Daly <[hidden email]> wrote:
>> WHY PROVE AXIOM CORRECT (SANE)?
>>
>> Historically, Axiom credits CLU, the Cluster language by
>> Barbara Liskov, with the essential ideas behind the Spad
>> language. Barbara gave a talk (a partial transcript below)
>> that gives the rationale behind the ``where clause'' used by
>> Spad.
>>
>> She talks about the limits of the compile-time capability.
>> In particular, she says:
>>
>>    To go further, where we would say that T,
>>    in addition, has to be an equality relation, that requires
>>    much more sophisticated techniques that, even today, are
>>    beyond the capabilities of the compiler.
>>
>> Showing that the ``equal'' function satisfies the equality
>> relation is no longer ``beyond the capabilities of the compiler''.
>> We have the required formalisms and mechanisms to
>> prove properties at compile time.
>>
>> The SANE effort is essentially trying to push compile
>> time checking into proving that, for categories that use
>> ``equal'', we prove that the equal function implements
>> equality.
>>
>> I strongly encourage you to watch her video.
>>
>> Tim
>>
>> ===========================================
>> Barbara Liskov
>> May 2012
>> MIT CSAIL
>> Programming the Turing Machine
>> https://www.youtube.com/watch?v=ibRar7sWulM
>>
>> POLYMORPHISM
>>
>> We don't just want a set, we want polymorphism or
>> generics, as they are called today. We wanted to
>> have a generic set which was parameterized by type
>> so you could instantiate it as:
>>
>> Set = [T:type] create, insert,...
>>   % representation for Set object
>>   % implementation of Set operations
>>   Set
>>
>> Set[int] s := Set[int]$create()
>> Set[int]$insert(s,3)
>>
>> We wanted a static solution to this problem. The
>> problem is, not every type makes sense as a parameter
>> to Set of T. For sets, per se, you need an equality
>> relation. If it had been a sorted set we would have
>> some ordering relation. And a type that didn't have
>> one of those things would not have been a legitimate
>> parameter. We needed a way of expressing that in a
>> compile-time, checkable manner. Otherwise we would
>> have had to resort to runtime checking.
>>
>> Our solution was
>>
>> Set = [T:  ] create, insert,...
>>   T equal: (T,T) (bool)
>>
>>
>> Our solution, what we call the ``where clause''. So we
>> added this to the header. The ``where clause'' tells you
>> what operations the parameter type has to have.
>>
>> If you have the ``where'' clause you can do the static
>> checking because when you instantiate, when you provide
>> an actual type, the compiler can check that the type has
>> the operations that are required. And then, when you write
>> the implementation of Set the compiler knows it's ok to
>> call those operations because you can guarantee they are
>> actually there when you get around to running.
>>
>> Of course, you notice that there's just syntax here; there's
>> no semantics.
>>
>> As I'm sure you all know, compile-time type checking is
>> basically a proof technique of a very limited sort and
>> this was about as far as we can push what you could get out of the
>> static analysis. To go further, where we would say that T,
>> in addition, has to be an equality relation, that requires
>> much more sophisticated techniques that, even today, are
>> beyond the capabilities of the compiler.
>>
>>
>>
>>
>> On 3/24/20, Tim Daly <[hidden email]> wrote:
>>> I've spent entirely too much time studying the legal issues
>>> of free and open source software. There are copyright,
>>> trademark, and intellectual property laws. I have read
>>> several books, listened to lectures, and read papers on
>>> the subject. I've spoken to lawyers about it. I've even
>>> been required, by law, to coerce people I respect.
>>> You would think it was all perfectly clear. It isn't.
>>>
>>> The most entertaining and enlightening lectures were
>>> by Robert Lefkowitz at OSCON 2004. His talk is
>>> "The Semasiology of Open Source", which sounds
>>> horrible but I assure you, this is a real treat.
>>>
>>> THE THESIS
>>>
>>> Semasiology, n. The science of meanings or
>>> sense development (of words); the explanation
>>> of the development and changes of the meanings
>>> of words. Source: Webster's Revised Unabridged
>>> Dictionary, © 1996, 1998 MICRA, Inc.
>>>
>>> "Open source doesn't just mean access to the
>>> source code." So begins the Open Source Definition.
>>> What then, does access to the source code mean?
>>> Seen through the lens of an Enterprise user, what
>>> does open source mean? When is (or isn't) it
>>> significant? And a catalogue of open source
>>> related arbitrage opportunities.
>>>
>>> http://origin.conversationsnetwork.org/Robert%20Lefkowitz%20-%20The%20Semasiology%20of%20Open%20Source.mp3
>>>
>>> Computer source code has words and sentence
>>> structure like actual prose or even poetry. Writing
>>> code for the computer is like writing an essay. It
>>> should be written for other people to read,
>>> understand and modify. These are some of the
>>> thoughts behind literate programming proposed
>>> by Donald Knuth. This is also one of the ideas
>>> behind Open Source.
>>>
>>>  THE ANTITHESIS
>>>
>>> "Open Source" is a phrase like "Object Oriented"
>>> - weird at first, but when it became popular, the
>>> meaning began to depend on the context of the
>>> speaker or listener. "Object Oriented" meant that
>>> PERL, C++, Java, Smalltalk, Basic and the newest
>>> version of Cobol are all "Object Oriented" - for some
>>> specific definition of "Object Oriented". Similar is
>>> the case of the phrase "Open Source".
>>>
>>> In Part I, Lefkowitz talked about the shift of the
>>> meaning of "Open Source" away from any reference
>>> to the actual "source code," and more towards other
>>> phases of the software development life cycle. In
>>> Part II, he returns to the consideration of the
>>> relationship between "open source" and the actual
>>> "source code," and reflects upon both the way
>>> forward and the road behind, drawing inspiration
>>> from Charlemagne, King Louis XIV, Donald Knuth,
>>> and others.
>>>
>>> http://origin.conversationsnetwork.org/ITC.OSCON05-RobertLefkowitz-2005.08.03.mp3
>>>
>>> THE SYNTHESIS
>>>
>>> In a fascinating synthesis, Robert “r0ml” Lefkowitz
>>> polishes up his exposition on the evolving meaning
>>> of the term ‘open source’. This intellectual joy-ride
>>> draws on some of the key ideas in artificial intelligence
>>> to probe the role of language, meaning and context
>>> in computing and the software development process.
>>> Like Wittgenstein’s famous thought experiment, the
>>> open source ‘beetle in a box’ can represent different
>>> things to different people, bearing interesting fruit for
>>> philosophers and software creators alike.
>>>
>>> http://itc.conversationsnetwork.org/audio/download/itconversations-1502.mp3
>>>
>>>
>>> On 3/7/20, Tim Daly <[hidden email]> wrote:
>>>> I've pushed the latest version of Axiom. The plan, followed
>>>> so far, is to push once a month on the 7th.
>>>>
>>>> After some chat room interactions it was brought home
>>>> again that the proof world really does not seem to like the
>>>> idea of proving programs correct. And, given that it was one
>>>> of the main Axiom goals and a point of friction during the fork,
>>>> the computer algebra world does not like the idea of proving
>>>> programs correct either.
>>>>
>>>> So the idea of "computational mathematics", which includes
>>>> both disciplines (as well as type theory) seems still a long
>>>> way off.
>>>>
>>>> Nevertheless, the primary change in these past and future
>>>> updates is focused on merging proof and computer algebra.
>>>>
>>>> Proof systems are able to split the problem of creating a
>>>> proof and the problem of verifying a proof, which is much
>>>> cheaper. Ideally the proof checker would run on verified
>>>> hardware so the proof is checked "down to the metal".
>>>>
>>>> I have a background in Field Programmable Gate Arrays
>>>> (FPGAs) as I tried to do a startup using them. So now I'm
>>>> looking at creating a hardware proof checker using a
>>>> dedicated instruction set, one designed to be verified.
>>>> New CPUs used in data centers (not yet available to us
>>>> mortals) have built-in FPGAs so it would be possible to
>>>> "side-load" a proof of a program to be checked while the
>>>> program is run. I have the FPGA and am doing a gate-level
>>>> special instruction design for such a proof checker.
>>>>
>>>>
>>>> On 2/7/20, Tim Daly <[hidden email]> wrote:
>>>>> As a mathematician, it is difficult to use a system like Axiom,
>>>>> mostly because it keeps muttering about Types. If you're not
>>>>> familiar with type theory (most mathematicians aren't) then it
>>>>> seems pointless and painful.
>>>>>
>>>>> So Axiom has a steep learning curve.
>>>>>
>>>>> As a mathematician with an algorithmic approach, it is difficult
>>>>> to use a system like Axiom, mostly because you have to find
>>>>> or create "domains" or "packages", understand categories
>>>>> with their inheritance model, and learn a new language with
>>>>> a painful compiler always complaining about types.
>>>>>
>>>>> So Axiom has a steep learning curve.
>>>>>
>>>>> The Sane version of Axiom requires knowing the mathematics.
>>>>> It also assumes a background in type theory, inductive logic,
>>>>> homotopy type theory, ML (meta-language, not machine
>>>>> learning (yet)), interactive theorem proving, kernels, tactics,
>>>>> and tacticals. Also assumed is knowledge of specification languages,
>>>>> Hoare triples, proof techniques, soundness, and completeness.
>>>>> Oh, and there is a whole new syntax and semantics added to
>>>>> specify definitions, axioms, and theorems, not to mention whole
>>>>> libraries of the same.
>>>>>
>>>>> So Axiom Sane has a steep learning curve.
>>>>>
>>>>> I've taken 10 courses at CMU and spent the last 4-5 years
>>>>> learning to read the leading edge literature (also known
>>>>> as "greek studies", since every paper has pages of greek).
>>>>>
>>>>> I'm trying to unify computer algebra and proof theory into a
>>>>> "computational mathematics" framework. I suspect that the only
>>>>> way this system will ever be useful is after Universities have a
>>>>> "Computational Mathematics" major course of study and degree.
>>>>>
>>>>> Creating a new department is harder than creating Axiom Sane
>>>>> because, you know, ... people.
>>>>>
>>>>> I think such a department is inevitable given the deep and wide
>>>>> impact of computers, just not in my lifetime. That's ok. When I
>>>>> started programming there was no computer science degree.
>>>>>
>>>>> Somebody has to be the first lemming over the cliff.
>>>>>
>>>>> Tim
>>>>>
>>>>> On 1/9/20, Tim Daly <[hidden email]> wrote:
>>>>>> When Axiom Sane is paired with a proof checker (e.g. with Lean)
>>>>>> there is a certain amount of verification that is involved.
>>>>>>
>>>>>> Axiom will provide proofs (presumably validated by Lean) for its
>>>>>> algorithms. Ideally, when a computation is requested from Lean
>>>>>> for a GCD, the result as well as a proof of the GCD algorithm is
>>>>>> returned. Lean can then verify that the proof is valid. But it is
>>>>>> computationally more efficient if Axiom and Lean use a cryptographic
>>>>>> hash, such as SHA1. That way the proof doesn't need to be
>>>>>> 'reproven', only a hash computation over the proof text needs to
>>>>>> be performed. Hashes are blazingly fast. This allows proofs to be
>>>>>> exchanged without re-running the proof mechanism. Since a large
>>>>>> computation request from Lean might involve many algorithms
>>>>>> there would be considerable overhead to recompute each proof.
>>>>>> A hash simplifies the issue yet provides proof integrity.
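>>>>>>
>>>>>> A minimal sketch of that exchange, assuming the Ironclad
>>>>>> library for the digest (the two function names below are
>>>>>> hypothetical, not an agreed protocol):
>>>>>>
>>>>>>   (ql:quickload :ironclad)
>>>>>>
>>>>>>   (defun proof-hash (proof-text)
>>>>>>     "SHA1 digest of the proof text, as a hex string."
>>>>>>     (ironclad:byte-array-to-hex-string
>>>>>>      (ironclad:digest-sequence
>>>>>>       :sha1 (ironclad:ascii-string-to-byte-array proof-text))))
>>>>>>
>>>>>>   (defun accept-proof-p (proof-text shipped-hash)
>>>>>>     "Accept a previously verified proof without re-running
>>>>>>      the prover when the hashes match."
>>>>>>     (string= (proof-hash proof-text) shipped-hash))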
>>>>>>
>>>>>> Tim
>>>>>>
>>>>>>
>>>>>> On 1/9/20, Tim Daly <[hidden email]> wrote:
>>>>>>> Provisos.... that is, 'formula SUCH pre/post-conditions'
>>>>>>>
>>>>>>> A computer algebra system ought to know and ought to provide
>>>>>>> information about the domain and range of a resulting formula.
>>>>>>> I've been pushing this effort since the 1980s (hence the
>>>>>>> SuchThat domain).
>>>>>>>
>>>>>>> It turns out that computing with, carrying, and combining this
>>>>>>> information is difficult if not impossible in the current system.
>>>>>>> The information isn't available and isn't computed. In that sense,
>>>>>>> the original Axiom system is 'showing its age'.
>>>>>>>
>>>>>>> In the Sane implementation the information is available. It is
>>>>>>> part of the specification and part of the proof steps. With a
>>>>>>> careful design it will be possible to provide provisos for each
>>>>>>> given result that are carried with the result for use in further
>>>>>>> computation.
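>>>>>>>
>>>>>>> One possible shape for such a result, as a sketch only
>>>>>>> (the names are hypothetical, not the SuchThat domain):
>>>>>>>
>>>>>>>   (defstruct (result (:conc-name result-))
>>>>>>>     value      ; the formula or computed value
>>>>>>>     provisos)  ; conditions under which VALUE is valid
>>>>>>>
>>>>>>>   (defun combine (op r1 r2)
>>>>>>>     "Apply OP to two proviso-carrying results; the
>>>>>>>      provisos accumulate alongside the new value."
>>>>>>>     (make-result
>>>>>>>      :value (funcall op (result-value r1) (result-value r2))
>>>>>>>      :provisos (union (result-provisos r1)
>>>>>>>                       (result-provisos r2)
>>>>>>>                       :test #'equal)))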
>>>>>>>
>>>>>>> This raises interesting questions to be explored. For example,
>>>>>>> if the formula is defined over an interval, how is the interval
>>>>>>> arithmetic handled?
>>>>>>>
>>>>>>> Exciting research ahead!
>>>>>>>
>>>>>>> Tim
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 1/3/20, Tim Daly <[hidden email]> wrote:
>>>>>>>> Trusted Kernel... all the way to the metal.
>>>>>>>>
>>>>>>>> While building a trusted computer algebra system, the
>>>>>>>> SANE version of Axiom, I've been looking at questions of
>>>>>>>> trust at all levels.
>>>>>>>>
>>>>>>>> One of the key tenets (the de Bruijn principle) calls for a
>>>>>>>> trusted kernel through which all other computations must
>>>>>>>> pass. Coq, Lean, and other systems do this. They base
>>>>>>>> their kernel on a logic like the Calculus of Constructions or
>>>>>>>> something similar.
>>>>>>>>
>>>>>>>> Andrej Bauer has been working on a smaller kernel (a
>>>>>>>> nucleus) that separates the trust from the logic. The rules
>>>>>>>> for the logic can be specified as needed but checked by
>>>>>>>> the nucleus code.
>>>>>>>>
>>>>>>>> I've been studying Field Programmable Gate Arrays (FPGA)
>>>>>>>> that allow you to create your own hardware in a C-like
>>>>>>>> language (Verilog). It allows you to check the chip you build
>>>>>>>> all the way down to the transistor states. You can create
>>>>>>>> things as complex as a whole CPU or as simple as a trusted
>>>>>>>> nucleus. (youtube: Building a CPU on an FPGA). ACL2 has a
>>>>>>>> history of verifying hardware logic.
>>>>>>>>
>>>>>>>> It appears that, assuming I can understand Bauer's
>>>>>>>> Andromeda system, it would be possible and not that hard
>>>>>>>> to implement a trusted kernel on an FPGA the size and
>>>>>>>> form factor of a USB stick.
>>>>>>>>
>>>>>>>> Trust "down to the metal".
>>>>>>>>
>>>>>>>> Tim
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 12/15/19, Tim Daly <[hidden email]> wrote:
>>>>>>>>> Progress is happening on the new Sane Axiom compiler.
>>>>>>>>>
>>>>>>>>> Recently I've been musing about methods to insert axioms
>>>>>>>>> into categories so they could be inherited like signatures.
>>>>>>>>> At the moment I've been thinking about adding axioms in
>>>>>>>>> the same way that signatures are written, adding them to
>>>>>>>>> the appropriate categories.
>>>>>>>>>
>>>>>>>>> But this is an interesting design question.
>>>>>>>>>
>>>>>>>>> Axiom already has a mechanism for inheriting signatures
>>>>>>>>> from categories. That is, we can get a plus signature from,
>>>>>>>>> say, the Integer category.
>>>>>>>>>
>>>>>>>>> Suppose we follow the same pattern. Currently Axiom
>>>>>>>>> inherits certain so-called "attributes", such as
>>>>>>>>> ApproximateAttribute,
>>>>>>>>> which implies that the results are only approximate.
>>>>>>>>>
>>>>>>>>> We could adapt the same mechanism to inherit the Transitive
>>>>>>>>> property by defining it in its own category. In fact, if we
>>>>>>>>> follow Carette and Farmer's "tiny theories" architecture,
>>>>>>>>> where each property has its own inheritable category,
>>>>>>>>> we can "mix and match" the axioms at will.
>>>>>>>>>
>>>>>>>>> An "axiom" category would also export a function. This function
>>>>>>>>> would essentially be a "tactic" used in a proof. It would modify
>>>>>>>>> the proof step by applying the function to the step.
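>>>>>>>>>
>>>>>>>>> In CLOS terms (a sketch with hypothetical names, not
>>>>>>>>> Spad syntax) each tiny theory is a mixin class that
>>>>>>>>> carries its axiom and exports its tactic:
>>>>>>>>>
>>>>>>>>>   (defclass transitive () ()
>>>>>>>>>     (:documentation "Axiom: a<=b and b<=c imply a<=c."))
>>>>>>>>>
>>>>>>>>>   (defclass reflexive () ()
>>>>>>>>>     (:documentation "Axiom: a<=a."))
>>>>>>>>>
>>>>>>>>>   (defclass preorder (transitive reflexive) ()
>>>>>>>>>     (:documentation "A theory mixed from tiny theories."))
>>>>>>>>>
>>>>>>>>>   (defgeneric apply-axiom (axiom step)
>>>>>>>>>     (:documentation "The exported tactic: rewrite STEP."))
>>>>>>>>>
>>>>>>>>>   (defmethod apply-axiom ((a transitive) step)
>>>>>>>>>     ;; stub; a real tactic would chain two <= facts
>>>>>>>>>     step)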
>>>>>>>>>
>>>>>>>>> Theorems would have the same structure.
>>>>>>>>>
>>>>>>>>> This allows theorems to be constructed at run time (since
>>>>>>>>> Axiom supports "First Class Dynamic Types").
>>>>>>>>>
>>>>>>>>> In addition, this design can be "pushed down" into the Spad
>>>>>>>>> language so that Spad statements (e.g. assignment) have
>>>>>>>>> proof-related properties. A range such as [1..10] would
>>>>>>>>> provide explicit bounds in a proof "by language definition".
>>>>>>>>> Defining the logical properties of language statements in
>>>>>>>>> this way would make it easier to construct proofs since the
>>>>>>>>> invariants would be partially constructed already.
>>>>>>>>>
>>>>>>>>> This design merges the computer algebra inheritance
>>>>>>>>> structure with the proof of algorithms structure, all under
>>>>>>>>> the same mechanism.
>>>>>>>>>
>>>>>>>>> Tim
>>>>>>>>>
>>>>>>>>> On 12/11/19, Tim Daly <[hidden email]> wrote:
>>>>>>>>>> I've been reading Stephen Kell's (Univ of Kent,
>>>>>>>>>> https://www.cs.kent.ac.uk/people/staff/srk21/) essay
>>>>>>>>>> Seven deadly sins of talking about “types”
>>>>>>>>>> https://www.cs.kent.ac.uk/people/staff/srk21//blog/2014/10/07/
>>>>>>>>>>
>>>>>>>>>> He raised an interesting idea toward the end of the essay
>>>>>>>>>> that type-checking could be done outside the compiler.
>>>>>>>>>>
>>>>>>>>>> I can see a way to do this in Axiom's Sane compiler.
>>>>>>>>>> It would be possible to run a program over the source code
>>>>>>>>>> to collect the information and write a stand-alone type
>>>>>>>>>> checker. This "unbundles" type checking and compiling.
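>>>>>>>>>>
>>>>>>>>>> A sketch of the two unbundled passes (the node type is
>>>>>>>>>> a hypothetical stand-in for the real parse objects):
>>>>>>>>>>
>>>>>>>>>>   (defstruct node name signature)
>>>>>>>>>>
>>>>>>>>>>   (defun collect-signatures (nodes)
>>>>>>>>>>     "Pass 1, run over the source: harvest a table of
>>>>>>>>>>      (name . signature) pairs."
>>>>>>>>>>     (loop for n in nodes
>>>>>>>>>>           collect (cons (node-name n) (node-signature n))))
>>>>>>>>>>
>>>>>>>>>>   (defun check-call (table name expected)
>>>>>>>>>>     "Pass 2, a stand-alone checker: consult the table
>>>>>>>>>>      without ever touching the compiler."
>>>>>>>>>>     (equal (cdr (assoc name table)) expected))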
>>>>>>>>>>
>>>>>>>>>> Taken further I can think of several other kinds of checkers
>>>>>>>>>> (aka 'linters') that could be unbundled.
>>>>>>>>>>
>>>>>>>>>> It is certainly something to explore.
>>>>>>>>>>
>>>>>>>>>> Tim
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 12/8/19, Tim Daly <[hidden email]> wrote:
>>>>>>>>>>> The Axiom Sane compiler is being "shaped by the hammer
>>>>>>>>>>> blows of reality", to coin a phrase.
>>>>>>>>>>>
>>>>>>>>>>> There are many goals. One of the primary goals is creating a
>>>>>>>>>>> compiler that can be understood, maintained, and modified.
>>>>>>>>>>>
>>>>>>>>>>> So the latest changes involved adding multiple index files.
>>>>>>>>>>> These are documentation (links to where terms are mentioned
>>>>>>>>>>> in the text), code (links to the implementation of things),
>>>>>>>>>>> error (links to where errors are defined), signatures (links to
>>>>>>>>>>> the signatures of lisp functions), figures (links to figures),
>>>>>>>>>>> and separate category, domain, and package indexes.
>>>>>>>>>>>
>>>>>>>>>>> The tikz package is now used to create "railroad diagrams"
>>>>>>>>>>> of syntax (ala, the PASCAL report). The implementation of
>>>>>>>>>>> those diagrams follows immediately. Collectively these will
>>>>>>>>>>> eventually define at least the syntax of the language. In the
>>>>>>>>>>> ideal, changing the diagram would change the code but I'm
>>>>>>>>>>> not that clever.
>>>>>>>>>>>
>>>>>>>>>>> Reality shows up with the current constraint that the
>>>>>>>>>>> compiler should accept the current Spad language as
>>>>>>>>>>> closely as possible. Of course, plans are to include new
>>>>>>>>>>> constructs (e.g. hypothesis, axiom, specification, etc)
>>>>>>>>>>> but these are being postponed until "syntax complete".
>>>>>>>>>>>
>>>>>>>>>>> All parse information is stored in a parse object, which
>>>>>>>>>>> is a CLOS object (and therefore a Common Lisp type)
>>>>>>>>>>> Fields within the parse object, e.g. variables are also
>>>>>>>>>>> CLOS objects (and therefore a Common Lisp type).
>>>>>>>>>>> It's types all the way down.
>>>>>>>>>>>
>>>>>>>>>>> These types are being used as 'signatures' for the
>>>>>>>>>>> lisp functions. The goal is to be able to type-check the
>>>>>>>>>>> compiler implementation as well as the Sane language.
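>>>>>>>>>>>
>>>>>>>>>>> A small sketch of the idea (a hypothetical node, not
>>>>>>>>>>> the actual parse classes):
>>>>>>>>>>>
>>>>>>>>>>>   (defclass spad-variable ()
>>>>>>>>>>>     ((name  :type symbol :initarg :name)
>>>>>>>>>>>      (vtype :type symbol :initarg :vtype))
>>>>>>>>>>>     (:documentation "A typed parse node; the :TYPE slot
>>>>>>>>>>>      declarations are the 'all the way down' part."))
>>>>>>>>>>>
>>>>>>>>>>>   ;; the lisp-level 'signature' of a parser function
>>>>>>>>>>>   (declaim (ftype (function (string) spad-variable)
>>>>>>>>>>>                   parse-variable))
>>>>>>>>>>>   (defun parse-variable (token)
>>>>>>>>>>>     (make-instance 'spad-variable
>>>>>>>>>>>                    :name (intern (string-upcase token))
>>>>>>>>>>>                    :vtype 'integer))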
>>>>>>>>>>>
>>>>>>>>>>> The parser is designed to "wrap around" so that the
>>>>>>>>>>> user-level output of a parse should be the user-level
>>>>>>>>>>> input (albeit in a 'canonical" form). This "mirror effect"
>>>>>>>>>>> should make it easy to see that the parser properly
>>>>>>>>>>> parsed the user input.
>>>>>>>>>>>
>>>>>>>>>>> The parser is "first class" so it will be available at
>>>>>>>>>>> runtime as a domain allowing Spad code to construct
>>>>>>>>>>> Spad code.
>>>>>>>>>>>
>>>>>>>>>>> One plan, not near implementation, is to "unify" some
>>>>>>>>>>> CLOS types with the Axiom types (e.g. String). How
>>>>>>>>>>> this will happen is still in the land of design. This would
>>>>>>>>>>> "ground" Spad in lisp, making them co-equal.
>>>>>>>>>>>
>>>>>>>>>>> Making lisp "co-equal" is a feature, especially as Spad is
>>>>>>>>>>> really just a domain-specific language in lisp. Lisp
>>>>>>>>>>> functions (with CLOS types as signatures) would be
>>>>>>>>>>> available for implementing Spad functions. This not
>>>>>>>>>>> only improves the efficiency, it would make the
>>>>>>>>>>> BLAS/LAPACK (see volume 10.5) code "native" to Axiom.
>>>>>>>>>>>
>>>>>>>>>>> On the theory front I plan to attend the Formal Methods
>>>>>>>>>>> in Mathematics / Lean Together conference, mostly to
>>>>>>>>>>> know how little I know, especially that I need to know.
>>>>>>>>>>> http://www.andrew.cmu.edu/user/avigad/meetings/fomm2020/
>>>>>>>>>>>
>>>>>>>>>>> Tim
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 11/28/19, Jacques Carette <[hidden email]> wrote:
>>>>>>>>>>>> The underlying technology to use for building such an algebra
>>>>>>>>>>>> library
>>>>>>>>>>>> is
>>>>>>>>>>>> documented in the paper "Building on the Diamonds between
>>>>>>>>>>>> Theories:
>>>>>>>>>>>> Theory Presentation Combinators"
>>>>>>>>>>>> http://www.cas.mcmaster.ca/~carette/publications/tpcj.pdf
>>>>>>>>>>>> [which
>>>>>>>>>>>> will
>>>>>>>>>>>> also be on the arxiv by Monday, and has been submitted to a
>>>>>>>>>>>> journal].
>>>>>>>>>>>>
>>>>>>>>>>>> There is a rather full-fledged prototype, very well documented
>>>>>>>>>>>> at
>>>>>>>>>>>> https://alhassy.github.io/next-700-module-systems/prototype/package-former.html
>>>>>>>>>>>>
>>>>>>>>>>>> (source at https://github.com/alhassy/next-700-module-systems).
>>>>>>>>>>>> It
>>>>>>>>>>>> is
>>>>>>>>>>>> literate source.
>>>>>>>>>>>>
>>>>>>>>>>>> The old prototype was hard to find - it is now at
>>>>>>>>>>>> https://github.com/JacquesCarette/MathScheme.
>>>>>>>>>>>>
>>>>>>>>>>>> There is also a third prototype in the MMT system, but it does
>>>>>>>>>>>> not
>>>>>>>>>>>> quite
>>>>>>>>>>>> function properly today; it is under repair.
>>>>>>>>>>>>
>>>>>>>>>>>> The paper "A Language Feature to Unbundle Data at Will"
>>>>>>>>>>>> (https://alhassy.github.io/next-700-module-systems/papers/gpce19_a_language_feature_to_unbundle_data_at_will.pdf)
>>>>>>>>>>>>
>>>>>>>>>>>> is also relevant, as it solves a problem with parametrized
>>>>>>>>>>>> theories
>>>>>>>>>>>> (parametrized Categories in Axiom terminology) that all current
>>>>>>>>>>>> systems
>>>>>>>>>>>> suffer from.
>>>>>>>>>>>>
>>>>>>>>>>>> Jacques
>>>>>>>>>>>>
>>>>>>>>>>>> On 2019-11-27 11:47 p.m., Tim Daly wrote:
>>>>>>>>>>>>> The new Sane compiler is also being tested with the Fricas
>>>>>>>>>>>>> algebra code. The compiler knows about the language but
>>>>>>>>>>>>> does not depend on the algebra library (so far). It should be
>>>>>>>>>>>>> possible, by design, to load different algebra towers.
>>>>>>>>>>>>>
>>>>>>>>>>>>> In particular, one idea is to support the "tiny theories"
>>>>>>>>>>>>> algebra from Carette and Farmer. This would allow much
>>>>>>>>>>>>> finer grain separation of algebra and axioms.
>>>>>>>>>>>>>
>>>>>>>>>>>>> This "flexible algebra" design would allow things like the
>>>>>>>>>>>>> Lean theorem prover effort in a more natural style.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 11/26/19, Tim Daly <[hidden email]> wrote:
>>>>>>>>>>>>>> The current design and code base (in bookvol15) supports
>>>>>>>>>>>>>> multiple back ends. One will clearly be a common lisp.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Another possible design choice is to target the GNU
>>>>>>>>>>>>>> GCC intermediate representation, making Axiom "just
>>>>>>>>>>>>>> another front-end language" supported by GCC.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The current intermediate representation does not (yet)
>>>>>>>>>>>>>> make any decision about the runtime implementation.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 11/26/19, Tim Daly <[hidden email]> wrote:
>>>>>>>>>>>>>>> Jason Gross and Adam Chlipala ("Parsing Parses") developed
>>>>>>>>>>>>>>> a dependently typed general parser for context free grammar
>>>>>>>>>>>>>>> in Coq.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> They used the parser to prove its own completeness.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Unfortunately Spad is not a context-free grammar.
>>>>>>>>>>>>>>> But it is an interesting thought exercise to consider
>>>>>>>>>>>>>>> an "Axiom on Coq" implementation.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Reply | Threaded
Open this post in threaded view
|

Re: [EXTERNAL] Re: Axiom musings...

William Sit-3
Hi Tim:

Glad to hear from you now and then, promoting and working towards your ideas and ideals.

 >>We need proven algorithms.

Just one short comment: it is often possible to prove algorithms correct (that is, to provide the theoretical foundation for the algorithm), but it is much harder to prove that an implementation of the algorithm is correct. As you well know, the distinction lies in the fact that an implementation involves data representations, whereas proofs of algorithms normally ignore them. Introducing (finite) data representations means introducing boundary situations that a programmer implementing an algorithm must deal with. So perhaps what we need to prove should include the correctness of implementations (to the bare metal, as you often say), and we should have a different set of analytic tools that can deal with the correctness (or completeness) of data representations. Of course, these tools must also be proven with the same rigor, since behind every program is an algorithm.
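
A classic small instance of that distinction, sketched in Common
Lisp: "the midpoint of a and b lies between a and b" is trivially
provable over the reals, yet a naive implementation fails at the
boundary of the data representation.

  (defun midpoint (a b)
    "Provable over the reals; over single-floats the sum A + B
     can overflow even when the midpoint is representable."
    (/ (+ a b) 2))

  ;; With overflow traps enabled (the default in SBCL),
  ;; (midpoint most-positive-single-float most-positive-single-float)
  ;; signals FLOATING-POINT-OVERFLOW, although the true midpoint,
  ;; MOST-POSITIVE-SINGLE-FLOAT itself, is exactly representable.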

William

William Sit
Professor Emeritus
Department of Mathematics
The City College of The City University of New York
New York, NY 10031
homepage: wsit.ccny.cuny.edu

________________________________________
From: Axiom-developer <axiom-developer-bounces+wyscc=[hidden email]> on behalf of Tim Daly <[hidden email]>
Sent: Saturday, July 18, 2020 6:28 PM
To: axiom-dev; Tim Daly
Subject: [EXTERNAL] Re: Axiom musings...

Richard Hamming gave a great talk. "You and Your Research"
https://www.youtube.com/watch?v=a1zDuOPkMSw

His big question is:

"What is the most important problem in your field
and why aren't you working on it?"

To my mind, the most important problem in the field of
computational mathematics is grounding computer
algebra in proofs.

Computer mathematical algorithms that "maybe,
possibly, give correct answers sometimes" are a problem.
Indeed, for computer algebra, it is the most important
problem. We need proven algorithms.

New algorithms, better graphics, better documentation,
are all "nice to have" but, as Hamming would say,
they are not "the most important problem".

Tim



On 7/2/20, Tim Daly <[hidden email]> wrote:

> Time for another update.
>
> The latest Intel processors, available only to data centers
> so far, have a built-in FPGA. This allows you to design
> your own circuits and have them loaded "on the fly",
> running in parallel with the CPU.
>
> I bought a Lattice ICEstick FPGA development board. For
> the first time there are open source tools that support it so
> it is a great test bench for ideas and development. It is a
> USB drive so it can be easily ported to any PC.
> (https://www.latticesemi.com/products/developmentboardsandkits/icestick)
>
> I also bought a large Intel Cyclone FPGA development board.
> (http://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&No=836)
> which has 2 embedded ARM processors. Unfortunately
> the tools (which are freely available) are not open source.
> It has sufficient size and power to do anything.
>
>
> I've got 2 threads of work in progress, both of which
> involve FPGAs (Field Programmable Gate Arrays).
>
> Thread 1
>
> The first thread involves proving programs correct. Once
> a proof has been made it is rather easier to check the proof.
> If code is shipped with a proof, the proof can be loaded into
> an FPGA running a proof-checker which verifies the program
> in parallel with running the code on the CPU.
>
> I am researching the question of writing a proof checker that
> runs on an FPGA, thus verifying the code "down to the metal".
> The Lean proof checker is the current target.
>
> The idea is to make "Oracle" algorithms that, because they
> are proven correct and verified at runtime, can be trusted
> by other mathematical software (e.g. Lean, Coq, Agda)
> when used in proofs.
>
> Thread 2
>
>
> The second thread involves arithmetic. Axiom currently ships
> with numeric routines (BLAS and LAPACK, see bookvol10.5).
> These routines have a known set of numeric failures such as
> cancellation, underflow, and scaling.
>
> John Gustafson has designed a 'unum' numeric format that can
> eliminate many of these errors. (See
> Gustafson, John "The End of Error" CRC Press 2015
> https://www.amazon.com/End-Error-Computing-Chapman-Computational/dp/1482239868/ref=sr_1_1?dchild=1&keywords=gustafson+the+end+of+error&qid=1593685423&sr=8-1)
>
> The research goal is to implement Axiom's floating-point
> arithmetic that can be offloaded onto an FPGA implementing
> the unum format. Such a system would radically simplify
> the implementation of BLAS and LAPACK as most of the
> errors can't occur. The impact would be similar to using
> multi-precision integer arithmetic, only now it's floating-point.
>
> SANE, the greater goal.
>
> The Axiom SANE compiler / interpreter can use both of
> these tools to implement trusted mathematical software.
> It's a long, ambitious research effort but even if only pieces
> of it succeed, it changes computational mathematics.
>
> Tim
>
> "A person's reach should exceed their grasp,
> or what's a computer for?"  (misquoting Robert Browning)
>
> (https://www.quotetab.com/quote/by-robert-browning/ah-but-a-mans-reach-should-exceed-his-grasp-or-whats-a-heaven-for)
>
>
>
>
> On 6/16/20, Tim Daly <[hidden email]> wrote:
>> WHY PROVE AXIOM CORRECT (SANE)?
>>
>> Historically, Axiom credits CLU, the Cluster language by
>> Barbara Liskov, with the essential ideas behind the Spad
>> language. Barbara gave a talk (a partial transcript below)
>> that gives the rationale behind the ``where clause'' used by
>> Spad.
>>
>> She talks about the limits of the compile-time capability.
>> In particular, she says:
>>
>>    To go further, where we would say that T,
>>    in addition, has to be an equality relation, that requires
>>    much more sophisticated techniques that, even today, are
>>    beyond the capabilities of the compiler.
>>
>> Showing that the ``equal'' function satisfies the equality
>> relation is no longer ``beyond the capabilities of the compiler''.
>> We have the required formalisms and mechanisms to
>> prove properties at compile time.
>>
>> The SANE effort is essentially trying to push compile
>> time checking into proving that, for categories that use
>> ``equal'', we prove that the equal function implements
>> equality.
>>
>> I strongly encourage you to watch her video.
>>
>> Tim
>>
>> ===========================================
>> Barbara Liskov
>> May 2012
>> MIT CSAIL
>> Programming the Turing Machine
>> https://www.youtube.com/watch?v=ibRar7sWulM
>>
>> POLYMORPHISM
>>
>> We don't just want a set, we want polymorphism or
>> generics, as they are called today. We wanted to
>> have a generic set which was parameterized by type
>> so you could instantiate it as:
>>
>> Set = [T:type] create, insert,...
>>   % representation for Set object
>>   % implementation of Set operations
>>   Set
>>
>> Set[int] s := Set[int]$create()
>> Set[int]$insert(s,3)
>>
>> We wanted a static solution to this problem. The
>> problem is, not every type makes sense as a parameter
>> to Set of T. For sets, per se, you need an equality
>> relation. If it had been a sorted set we would have
>> some ordering relation. And a type that didn't have
>> one of those things would not have been a legitimate
>> parameter. We needed a way of expressing that in a
>> compile-time, checkable manner. Otherwise we would
>> have had to resort to runtime checking.
>>
>> Our solution was
>>
>> Set = [T:  ] create, insert,...
>>   T equal: (T,T) (bool)
>>
>>
>> Our solution, what we call the ``where clause''. So we
>> added this to the header. The ``where clause'' tells you
>> what operations the parameter type has to have.
>>
>> If you have the ``where'' clause you can do the static
>> checking because when you instantiate, when you provide
>> an actual type, the compiler can check that the type has
>> the operations that are required. And then, when you write
>> the implementation of Set the compiler knows it's ok to
>> call those operations because you can guarantee they are
>> actually there when you get around to running.
>>
>> Of course, you notice that there's just syntax here; there's
>> no semantics.
>>
>> As I'm sure you all know, compile-time type checking is
>> basically a proof technique of a very limited sort and
>> this was about as far as we can push what you could get out of the
>> static analysis. To go further, where we would say that T,
>> in addition, has to be an equality relation, that requires
>> much more sophisticated techniques that, even today, are
>> beyond the capabilities of the compiler.
>>
>>
>>
>>
>> On 3/24/20, Tim Daly <[hidden email]> wrote:
>>> I've spent entirely too much time studying the legal issues
>>> of free and open source software. There are copyright,
>>> trademark, and intellectual property laws. I have read
>>> several books, listened to lectures, and read papers on
>>> the subject. I've spoken to lawyers about it. I've even
>>> been required, by law, to coerce people I respect.
>>> You would think it was all perfectly clear. It isn't.
>>>
>>> The most entertaining and enlightening lectures were
>>> by Robert Lefkowitz at OSCON 2004. His talk is
>>> "The Semasiology of Open Source", which sounds
>>> horrible but I assure you, this is a real treat.
>>>
>>> THE THESIS
>>>
>>> Semasiology, n. The science of meanings or
>>> sense development (of words); the explanation
>>> of the development and changes of the meanings
>>> of words. Source: Webster's Revised Unabridged
>>> Dictionary, © 1996, 1998 MICRA, Inc.
>>>
>>> "Open source doesn't just mean access to the
>>> source code." So begins the Open Source Definition.
>>> What then, does access to the source code mean?
>>> Seen through the lens of an Enterprise user, what
>>> does open source mean? When is (or isn't) it
>>> significant? And a catalogue of open source
>>> related arbitrage opportunities.
>>>
>>> http://origin.conversationsnetwork.org/Robert%20Lefkowitz%20-%20The%20Semasiology%20of%20Open%20Source.mp3
>>>
>>> Computer source code has words and sentence
>>> structure like actual prose or even poetry. Writing
>>> code for the computer is like writing an essay. It
>>> should be written for other people to read,
>>> understand and modify. These are some of the
>>> thoughts behind literate programming proposed
>>> by Donald Knuth. This is also one of the ideas
>>> behind Open Source.
>>>
>>>  THE ANTITHESIS
>>>
>>> "Open Source" is a phrase like "Object Oriented"
>>> - weird at first, but when it became popular, the
>>> meaning began to depend on the context of the
>>> speaker or listener. "Object Oriented" meant that
>>> PERL, C++, Java, Smalltalk, Basic and the newest
>>> version of Cobol are all "Object Oriented" - for some
>>> specific definition of "Object Oriented". Similar is
>>> the case of the phrase "Open Source".
>>>
>>> In Part I, Lefkowitz talked about the shift of the
>>> meaning of "Open Source" away from any reference
>>> to the actual "source code," and more towards other
>>> phases of the software development life cycle. In
>>> Part II, he returns to the consideration of the
>>> relationship between "open source" and the actual
>>> "source code," and reflects upon both the way
>>> forward and the road behind, drawing inspiration
>>> from Charlemagne, King Louis XIV, Donald Knuth,
>>> and others.
>>>
>>> http://origin.conversationsnetwork.org/ITC.OSCON05-RobertLefkowitz-2005.08.03.mp3
>>>
>>> THE SYNTHESIS
>>>
>>> In a fascinating synthesis, Robert “r0ml” Lefkowitz
>>> polishes up his exposition on the evolving meaning
>>> of the term ‘open source’. This intellectual joy-ride
>>> draws on some of the key ideas in artificial intelligence
>>> to probe the role of language, meaning and context
>>> in computing and the software development process.
>>> Like Wittgenstein’s famous thought experiment, the
>>> open source ‘beetle in a box’ can represent different
>>> things to different people, bearing interesting fruit for
>>> philosophers and software creators alike.
>>>
>>> http://itc.conversationsnetwork.org/audio/download/itconversations-1502.mp3
>>>
>>>
>>> On 3/7/20, Tim Daly <[hidden email]> wrote:
>>>> I've pushed the latest version of Axiom. The plan, followed
>>>> so far, is to push once a month on the 7th.
>>>>
>>>> After some chat room interactions it was brought home
>>>> again that the proof world really does not seem to like the
>>>> idea of proving programs correct. And, given that it was one
>>>> of the main Axiom goals and a point of friction during the fork,
>>>> the computer algebra world does not like the idea of proving
>>>> programs correct either.
>>>>
>>>> So the idea of "computational mathematics", which includes
>>>> both disciplines (as well as type theory) seems still a long
>>>> way off.
>>>>
>>>> Nevertheless, the primary change in these past and future
>>>> updates is focused on merging proof and computer algebra.
>>>>
>>>> Proof systems are able to split the problem of creating a
>>>> proof and the problem of verifying a proof, which is much
>>>> cheaper. Ideally the proof checker would run on verified
>>>> hardware so the proof is checked "down to the metal".
>>>>
>>>> I have a background in Field Programmable Gate Arrays
>>>> (FPGAs) as I tried to do a startup using them. So now I'm
>>>> looking at creating a hardware proof checker using a
>>>> dedicated instruction set, one designed to be verified.
>>>> New CPUs used in data centers (not yet available to us
>>>> mortals) have built-in FPGAs so it would be possible to
>>>> "side-load" a proof of a program to be checked while the
>>>> program is run. I have the FPGA and am doing a gate-level
>>>> special instruction design for such a proof checker.
>>>>
>>>>
>>>> On 2/7/20, Tim Daly <[hidden email]> wrote:
>>>>> As a mathematician, it is difficult to use a system like Axiom,
>>>>> mostly because it keeps muttering about Types. If you're not
>>>>> familiar with type theory (most mathematicians aren't) then it
>>>>> seems pointless and painful.
>>>>>
>>>>> So Axiom has a steep learning curve.
>>>>>
>>>>> As a mathematician with an algorithmic approach, it is difficult
>>>>> to use a system like Axiom, mostly because you have to find
>>>>> or create "domains" or "packages", understand categories
>>>>> with their inheritance model, and learn a new language with
>>>>> a painful compiler always complaining about types.
>>>>>
>>>>> So Axiom has a steep learning curve.
>>>>>
>>>>> The Sane version of Axiom requires knowing the mathematics.
>>>>> It also assumes a background in type theory, inductive logic,
>>>>> homotopy type theory, ML (meta-language, not machine
>>>>> learning (yet)), interactive theorem proving, kernels, tactics,
>>>>> and tacticals. Also assumed is knowledge of specification languages,
>>>>> Hoare triples, proof techniques, soundness, and completeness.
>>>>> Oh, and there is a whole new syntax and semantics added to
>>>>> specify definitions, axioms, and theorems, not to mention whole
>>>>> libraries of the same.
>>>>>
>>>>> So Axiom Sane has a steep learning curve.
>>>>>
>>>>> I've taken 10 courses at CMU and spent the last 4-5 years
>>>>> learning to read the leading edge literature (also known
>>>>> as "greek studies", since every paper has pages of greek).
>>>>>
>>>>> I'm trying to unify computer algebra and proof theory into a
>>>>> "computational mathematics" framework. I suspect that the only
>>>>> way this system will ever be useful is after Universities have a
>>>>> "Computational Mathematics" major course of study and degree.
>>>>>
>>>>> Creating a new department is harder than creating Axiom Sane
>>>>> because, you know, ... people.
>>>>>
>>>>> I think such a department is inevitable given the deep and wide
>>>>> impact of computers, just not in my lifetime. That's ok. When I
>>>>> started programming there was no computer science degree.
>>>>>
>>>>> Somebody has to be the first lemming over the cliff.
>>>>>
>>>>> Tim
>>>>>
>>>>> On 1/9/20, Tim Daly <[hidden email]> wrote:
>>>>>> When Axiom Sane is paired with a proof checker (e.g. with Lean)
>>>>>> there is a certain amount of verification that is involved.
>>>>>>
>>>>>> Axiom will provide proofs (presumably validated by Lean) for its
>>>>>> algorithms. Ideally, when a computation is requested from Lean
>>>>>> for a GCD, the result as well as a proof of the GCD algorithm is
>>>>>> returned. Lean can then verify that the proof is valid. But it is
>>>>>> computationally more efficient if Axiom and Lean use a cryptographic
>>>>>> hash, such as SHA1. That way the proof doesn't need to be
>>>>>> 'reproven', only a hash computation over the proof text needs to
>>>>>> be performed. Hashes are blazingly fast. This allows proofs to be
>>>>>> exchanged without re-running the proof mechanism. Since a large
>>>>>> computation request from Lean might involve many algorithms
>>>>>> there would be considerable overhead to recompute each proof.
>>>>>> A hash simplifies the issue yet provides proof integrity.
>>>>>>
>>>>>> Tim
>>>>>>
>>>>>>
>>>>>> On 1/9/20, Tim Daly <[hidden email]> wrote:
>>>>>>> Provisos.... that is, 'formula SUCH pre/post-conditions'
>>>>>>>
>>>>>>> A computer algebra system ought to know and ought to provide
>>>>>>> information about the domain and range of a resulting formula.
>>>>>>> I've been pushing this effort since the 1980s (hence the
>>>>>>> SuchThat domain).
>>>>>>>
>>>>>>> It turns out that computing with, carrying, and combining this
>>>>>>> information is difficult if not impossible in the current system.
>>>>>>> The information isn't available and isn't computed. In that sense,
>>>>>>> the original Axiom system is 'showing its age'.
>>>>>>>
>>>>>>> In the Sane implementation the information is available. It is
>>>>>>> part of the specification and part of the proof steps. With a
>>>>>>> careful design it will be possible to provide provisos for each
>>>>>>> given result that are carried with the result for use in further
>>>>>>> computation.
>>>>>>>
>>>>>>> This raises interesting questions to be explored. For example,
>>>>>>> if the formula is defined over an interval, how is the interval
>>>>>>> arithmetic handled?
>>>>>>>
>>>>>>> Exciting research ahead!
>>>>>>>
>>>>>>> Tim
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 1/3/20, Tim Daly <[hidden email]> wrote:
>>>>>>>> Trusted Kernel... all the way to the metal.
>>>>>>>>
>>>>>>>> While building a trusted computer algebra system, the
>>>>>>>> SANE version of Axiom, I've been looking at questions of
>>>>>>>> trust at all levels.
>>>>>>>>
>>>>>>>> One of the key tenets (the de Bruijn principle) calls for a
>>>>>>>> trusted kernel through which all other computations must
>>>>>>>> pass. Coq, Lean, and other systems do this. They base
>>>>>>>> their kernel on a logic like the Calculus of Constructions or
>>>>>>>> something similar.
>>>>>>>>
>>>>>>>> Andrej Bauer has been working on a smaller kernel (a
>>>>>>>> nucleus) that separates the trust from the logic. The rules
>>>>>>>> for the logic can be specified as needed but checked by
>>>>>>>> the nucleus code.
>>>>>>>>
>>>>>>>> I've been studying Field Programmable Gate Arrays (FPGA)
>>>>>>>> that allow you to create your own hardware in a C-like
>>>>>>>> language (Verilog). It allows you to check the chip you build
>>>>>>>> all the way down to the transistor states. You can create
>>>>>>>> things as complex as a whole CPU or as simple as a trusted
>>>>>>>> nucleus. (youtube: Building a CPU on an FPGA). ACL2 has a
>>>>>>>> history of verifying hardware logic.
>>>>>>>>
>>>>>>>> It appears that, assuming I can understand Bauer's
>>>>>>>> Andromeda system, it would be possible and not that hard
>>>>>>>> to implement a trusted kernel on an FPGA the size and
>>>>>>>> form factor of a USB stick.
>>>>>>>>
>>>>>>>> Trust "down to the metal".
>>>>>>>>
>>>>>>>> Tim
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 12/15/19, Tim Daly <[hidden email]> wrote:
>>>>>>>>> Progress is happening on the new Sane Axiom compiler.
>>>>>>>>>
>>>>>>>>> Recently I've been musing about methods to insert axioms
>>>>>>>>> into categories so they could be inherited like signatures.
>>>>>>>>> At the moment I've been thinking about adding axioms in
>>>>>>>>> the same way that signatures are written, adding them to
>>>>>>>>> the appropriate categories.
>>>>>>>>>
>>>>>>>>> But this is an interesting design question.
>>>>>>>>>
>>>>>>>>> Axiom already has a mechanism for inheriting signatures
>>>>>>>>> from categories. That is, we can get a plus signature from,
>>>>>>>>> say, the Integer category.
>>>>>>>>>
>>>>>>>>> Suppose we follow the same pattern. Currently Axiom
>>>>>>>>> inherits certain so-called "attributes", such as
>>>>>>>>> ApproximateAttribute,
>>>>>>>>> which implies that the results are only approximate.
>>>>>>>>>
>>>>>>>>> We could adapt the same mechanism to inherit the Transitive
>>>>>>>>> property by defining it in its own category. In fact, if we
>>>>>>>>> follow Carette and Farmer's "tiny theories" architecture,
>>>>>>>>> where each property has its own inheritable category,
>>>>>>>>> we can "mix and match" the axioms at will.
>>>>>>>>>
>>>>>>>>> An "axiom" category would also export a function. This function
>>>>>>>>> would essentially be a "tactic" used in a proof. It would modify
>>>>>>>>> the proof step by applying the function to the step.
>>>>>>>>>
>>>>>>>>> Theorems would have the same structure.
>>>>>>>>>
>>>>>>>>> This allows theorems to be constructed at run time (since
>>>>>>>>> Axiom supports "First Class Dynamic Types").
>>>>>>>>>
>>>>>>>>> In addition, this design can be "pushed down" into the Spad
>>>>>>>>> language so that Spad statements (e.g. assignment) have
>>>>>>>>> proof-related properties. A range such as [1..10] would
>>>>>>>>> provide explicit bounds in a proof "by language definition".
>>>>>>>>> Defining the logical properties of language statements in
>>>>>>>>> this way would make it easier to construct proofs since the
>>>>>>>>> invariants would be partially constructed already.
>>>>>>>>>
>>>>>>>>> This design merges the computer algebra inheritance
>>>>>>>>> structure with the proof of algorithms structure, all under
>>>>>>>>> the same mechanism.
>>>>>>>>>
>>>>>>>>> Tim
>>>>>>>>>
>>>>>>>>> On 12/11/19, Tim Daly <[hidden email]> wrote:
>>>>>>>>>> I've been reading Stephen Kell's (Univ of Kent,
>>>>>>>>>> https://www.cs.kent.ac.uk/people/staff/srk21/) essay
>>>>>>>>>> Seven deadly sins of talking about “types”
>>>>>>>>>> https://www.cs.kent.ac.uk/people/staff/srk21//blog/2014/10/07/
>>>>>>>>>>
>>>>>>>>>> He raised an interesting idea toward the end of the essay
>>>>>>>>>> that type-checking could be done outside the compiler.
>>>>>>>>>>
>>>>>>>>>> I can see a way to do this in Axiom's Sane compiler.
>>>>>>>>>> It would be possible to run a program over the source code
>>>>>>>>>> to collect the information and write a stand-alone type
>>>>>>>>>> checker. This "unbundles" type checking and compiling.
>>>>>>>>>>
>>>>>>>>>> Taken further I can think of several other kinds of checkers
>>>>>>>>>> (aka 'linters') that could be unbundled.
>>>>>>>>>>
>>>>>>>>>> It is certainly something to explore.
>>>>>>>>>>
>>>>>>>>>> Tim
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 12/8/19, Tim Daly <[hidden email]> wrote:
>>>>>>>>>>> The Axiom Sane compiler is being "shaped by the hammer
>>>>>>>>>>> blows of reality", to coin a phrase.
>>>>>>>>>>>
>>>>>>>>>>> There are many goals. One of the primary goals is creating a
>>>>>>>>>>> compiler that can be understood, maintained, and modified.
>>>>>>>>>>>
>>>>>>>>>>> So the latest changes involved adding multiple index files.
>>>>>>>>>>> These are documentation (links to where terms are mentioned
>>>>>>>>>>> in the text), code (links to the implementation of things),
>>>>>>>>>>> error (links to where errors are defined), signatures (links to
>>>>>>>>>>> the signatures of lisp functions), figures (links to figures),
>>>>>>>>>>> and separate category, domain, and package indexes.
>>>>>>>>>>>
>>>>>>>>>>> The tikz package is now used to create "railroad diagrams"
>>>>>>>>>>> of syntax (ala, the PASCAL report). The implementation of
>>>>>>>>>>> those diagrams follows immediately. Collectively these will
>>>>>>>>>>> eventually define at least the syntax of the language. In the
>>>>>>>>>>> ideal, changing the diagram would change the code but I'm
>>>>>>>>>>> not that clever.
>>>>>>>>>>>
>>>>>>>>>>> Reality shows up with the current constraint that the
>>>>>>>>>>> compiler should accept the current Spad language as
>>>>>>>>>>> closely as possible. Of course, plans are to include new
>>>>>>>>>>> constructs (e.g. hypothesis, axiom, specification, etc)
>>>>>>>>>>> but these are being postponed until "syntax complete".
>>>>>>>>>>>
>>>>>>>>>>> All parse information is stored in a parse object, which
>>>>>>>>>>> is a CLOS object (and therefore a Common Lisp type)
>>>>>>>>>>> Fields within the parse object, e.g. variables are also
>>>>>>>>>>> CLOS objects (and therefore a Common Lisp type).
>>>>>>>>>>> It's types all the way down.
>>>>>>>>>>>
>>>>>>>>>>> These types are being used as 'signatures' for the
>>>>>>>>>>> lisp functions. The goal is to be able to type-check the
>>>>>>>>>>> compiler implementation as well as the Sane language.
>>>>>>>>>>>
>>>>>>>>>>> The parser is designed to "wrap around" so that the
>>>>>>>>>>> user-level output of a parse should be the user-level
>>>>>>>>>>> input (albeit in a 'canonical" form). This "mirror effect"
>>>>>>>>>>> should make it easy to see that the parser properly
>>>>>>>>>>> parsed the user input.
>>>>>>>>>>>
>>>>>>>>>>> The parser is "first class" so it will be available at
>>>>>>>>>>> runtime as a domain allowing Spad code to construct
>>>>>>>>>>> Spad code.
>>>>>>>>>>>
>>>>>>>>>>> One plan, not near implementation, is to "unify" some
>>>>>>>>>>> CLOS types with the Axiom types (e.g. String). How
>>>>>>>>>>> this will happen is still in the land of design. This would
>>>>>>>>>>> "ground" Spad in lisp, making them co-equal.
>>>>>>>>>>>
>>>>>>>>>>> Making lisp "co-equal" is a feature, especially as Spad is
>>>>>>>>>>> really just a domain-specific language in lisp. Lisp
>>>>>>>>>>> functions (with CLOS types as signatures) would be
>>>>>>>>>>> available for implementing Spad functions. This not
>>>>>>>>>>> only improves the efficiency, it would make the
>>>>>>>>>>> BLAS/LAPACK (see volume 10.5) code "native" to Axiom.
>>>>>>>>>>>
>>>>>>>>>>> On the theory front I plan to attend the Formal Methods
>>>>>>>>>>> in Mathematics / Lean Together conference, mostly to
>>>>>>>>>>> know how little I know, especially that I need to know.
>>>>>>>>>>> http://www.andrew.cmu.edu/user/avigad/meetings/fomm2020/
>>>>>>>>>>>
>>>>>>>>>>> Tim
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 11/28/19, Jacques Carette <[hidden email]> wrote:
>>>>>>>>>>>> The underlying technology to use for building such an algebra
>>>>>>>>>>>> library
>>>>>>>>>>>> is
>>>>>>>>>>>> documented in the paper "Building on the Diamonds between
>>>>>>>>>>>> Theories:
>>>>>>>>>>>> Theory Presentation Combinators"
>>>>>>>>>>>> http://www.cas.mcmaster.ca/~carette/publications/tpcj.pdf
>>>>>>>>>>>> [which
>>>>>>>>>>>> will
>>>>>>>>>>>> also be on the arxiv by Monday, and has been submitted to a
>>>>>>>>>>>> journal].
>>>>>>>>>>>>
>>>>>>>>>>>> There is a rather full-fledged prototype, very well documented
>>>>>>>>>>>> at
>>>>>>>>>>>> https://alhassy.github.io/next-700-module-systems/prototype/package-former.html
>>>>>>>>>>>>
>>>>>>>>>>>> (source at https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_alhassy_next-2D700-2Dmodule-2Dsystems&d=DwIFaQ&c=4NmamNZG3KTnUCoC6InoLJ6KV1tbVKrkZXHRwtIMGmo&r=qW9SUYRDo6sWEVPpx7wwWYZ79PdSWMRxNZvTih0Bkxc&m=_2V6ryqOIDfXNZePX0kmp-2428hMSBYbz5fq8bDzgkQ&s=Z-d1Pn1slXyiP2l23mZBB5fBQOj0-Q48CUKRS1VNLao&e= ).
>>>>>>>>>>>> It
>>>>>>>>>>>> is
>>>>>>>>>>>> literate source.
>>>>>>>>>>>>
>>>>>>>>>>>> The old prototype was hard to find - it is now at
>>>>>>>>>>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_JacquesCarette_MathScheme&d=DwIFaQ&c=4NmamNZG3KTnUCoC6InoLJ6KV1tbVKrkZXHRwtIMGmo&r=qW9SUYRDo6sWEVPpx7wwWYZ79PdSWMRxNZvTih0Bkxc&m=_2V6ryqOIDfXNZePX0kmp-2428hMSBYbz5fq8bDzgkQ&s=pkDi0LOAAPefRjcwvjwNNI3BVzNgJDITFQRpkFBgg8c&e= .
>>>>>>>>>>>>
>>>>>>>>>>>> There is also a third prototype in the MMT system, but it
>>>>>>>>>>>> does not quite function properly today; it is under repair.
>>>>>>>>>>>>
>>>>>>>>>>>> The paper "A Language Feature to Unbundle Data at Will"
>>>>>>>>>>>> (https://alhassy.github.io/next-700-module-systems/papers/gpce19_a_language_feature_to_unbundle_data_at_will.pdf)
>>>>>>>>>>>>
>>>>>>>>>>>> is also relevant, as it solves a problem with parametrized
>>>>>>>>>>>> theories (parametrized Categories in Axiom terminology)
>>>>>>>>>>>> that all current systems suffer from.
>>>>>>>>>>>>
>>>>>>>>>>>> Jacques
>>>>>>>>>>>>
>>>>>>>>>>>> On 2019-11-27 11:47 p.m., Tim Daly wrote:
>>>>>>>>>>>>> The new Sane compiler is also being tested with the Fricas
>>>>>>>>>>>>> algebra code. The compiler knows about the language but
>>>>>>>>>>>>> does not depend on the algebra library (so far). It should be
>>>>>>>>>>>>> possible, by design, to load different algebra towers.
>>>>>>>>>>>>>
>>>>>>>>>>>>> In particular, one idea is to support the "tiny theories"
>>>>>>>>>>>>> algebra from Carette and Farmer. This would allow much
>>>>>>>>>>>>> finer grain separation of algebra and axioms.
>>>>>>>>>>>>>
>>>>>>>>>>>>> This "flexible algebra" design would allow things like the
>>>>>>>>>>>>> Lean theorem prover effort in a more natural style.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 11/26/19, Tim Daly <[hidden email]> wrote:
>>>>>>>>>>>>>> The current design and code base (in bookvol15) supports
>>>>>>>>>>>>>> multiple back ends. One will clearly be a common lisp.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Another possible design choice is to target the GNU
>>>>>>>>>>>>>> GCC intermediate representation, making Axiom "just
>>>>>>>>>>>>>> another front-end language" supported by GCC.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The current intermediate representation does not (yet)
>>>>>>>>>>>>>> make any decision about the runtime implementation.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 11/26/19, Tim Daly <[hidden email]> wrote:
>>>>>>>>>>>>>>> Jason Gross and Adam Chlipala ("Parsing Parses") developed
>>>>>>>>>>>>>>> a dependently typed general parser for context free grammar
>>>>>>>>>>>>>>> in Coq.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> They used the parser to prove its own completeness.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Unfortunately Spad is not a context-free grammar.
>>>>>>>>>>>>>>> But it is an interesting thought exercise to consider
>>>>>>>>>>>>>>> an "Axiom on Coq" implementation.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>


Reply | Threaded
Open this post in threaded view
|

Re: [EXTERNAL] Re: Axiom musings...

Tim Daly
There are several "problems" with proving programs correct that
I don't quite know how to solve, or even approach. But that's the
fun of "research", right?

For the data representation question I've been looking at types.
I took 10 courses at CMU. I am eyebrow deep in type theory.
I'm looking at category theory and homotopy type theory. So
far I haven't seen anyone looking at the data problem. Most of
the focus is on strict code typing.

There is an old MIT course by Abelson and Sussman, "Structure
and Interpretation of Computer Programs" (SICP). They rewrite
data as programs which, in Lisp, is trivial to do. Dan Friedman
seems to have some interesting ideas too.
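
As a minimal sketch of the SICP trick in Common Lisp (this is
just the classic closure-as-pair idea, not SANE code):

  ;; a pair represented as a procedure; the "data" is a closure
  (defun make-pair (x y)
    (lambda (selector)
      (ecase selector
        (:first  x)
        (:second y))))

  (defun pair-first  (p) (funcall p :first))
  (defun pair-second (p) (funcall p :second))

  ;; (pair-first (make-pair 1 2)) => 1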

All of Axiom's SANE types are now CLOS objects which gives
two benefits. First, they can be inherited. But second, they
are basically Lisp data structures with associated code.

I'm thinking of associating "data axioms" with the representation
(REP) object of a domain as well as with the functions.

For example, DenavitHartenbergMatrix encodes the 4x4 matrices
used in graphics and robotics. The upper-left 3x3 encodes
rotations, the right column encodes translations, and the
lower row includes scaling, skewing, etc.

(As an aside, DHMATRIX matrices have an associated
Jacobian which encodes the dynamics in things like robots.
Since I'm also programming a robot I'm tempted to work on
extending the domain with related functions... but, as
Hamming said, new algebra code isn't "the most important
problem in computational mathematics").

Axioms associated with the REP can assume that they are
4x4, that they can be inverted, that they have a "space" of
rotations, etc. The axioms provide "facts" known to be true
about the REP. (I also need to think about a "specification"
for the REP but I'm not there yet).

Since every category and domain is a CLOS data structure
the DHMATRIX data structure inherits REP axioms from its
inheritance graph (e.g. SQMATRIX axioms). But DHMATRIX
adds domain-specific REP axioms (as well as domain-specific
function axioms). Thus a DHMATRIX rotate function can
base its proof on the fact that it only affects the upper 3x3
and lives in a space of rotations, all of which can be assumed
by the proof.
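
A rough sketch of what I have in mind (the slot and axiom
names here are invented for illustration; this is not the
actual SANE design):

  (defclass sqmatrix () ())          ; stand-in for SQMATRIX

  (defclass dhmatrix (sqmatrix)
    ((rep :initarg :rep :reader rep) ; the REP: a 4x4 array
     ;; "data axioms": facts a proof may assume about the REP
     (rep-axioms :allocation :class
                 :initform '(square-4x4 invertible upper-3x3-rotation)
                 :reader rep-axioms)))

  ;; a proof of a rotate function can appeal to (rep-axioms m)
  ;; to assume the matrix is 4x4 and rotation lives in the 3x3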

If I use the SICP "trick" of representing data as code I can
"expand" the data as part of the program proof.

It is all Omphaloskepsis (navel gazing) at this point though.
I'm still writing the new SANE compiler (which is wildly
different from the compiler course I taught).

I did give a talk at Notre Dame but I haven't attempted to
publish. All of my work shows up in literate programming
Axiom books on github.
(https://github.com/daly/PDFS)

It is all pretty pointless since nobody cares about computer
algebra, proving math programs correct, or Axiom itself.
Wolfram is taking up all the oxygen in the discussions.

But I have to say, this research is great fun. It reminds me
of the Scratchpad days, although I miss the give-and-take
of the group. It is hard to recreate my role as the dumbest
guy in the room when I'm stuck here by myself :-)

Hope you and your family are safe and healthy.

Tim

PS. I think we should redefine the "Hamming Distance" as
the distance between an idea and its implementation.



On 7/19/20, William Sit <[hidden email]> wrote:

> Hi Tim:
>
> Glad to hear from you now and then, promoting and working towards your ideas
> and ideals.
>
>  >>We need proven algorithms.
>
> Just one short comment: it is often possible to prove algorithms (that is,
> providing the theoretical foundation for the algorithm), but it is much
> harder to prove that an implementation of the algorithm is correct. As you
> well know, the distinction lies in that implementation involves data
> representations whereas proofs of algorithms normally ignore them.
> Introducing (finite) data representations means introducing boundary
> situations that a programmer implementing an algorithm must deal with. So
> perhaps what we need to prove should include the correctness of
> implementations (to the bare metal, as you often say) and we should have a
> different set of analytic tools that can deal with the correctness (or
> completeness) of data representations. Of course, these tools must also be
> proven with the same rigor since behind every program is an algorithm.
>
> William
>
> William Sit
> Professor Emeritus
> Department of Mathematics
> The City College of The City University of New York
> New York, NY 10031
> homepage: wsit.ccny.cuny.edu
>
> ________________________________________
> From: Axiom-developer
> <axiom-developer-bounces+wyscc=[hidden email]> on behalf of
> Tim Daly <[hidden email]>
> Sent: Saturday, July 18, 2020 6:28 PM
> To: axiom-dev; Tim Daly
> Subject: [EXTERNAL] Re: Axiom musings...
>
> Richard Hamming gave a great talk. "You and Your Research"
> https://www.youtube.com/watch?v=a1zDuOPkMSw
>
> His big question is:
>
> "What is the most important problem in your field
> and why aren't you working on it?"
>
> To my mind, the most important problem in the field of
> computational mathematics is grounding computer
> algebra in proofs.
>
> Computer mathematical algorithms that "maybe,
> possibly, give correct answers sometimes" are a problem.
> Indeed, for computer algebra, it is the most important
> problem. We need proven algorithms.
>
> New algorithms, better graphics, better documentation,
> are all "nice to have" but, as Hamming would say,
> they are not "the most important problem".
>
> Tim
>
>
>
> On 7/2/20, Tim Daly <[hidden email]> wrote:
>> Time for another update.
>>
>> The latest Intel processors, available only to data centers
>> so far, have a built-in FPGA. This allows you to design
>> your own circuits and have them loaded "on the fly",
>> running in parallel with the CPU.
>>
>> I bought a Lattice ICEstick FPGA development board. For
>> the first time there are open source tools that support it so
>> it is a great test bench for ideas and development. It is a
>> USB drive so it can be easily ported to any PC.
>> (https://www.latticesemi.com/products/developmentboardsandkits/icestick)
>>
>> I also bought a large Intel Cyclone FPGA development board.
>> (http://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&No=836)
>> which has 2 embedded ARM processors. Unfortunately
>> the tools (which are freely available) are not open source.
>> It has sufficient size and power to do anything.
>>
>>
>> I've got 2 threads of work in progress, both of which
>> involve FPGAs (Field Programmable Gate Arrays).
>>
>> Thread 1
>>
>> The first thread involves proving programs correct. Once
>> a proof has been made it is rather easier to check the proof.
>> If code is shipped with a proof, the proof can be loaded into
>> an FPGA running a proof-checker which verifies the program
>> in parallel with running the code on the CPU.
>>
>> I am researching the question of writing a proof checker that
>> runs on an FPGA, thus verifying the code "down to the metal".
>> The Lean proof checker is the current target.
>>
>> The idea is to make "Oracle" algorithms that, because they
>> are proven correct and verified at runtime, can be trusted
>> by other mathematical software (e.g. Lean, Coq, Agda)
>> when used in proofs.
>>
>> Thread 2
>>
>>
>> The second thread involves arithmetic. Axiom currently ships
>> with numeric routines (BLAS and LAPACK, see bookvol10.5).
>> These routines have a known set of numeric failures such as
>> cancellation, underflow, and scaling.
>>
>> John Gustafson has designed a 'unum' numeric format that can
>> eliminate many of these errors. (See
>> Gustafson, John "The End of Error" CRC Press 2015
>> https://www.amazon.com/End-Error-Computing-Chapman-Computational/dp/1482239868 )
>>
>> The research goal is to implement Axiom's floating-point
>> arithmetic that can be offloaded onto an FPGA implementing
>> the unum format. Such a system would radically simplify
>> the implementation of BLAS and LAPACK as most of the
>> errors can't occur. The impact would be similar to using
>> multi-precision integer arithmetic, only now it's floating-point.
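>>
>> To give the flavor (this is plain interval arithmetic, the
>> simplest relative of Gustafson's approach, not an actual
>> unum implementation):
>>
>>   (defstruct (interval (:constructor interval (lo hi)))
>>     lo hi)
>>
>>   (defun interval-add (a b)
>>     "Addition that carries its own error bounds."
>>     (interval (+ (interval-lo a) (interval-lo b))
>>               (+ (interval-hi a) (interval-hi b))))
>>
>>   ;; (interval-add (interval 1.0 1.1) (interval 2.0 2.2))
>>   ;; => an interval whose bounds contain the true sum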
>>
>> SANE, the greater goal.
>>
>> The Axiom SANE compiler / interpreter can use both of
>> these tools to implement trusted mathematical software.
>> It's a long, ambitious research effort but even if only pieces
>> of it succeed, it changes computational mathematics.
>>
>> Tim
>>
>> "A person's reach should exceed their grasp,
>> or what's a computer for?"  (misquoting Robert Browning)
>>
>> (https://www.quotetab.com/quote/by-robert-browning/ah-but-a-mans-reach-should-exceed-his-grasp-or-whats-a-heaven-for)
>>
>>
>>
>>
>> On 6/16/20, Tim Daly <[hidden email]> wrote:
>>> WHY PROVE AXIOM CORRECT (SANE)?
>>>
>>> Historically, Axiom credits CLU, the Cluster language by
>>> Barbara Liskov, with the essential ideas behind the Spad
>>> language. Barbara gave a talk (a partial transcript below)
>>> that gives the rationale behind the ``where clause'' used by
>>> Spad.
>>>
>>> She talks about the limits of the compile-time capability.
>>> In particular, she says:
>>>
>>>    To go further, where we would say that T,
>>>    in addition, has to be an equality relation, that requires
>>>    much more sophisticated techniques that, even today, are
>>>    beyond the capabilities of the compiler.
>>>
>>> Showing that the ``equal'' function satisfies the equality
>>> relation is no longer ``beyond the capabilities of the compiler''.
>>> We have the required formalisms and mechanisms to
>>> prove properties at compile time.
>>>
>>> The SANE effort is essentially trying to push compile
>>> time checking into proving that, for categories that use
>>> ``equal'', we prove that the equal function implements
>>> equality.
>>>
>>> I strongly encourage you to watch her video.
>>>
>>> Tim
>>>
>>> ===========================================
>>> Barbara Liskov
>>> May 2012
>>> MIT CSAIL
>>> Programming the Turing Machine
>>> https://www.youtube.com/watch?v=ibRar7sWulM
>>>
>>> POLYMORPHISM
>>>
>>> We don't just want a set, we want polymorphism or
>>> generics, as they are called today. We wanted to
>>> have a generic set which was parameterized by type
>>> so you could instantiate it as:
>>>
>>> Set = [T:type] create, insert,...
>>>   % representation for Set object
>>>   % implementation of Set operations
>>>   Set
>>>
>>> Set[int] s := Set[int]$create()
>>> Set[int]$insert(s,3)
>>>
>>> We wanted a static solution to this problem. The
>>> problem is, not every type makes sense as a parameter
>>> to Set of T. For sets, per se, you need an equality
>>> relation. If it had been a sorted set we would have
>>> some ordering relation. And a type that didn't have
>>> one of those things would not have been a legitimate
>>> parameter. We needed a way of expressing that in a
>>> compile-time, checkable manner. Otherwise we would
>>> have had to resort to runtime checking.
>>>
>>> Our solution was
>>>
>>> Set = [T:  ] create, insert,...
>>>   T equal: (T,T) (bool)
>>>
>>>
>>> Our solution, what we call the ``where clause''. So we
>>> added this to the header. The ``where clause'' tells you
>>> what operations the parameter type has to have.
>>>
>>> If you have the ``where'' clause you can do the static
>>> checking because when you instantiate, when you provide
>>> an actual type, the compiler can check that the type has
>>> the operations that are required. And then, when you write
>>> the implementation of Set the compiler knows it's ok to
>>> call those operations because you can guarantee they are
>>> actually there when you get around to running.
>>>
>>> Of course, you notice that there's just syntax here; there's
>>> no semantics.
>>>
>>> As I'm sure you all know, compile-time type checking is
>>> basically a proof technique of a very limited sort and
>>> this was about as far as we could push what you could get out of the
>>> static analysis. To go further, where we would say that T,
>>> in addition, has to be an equality relation, that requires
>>> much more sophisticated techniques that, even today, are
>>> beyond the capabilities of the compiler.
>>>
>>>
>>>
>>>
>>> On 3/24/20, Tim Daly <[hidden email]> wrote:
>>>> I've spent entirely too much time studying the legal issues
>>>> of free and open source software. There are copyright,
>>>> trademark, and intellectual property laws. I have read
>>>> several books, listened to lectures, and read papers on
>>>> the subject. I've spoken to lawyers about it. I've even
>>>> been required, by law, to coerce people I respect.
>>>> You would think it was all perfectly clear. It isn't.
>>>>
>>>> The most entertaining and enlightening lectures were
>>>> by Robert Lefkowitz at OSCON 2004. His talk is
>>>> "The Semasiology of Open Source", which sounds
>>>> horrible but I assure you, this is a real treat.
>>>>
>>>> THE THESIS
>>>>
>>>> Semasiology, n. The science of meanings or
>>>> sense development (of words); the explanation
>>>> of the development and changes of the meanings
>>>> of words. Source: Webster's Revised Unabridged
>>>> Dictionary, © 1996, 1998 MICRA, Inc.
>>>>
>>>> "Open source doesn't just mean access to the
>>>> source code." So begins the Open Source Definition.
>>>> What then, does access to the source code mean?
>>>> Seen through the lens of an Enterprise user, what
>>>> does open source mean? When is (or isn't) it
>>>> significant? And a catalogue of open source
>>>> related arbitrage opportunities.
>>>>
>>>> http://origin.conversationsnetwork.org/Robert%20Lefkowitz%20-%20The%20Semasiology%20of%20Open%20Source.mp3
>>>>
>>>> Computer source code has words and sentence
>>>> structure like actual prose or even poetry. Writing
>>>> code for the computer is like writing an essay. It
>>>> should be written for other people to read,
>>>> understand and modify. These are some of the
>>>> thoughts behind literate programming proposed
>>>> by Donald Knuth. This is also one of the ideas
>>>> behind Open Source.
>>>>
>>>> THE ANTITHESIS
>>>>
>>>> "Open Source" is a phrase like "Object Oriented"
>>>> - weird at first, but when it became popular, the
>>>> meaning began to depend on the context of the
>>>> speaker or listener. "Object Oriented" meant that
>>>> PERL, C++, Java, Smalltalk, Basic and the newest
>>>> version of Cobol are all "Object Oriented" - for some
>>>> specific definition of "Object Oriented". Similar is
>>>> the case of the phrase "Open Source".
>>>>
>>>> In Part I, Lefkowitz talked about the shift of the
>>>> meaning of "Open Source" away from any reference
>>>> to the actual "source code," and more towards other
>>>> phases of the software development life cycle. In
>>>> Part II, he returns to the consideration of the
>>>> relationship between "open source" and the actual
>>>> "source code," and reflects upon both the way
>>>> forward and the road behind, drawing inspiration
>>>> from Charlemagne, King Louis XIV, Donald Knuth,
>>>> and others.
>>>>
>>>> http://origin.conversationsnetwork.org/ITC.OSCON05-RobertLefkowitz-2005.08.03.mp3
>>>>
>>>> THE SYNTHESIS
>>>>
>>>> In a fascinating synthesis, Robert “r0ml” Lefkowitz
>>>> polishes up his exposition on the evolving meaning
>>>> of the term ‘open source’. This intellectual joy-ride
>>>> draws on some of the key ideas in artificial intelligence
>>>> to probe the role of language, meaning and context
>>>> in computing and the software development process.
>>>> Like Wittgenstein’s famous thought experiment, the
>>>> open source ‘beetle in a box’ can represent different
>>>> things to different people, bearing interesting fruit for
>>>> philosophers and software creators alike.
>>>>
>>>> http://itc.conversationsnetwork.org/audio/download/itconversations-1502.mp3
>>>>
>>>>
>>>> On 3/7/20, Tim Daly <[hidden email]> wrote:
>>>>> I've pushed the lastest version of Axiom. The plan, followed
>>>>> so far, is to push once a month on the 7th.
>>>>>
>>>>> After some chat room interactions it was brought home
>>>>> again that the proof world really does not seem to like the
>>>>> idea of proving programs correct. And, given that it was one
>>>>> of the main Axiom goals and a point of friction during the fork,
>>>>> the computer algebra world does not like the idea of proving
>>>>> programs correct either.
>>>>>
>>>>> So the idea of "computational mathematics", which includes
>>>>> both disciplines (as well as type theory) seems still a long
>>>>> way off.
>>>>>
>>>>> Nevertheless, the primary change in these past and future
>>>>> updates is focused on merging proof and computer algebra.
>>>>>
>>>>> Proof systems are able to split the problem of creating a
>>>>> proof and the problem of verifying a proof, which is much
>>>>> cheaper. Ideally the proof checker would run on verified
>>>>> hardware so the proof is checked "down to the metal".
>>>>>
>>>>> I have a background in Field Programmable Gate Arrays
>>>>> (FPGAs) as I tried to do a startup using them. So now I'm
>>>>> looking at creating a hardware proof checker using a
>>>>> dedicated instruction set, one designed to be verified.
>>>>> New CPUs used in data centers (not yet available to us
>>>>> mortals) have built-in FPGAs so it would be possible to
>>>>> "side-load" a proof of a program to be checked while the
>>>>> program is run. I have the FPGA and am doing a gate-level
>>>>> special instruction design for such a proof checker.
>>>>>
>>>>>
>>>>> On 2/7/20, Tim Daly <[hidden email]> wrote:
>>>>>> As a mathematician, it is difficult to use a system like Axiom,
>>>>>> mostly because it keeps muttering about Types. If you're not
>>>>>> familiar with type theory (most mathematicians aren't) then it
>>>>>> seems pointless and painful.
>>>>>>
>>>>>> So Axiom has a steep learning curve.
>>>>>>
>>>>>> As a mathematician with an algorithmic approach, it is difficult
>>>>>> to use a system like Axiom, mostly because you have to find
>>>>>> or create "domains" or "packages", understand categories
>>>>>> with their inheritance model, and learn a new language with
>>>>>> a painful compiler always complaining about types.
>>>>>>
>>>>>> So Axiom has a steep learning curve.
>>>>>>
>>>>>> The Sane version of Axiom requires knowing the mathematics.
>>>>>> It also assumes a background in type theory, inductive logic,
>>>>>> homotopy type theory, ML (meta-language, not machine
>>>>>> learning (yet)), interactive theorem proving, kernels, tactics,
>>>>>> and tacticals. Also assumed is knowledge of specification languages,
>>>>>> Hoare triples, proof techniques, soundness, and completeness.
>>>>>> Oh, and there is a whole new syntax and semantics added to
>>>>>> specify definitions, axioms, and theorems, not to mention whole
>>>>>> libraries of the same.
>>>>>>
>>>>>> So Axiom Sane has a steep learning curve.
>>>>>>
>>>>>> I've taken 10 courses at CMU and spent the last 4-5 years
>>>>>> learning to read the leading edge literature (also known
>>>>>> as "greek studies", since every paper has pages of greek).
>>>>>>
>>>>>> I'm trying to unify computer algebra and proof theory into a
>>>>>> "computational mathematics" framework. I suspect that the only
>>>>>> way this system will ever be useful is after Universities have a
>>>>>> "Computational Mathematics" major course of study and degree.
>>>>>>
>>>>>> Creating a new department is harder than creating Axiom Sane
>>>>>> because, you know, ... people.
>>>>>>
>>>>>> I think such a department is inevitable given the deep and wide
>>>>>> impact of computers, just not in my lifetime. That's ok. When I
>>>>>> started programming there was no computer science degree.
>>>>>>
>>>>>> Somebody has to be the first lemming over the cliff.
>>>>>>
>>>>>> Tim
>>>>>>
>>>>>> On 1/9/20, Tim Daly <[hidden email]> wrote:
>>>>>>> When Axiom Sane is paired with a proof checker (e.g. with Lean)
>>>>>>> there is a certain amount of verification that is involved.
>>>>>>>
>>>>>>> Axiom will provide proofs (presumably validated by Lean) for its
>>>>>>> algorithms. Ideally, when a computation is requested from Lean
>>>>>>> for a GCD, the result as well as a proof of the GCD algorithm is
>>>>>>> returned. Lean can then verify that the proof is valid. But it is
>>>>>>> computationally more efficient if Axiom and Lean use a cryptographic
>>>>>>> hash, such as SHA1. That way the proof doesn't need to be
>>>>>>> 'reproven', only a hash computation over the proof text needs to
>>>>>>> be performed. Hashes are blazingly fast. This allows proofs to be
>>>>>>> exchanged without re-running the proof mechanism. Since a large
>>>>>>> computation request from Lean might involve many algorithms
>>>>>>> there would be considerable overhead to recompute each proof.
>>>>>>> A hash simplifies the issue yet provides proof integrity.
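>>>>>>>
>>>>>>> A sketch of the hashing side in lisp, assuming the Ironclad
>>>>>>> library is loaded (illustrative only; the actual Axiom/Lean
>>>>>>> exchange may differ):
>>>>>>>
>>>>>>>   (defun proof-hash (proof-text)
>>>>>>>     "SHA1 digest of PROOF-TEXT as a hex string."
>>>>>>>     (ironclad:byte-array-to-hex-string
>>>>>>>      (ironclad:digest-sequence :sha1
>>>>>>>        (ironclad:ascii-string-to-byte-array proof-text))))
>>>>>>>
>>>>>>>   ;; both sides compare (proof-hash text) instead of
>>>>>>>   ;; re-running the proof; equal hashes mean the proof
>>>>>>>   ;; text is unchanged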
>>>>>>>
>>>>>>> Tim
>>>>>>>
>>>>>>>
>>>>>>> On 1/9/20, Tim Daly <[hidden email]> wrote:
>>>>>>>> Provisos.... that is, 'formula SUCH pre/post-conditions'
>>>>>>>>
>>>>>>>> A computer algebra system ought to know and ought to provide
>>>>>>>> information about the domain and range of a resulting formula.
>>>>>>>> I've been pushing this effort since the 1980s (hence the
>>>>>>>> SuchThat domain).
>>>>>>>>
>>>>>>>> It turns out that computing with, carrying, and combining this
>>>>>>>> information is difficult if not impossible in the current system.
>>>>>>>> The information isn't available and isn't computed. In that sense,
>>>>>>>> the original Axiom system is 'showing its age'.
>>>>>>>>
>>>>>>>> In the Sane implementation the information is available. It is
>>>>>>>> part of the specification and part of the proof steps. With a
>>>>>>>> careful design it will be possible to provide provisos for each
>>>>>>>> given result that are carried with the result for use in further
>>>>>>>> computation.
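>>>>>>>>
>>>>>>>> One way to carry provisos as data alongside a result (a
>>>>>>>> made-up sketch, not the SuchThat domain itself):
>>>>>>>>
>>>>>>>>   (defstruct (with-proviso
>>>>>>>>               (:constructor such-that (formula provisos)))
>>>>>>>>     formula   ; the result, e.g. '(/ 1 x)
>>>>>>>>     provisos) ; conditions it holds under, e.g. '((/= x 0))
>>>>>>>>
>>>>>>>>   ;; combining two results unions their provisos
>>>>>>>>   (defun combine (op a b)
>>>>>>>>     (such-that (list op (with-proviso-formula a)
>>>>>>>>                         (with-proviso-formula b))
>>>>>>>>                (union (with-proviso-provisos a)
>>>>>>>>                       (with-proviso-provisos b)
>>>>>>>>                       :test #'equal)))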
>>>>>>>>
>>>>>>>> This raises interesting questions to be explored. For example,
>>>>>>>> if the formula is defined over an interval, how is the interval
>>>>>>>> arithmetic handled?
>>>>>>>>
>>>>>>>> Exciting research ahead!
>>>>>>>>
>>>>>>>> Tim
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 1/3/20, Tim Daly <[hidden email]> wrote:
>>>>>>>>> Trusted Kernel... all the way to the metal.
>>>>>>>>>
>>>>>>>>> While building a trusted computer algebra system, the
>>>>>>>>> SANE version of Axiom, I've been looking at questions of
>>>>>>>>> trust at all levels.
>>>>>>>>>
>>>>>>>>> One of the key tenets (the de Bruijn principle) calls for a
>>>>>>>>> trusted kernel through which all other computations must
>>>>>>>>> pass. Coq, Lean, and other systems do this. They base
>>>>>>>>> their kernel on logic like the Calculus of Constructions or
>>>>>>>>> something similar.
>>>>>>>>>
>>>>>>>>> Andrej Bauer has been working on a smaller kernel (a
>>>>>>>>> nucleus) that separates the trust from the logic. The rules
>>>>>>>>> for the logic can be specified as needed but checked by
>>>>>>>>> the nucleus code.
>>>>>>>>>
>>>>>>>>> I've been studying Field Programmable Gate Arrays (FPGA)
>>>>>>>>> that allow you to create your own hardware in a C-like
>>>>>>>>> language (Verilog). It allows you to check the chip you build
>>>>>>>>> all the way down to the transistor states. You can create
>>>>>>>>> things as complex as a whole CPU or as simple as a trusted
>>>>>>>>> nucleus. (youtube: Building a CPU on an FPGA). ACL2 has a
>>>>>>>>> history of verifying hardware logic.
>>>>>>>>>
>>>>>>>>> It appears that, assuming I can understand Bauer's
>>>>>>>>> Andromeda system, it would be possible and not that hard
>>>>>>>>> to implement a trusted kernel on an FPGA the size and
>>>>>>>>> form factor of a USB stick.
>>>>>>>>>
>>>>>>>>> Trust "down to the metal".
>>>>>>>>>
>>>>>>>>> Tim
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 12/15/19, Tim Daly <[hidden email]> wrote:
>>>>>>>>>> Progress is happening on the new Sane Axiom compiler.
>>>>>>>>>>
>>>>>>>>>> Recently I've been musing about methods to insert axioms
>>>>>>>>>> into categories so they could be inherited like signatures.
>>>>>>>>>> At the moment I've been thinking about adding axioms in
>>>>>>>>>> the same way that signatures are written, adding them to
>>>>>>>>>> the appropriate categories.
>>>>>>>>>>
>>>>>>>>>> But this is an interesting design question.
>>>>>>>>>>
>>>>>>>>>> Axiom already has a mechanism for inheriting signatures
>>>>>>>>>> from categories. That is, we can get a plus signature from,
>>>>>>>>>> say, the Integer category.
>>>>>>>>>>
>>>>>>>>>> Suppose we follow the same pattern. Currently Axiom
>>>>>>>>>> inherits certain so-called "attributes", such as
>>>>>>>>>> ApproximateAttribute,
>>>>>>>>>> which implies that the results are only approximate.
>>>>>>>>>>
>>>>>>>>>> We could adapt the same mechanism to inherit the Transitive
>>>>>>>>>> property by defining it in its own category. In fact, if we
>>>>>>>>>> follow Carette and Farmer's "tiny theories" architecture,
>>>>>>>>>> where each property has its own inheritable category,
>>>>>>>>>> we can "mix and match" the axioms at will.
>>>>>>>>>>
>>>>>>>>>> An "axiom" category would also export a function. This function
>>>>>>>>>> would essentially be a "tactic" used in a proof. It would modify
>>>>>>>>>> the proof step by applying the function to the step.
>>>>>>>>>>
>>>>>>>>>> Theorems would have the same structure.
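>>>>>>>>>>
>>>>>>>>>> In CLOS terms the "tiny theories" idea might look like
>>>>>>>>>> this (class and function names invented for illustration):
>>>>>>>>>>
>>>>>>>>>>   (defclass transitive () ())    ; x<y and y<z imply x<z
>>>>>>>>>>   (defclass reflexive () ())
>>>>>>>>>>   (defclass antisymmetric () ())
>>>>>>>>>>
>>>>>>>>>>   ;; "mix and match": a partial order is just the three
>>>>>>>>>>   ;; tiny theories combined by inheritance
>>>>>>>>>>   (defclass partial-order
>>>>>>>>>>       (transitive reflexive antisymmetric) ())
>>>>>>>>>>
>>>>>>>>>>   ;; each axiom category exports its "tactic"
>>>>>>>>>>   (defgeneric apply-axiom (axiom proof-step))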
>>>>>>>>>>
>>>>>>>>>> This allows theorems to be constructed at run time (since
>>>>>>>>>> Axiom supports "First Class Dynamic Types").
>>>>>>>>>>
>>>>>>>>>> In addition, this design can be "pushed down" into the Spad
>>>>>>>>>> language so that Spad statements (e.g. assignment) had
>>>>>>>>>> proof-related properties. A range such as [1..10] would
>>>>>>>>>> provide explicit bounds in a proof "by language definition".
>>>>>>>>>> Defining the logical properties of language statements in
>>>>>>>>>> this way would make it easier to construct proofs since the
>>>>>>>>>> invariants would be partially constructed already.
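>>>>>>>>>>
>>>>>>>>>> For instance, the parse object for a range like [1..10]
>>>>>>>>>> could carry its bounds as ready-made proof facts (again,
>>>>>>>>>> an invented sketch):
>>>>>>>>>>
>>>>>>>>>>   (defclass range-expr ()
>>>>>>>>>>     ((lo :initarg :lo :reader range-lo)   ; fact: i >= lo
>>>>>>>>>>      (hi :initarg :hi :reader range-hi))) ; fact: i <= hi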
>>>>>>>>>>
>>>>>>>>>> This design merges the computer algebra inheritance
>>>>>>>>>> structure with the proof of algorithms structure, all under
>>>>>>>>>> the same mechanism.
>>>>>>>>>>
>>>>>>>>>> Tim
>>>>>>>>>>
>>>>>>>>>> On 12/11/19, Tim Daly <[hidden email]> wrote:
>>>>>>>>>>> I've been reading Stephen Kell's (Univ of Kent
>>>>>>>>>>> https://www.cs.kent.ac.uk/people/staff/srk21/) on
>>>>>>>>>>> Seven deadly sins of talking about “types”
>>>>>>>>>>> https://www.cs.kent.ac.uk/people/staff/srk21/blog/2014/10/07/
>>>>>>>>>>>
>>>>>>>>>>> He raised an interesting idea toward the end of the essay
>>>>>>>>>>> that type-checking could be done outside the compiler.
>>>>>>>>>>>
>>>>>>>>>>> I can see a way to do this in Axiom's Sane compiler.
>>>>>>>>>>> It would be possible to run a program over the source code
>>>>>>>>>>> to collect the information and write a stand-alone type
>>>>>>>>>>> checker. This "unbundles" type checking and compiling.
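>>>>>>>>>>>
>>>>>>>>>>> The unbundled checker could be as simple as a separate
>>>>>>>>>>> pass over the collected declarations (a toy sketch):
>>>>>>>>>>>
>>>>>>>>>>>   (defun check-types (decls)
>>>>>>>>>>>     "DECLS is a list of (name . type); warn on clashes."
>>>>>>>>>>>     (let ((env (make-hash-table :test #'eq)))
>>>>>>>>>>>       (loop for (name . type) in decls
>>>>>>>>>>>             for old = (gethash name env)
>>>>>>>>>>>             when (and old (not (equal old type)))
>>>>>>>>>>>               do (warn "~a: ~a clashes with ~a" name type old)
>>>>>>>>>>>             do (setf (gethash name env) type))))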
>>>>>>>>>>>
>>>>>>>>>>> Taken further I can think of several other kinds of checkers
>>>>>>>>>>> (aka 'linters') that could be unbundled.
>>>>>>>>>>>
>>>>>>>>>>> It is certainly something to explore.
>>>>>>>>>>>
>>>>>>>>>>> Tim
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 12/8/19, Tim Daly <[hidden email]> wrote:
>>>>>>>>>>>> The Axiom Sane compiler is being "shaped by the hammer
>>>>>>>>>>>> blows of reality", to coin a phrase.
>>>>>>>>>>>>
>>>>>>>>>>>> There are many goals. One of the primary goals is creating a
>>>>>>>>>>>> compiler that can be understood, maintained, and modified.
>>>>>>>>>>>>
>>>>>>>>>>>> So the latest changes involved adding multiple index files.
>>>>>>>>>>>> These are documentation (links to where terms are mentioned
>>>>>>>>>>>> in the text), code (links to the implementation of things),
>>>>>>>>>>>> error (links to where errors are defined), signatures (links to
>>>>>>>>>>>> the signatures of lisp functions), figures (links to figures),
>>>>>>>>>>>> and separate category, domain, and package indexes.
>>>>>>>>>>>>
>>>>>>>>>>>> The tikz package is now used to create "railroad diagrams"
>>>>>>>>>>>> of syntax (a la the PASCAL report). The implementation of
>>>>>>>>>>>> those diagrams follows immediately. Collectively these will
>>>>>>>>>>>> eventually define at least the syntax of the language. In the
>>>>>>>>>>>> ideal, changing the diagram would change the code but I'm
>>>>>>>>>>>> not that clever.
>>>>>>>>>>>>
>>>>>>>>>>>> Reality shows up with the current constraint that the
>>>>>>>>>>>> compiler should accept the current Spad language as
>>>>>>>>>>>> closely as possible. Of course, plans are to include new
>>>>>>>>>>>> constructs (e.g. hypothesis, axiom, specification, etc)
>>>>>>>>>>>> but these are being postponed until "syntax complete".
>>>>>>>>>>>>
>>>>>>>>>>>> All parse information is stored in a parse object, which
>>>>>>>>>>>> is a CLOS object (and therefore a Common Lisp type).
>>>>>>>>>>>> Fields within the parse object, e.g. variables, are also
>>>>>>>>>>>> CLOS objects (and therefore Common Lisp types).
>>>>>>>>>>>> It's types all the way down.
>>>>>>>>>>>>
>>>>>>>>>>>> These types are being used as 'signatures' for the
>>>>>>>>>>>> lisp functions. The goal is to be able to type-check the
>>>>>>>>>>>> compiler implementation as well as the Sane language.
>>>>>>>>>>>>
>>>>>>>>>>>> The parser is designed to "wrap around" so that the
>>>>>>>>>>>> user-level output of a parse should be the user-level
>>>>>>>>>>>> input (albeit in a "canonical" form). This "mirror effect"
>>>>>>>>>>>> should make it easy to see that the parser properly
>>>>>>>>>>>> parsed the user input.
>>>>>>>>>>>>
>>>>>>>>>>>> The parser is "first class" so it will be available at
>>>>>>>>>>>> runtime as a domain allowing Spad code to construct
>>>>>>>>>>>> Spad code.
>>>>>>>>>>>>
>>>>>>>>>>>> One plan, not near implementation, is to "unify" some
>>>>>>>>>>>> CLOS types with the Axiom types (e.g. String). How
>>>>>>>>>>>> this will happen is still in the land of design. This would
>>>>>>>>>>>> "ground" Spad in lisp, making them co-equal.
>>>>>>>>>>>>
>>>>>>>>>>>> Making lisp "co-equal" is a feature, especially as Spad is
>>>>>>>>>>>> really just a domain-specific language in lisp. Lisp
>>>>>>>>>>>> functions (with CLOS types as signatures) would be
>>>>>>>>>>>> available for implementing Spad functions. This not
>>>>>>>>>>>> only improves the efficiency, it would make the
>>>>>>>>>>>> BLAS/LAPACK (see volume 10.5) code "native" to Axiom.
>>>>>>>>>>>>
>>>>>>>>>>>> On the theory front I plan to attend the Formal Methods
>>>>>>>>>>>> in Mathematics / Lean Together conference, mostly to
>>>>>>>>>>>> find out how little I know, especially what I need to know.
>>>>>>>>>>>> http://www.andrew.cmu.edu/user/avigad/meetings/fomm2020/
>>>>>>>>>>>>
>>>>>>>>>>>> Tim
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On 11/28/19, Jacques Carette <[hidden email]> wrote:
>>>>>>>>>>>>> The underlying technology to use for building such an
>>>>>>>>>>>>> algebra library is documented in the paper "Building on
>>>>>>>>>>>>> the Diamonds between Theories: Theory Presentation
>>>>>>>>>>>>> Combinators"
>>>>>>>>>>>>> http://www.cas.mcmaster.ca/~carette/publications/tpcj.pdf
>>>>>>>>>>>>> [which will also be on the arxiv by Monday, and has been
>>>>>>>>>>>>> submitted to a journal].
>>>>>>>>>>>>>
>>>>>>>>>>>>> There is a rather full-fledged prototype, very well
>>>>>>>>>>>>> documented at
>>>>>>>>>>>>> https://alhassy.github.io/next-700-module-systems/prototype/package-former.html
>>>>>>>>>>>>>
>>>>>>>>>>>>> (source at https://github.com/alhassy/next-700-module-systems).
>>>>>>>>>>>>> It is literate source.
>>>>>>>>>>>>>
>>>>>>>>>>>>> The old prototype was hard to find - it is now at
>>>>>>>>>>>>> https://github.com/JacquesCarette/MathScheme
>>>>>>>>>>>>>
>>>>>>>>>>>>> There is also a third prototype in the MMT system, but it
>>>>>>>>>>>>> does not quite function properly today; it is under repair.
>>>>>>>>>>>>>
>>>>>>>>>>>>> The paper "A Language Feature to Unbundle Data at Will"
>>>>>>>>>>>>> (https://alhassy.github.io/next-700-module-systems/papers/gpce19_a_language_feature_to_unbundle_data_at_will.pdf)
>>>>>>>>>>>>>
>>>>>>>>>>>>> is also relevant, as it solves a problem with parametrized
>>>>>>>>>>>>> theories (parametrized Categories in Axiom terminology)
>>>>>>>>>>>>> that all current systems suffer from.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jacques
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 2019-11-27 11:47 p.m., Tim Daly wrote:
>>>>>>>>>>>>>> The new Sane compiler is also being tested with the Fricas
>>>>>>>>>>>>>> algebra code. The compiler knows about the language but
>>>>>>>>>>>>>> does not depend on the algebra library (so far). It should be
>>>>>>>>>>>>>> possible, by design, to load different algebra towers.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> In particular, one idea is to support the "tiny theories"
>>>>>>>>>>>>>> algebra from Carette and Farmer. This would allow much
>>>>>>>>>>>>>> finer grain separation of algebra and axioms.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> This "flexible algebra" design would allow things like the
>>>>>>>>>>>>>> Lean theorem prover effort in a more natural style.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 11/26/19, Tim Daly <[hidden email]> wrote:
>>>>>>>>>>>>>>> The current design and code base (in bookvol15) supports
>>>>>>>>>>>>>>> multiple back ends. One will clearly be a common lisp.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Another possible design choice is to target the GNU
>>>>>>>>>>>>>>> GCC intermediate representation, making Axiom "just
>>>>>>>>>>>>>>> another front-end language" supported by GCC.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The current intermediate representation does not (yet)
>>>>>>>>>>>>>>> make any decision about the runtime implementation.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On 11/26/19, Tim Daly <[hidden email]> wrote:
>>>>>>>>>>>>>>>> Jason Gross and Adam Chlipala ("Parsing Parses") developed
>>>>>>>>>>>>>>>> a dependently typed general parser for context free grammar
>>>>>>>>>>>>>>>> in Coq.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> They used the parser to prove its own completeness.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Unfortunately Spad is not a context-free grammar.
>>>>>>>>>>>>>>>> But it is an interesting thought exercise to consider
>>>>>>>>>>>>>>>> an "Axiom on Coq" implementation.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
>
