Design of Semantic Latex

Design of Semantic Latex

Tim Daly
(email failed... retry)

The referenced paper only looked at the DLMF Airy functions. The results are:

"Of the 17 sections in the DLMF sample chapter on Airy functions we can handle the mathematical

formulas completely in 6, partially in 5 (without the transformation to MathML), and 6 remain

incomplete.


The grammar currently contains approximately 1000 productions, of which ca. 350 are dictionaries.

There are about 550 rewrite rules. There are fewer rewrite rules than grammar rules, partly because

dictionaries can be treated uniformly by manipulating literals, and partly because it is still incomplete

with respect to the grammar.


Our project shows that parsing mathematics in the form of LATEX documents written to project-specific

rules is feasible, but due to the variations in notation the grammar needs to be engineered specifically

for the project, or even for different chapters written by different mathematicians (e.g. the chapter on

elementary transcendental functions and on Airy functions)."


Many have tried to parse the DLMF but there is not sufficient information in the latex. This
effort used many rules to try to decide whether w(x+y) is a multiplication or a function
application. There are rules to decide whether sin a/b means (sin a)/b or sin(a/b), either of
which would be trivial to distinguish if the latex read {sin a}/b or sin{a/b}.

Trivial amounts of semantic markup in the DLMF would communicate the semantics without
needing 1000 productions which still get wrong answers.
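
To make that concrete, here is a minimal sketch of what such markup could look like. The
macro names (\apply, \mult) are hypothetical, not part of any existing package; the point is
only that both forms print identically while carrying different meanings for a parser:

  % hypothetical tags; both expand to the same print form
  \newcommand{\apply}[2]{#1(#2)}   % function application
  \newcommand{\mult}[2]{#1(#2)}    % multiplication by a parenthesized factor

  % w(x+y) as an application versus a product -- same appearance on the page
  $\apply{w}{x+y}$
  $\mult{w}{x+y}$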


Re: Design of Semantic Latex

Tim Daly
(email failed...retry)

William Sit wrote:

=======
>I am still not sure what your immediate goal is. My current understanding (correct me
>if I am still wrong) is that you want to translate the left hand side of an identity (like a
>formula for an integral) given in latex into Axiom code which then, by executing the
>code, generates the right hand side of that identity, first output as an Axiom
>expression and then as selatex, and one difficulty is the lack of semantics implicit
>in the input (left hand side) latex string.

Well, the selatex output would only happen if you asked for it, but yes, this is correct.

>However, there are other difficulties: often, there is no canonical form for the right hand
>side (an answer after evaluating the left hand side),

If you look at the CATS Schaum's Integration set you'll see that each Axiom
integration result was subtracted from the published result, expecting it to simplify to 0.
(Zero-equivalence is undecidable in general.)

Where the answers differed they usually differed by a constant
(which was shown by taking the derivative of the difference). Sometimes special
simplification routines were needed, other times some rules had to be applied to get
some trig identities. More than once there was no resolution, which indicates either
a bug in Axiom, an error in the published result, or some hidden semantic assumption.
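
In symbols, the check amounts to this: if A(x) is Axiom's antiderivative and P(x) is the
published one for the same integrand f(x), then

  \[ \frac{d}{dx}\bigl(A(x) - P(x)\bigr) = f(x) - f(x) = 0 \]

so the difference A(x) - P(x) should simplify to a constant; when it does not, one of the
two results (or a hidden assumption) is at fault.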

Even more interesting is that many of the reference books list many results.

=======
>nor does Axiom always give the provisos under which the evaluation/identity is valid

One crisis at a time... Axiom has, for example, branch cut choices and these need to
be made explicit. Ideally, these could be changed by provisos. The SuchThat domain
provides some proviso functionality but is rarely used at present. However, the published
results don't provide the provisos either, making the whole issue invisible despite
being important.
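
As a sketch of what an explicit proviso might look like in the markup (the tag name is
hypothetical; the idea is that it prints only the formula and hands the condition to
something like SuchThat):

  % hypothetical tag: prints the formula, records the condition for the CAS
  \newcommand{\proviso}[2]{#1}
  \[ \proviso{\sqrt{x^2} = x}{x \ge 0} \]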

=========
>For test suites, you won't have to worry about the canonical form or provisos because
>you are only comparing the answer with the one generated by a previous version of
>Axiom, to make sure nothing is "broken". For that purpose, the semantic you need only
>needs to be consistent from one Axiom version to the next, and you may choose the
>specific parameters needed in any way where the identity makes sense.

The regression tests use exactly this philosophy. But the CATS tests are a
"Computer Algebra Test Suite", not an Axiom test suite so they are measured against
published results. On several occasions Axiom has found errors in the published results.

==========
>The general problem (which I am not sure if you are pursuing) where one wants to add
>explicit semantic to any mathematical expression given in latex is a far more challenging
>one, independent of whether the expression can or should be evaluated, or semantic
>provided for the rewritten expression (or "answer"). I wonder if the general problem has
>applications. Would such mark-ups help or hinder the creation of a piece of mathematical
>work?

In general I don't think published mathematics will adopt semantic markup. The context
available in the surrounding paragraphs is sufficient. Most formulas are just there
to make the text statements exact.

However, for reference works such as the CRC and NIST handbooks, which have no surrounding
paragraphs, the loss of that context makes the formulas ambiguous. The E=mc^2 formula has no
meaning if you don't know what E, m, and c are.
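
The same pass-through trick sketched above would let a hypothetical tag record what each
symbol denotes without changing the printed formula:

  % hypothetical tag: prints only the symbol, records what it denotes
  \newcommand{\grounded}[2]{#1}
  \[ \grounded{E}{energy} = \grounded{m}{mass}\,\grounded{c}{speed of light}^2 \]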

Reference works lack grounding. This is an effort to provide semantic grounding by
showing that the formula is backed by algorithms that can recover the "results".
Is that useful?

The CATS test suite shows that Axiom has problems which need to be solved.
It also shows that the reference works have published errors. Both efforts benefit.

Is it of interest? Apparently so. Every so often someone tries to parse latex with the
goal of automation or inter-system communication. The natural source of latex
formulas is the reference works. The parsing effort fails every time because of
the lack of semantics. This effort addresses the "root cause", making the
reference works much more useful.


Re: Design of Semantic Latex

Richard Fateman-2
Branch cuts are sometimes specified in natural language. While the end points are
specific -- they depend on the singularities -- the cuts can be moved for convenience,
and this is often done to evaluate contour integrals, for example.

Take up a book on complex analysis and see what problems you have
 as you try to encode the statements, or especially the homework
problems. I tried this decades ago with the text I used,
https://www.amazon.com/Functions-Complex-Variable-Technique-Mathematics/dp/0898715954
but probably any other text would do.

I think the emphasis on handbook or reference book representation
is natural, and I have certainly pursued this direction myself.  However
what you/we want to be able to encode is mathematical discourse. This
goes beyond "has the algorithm reproduced the reference value for an
integration."   Can you encode in semantic latex a description of the geometry
of the (perhaps infinitely layered) contour of a complex function?  You
might wonder if this is important, but then note that questions of this sort
appear in the problem section for chapter 1.

Here's the challenge then.  Take a mathematics book and "encode"
 it so that a program (hypothetically) could answer the problems at
the end of each chapter.

You do not need special functions and integral tables to find
problems that are too hard to handle.  I just found this

http://news.mit.edu/2014/computer-system-automatically-solves-word-problems-0502


I think the problem, algebra word problems, which has been addressed repeatedly since
1965 or so, is already difficult. While I think (judging solely by the news article -- I was
unaware of this work -- which apparently used Macsyma) this is low quality, it is
hard to be sure. Maybe their problems can be related to your ambitions. A quote from the article above:

The system’s ability to perform fairly well even when trained chiefly on raw numerical answers is
“super-encouraging,” Knight adds. “It needs a little help, but it can benefit from a bunch of extra
data that you haven’t labeled in detail.”



RJF

Re: Design of Semantic Latex

Tim Daly


On Sat, Aug 27, 2016 at 12:14 PM, Richard Fateman <[hidden email]> wrote:

Take up a book on complex analysis and see what problems you have
 as you try to encode the statements, or especially the homework
problems. I tried this decades ago with the text I used,
https://www.amazon.com/Functions-Complex-Variable-Technique-Mathematics/dp/0898715954
but probably any other text would do.

My last project at CMU (Tires) involved work on machine learning
using natural language (and Good-Old-Fashioned-AI (GOFAI)).
I'm not smart enough to make progress in natural language.

 

I think the emphasis on handbook or reference book representation
is natural, and I have certainly pursued this direction myself.  However
what you/we want to be able to encode is mathematical discourse. This
goes beyond "has the algorithm reproduced the reference value for an
integration."   Can you encode in semantic latex a description of the geometry
of the (perhaps infinitely layered) contour of a complex function?  You
might wonder if this is important, but then note that questions of this sort
appear in the problem section for chapter 1.

Like any research project, there have to be bounds on the ambition.

At this point, the goal is to modify the markup to disambiguate a latex
formula so the machine can import it. Axiom needs to import it to create
a test suite measuring progress against existing knowledge.

What you're describing seems to be a way to encode topological issues
dealing with the structure of the space underlying the formulas. I have no
idea how to encode the Bloch sphere or a torus or any other space except
by referencing an Axiom domain, which implicitly encodes it.

If the formula deals with quantum mechanics then the algorithms have an
implicit, mechanistic way of dealing with the Bloch sphere. So markup that
uses these function calls uses this implicit grounding. Similarly, markup that
uses a branch cut implicitly uses the implementation semantics.

Axiom and Mathematica have one set of branch cuts, Maple and Maxima
have another (as far as I can tell). So the markup decisions have to be
carefully chosen.
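
One way to make that choice visible (a hypothetical tag, in the same pass-through style as
the other sketches in this thread) is to name the convention an identity assumes:

  % hypothetical tag: prints the identity, names the branch-cut convention assumed
  \newcommand{\onbranch}[2]{#1}
  \[ \onbranch{\log(-1) = i\pi}{principal branch, cut along the negative real axis} \]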
 

Here's the challenge then.  Take a mathematics book and "encode"
 it so that a program (hypothetically) could answer the problems at
the end of each chapter.

That's a much deeper can of worms than it appears. I spent a lot of
time in the question-answering literature. I have no idea how to make
progress in that area. The Tires project involved self-modifying lisp
based on natural language interaction with a human in the limited
domain of changing a car tire. See
(The grant ended before the project ended. Sigh)

Tim

P.S. Tires is self-modifying lisp code. It "learns" by changing itself.
The initial code (the seed code) becomes "something else". One
interesting insight is that two versions of the seed code will diverge
based on "experience". That implies that you can't "teach by copy",
that is, you can't teach one system and then "just copy" it to another
existing system since their experiences (and the code structure)
will differ. Any system that "learns" will fail "teach by copy", I believe.
That means that AI will not have the exponential growth that everyone
seems to believe.



Re: Design of Semantic Latex

Tim Daly
The weaver program can now process a latex document.
The end result is a tree structure of the same document.
There is still more to do, of course. Much more.

It is clear that the semantics of the markup tags are all in
the weaver program. This is obvious in hindsight since the
markup needs to be transparent to the print representation.

The parser needs to know the 'arity' of each tag, since \tag{a}{b} would parse one way,
as \tag{a} followed by the group {b}, for a 1-arity tag, and another way, as \tag{a}{b},
for a 2-arity tag. The code needs to be generalized to parse according to the arity.
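
As a small illustration of the arity problem (the tag names and the tree notation are only
suggestive, not the weaver's actual output):

  % hypothetical tags used only for illustration
  \newcommand{\tagA}[1]{#1}     % 1-arity: \tagA{a}{b} is \tagA{a} followed by the group {b}
  \newcommand{\tagB}[2]{#1#2}   % 2-arity: \tagB{a}{b} consumes both arguments
  % a parser that knows the arities can emit trees like (tagA a) {b} versus (tagB a b)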

The weaver program is structured so that the tree-parse output
is independent of Axiom. The Axiom rewrite will take the tree
as input and produce valid Axiom inputforms. This should make
it possible to target any CAS.

Onward and upward, as they say....

Tim


Re: Design of Semantic Latex

James Davenport

Sorry – I’ve not been much involved: other projects. But I just saw this – haven’t looked in any detail yet.

DeepAlgebra - an outline of a program
Authors: Przemyslaw Chojecki
Categories: cs.AI math.AG
Comments: 6 pages, https://przchojecki.github.io/deepalgebra/

We outline a program in the area of formalization of mathematics to automate theorem proving in
algebra and algebraic geometry. We propose a construction of a dictionary between automated theorem
provers and (La)TeX exploiting syntactic parsers. We describe its application to a repository of
human-written facts and definitions in algebraic geometry (The Stacks Project). We use deep learning
techniques.

https://arxiv.org/abs/1610.01044

 

Re: Design of Semantic Latex

Tim Daly
Axiom calls COQ (for Spad code) and ACL2 (for Lisp code) at build time
in order to run proofs. It is hard enough trying to construct proofs by hand
despite the notion that Spad is "nearly mathematical". Implementation
details matter a lot. We do, however, have the types already available.
Even with types the Spad-to-COQ gap is a long jump at best, a PhD
at worst.

I'm not sure how a dictionary between automated theorem provers and
latex would be useful. Fateman has already shown that raw latex lacks
enough information for an unambiguous parse (see prior references).

I'm suggesting that the latex be "augmented" by a semantic tags package
containing tags that do not change the print representation but contain
additional semantic information. E.g., for Axiom, \withtype ...

\[ \withtype{ \int{sin(x)} }{x}{EXPR(INT)} \]

prints as 'sin(x)' but becomes

integrate(sin(x),x)

for Axiom.
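
The print transparency itself needs nothing more than pass-through definitions. A minimal
sketch (my own, assuming the simplest case, and writing \semint for the integration tag so
as not to clash with the standard \int):

  % semantic tags that are transparent to the print representation
  \newcommand{\withtype}[3]{#1}   % expression, variable, Axiom type; prints only the expression
  \newcommand{\semint}[1]{#1}     % marks an integrand; prints it unchanged

  \[ \withtype{ \semint{\sin(x)} }{x}{EXPR(INT)} \]   % prints as sin(x)

  % a post-processor (the weaver) would rewrite this into the Axiom inputform
  %   integrate(sin(x),x)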

That way the latex prints "as is" but can be post-processed by machine
to feed CAS systems. Of course this would be a trial-and-error process
as the CAS will inevitably fail on missing semantics, requiring yet-
another-tag somewhere. There is no such thing as a simple job.
I have a trivial implementation working but there is much to do.

As for trying to feed a Deep Neural Network proofs... I have spent a
fair amount of time on DNNs (part of the TIRES project, see prior refs).
I am clueless how they could possibly be applied to proofs.

Tim

