
ben@singularitynet.io's Personal Meeting Room - Shared screen with speaker view
Adam Vandervorst
13:39
Maybe your audio is off Ben.
Nugi
16:08
I am just a high school student, but I am excited about your paper. It's exciting and fascinating that I can listen to you all in this room.
ben@singularitynet.io
27:18
Hi Nugi! Well I started working on AGI when I was 16 and am still at it ;)
Adam Vandervorst
56:57
I think recursion schemes, more specifically their algebras and coalgebras, strike a nice balance between expressivity, safety, and efficiency.
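To make the algebras/coalgebras point concrete, here is a minimal self-contained Haskell sketch (illustrative only, not tied to any particular recursion-schemes library): the recursive traversal is written once, as cata for folds and ana for unfolds, and each specific analysis is just a small non-recursive algebra.

{-# LANGUAGE DeriveFunctor #-}

newtype Fix f = Fix (f (Fix f))

-- Fold: consume a structure layer by layer using an algebra (f a -> a).
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (Fix x) = alg (fmap (cata alg) x)

-- Unfold: build a structure layer by layer using a coalgebra (a -> f a).
ana :: Functor f => (a -> f a) -> a -> Fix f
ana coalg x = Fix (fmap (ana coalg) (coalg x))

-- Base functor for tiny arithmetic expressions.
data ExprF r = Lit Int | Add r r deriving Functor
type Expr = Fix ExprF

-- A non-recursive algebra: how to handle one layer.
evalAlg :: ExprF Int -> Int
evalAlg (Lit n)   = n
evalAlg (Add a b) = a + b

-- cata evalAlg (Fix (Add (Fix (Lit 1)) (Fix (Lit 2))))  ==  3

The same ExprF functor can be paired with different algebras (evaluation, pretty-printing, size) without ever rewriting the recursion, which is where the expressivity/safety balance comes from.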
Adam Vandervorst
01:05:09
https://jtobin.io/page6/ is a good source of examples for recursion schemes and https://github.com/precog/matryoshka/blob/master/resources/recursion-schemes.pdf is a cheat sheet of the different types.
Adam Vandervorst
01:23:13
Absolutely agree with usability.
Adam Vandervorst
01:25:53
I think what TensorFlow, Keras, and PyTorch do well is that they compile high-level user programs into a lower-level computational graph which can be optimized and executed efficiently.
Adam Vandervorst
01:26:54
The analogy here would be "What language is easy, expressive, and compiles to folds and refolds?"
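As one illustration of the "compiles to folds and refolds" analogy, here is a minimal sketch (the factorial example is my own, purely illustrative): a hylomorphism fuses an unfold with a fold, so the user only writes two small non-recursive pieces and the intermediate structure is never materialized.

{-# LANGUAGE DeriveFunctor #-}

-- Base functor for lists.
data ListF e r = Nil | Cons e r deriving Functor

-- Refold: unfold with a coalgebra, fold with an algebra, fused into one pass.
hylo :: Functor f => (f b -> b) -> (a -> f a) -> a -> b
hylo alg coalg = alg . fmap (hylo alg coalg) . coalg

-- Factorial described as: unfold n, n-1, ..., 1, then fold with (*).
factorial :: Int -> Int
factorial = hylo alg coalg
  where
    coalg 0 = Nil
    coalg n = Cons n (n - 1)
    alg Nil        = 1
    alg (Cons n r) = n * r

-- factorial 5 == 120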
Kabir
01:27:07
I think a good risk-reduction strategy is to rely on interoperability and APIs as much as possible, so that components can be exchanged with one another as painlessly as possible. Not much can be done when uncertainty is so high...
Kabir
01:27:17
I believe this is what Ben just mentioned too...
Adrian Borucki
01:29:43
@Adam well, TensorFlow is not doing that well anymore - people are moving to PyTorch and others because TF was so unwieldy and became very bloated. In general, in the programming language world you have LLVM: various “frontends” compiling to one Intermediate Representation that can then be optimised.
Adam Vandervorst
01:32:59
@Adrian I agree TF is becoming very bloated, but in an absolute sense it's still doing a really good job of letting the user define highly complex, efficient systems. I think LLVM is too low-level for this; you wouldn't want to write or compile all that - more likely you'd compile to an existing library optimized for this purpose.
Adam Vandervorst
01:34:51
The second question here is "do you want to run this on a top-100 super-computer or on a beefy AWS instance", which influences what level of abstraction you can compile down to.
Adam Vandervorst
01:36:25
You don't think on the level you calculate. You don't need your reasoning system to be extremely efficient, you just need it to be smart and let it use good calculators and programming languages.
Mark Waser
01:36:30
I'm afraid that I've got to bail for another meeting. Great seeing such interest!
Jon Pennant
01:37:35
Yeah I need to go in a moment too. Really nice to meet everyone, there are a lot of very nice, skilled people here. :)
Adrian Borucki
01:37:53
@Adam Well, the advantage of compiling to this lower level representation is that you get very good performance - if that is not necessary then you can probably treat Atomese as some EDSL made with some existing programming language’s metaprogramming support.
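A minimal sketch of the EDSL idea (an assumed illustration, not OpenCog's actual Atomese representation): in a deep embedding the user's program is plain data, so a separate pass can rewrite or optimize it before execution, much like the computational graphs mentioned above.

-- Deep-embedded expression language: the "program" is just a data structure.
data Expr
  = Num Double
  | Var String
  | Add Expr Expr
  | Mul Expr Expr
  deriving Show

-- One optimization pass over the representation: constant folding.
simplify :: Expr -> Expr
simplify (Add a b) = combine Add (+) (simplify a) (simplify b)
simplify (Mul a b) = combine Mul (*) (simplify a) (simplify b)
simplify e         = e

combine :: (Expr -> Expr -> Expr) -> (Double -> Double -> Double) -> Expr -> Expr -> Expr
combine _   op (Num x) (Num y) = Num (op x y)
combine con _  a       b       = con a b

-- simplify (Add (Num 1) (Mul (Num 2) (Num 3)))    ==  Num 7.0
-- simplify (Add (Num 1) (Mul (Num 2) (Var "x")))  ==  Add (Num 1.0) (Mul (Num 2.0) (Var "x"))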
Adam Vandervorst
01:41:59
For anyone interested: https://arxiv.org/abs/1810.10525
Adrian Borucki
01:42:32
The DreamCoder paper also has nice results showing the process of building a library of programming primitives for program learning.
ben@singularitynet.io
01:42:50
https://mitpress.mit.edu/books/compositional-evolution
Adrian Borucki
01:44:24
https://arxiv.org/abs/2006.08381
Adam Vandervorst
01:48:40
Left-recursive meta-learning haha
Adam Vandervorst
01:51:31
In principle the infinite tree of meta-learning can be expressed as a single back-loop: let one of the tasks the meta-learner learns be the meta-learning algorithm itself.
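A toy sketch of the back-loop idea (all names here are assumed and purely illustrative): the whole infinite tower of learner, meta-learner, meta-meta-learner, ... is produced by one lazy, self-referential definition.

type Learner = Double -> Double      -- stand-in for a learned model

-- Stand-in "meta-learning" step: produce the next level from the previous one.
metaStep :: Learner -> Learner
metaStep l = l . l

-- The infinite tower tied into a single back-loop: level n+1 = metaStep (level n).
tower :: [Learner]
tower = base : map metaStep tower
  where base x = x + 1

-- map (\l -> l 0) (take 4 tower)  ==  [1.0, 2.0, 4.0, 8.0]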
Jose Ignacio
01:55:56
Thank you all for the great discussion!
Adam Vandervorst
01:56:22
That's great, thanks for organizing.