ben@singularitynet.io's Personal Meeting Room
Jacques
12:52
On my GitHub page there are links to the paper and the main materials.
Jacques
12:54
https://github.com/kaalam
Jacques
13:19
Also some slides https://www.slideshare.net/SantiagoBasalda/jazz-open-expo-europe-june-2020
Adam Vandervorst
46:02
Has anyone done any work on how much better an AI programmer would need to be to no longer benefit from reusing libraries, frameworks, or even languages?
Adam Vandervorst
46:37
Agreed, top-down planning is essential for every decently sized project.
Adam Vandervorst
47:34
And maybe the core is indeed to do co-evolution, and not planning. The high-level structure needs to be able to evolve in order to make the whole thing tractable.
Jon Pennant
49:09
I love the idea of vector space embeddings for programs or formal proofs. I guess a couple of questions I have about it: how do you ensure points which are close in vector space are somehow close in "program space"? How do you find your way through "the maze" of legal programs or statements to get from one point to another? I do think that brute force + even a mild heuristic would be very powerful, especially scaled to significant hardware.
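A minimal sketch of the "brute force + even a mild heuristic" idea, assuming a toy grammar of arithmetic expressions and a few made-up input/output examples; the heuristic simply ranks candidates by how many examples they already satisfy. Everything here (grammar, beam size, examples) is illustrative, not something discussed in the call.

```python
import itertools

# Toy "program space": arithmetic expressions over a single input x.
LEAVES = {"x", "1", "2"}
OPS = ["+", "-", "*"]

def grow(pool):
    """One enumeration step: combine existing expressions with each operator."""
    return pool | {f"({a} {op} {b})"
                   for a, b in itertools.product(pool, repeat=2) for op in OPS}

def score(expr, examples):
    """Mild heuristic: how many input/output examples the expression already satisfies."""
    hits = 0
    for x, want in examples:
        try:
            if eval(expr, {}, {"x": x}) == want:
                hits += 1
        except Exception:
            pass
    return hits

def synthesize(examples, depth=2, beam=200):
    """Brute-force enumeration, trimmed to the `beam` best-scoring candidates per level."""
    pool = set(LEAVES)
    for _ in range(depth):
        pool = set(sorted(grow(pool), key=lambda e: -score(e, examples))[:beam])
        for expr in pool:
            if score(expr, examples) == len(examples):
                return expr
    return None

# Target behaviour f(x) = 2*x + 1, given only three examples.
print(synthesize([(0, 1), (1, 3), (2, 5)]))  # e.g. "((2 * x) + 1)" or an equivalent
```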
ben@singularitynet.io
50:05
Here was our crude initial stab in that direction, https://arxiv.org/abs/2005.12535
ben@singularitynet.io
50:32
We haven't had time to follow up, but have gotten sidetracked into designing/building Hyperon, hoping to have a better framework in which to build such things ;)
Jon Pennant
52:05
That's interesting, I'll take a look, thanks Ben.
pedja@crackleware.nu
52:49
@Jon, maybe if we train a model of the program transformations that humans do most of the time, even if it's imperfect, this model could be used for calculating embeddings and therefore a topology of program space. Also, you need gradual transitions to facilitate understanding by limited humans.
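A rough sketch of this suggestion, assuming a toy hashed bag-of-tokens featurization and a few made-up (before, after) pairs of human edits: a linear map is trained with a simple contrastive objective so that edit-related programs land close together, which is one crude way to induce a topology on program space. All data, names, and hyperparameters are illustrative.

```python
import numpy as np

# Hypothetical corpus: (before, after) programs related by a small human edit.
EDIT_PAIRS = [
    ("for x in xs: total += x", "total = sum(xs)"),
    ("if b == True: f()", "if b: f()"),
    ("ys = []\nfor x in xs: ys.append(g(x))", "ys = [g(x) for x in xs]"),
]

def featurize(src, dim=64):
    """Toy featurization: hashed bag-of-tokens counts."""
    v = np.zeros(dim)
    for tok in src.replace("\n", " ").split():
        v[hash(tok) % dim] += 1.0
    return v

def train(pairs, dim=64, out=8, steps=500, lr=0.01, margin=1.0, seed=0):
    """Contrastive training: pull edit-related programs together,
    push an unrelated program at least `margin` away."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(out, dim))
    feats = [(featurize(a, dim), featurize(b, dim)) for a, b in pairs]
    for _ in range(steps):
        for i, (fa, fb) in enumerate(feats):
            za, zb = W @ fa, W @ fb
            grad = np.outer(za - zb, fa - fb)      # gradient of 0.5*||W fa - W fb||^2
            fn = feats[(i + 1) % len(feats)][1]    # an unrelated program as the negative
            zn = W @ fn
            d_neg = np.linalg.norm(za - zn)
            if d_neg < margin:                     # hinge: only push while too close
                grad -= np.outer(za - zn, fa - fn) / (d_neg + 1e-8)
            W -= lr * grad
    return W

W = train(EDIT_PAIRS)
a, b = EDIT_PAIRS[0]
d_related = np.linalg.norm(W @ featurize(a) - W @ featurize(b))
d_unrelated = np.linalg.norm(W @ featurize(a) - W @ featurize(EDIT_PAIRS[2][1]))
print(d_related, d_unrelated)  # the edit-related pair should typically end up closer
```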
Kabir
54:44
@Jacques, do you think that DNA/RNA code is deterministic?
Jon Pennant
54:57
@pedja yeah that's an interesting way of approaching it.
Adam Vandervorst
55:20
Sure, languages like APL are just sequences and do mapping and folding.
Adam Vandervorst
57:49
How does this compare to only considering total languages, Jacques?
Jacques
01:02:50
On DNA: the part we understand (maybe 25%) is. The bases (A, T, C, G) form codons. Each codon encodes an amino acid. The whole sequence encodes a polypeptide that folds into a protein. That is deterministic. When sequences are copied, they get random mutations.
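As a toy illustration of the determinism described here, a small excerpt of the standard genetic code (DNA coding-strand codons mapped to amino acids); the table is deliberately partial and purely illustrative.

```python
# Partial standard genetic code: DNA coding-strand codons -> amino acids.
GENETIC_CODE = {
    "ATG": "Met", "TTT": "Phe", "TTC": "Phe", "GGC": "Gly",
    "GCT": "Ala", "AAA": "Lys", "TGG": "Trp",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(sequence):
    """Deterministically map a coding sequence, codon by codon, to a peptide chain."""
    peptide = []
    for i in range(0, len(sequence) - 2, 3):
        aa = GENETIC_CODE[sequence[i:i + 3]]
        if aa == "STOP":
            break
        peptide.append(aa)
    return "-".join(peptide)

print(translate("ATGTTTGGCAAATAA"))  # Met-Phe-Gly-Lys
```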
Jacques
01:05:34
On APL: I haven't tried it, but pretty much everything can be used. What I have tried is very domain-specific. We need research to find more general things.
Adam Vandervorst
01:06:38
Again, how much better would an AI programmer need to be to no longer benefit from reusing human code? "Expertise" and "reasoning" are domains where orders-of-magnitude differences in ability are common. It may be that fundamentally solving it with the right components is a much easier problem than creating a much dumber system with lots of human bias, languages, and existing code.
Jon Pennant
01:07:29
I need to go. Thanks for the nice discussions everyone, lots of nice ideas to think about. :)
Adam Vandervorst
01:07:41
Thanks, Jacques. I think the point-free nature of APL may inspire some developments in your research.
anatoly belikov
01:09:51
Don't our programs represent something in the external world most of the time, like the different speakers in this Zoom meeting? Any current code generator is at a disadvantage in not knowing about the external world, and thus not being able to decompose the task into subcomponents.
Adam Vandervorst
01:11:48
Absolutely, Anatoly. That's a big problem with reusing existing programming languages: they are based on human abstractions and analogues, or other things that leak real-world data, like computers with memory addresses and instruction sets.
Adam Vandervorst
01:12:13
In Lambda Calculus every program works Ben haha
Haley
01:12:44
Learning always occurs as a flash of insight; no matter how much brute force we put into the 'figuring out', the moment you learn it is an insight, or a series of insights.
pedja@crackleware.nu
01:15:42
@Anatoly, yes, we may ask what the critical size and contents of the initial knowledge base would need to be to enable auto-programming and precise enough communication between human and AGI from that point on. Existing codebases are important; they contain important existing knowledge.
Haley
01:19:22
Same for computer vision…taking human knowledge of 3D objects, and getting the computer to recognize a 2D representation
Adam Vandervorst
01:20:24
"Existing codebases are important for new code because they contain existing knowledge", no existing codebases - for us - are important because they contain lots of work by people like us. And that doesn't hold true for new AI systems.
Adam Vandervorst
01:20:49
I agree with the first part of the message.
Haley
01:22:49
When a program can update its 'pattern of self', the pattern where it holds a representation of its own patterns… when it can modify that pattern, it will be learning; it will experience insight, to some degree.
Adam Vandervorst
01:42:21
Program learning agents is a perfectly valid thing.
Adam Vandervorst
01:43:01
That's qualitative feedback, right? We don't know how to do that programmatically (or even theoretically?) yet, right?
Adam Vandervorst
01:47:29
Category theory is an instance of this in some way, where level shifting, morphisms at these different levels, and the laws to which each level conforms are core.
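A loose Python analogy for the "levels and laws" point, using a toy Maybe-style container: fmap lifts ordinary functions to the container level, and the functor laws are the constraints that the lifted level must respect. The names and example values are illustrative.

```python
# Toy Maybe functor: plain values live at one level, containers at another,
# and fmap lifts functions between the two levels.
NOTHING = ("Nothing",)

def just(x):
    return ("Just", x)

def fmap(f, m):
    """Lift an ordinary function f to act on Maybe values."""
    return NOTHING if m == NOTHING else just(f(m[1]))

identity = lambda x: x
compose = lambda g, f: (lambda x: g(f(x)))

m = just(3)
f, g = (lambda x: x + 1), (lambda x: x * 2)
assert fmap(identity, m) == m                         # law 1: fmap id == id
assert fmap(compose(g, f), m) == fmap(g, fmap(f, m))  # law 2: fmap (g . f) == fmap g . fmap f
print("functor laws hold on this example")
```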
Adam Vandervorst
01:48:48
Maybe a meta discussion about these calls is useful too.
pedja@crackleware.nu
01:49:05
ty!