firstname.lastname@example.org's Personal Meeting Room - Shared screen with speaker view
This was an interesting, tangentially related read: https://arxiv.org/abs/2106.06981 (Thinking Like Transformers)
Sounds like ChemLambda too.
In OpenCog, is that collaboration between agents engineered in, or emergent?
Thanks for that introduction, Brett. I'd like to ask, how do you start "programming" this? Related: are you looking into "meta" systems that can learn how to initialize and train these general learning systems?
I'd like to push back on that, Alexey. I think this comes down to how fast an AGI can absorb everything you'd feed it otherwise. There may be a world where learning things from scratch is an easier problem than bootstrapping an AGI system to be smart enough to start learning from our resources.
I have a website that might answer some of your questions. A good way to get an overview is to go to the last link on my home page and ask for a copy of my book's introductory chapter.
@Adam generally I’d expect learning more to be more costly in terms of data and computation requirements - the thing is, learning systems can achieve better performance than those built by hand by human beings.
When a baby learns a language - humans appear to have an engineered 'language learning' program, but it is adaptive enough to learn English, Russian, Swahili… and babies are taught that language by their caregivers, not inventing it from scratch. What looks emergent to me is the understanding of that language, and how to apply it appropriately in the world, to Alexey's point.
@Adam as for the use of human resources by an AI - there is an important factor to consider here - getting something that can use human knowledge to bootstrap itself can in itself be a tough learning problem or engineering problem…
This links to SISTER at singularity net, including notebooks: https://blog.singularitynet.io/singularitynets-first-simulation-is-open-to-the-community-37445cb81bc4
But once we get an efficient solution to the problem of integrating existing human knowledge into AI systems then it shouldn’t matter anymore.
Wiring our limitations into the AI, ouch
SISTER paper http://jasss.soc.surrey.ac.uk/8/1/1.html
I believe the main drawback of engineered AGI is the limited flexibility during the optimization stage, whether it is an evolutionary system, a reinforcement learning algorithm, or even a backpropagating neural network. However, if we are able to reach some sort of proto-AGI from engineered parts, even if it works only up to a limited capability, it could take over its own optimization stage. So the road to AGI could also run from engineered structures to entirely emergent ones: let an engineered AGI run its own emergent structure, like a human creating an evolutionary algorithm that changes its own neuron parameters to optimize its intelligence. Now, reaching that initial engineered proto-AGI is easier said than done.
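The "algorithm that changes its own parameters" idea has a classic small-scale analogue: self-adaptive evolution strategies, where the mutation step size is part of the genome and evolves along with the solution. A minimal sketch (the sphere fitness function, dimensions, and all numbers here are made-up illustrations, not anything from the discussion):

```python
import math
import random

def sphere(x):
    # Toy fitness: minimize the sum of squares (lower is better).
    return sum(v * v for v in x)

def self_adaptive_es(dim=5, generations=200, seed=0):
    """(1+1)-ES in which the mutation step size sigma is itself
    mutated log-normally each generation, so the algorithm tunes
    its own search parameter as it runs."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    sigma = 1.0
    tau = 1.0 / math.sqrt(dim)  # standard self-adaptation learning rate
    best = sphere(x)
    for _ in range(generations):
        # Mutate the strategy parameter first, then mutate the solution with it.
        new_sigma = sigma * math.exp(tau * rng.gauss(0, 1))
        child = [v + new_sigma * rng.gauss(0, 1) for v in x]
        f = sphere(child)
        if f <= best:  # keep the child (and its sigma) only if it is no worse
            x, sigma, best = child, new_sigma, f
    return best

print(self_adaptive_es())
```

The point of the sketch is only that the optimizer's own hyperparameter (sigma) is subject to the same selection pressure as the solution, a very small instance of a system improving its own optimization stage.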
And the larger they become, the better transfer learning becomes!
imagination….creation of new patterns
There is https://en.wikipedia.org/wiki/General_game_playing
Ooof, self-play and tournament dynamics become very hard, very quickly when you allow the rules to change. Interesting idea though.
my answer is yes - if it is an AGI agent - continuously learning and adapting
The above statement was inspired by allowing "players" to teleport to different environments at will. Adding this action made the problem intractable.
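The teleport point can be illustrated with quick branching-factor arithmetic: if a search tree already has b moves per state, adding an action that can jump to any of N environments multiplies the tree size by roughly ((b + N) / b)^depth. All numbers below are hypothetical, just to show the blow-up:

```python
# Illustrative only: hypothetical branching factors, not from any real system.
moves_per_state = 10   # ordinary in-environment actions
environments = 100     # destinations a "teleport" action adds
depth = 6              # search depth in plies

without_teleport = moves_per_state ** depth
with_teleport = (moves_per_state + environments) ** depth

print(without_teleport)              # 10^6 states
print(with_teleport)                 # 110^6 states
print(with_teleport // without_teleport)  # blow-up factor
```

Even with these modest numbers, one extra always-available action class inflates the tree by a factor of 11^6, which is why self-play over mutable, teleport-enabled environments quickly becomes intractable.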
ty, best regards!