Continuation of a conversation with @[email protected] about deconstructing the approach of NCA (neural cellular automata)/ ML generally – which led to insights on describing the ai-alignment problem in terms of :

  1. This project
  2. Present-day-ai

the conversation

https://mastodon.online/@themanual4am/111597886169413358

yes, i think differently on that first point though, because establishing what the first few stages are, instructs how subsequent derived layers compose and augment (ideally, and high-dimensionally)

Following that, we can probably skip a bunch to NCA kinda level (albeit a differently defined problem and solution), say, and end up with something ’truer’, more broadly applicable

Eg, given the way phenomenal structures and behaviours seem to reappear through levels, expect similar

https://mastodon.online/@themanual4am/111601210965911353

… that said – when i woke up, i understood what we’d landed on yesterday (in the context/ course of this project). i’ll follow up when i figure out the best way to articulate

Ok, so.

The above insight relates to conceiving of a new way to map this project to present-day-ai: previously, my analysis was based more on the alignment-space between the two.

One specific outcome is a way to describe a solution to the ai-alignment problem in terms of present-day-ai (I think)

Though before that, I think the first step is to sketch a solution to the ai-alignment problem in the simplified terms of this project (with a set of priors similar to zonal ban)


background

Early in this project, I realised that the ML-space looks as it does because of ‘the gap’ (in mind-sciences’ understanding of the mind) :

  1. I had identified an apparent architecture for cognition; but found no formal account in the sciences 1
  2. AI-peeps didn’t see any architecture represented by the sciences. However, they had identified some neuronal functionality, and (I imagine) thought – “ok, if this is all-that-we-see, let’s see if it’s all-that-we-need”
    1. Specifically: let’s imagine that we just throw ‘all the data’ at this process, and hope that cognition simply ‘pops out’ at the end
    2. (Which is fair enough initially; but, I suggest, flawed)

—what next?

Observations :

  1. We’re still stuck in that ‘all at once’ mindset with ML (as if acting on a resultant, flattened set-theoretic set)
  2. I suggest that we ought to think in terms of structured ML-able steps, and a plurally scoped architecture (as if acting on a graph of related concerns)

simply, where present approaches to ML equate to ‘indiscriminately process all the things’, we completely miss the point that biology, framed by survival, necessarily discriminates what it spends valuable resources processing

take 5, and imagine we don’t have earth-killing resource-expenditure options…

remember WWED: —what would evolution do?

Gist : I suggest a composition of discrete-though-relatable ML scopes, which together describe a scene; with each scope a discrete interpretive sub-context; whereby reasoning about any scene involves evaluating the significance of present scopes, and their respective relations, across the scene.
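To make the gist concrete, here is a minimal sketch in Python. Every name (ScopeReading, Scene, evaluate_scene) and the toy weighting are illustrative assumptions of mine, not established terminology or a claim about the real implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ScopeReading:
    """One discrete interpretive sub-context that fired for the current scene."""
    scope_id: str               # e.g. a phenomenal category
    characteristics: dict       # whatever was recognised within that scope
    significance: float = 0.0   # stored measure of significance for this scope

@dataclass
class Scene:
    readings: list = field(default_factory=list)    # ScopeReading instances present
    relations: list = field(default_factory=list)   # (scope_a, relation_kind, scope_b)

def evaluate_scene(scene: Scene) -> float:
    """Toy 'reasoning': total the significance of present scopes, nudged by relatedness."""
    total = sum(r.significance for r in scene.readings)
    return total + 0.1 * len(scene.relations)   # arbitrary placeholder weighting

# Example: a high-significance scope dominates an otherwise mundane scene
scene = Scene(
    readings=[ScopeReading("adult-lion", {"posture": "stalking"}, 0.9),
              ScopeReading("waterhole", {}, 0.2)],
    relations=[("adult-lion", "near", "waterhole")],
)
print(evaluate_scene(scene))    # ~1.2
```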


the set-up

  1. Consider that stimuli (sensory information and patterns) accrue, based upon :
    1. Simultaneous arrival; significance of prior detection; weighting; etc
  2. Stimuli which commonly arrive together are represented and persisted together, as ‘localised or phenomenal accumulations’
  3. Heuristic tolerance and representational variation evolve through time; when variation and tolerance grow past a point, accumulations split, and variation and tolerances shrink
  4. Split accumulations are related, with various degrees of commonality {structure; behaviour; adornments; etc}
  5. Consider that accumulations are persisted like physical places :
    1. Relative arrangement, or locational
    2. And we can assign each a measure of significance
  6. When arriving stimuli match an accumulation :
    1. The accumulation is mapped to the relevant sensory field of view, and
    2. Any previously stored measure of significance is invoked
    3. (Note: this equates to attention and motivation)
  7. Over time, we might consider that the graph of split accumulations grows to include species; sub-types (like maturity, sex/ threat); and individuals, each with an optional measure of significance (a rough code sketch of steps 1–7 follows this list)
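A rough sketch of steps 1–7, under naive assumptions of my own: representational variation is measured as mean squared distance from a centroid, and ‘past a point’ is read as variation exceeding the current tolerance. None of the names or thresholds here are claims about how this should actually be built:

```python
import statistics
from dataclasses import dataclass, field

@dataclass
class Accumulation:
    examples: list = field(default_factory=list)   # co-arriving stimulus patterns (vectors)
    tolerance: float = 1.0       # how much variation this accumulation absorbs (step 3)
    significance: float = 0.0    # stored measure, invoked on a match (steps 5-6)
    related: list = field(default_factory=list)    # split siblings share structure (step 4)

    def centroid(self):
        return [statistics.fmean(dim) for dim in zip(*self.examples)]

    def variation(self) -> float:
        centre = self.centroid()
        return statistics.fmean(
            sum((x - c) ** 2 for x, c in zip(ex, centre)) for ex in self.examples
        )

    def accrue(self, stimulus) -> None:
        """Step 2: stimuli that arrive together are persisted together."""
        self.examples.append(stimulus)

    def maybe_split(self):
        """Step 3: when variation grows past tolerance, split and shrink both."""
        if len(self.examples) < 4 or self.variation() <= self.tolerance:
            return None
        half = len(self.examples) // 2
        sibling = Accumulation(self.examples[half:], self.tolerance * 0.5, self.significance)
        self.examples, self.tolerance = self.examples[:half], self.tolerance * 0.5
        self.related.append(sibling)    # step 4: split accumulations remain related
        sibling.related.append(self)
        return sibling

def match(accumulations, stimulus, radius=1.0):
    """Step 6: map arriving stimuli to the nearest accumulation and invoke its significance."""
    best, best_d = None, radius
    for acc in accumulations:
        if not acc.examples:
            continue
        d = sum((x - c) ** 2 for x, c in zip(stimulus, acc.centroid())) ** 0.5
        if d <= best_d:
            best, best_d = acc, d
    return best, (best.significance if best else 0.0)
```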

the point

Importantly :

  1. The external scene is noisy, and distracting
  2. What is important for comprehension exists by {presence (incl. characteristics); relation; significance} of the internal graph of attention-mapped phenomenal accumulations

‘internal attention graph’?

Note that there are two distinct problem-spaces for ML training :

  1. Recognising external stimulus patterns which belong to any phenomenal accumulation; and

  2. Evaluating the significance of (the presence, characteristics, and relations of) attention-mapped accumulations/ the internal attention graph

  3. I suggest that plural discrete-though-related attention-mapped phenomenal scopes are how you address ai-alignment

  4. Because

    1. Comprehension is compositional
    2. Reasoning evaluates, navigates, and mutates this internal attention graph
    3. Discrete, phenomenally assigned measures of significance present the method for manual alignment (see the sketch after this list)
    4. This is what happens when we observe and reason
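A skeletal sketch of the two problem-spaces as two independently trainable components, with per-accumulation significance exposed as the manual-alignment knob. The class names, the empty recogniser stub, and the flat sum in the evaluator are placeholders of mine, not a proposal for the actual models:

```python
from dataclasses import dataclass

@dataclass
class AttentionNode:
    accumulation_id: str
    characteristics: dict
    significance: float      # editable per accumulation: the manual-alignment knob

@dataclass
class AttentionGraph:
    nodes: list              # AttentionNode instances currently attention-mapped
    relations: list          # (node_a, relation_kind, node_b) tuples

class Recogniser:
    """Problem-space 1: which phenomenal accumulation (if any) do these stimuli belong to?"""
    def recognise(self, stimuli) -> list:
        # Stand-in: a real system would run per-scope pattern recognition here
        return []

class Evaluator:
    """Problem-space 2: evaluate the significance of the attention-mapped graph as a whole."""
    def evaluate(self, graph: AttentionGraph) -> float:
        # Significance is read from the nodes rather than re-learned end to end,
        # which is where the manual-alignment claim lives
        return sum(node.significance for node in graph.nodes)

def align(graph: AttentionGraph, accumulation_id: str, significance: float) -> None:
    """Manual alignment: directly edit the stored significance of one accumulation."""
    for node in graph.nodes:
        if node.accumulation_id == accumulation_id:
            node.significance = significance
```

One reading of why this split matters: the evaluator's inputs (and the significance values it reads) stay inspectable and editable, rather than being buried inside a single end-to-end model.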

consider this architecture :

—does this exist? —is there a name?



  1. Because, as it turns out, when psychology ditched introspection in favour of behaviourism, it also threw out meditation practices as a legitimate target of analysis and modernisation – which incidentally, at > 5k years, is the oldest, and I suggest most repeated, structured exercise/ methodology of our species’ entire history (as pertains to the domain of mind science). Staggering ↩︎