We routinely speak past each other, using different words for the same things, and the same words for different things.

While physical movement shapes circumstances by direct manipulation, language influences circumstances by proxy, albeit only when interpreted 1.

  1. Language is a (lossy) knowledge synchronisation protocol
    1. Knowledge is a (high-dimensional) relational, representational graph (of sensory conditions, and derived forms) which:-
      1. Cannot be synchronised directly (by natural means)
      2. Is the evaluable artefact (state) of embodied sensory {cognitive; interpretive} operation(s) (processes), which equates to {cognitive; interpretive; behavioural} conditionality
  2. We use language to externally synchronise (and validate) accrued knowledge; to communicate
    1. Communication is linear (words are a linear sequence of letters; phrases and sentences are a linear sequence of words, etc), and as such, to communicate knowledge (whether by speaking or writing, in words, phrases or sentences), knowledge must be serialised (into language)
      1. Language:-
        1. serialisation includes two dimensions {encoding; expression} (in practice, expressions become composite encodings) #tbc this will be updated to refer to conceptual-domain and layers of communication-domain
        2. Deserialisation is dependent upon knowledge {high-dimensional graph; embedding; etc}, for interpreting, and decoding
    2. Language is inherently ambiguous, because:-
      1. The same letters apply to arbitrarily-many words
      2. The same words apply to arbitrarily-many sentences
      3. The same words and sentences apply to arbitrarily-many situations
      4. Every legal sequence of letters, words and sentences might mean different things in different situations, and as such must be interpreted contextually 2
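
The ambiguity above can be sketched minimally: the same token resolves to different nodes depending on the active interpretive context. All names here are illustrative, not drawn from any real lexicon.

```python
# Illustrative sketch: one token, many candidate nodes; only an
# interpretive context can resolve the collision.
LEXICON = {
    # token -> {interpretive context: node}
    "bank": {"finance": "institution_node", "geography": "riverbank_node"},
    "bark": {"botany": "tree_surface_node", "zoology": "dog_sound_node"},
}

def interpret(token: str, context: str) -> str:
    """Resolve a token to a node within a given interpretive context."""
    return LEXICON[token][context]

print(interpret("bank", "finance"))    # institution_node
print(interpret("bank", "geography"))  # riverbank_node
```
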
  3. Serialisation equates to isolating, ordering and sequencing nodes and node-relationships, of an unordered, arbitrarily associated, and (generally) continuable, graph
    1. A graph segment 3 is an operationally isolated selection/ scope of nodes and node-relations 4
      1. The process and result of isolating a graph segment is somewhat dependent upon pre-existing representation, within active and possible contexts of interpretation
    2. Any non-trivial graph segment is plurally enumerable (more than one possible sequence); it might serialise to arbitrarily many distinct ordered sequences of expression
      1. Plural enumeration -> plural sequential expressions? 5
    3. The process and result of isolating and serialising a graph segment is arbitrarily/ circumstantially dependent upon the overall circumstances of a graph, including operational configuration and pre-existing representation, across active and possible interpretative contexts
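
For intuition, plural enumeration can be sketched as enumerating the valid linearisations of a small graph segment: any ordering that respects the node-relations is a legal serialisation, and any non-trivial segment admits several. The graph below is illustrative.

```python
from itertools import permutations

# An isolated graph segment: three nodes, one relation constraining order.
nodes = ["cause", "effect", "aside"]
relations = {("cause", "effect")}  # "cause" must precede "effect"

def valid_orderings(nodes, relations):
    """Enumerate every linear sequence that respects the node-relations."""
    orders = []
    for candidate in permutations(nodes):
        position = {n: i for i, n in enumerate(candidate)}
        if all(position[a] < position[b] for a, b in relations):
            orders.append(candidate)
    return orders

# 3 of the 6 permutations keep "cause" before "effect": plural enumeration.
print(len(valid_orderings(nodes, relations)))  # 3
```
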
  4. Correspondingly, deserialisation equates to mapping ordered sequences of words, phrases and sentences, to nodes and node-relations (associated localisations of internal representation), within a high-dimensional graph of one-or-more interpretive contexts
    1. The mapping of words to nodes (and node relationships), and the subsequent meaning of resultant composition of nodes and relationships in place, is highly dependent upon preexisting representations and interpretive contexts
    2. Accurate communication, involves including within synchronised-detail sufficient phenomenal characteristics to ensure that the resultant interpreted representational composition is ‘sufficiently equivalent’
    3. The process and result of interpreting and deserialising language to an unambiguously specific graph segment is arbitrarily/ circumstantially dependent upon the overall circumstances of a graph, including operational configuration/ pre-existing representation, across active and possible interpretative contexts
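
Deserialisation's dependence on pre-existing representation can be sketched as follows: the same word sequence reconstructs different graph segments for listeners holding different known nodes. The composition rule (relating adjacent decoded nodes) is a deliberately crude, hypothetical stand-in for real interpretation.

```python
# Hypothetical sketch: map an ordered word sequence onto a listener's graph.
def deserialise(words, known_nodes):
    """Decode words to nodes; unknown words leave gaps in the segment."""
    decoded, gaps = [], []
    for word in words:
        (decoded if word in known_nodes else gaps).append(word)
    # Crude composition: relate adjacent decoded nodes in sequence order.
    edges = list(zip(decoded, decoded[1:]))
    return edges, gaps

message = ["cloud", "rain", "flood"]
listener_a = {"cloud", "rain", "flood"}   # holds all three representations
listener_b = {"rain", "flood"}            # lacks a "cloud" representation

print(deserialise(message, listener_a))   # ([('cloud', 'rain'), ('rain', 'flood')], [])
print(deserialise(message, listener_b))   # ([('rain', 'flood')], ['cloud'])
```
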
  5. It is useful to synchronise knowledge:-
    1. To validate internal representations of intangible (or unavailable) phenomena, which cannot be physically explored, mapped (or remapped)
      1. Children practise synchronising (and therefore validating) internal representations of tangible, available phenomena, which can be physically explored or remapped; this validates the process and state of internal representation, establishing a validated baseline
      2. Writers (and particularly technical writers) re-establish descriptions of well-known phenomena before describing less-well-known phenomena for similar reasons, to establish a validated baseline for further synchronisation
    2. To disseminate means of interpreting (tangible and intangible) phenomena
    3. Because typically synchronisation is more-efficient {quicker; safer; representationally and compositionally consistent} than re-discovery
  6. Operational imperatives; circumstantial and heuristic based dismissal
    1. Serialisation and deserialisation (as with all biological action) is weighed against finite energy reserves, resultant representational structural health, and operational latency in time
      1. So excessive synchronisation of inconsequential detail violates biological imperatives
      2. Interpretive generosity is often dependent upon affordances extrinsic to the written word: sufficient homeostatic and allostatic contentment, including physical and financial safety, nutrition/ diet, and other operational concerns, like noise/ distractions
    2. All interpretation risks internal representational corruption, mitigated by cognitive imperatives
    3. #todo further describe biological and cognitive operational imperatives
  7. Language:-
    1. Is encoded low-dimensionally (relative to knowledge) – a finite token encoding-scope; linear sequence
      1. There are too few tokens to uniquely reference all permutations of nodes and node-relations: consequently, tokens are re-used; which results in interpretive ambiguity (false equivalence/ collision)
      2. Two solutions exist:-
      3. Increase token complexity:-
        1. Tokens must be materialised by action/ articulation (gesture; speech; written symbols) to be communicated; so increasing token complexity is to increase articulation complexity (increased similarity/ nuance; or formal complexity), which must be pre-synchronised ahead of communication, so that it is noticed and interpreted as nuance, rather than dismissed as variance/ tolerance before consideration
      4. Increase token-space complexity (interpretive contexts):-
        1. Communicate with the same tokens (and increased interpretive tolerance to stimuli {gesture; speech; written symbols} variance), and depend upon consideration to resolve ambiguity into discernment
          1. By token association
          2. By operationally segmenting localised associations/ groupings
          3. By operationally segmenting interpretive context (for operationally efficient or specialist distinction, in distinct physical environments, or situations)
            1. #todo describe details of lack of association between new interpretive contexts and pre-existing language articulations, in terms of synchronisation templates
        2. Note: while increasing token-space complexity is not dependent upon pre-synchronisation of additional token distinctions with others, it is dependent upon another, more-fundamental, physical form of pre-synchronisation: biological evolution of token-scope complexity
      5. #todo further describe encoding constraints and operation? 6
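
The token-scarcity argument above is a pigeonhole argument, and can be checked with illustrative figures: with a finite token scope and a bounded sequence length, there are fewer distinct messages than distinguishable graph configurations, so token re-use (and hence collision) is unavoidable. Both counts below are arbitrary placeholders.

```python
# Count the distinct messages of length 1..max_len over a finite token scope.
tokens = 26      # token scope (e.g. letters); illustrative
max_len = 3      # bounded message length; illustrative
messages = sum(tokens ** n for n in range(1, max_len + 1))

meanings = 100_000  # placeholder count of distinguishable graph configurations

print(messages)             # 18278
print(meanings > messages)  # True: some messages must be re-used across meanings
```
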
    2. Is expressed high-dimensionally (relative to knowledge) – plural expressions
      1. There are many distinct collections of sentences which (when decoded and interpreted) map, to some tolerance of sufficiency, to equivalent graphs
        1. #todo describe expression tolerance
        2. For intuition, think of normalised and denormalised SQL data
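
Echoing the SQL intuition, plural expression can be sketched as two distinct serialisations decoding to the same graph segment; the statements below are illustrative.

```python
# Decode "subject relation object" statements into a set of edges.
def decode(statements):
    edges = set()
    for statement in statements:
        subject, relation, obj = statement.split()
        edges.add((subject, relation, obj))
    return edges

normalised = [
    "rain causes flood",
    "flood causes damage",
]
denormalised = [
    "flood causes damage",
    "rain causes flood",
    "rain causes flood",  # redundant restatement; same graph segment
]

# Distinct ordered expressions, equivalent decoded graphs.
print(decode(normalised) == decode(denormalised))  # True
```
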
  8. The problem-space of (and therefore the effort required for) serialisation {encoding; expression} and deserialisation (interpretation) {parsing; decoding} differs from individual to individual; as such, the optimal transformation toward accessibility is subjective
    1. We must interpret generously, because optimal accessibility of detail is audience-dependent, and therefore any language transformed sufficiently to be interpreted by all audiences will contain excessive transformation for many
    2. Interpretive generosity is constrained by finite limits: on tractability, the ability to decode, deserialise and map language to a graph; and on the time it takes to evaluate certainty
    3. Established practice, and common serialisation (writing) advice, is to intentionally minimise audience and topical content scope/ specificity, to reduce the encoding-expression-ambiguity-space of communications
    4. Interpretive generosity is plurally dimensional {expression; encoding; scope selection; sequencing; #tbc }, and individually asymmetric
  9. Formal coherence; candidate forms
    1. Attempting to interpret (decode and associate/ integrate to graph) poorly-or-differently-validated concepts can lead to an assessment of incoherence, or an incoherent candidate form
    2. Intractability is similar, though typically relates to conceptual composition (intrinsics)
    3. Both can result from attempts to interpret communication with insufficient preexisting knowledge to decode and interpret sufficiently
  10. The simplest and most common solution to the perils of interpretive complexity is tribalistic in-group out-group dynamics: reducing the operational cost of interpretation (biological imperative), or the scope, significance, or consequences of interpretive ambiguity (cognitive imperative)
  11. #tbc

#tbc #todo related :


  1. And with arbitrary equivalence and increased/ exponential scaling: successful influence is more efficient than physical enaction ↩︎

  2. interpretive context typically refers to fundamentally distinct dimensions-or-domains of interpretation, though might refer to operationally isolated graph segment #tbc  ↩︎

  3. Subgraph ↩︎

  4. Edges (for now, implementation specific though) ↩︎

  5. For intuition, think of normalised and denormalised SQL data ↩︎

  6. consequences include interpretive ambiguity; context dependence (token extrinsics) ; arbitrarily plural contextual token re-assignment (different tokens, same node) ↩︎