Why Text Alone Fails: Modeling Cognitive Gaps

When we communicate through text alone, we risk cognitive misalignment. This post introduces a conceptual data model, built in Richard Barker's notation, that illustrates how verbal and non-verbal elements together shape cognition, and how discrepancies emerge when cognition is filtered through limited channels, especially in text-only exchanges.


Modeling Based on 西剛志『結局、どうしたら伝わるのか?』


| Entity Name | Description |
|---|---|
| Language | Verbal communication expressed through structured sentences and written text. |
| Convey through Sentences | A sub-type of Language; transmitting meaning via written or spoken sentences. |
| Non-verbal | Communication through tone, gestures, facial expressions, and other non-textual cues. |
| Speak | A convergence point of verbal and non-verbal signals during real-time communication. |
| Cognition | The internal process of interpreting received signals, shaped by context and prior knowledge. |
| Cognitive Discrepancy | A misalignment between intended meaning and received interpretation, often due to missing cues. |
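
The entities above map naturally onto a small type hierarchy. The sketch below is one possible rendering in Python; the class names, fields, and the interpret() helper are illustrative assumptions layered on top of the model, not part of Barker's notation or the book itself.

```python
# A minimal sketch of the entity model above, expressed as Python dataclasses.
# Class names, fields, and the interpret() helper are illustrative assumptions,
# not part of Barker's notation or the book's model.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Language:
    """Verbal communication expressed through structured sentences and text."""
    text: str


@dataclass
class ConveyThroughSentences(Language):
    """Sub-type of Language: meaning carried by written or spoken sentences."""
    medium: str = "written"  # or "spoken"


@dataclass
class NonVerbal:
    """Tone, gestures, facial expressions, and other non-textual cues."""
    cue: str


@dataclass
class Speak:
    """Convergence point of verbal and non-verbal signals in a real-time exchange."""
    verbal: Language
    non_verbal: List[NonVerbal] = field(default_factory=list)


@dataclass
class Cognition:
    """Interpretation of received signals, shaped by context and prior knowledge."""
    interpreted_meaning: str


@dataclass
class CognitiveDiscrepancy:
    """Gap between intended meaning and the interpretation that was received."""
    intended: str
    interpreted: str

    @property
    def misaligned(self) -> bool:
        return self.intended != self.interpreted


def interpret(signal: Speak, intended: str) -> CognitiveDiscrepancy:
    """Toy interpretation step: without non-verbal cues, the reading drifts."""
    if signal.non_verbal:
        cognition = Cognition(interpreted_meaning=intended)  # cues keep it aligned
    else:
        cognition = Cognition(interpreted_meaning=intended + " (tone unknown)")
    return CognitiveDiscrepancy(intended=intended,
                                interpreted=cognition.interpreted_meaning)


# Example: a text-only message (no non-verbal cues) produces a discrepancy.
msg = ConveyThroughSentences(text="Fine.", medium="written")
gap = interpret(Speak(verbal=msg), intended="Fine.")
print(gap.misaligned)  # True: without tone, "Fine." is easy to misread
```

In this sketch, a Speak that carries no NonVerbal cues yields a CognitiveDiscrepancy whose misaligned flag is true, mirroring the text-only failure mode the model describes.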

This model reminds us that cognition is not just about decoding words; it is about interpreting signals. When non-verbal cues are absent, cognition falters and discrepancies emerge. To archive knowledge effectively, we must design communication that respects the full spectrum of human cognition.
