Going back to Zork and all the other classic text adventures, I fondly remember angrily typing simple actions and never knowing where I was. It’s not that I can’t create mental models; it’s just that I can’t build a mental model out of a list of possibilities.
What do I mean? This is what I think I mean:
You are facing WEST. You just walked WEST from the North Wall, where you exited a dungeon you fell into 40 steps back. If you look EAST, you see the NORTH wall. If you look NORTH, you see a forest. To the SOUTH, there’s a house with smoke coming out of the chimney. To the WEST, you see an impassable gate.
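Under the hood, a description like that is usually just a lookup table: one line of prose per compass direction, flattened into a wall of text. Here’s a minimal sketch of that idea — the names and structure are hypothetical, not taken from Zork or any real engine:

```python
# Hypothetical room state: a facing direction plus one description
# per compass direction (not from any actual game engine).
room = {
    "facing": "WEST",
    "views": {
        "EAST": "the NORTH wall",
        "NORTH": "a forest",
        "SOUTH": "a house with smoke coming out of the chimney",
        "WEST": "an impassable gate",
    },
}

def describe(room: dict) -> str:
    """Flatten the per-direction views into the familiar wall of text."""
    lines = [f"You are facing {room['facing']}."]
    for direction, view in room["views"].items():
        lines.append(f"If you look {direction}, you see {view}.")
    return "\n".join(lines)

print(describe(room))
```

That flattening is exactly the problem: the game hands you four disconnected sentences and expects you to assemble the geometry yourself.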
When you look around IRL, do you take in information or do you “sense” your surroundings and ignore information until you choose the proper way to go?
I can retrace my steps in a hedge maze. I can’t build a mental model of my surroundings from descriptors alone. When I look up from my laptop and stare at the wall across from me, I don’t just see “an unlocked window”… there are a million other things in my peripheral vision.
I’ll stop there. Check out this site! It’s cool.
Robot