Discussion about this post

Jonathan McLernon:

This has always been my worry with Gen AI.

Wharton researchers Steven Shaw and Gideon Nave published a study involving 1,300+ people and 9,500+ trials across 3 experiments. They use the term "cognitive surrender" to describe the moment you stop evaluating output and start accepting it as your own thinking.

My concern has been what I refer to as the "erosion of my cognitive sovereignty".

But when I use Gen AI with careful parameters around topics in which I already hold expertise, I do sometimes learn something new.

I think I've found value in it as a thinking partner, though I sometimes laugh at the absurdity of arguing with a software tool. (I also think that's a healthy reflex; it indicates I'm still trying to retain critical thinking.)

Maybe an oversimplification, but: careless use will lead to dumbing down, while intentional use can enhance thinking and learning.

How do we convince people to use it with caution?

Something like the smoking ads, but saying "AI is literally shrinking your brain"? ... I'm only half kidding.
