I like the assumptions. Computational functionalism isn’t universally accepted, but I accept it.
I’m not the first to say so (I probably first explored this idea while listening to David Chalmers interviewed by Rob Wiblin), but I’m not sure that ethical significance attaches only to valenced states. I think consciousness is a necessary property for ethical significance, but valence may not be; perhaps consciousness plus agency, or something like that, is enough.
I like the CartPole example, but I’m not quite convinced. The signs used in the mathematics are arbitrary descriptions of the neuroscience; plausibly what “really” matters is whether dopamine is delivered at a particular time, and what the subjective experience of receiving it is. In the brain, you can’t abstract that away to merely changing a sign, or, if you did, the change wouldn’t be meaningful.
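To make the sign-flip point concrete, here is a minimal sketch (assuming the standard Gymnasium CartPole environment; the random policy is just a placeholder) showing that negating every reward while also flipping the optimization direction leaves the problem untouched, which is why the sign alone can’t carry the moral weight:

```python
# Minimal sketch (assuming the Gymnasium API): negating every reward and
# simultaneously minimizing instead of maximizing is the same optimization
# problem, so the sign by itself carries no meaning.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

ret_plus, ret_minus = 0.0, 0.0
for _ in range(200):
    action = env.action_space.sample()  # placeholder policy
    obs, reward, terminated, truncated, info = env.step(action)
    ret_plus += reward     # "reward" framing
    ret_minus += -reward   # "punishment" framing: same trajectory, flipped sign
    if terminated or truncated:
        break

# Maximizing ret_plus over policies picks out exactly the same behavior as
# minimizing ret_minus: ret_minus == -ret_plus on every trajectory.
print(ret_plus, ret_minus)
```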
There is also reward prediction error, which you mentioned. To elaborate: a learner like that isn’t concerned only with the delivered reward signal but also with reward prediction error. Assuming the agent is like a human agent, an agent trained only on positive reinforcement would still “behave as if suffering, yelling for help, etc.” because its reward prediction error was negative, i.e., it was failing to accomplish the task despite having allowed for the possibility of success. (Subjectively, we give states of mind like this names like “frustration” or “exasperation”; they are very much negatively valenced, and very much associated with negative RPE rather than negative reinforcement.)
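A toy sketch of that distinction, using the standard TD error δ = r + γV(s′) − V(s); the state names and value estimates below are made up purely for illustration:

```python
# Even with strictly non-negative rewards, the TD error
#   delta = r + gamma * V(s') - V(s)
# goes negative whenever the agent expected more than it got.
gamma = 0.99
V = {"trying": 10.0, "failed": 0.0}  # illustrative learned value estimates

# The agent expected success (V["trying"] is high) but the attempt fails:
# the reward is 0, never negative, yet the prediction error is sharply negative.
r = 0.0
delta = r + gamma * V["failed"] - V["trying"]
print(delta)  # -10.0: "frustration" without any negative reinforcement
```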
Considering Berridge's wanting vs. liking distinction, and Schultz (1997) and subsequent observations on dopamine, it is pretty clear that dopamine release is strongly associated with RPE. In humans it clearly isn't the whole story (pain is painful even when you were expecting it), but RPE seems critical for understanding valenced qualia during learning.
Overall, I think trying to tackle the Big Question is an exciting project! I also appreciated the synthesis of perspectives from psychology, neuroscience, philosophy, and AI. The neuroscience of consciousness is making steady, if not rapid, progress, and I think that year by year, or decade by decade, we will continue to gain a much clearer understanding.