Deconstructing Shoulds
How does a person decide what they should do? Could a machine decide what it should do? Can intelligent animals like dolphins decide on their shoulds? If you make a list of the things you should do, and each belongs to roughly the same category, how would you rank them? Which methods would you have to employ to ensure that each decision to put one should over another is fair? Let's deconstruct the concept of a 'should' into its basic parts.
In the trolley problem, a runaway trolley is headed toward five people working on the tracks, and you must decide whether to pull a lever that diverts it onto a side track where one person is working. One either should pull the lever or not, but before deciding one has to understand that the lever can divert the trolley, the consequences of either choice, and the good that can be done. It seems, then, that a 'should' is constructed from three concepts: first, a potential future that can be envisioned; second, a goal and the steps required to achieve it; third, some benefit or gain in well-being or effectiveness. If these three concepts are necessary for deciding what one should do, then it would be impossible to 'should' if any one of them were missing (a minimal code sketch after this list makes the decomposition concrete):
- First, without an understanding of the consequences, gained by simulating possible futures, one can't even begin deciding what one should do. Should you pull the lever in the trolley problem? Well, what's at stake? What's the scenario?
- Second, without a goal one isn't able to link a should to an action. Do you understand that pulling the lever diverts the trolley? If you didn't, how could you decide that you should pull it?
- Third, without an understanding of the benefit to be gained from pulling the lever, one couldn't rank one decision over another. The trolley is going to track A, but if you pull the lever it will go to track B; knowing only the mechanics, does it make sense to say you should pull the lever?
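To make the decomposition concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration, the `Outcome` class, the `simulate` and `benefit` functions, the lives-lost numbers, not an implementation of any real system; it only shows how the three concepts fit together as a decision procedure.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    lives_lost: int

def simulate(action: str) -> Outcome:
    """Concept 1: envision the potential future that follows each action."""
    futures = {
        "pull": Outcome(lives_lost=1),       # trolley diverted to track B
        "dont_pull": Outcome(lives_lost=5),  # trolley stays on track A
    }
    return futures[action]

def benefit(outcome: Outcome) -> int:
    """Concept 3: a crude benefit measure -- fewer lives lost is better."""
    return -outcome.lives_lost

def decide(actions: list[str]) -> str:
    """Concept 2: link the goal (minimize harm) to a concrete action."""
    return max(actions, key=lambda a: benefit(simulate(a)))

print(decide(["pull", "dont_pull"]))  # -> pull
```

Remove any one of the three functions and `decide` can no longer be written, which is exactly the point of the list above.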
These three concepts, potential future, goal, and benefit, are quantifiable; we can measure each objectively. Potential futures can be explored in simulations of the world and understood by capturing the causality of the simulated objects; human goals can be understood by exploring how we process information and how Access-Consciousness works; and benefits can be understood by reverse-engineering our evolutionary biology and psychology, and by understanding our cultures and societies.
If these three concepts could be understood well enough, we could implement them in a computer program. Computers are known for processing data much faster than any person and for simulating the many fictions of video games, so we might be able to create a program that simulates many scenarios, plans goals, and analyzes actuarial data to determine benefits many times faster than we can. In a sense, it would be a tool that helps a person decide what they should do, or, where the person lacks the expertise to make a choice, one that makes the choice for them by pooling expertise from the internet. A sketch of what such ranking might look like follows below.
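To gesture at what such a tool might do, here is a hypothetical sketch that ranks a list of same-category 'shoulds' by average benefit over many randomized simulations, the actuarial move mentioned above. The candidate actions, the `EXPECTED` benefit numbers, and the noise model are all invented for illustration.

```python
import random

# Hypothetical expected benefits per action (invented numbers).
EXPECTED = {"exercise": 5.0, "study": 4.0, "doomscroll": -2.0}

def simulate_once(action: str) -> float:
    """One noisy simulation of the benefit of taking this action."""
    return random.gauss(EXPECTED[action], 2.0)

def rank_shoulds(actions, trials=10_000):
    """Rank candidate 'shoulds' by mean benefit across simulated futures."""
    scores = {
        a: sum(simulate_once(a) for _ in range(trials)) / trials
        for a in actions
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for action, score in rank_shoulds(["exercise", "study", "doomscroll"]):
    print(f"{action}: {score:+.2f}")
```

The hard part, of course, is where the benefit numbers come from; the sketch simply assumes them, while the essay's claim is that they could in principle be measured from our biology, psychology, and cultures.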
So, if a computer program could decide what a person should do, and if that program were built from our understanding of simulations, goals, and well-being, is Hume's Guillotine avoidable? Can science actually cross the is-ought divide in this way? I believe so.