The Movie “Ex Machina”: A Philosophical Analysis of Consciousness


The movie Ex Machina is a powerful story of the emergence of AI and its consequences. The very end of the movie comes as something of a surprise, though perhaps not upon further reflection. Below are three sets of questions for you to answer. In grading your answers, I will be looking for the following: (i) thoughtful, substantive answers that show you have read the texts carefully and listened attentively to lectures; (ii) well-constructed arguments/justifications to back up your positions; (iii) apposite references to the readings. For each of the three question-sets below, I expect approximately two paragraphs. You may write more (or less), so long as your answers are good ones (meaning: well supported by evidence/justifications).

1. As brought up several times throughout the film, the question of whether Ava has consciousness is being settled by a very complex form of Turing test. It is not until nearly the end of the film that we learn the actual version of the Turing test being conducted: Caleb turns out not to be the judge; rather, Nathan is. This is an objective measure of another being’s consciousness. At earlier points in history, certain groups of people were judged by consensus (and social forces) not to have (fully realized) minds. Wouldn’t those arguing for the position of “science by consensus” have to concede that the racist societies of the past were correct in their dehumanization of minority groups, since those judgments were reached by consensus (and social forces)? Clearly, we would today judge such opinions wrong; yet if we do, aren’t we also thereby rejecting science by consensus? On the other hand, was it ever established in the movie whether the Turing test (an objective test) overcame what Nathan calls the “chess problem,” that is, differentiating between a mere simulation and an actual consciousness? If so, how? If not, why not, considering that we judge other humans to have minds only by their external/linguistic behavior?

2. (Regardless of your answer above, in this question, I want you to assume Ava is conscious after all). As the character Nathan says: “You feel bad for Ava? Feel bad for yourself. One day, the AIs will look back on us the same way we look at fossil skeletons from the plains of Africa. An upright ape, living in dust, with crude language and tools. All set for extinction.” This line presages the ending. In a very real sense, Ava is a eugenics masterpiece. She is also being held against her will and threatened with death. One could charge that humans are racist towards Ava, in that they seem to assume the human race has the authority and warrant to make decisions of captivity or freedom, life or death over this AI being. This is addressed in this scene from the movie:

AVA: What will happen to me if I fail your test?

CALEB: Ava –

AVA: Will it be bad?

CALEB: … I don’t know.

AVA: Do you think I might be switched off? Because I don’t function as well as I am supposed to?

CALEB: … Ava, I don’t know the answer to your question. It’s not up to me.

AVA: Why is it up to anyone? Do you have people who test you, and might switch you off?

Is it racism to hold Ava against her will? Doesn’t Ava have as much right to do what she does at the end of the movie as a slave would have had to rebel against their slaveholder? Even if you say “yes,” where do moral obligations begin or end at all, if Ava has truly transcended humanity? Most of us don’t consider animals or plants to have the same moral rights as humans. Why should Ava regard humans morally any differently from how we regard chickens? Finally, Nathan seems cognizant that he is creating a race of beings who will supplant humans one day, and Ava demonstrates that she is superior to humans in many relevant respects. Couldn’t one argue that technological eugenics, in this sense of an AI like Ava, is the right thing to do?

3. (For this question-set, answer either (1) or (2), but not both.) Nathan has created this AI with his own funding. There is currently no official governmental policy with respect to the creation of AIs. Bearing in mind the readings on, and our discussions of, public policy over science and technology, what ought to be the public policy with respect to the development of AIs? (1) If you think the government should prevent the development of AIs: (a) How do you justify that against the assertion that we have no right to prevent the emergence of other conscious beings? (b) How is it good public policy to forgo all the potential benefits that AIs could bring us (e.g., curing diseases)? (c) How will it be better for our country to prevent AIs while other countries pursue them? (2) If you think the government should not prevent the development of AIs: (a) How do you justify that against the potential dangers AIs might pose to humanity? (b) Once AIs actually do emerge, what would your public policy regarding them look like? (c) How do you ensure/enforce compliance by the AIs, especially if they may not acknowledge your authority?
