The seminar has concluded — thanks to everyone for their contributions!
You can see the Schedule and Final Papers from the seminar.
Lauren Austin, Aging Populations and The Effects on Society [PDF]
Emily Bishop, Artificial Intelligence Companions for the Young and Lonely
Jacob Dean, Self-Driving Vehicles, the Advance of Artificial Intelligence, and the Future of Work [PDF]
Erich Froese, Artificial Intelligence in the United States Military: Exploring Lethal Autonomous Weapons [PDF]
Megan Greatorex, Was Face ID the Right Move for Apple’s New iPhone? [PDF]
Nathaniel Grevatt, Google’s Duplex and Deception through Power and Dignity [PDF]
Lauren Holt, How Much Do Smart Speakers Really Hear and Who is Listening? [PDF]
Mira Lee, Bridging Artificial Intelligence’s Empathy Gap in the Healthcare Industry [PDF]
Maria Morrissey, Artificial Intelligence and Universal Basic Income [PDF]
Rohit Musti, AI and Sentencing [PDF]
Kavya Ravikanti, Leveraging AI to Fight Climate Change [PDF]
Cal Ries, Artificial Companions for the Elderly [PDF]
Fiona Seoh, Enhancing Life by Quantifying Death [PDF]
Stella Sotos, Considering the Impact of Autocomplete on Users [PDF]
Olivia Stiebel, Digital Avatars and the Digital Afterlife [PDF]
Here are a few links following up on today’s discussions:
Rohit asked about the Harvard admissions lawsuit. There are two expert reports from the case: Peter Arcidiacono’s (for “Students for Fair Admissions”) and David Card’s (for Harvard). Both are long reads, with some parts redacted (mostly the parts about Harvard’s “Dean’s List” for taking care of high donors). Unfortunately (or perhaps responsibly), the raw data for the analysis is not publicly available, but there is a lot of information in the reports.
Here is the article I mentioned: Ta-Nehisi Coates. The Case for Reparations, The Atlantic, June 2014. (It’s not about AI fairness at all, but it provides a lot of historical context for where we are now as a society, and is something I would encourage everyone to read over winter break.)
Next week is our final seminar meeting. Everyone should be prepared to give a short talk (no more than 3 minutes long) on your paper project. Unlike for previous talks, I would encourage you to prepare slides for this one, but you should not use more than 4 slides (not counting one for the title).
There is no reading preparation assignment, but if you have ideas for what to do at the final meeting, or food suggestions, please send them to me.
Next week, we’ll talk about algorithmic fairness.
There are lots of very interesting writings on this, but I want to keep the required reading over Thanksgiving break to a minimum.
For the required reading, you can select one of these two options:
Watch Cristopher Moore’s talk, Data, Algorithms, Justice, and Fairness (Ulam Lecture at the Santa Fe Institute, Oct 2018). (You can skip the excellent but long introduction by starting at 12:19.)
Read Machine Bias, ProPublica, 23 May 2016, and Northpointe’s response (with commentary from ProPublica).
Please post something in the course subreddit to spark discussion on this topic. It could be a link to a news article on algorithmic fairness with a brief comment on it, or a question or comment about the reading (or talk).
Paper updates are due next week, Tuesday, November 20. Everyone should send an email with subject line [AI Pavilion] Paper Update: <your title>. What is expected in the update depends on whether you are continuing with your first paper topic for the final paper, or starting a new topic.
If you are starting a new topic for the final paper, you should:
In the email body, include (1) a brief statement of your topic, (2) the purpose of your paper, and (3) your plans for what to do through the final deadline (this can include specific sources you will be looking into, and what you are planning to do for the final paper). These can be refinements of the topic idea you’ve already sent me, some of which we have been discussing already.
Attach a PDF that includes at least the introduction to your paper (this should be 1-2 pages that set up the purpose of your paper).
If you are continuing with the topic of your first paper, you should:
In the email body, explain (1) how your topic and purpose have evolved, or if they haven’t changed, (2) what you have done since the last paper submission to develop your work, and (3) your plans for what to do through the final deadline (this can include specific sources you will be looking into, and what you are planning to do for the final paper).
Attach a PDF with the updated paper, and (4) in the email, explain clearly which section(s) you want me to provide feedback on. This should include the Introduction, which is usually the most important section of any paper: if it doesn’t set up a compelling paper, no one will want to read the rest. If there are other sections that you want feedback on now, mention that in the email.
Hope this is clear and makes sense for everyone. The goal of this is to ensure that everyone is on track for an interesting and high-quality paper by the end of the semester. If you are at a stage where something else would be more useful (either in terms of the Nov 20 deadline or what you are sending), let me know and we can consider alternatives.
Next week, we’ll talk about security and malicious uses of AI.
There are two readings. If you did the discussion post last week, you may select either one of the two readings; if not, you should read both of them (you’ll get a confirmation email from me if you are in this group).
Daniel Geer. A Rubicon. Hoover Institution, Aegis Paper No 1801. February 2018. [Discussion]
Miles Brundage, et al. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. February 2018. (This looks long, but it is in a low-density format. Read at least through the end of Section 3 (Security Domains). It is not necessary to read the Interventions and later sections, although you should think about what recommendations you would make.) [Discussion]
In the forum, post an idea for a discussion point based on the reading. There are no specific prompts for this; it can be anything that strikes you as interesting to discuss from the reading.
Anton Korinek in Economics is offering a course in the Spring on Artificial Intelligence and the Future of Work…and of Humanity, with lots of themes related to this seminar. A tentative syllabus is posted here.
Next week, Allison Pugh, Professor of Sociology at UVA, will visit us. You should read the paper distributed in class today, Of Seeing and Being Seen: What Humans Do for Each Other.
I will finish reading your papers by my office hours Thursday (9-10:30am). Please stop by if you can to pick up your paper, or find me some other time (feel free to stop by anytime you see my office open).
Read Allison Pugh’s paper distributed in class, Of Seeing and Being Seen: What Humans Do for Each Other (if you lost the paper copy, email me for a PDF), and post your responses here by Sunday, 11 November.
You should either post your own question or comment on the paper, respond to someone else’s comment, or respond to one of these questions:
The paper considers three connective labor relationships: doctor-patient, teacher-student, and minister-congregant. Identify another connective labor relationship and discuss how the ideas in the paper might (or might not) apply to it.
Why are measurement strategies for connective work so ineffective? Are there strategies that could work?
“The downside of freedom from shame, it seems, is freedom from caring at all.” Is there a way to have caring without shame?
Office Hours this week: I will not be able to hold my Thursday morning office hours this week, but will be available later in the day Thursday. I should be around most of the afternoon, so feel free to drop by, or email me to arrange a time.
Paper 1 is due Thursday, 1 November at 4:59pm. (This is a strict deadline since I need to print the papers before leaving on a trip.) Please remember to follow the directions posted last week about what to include in your email.
Short talks: You should prepare a five minute presentation about your paper topic, to present in class next Monday (5 November).
Happy Halloween! Beware of trick-xor-treaters who are impersonating valid trickers.
Slides from today’s class: SpeakerDeck
GAN Lab (in-browser play with Generative Adversarial Networks)
The Verge. How three French students used borrowed code to put the first AI portrait in Christie’s. 23 October 2018.
Mainly, you should focus on developing your papers this week, so the reading assignment is short but covers many interesting topics for class discussion. Before class on Oct 29, you should read:
Chapter 9: The Control Problem from Nick Bostrom’s Superintelligence.
Barack Obama, Neural Nets, Self-Driving Cars, and the Future of the World, Wired Magazine, November 2016. (Read at least up to the quote below.)
Pay particular attention to these quotes, and how they relate to Bostrom’s chapter:
OBAMA: … Then there could be an algorithm that said, “Go penetrate the nuclear codes and figure out how to launch some missiles.” If that’s its only job, if it’s self-teaching and it’s just a really effective algorithm, then you’ve got problems. I think my directive to my national security team is, don’t worry as much yet about machines taking over the world. Worry about the capacity of either nonstate actors or hostile actors to penetrate systems, and in that sense it is not conceptually different than a lot of the cybersecurity work we’re doing. It just means that we’re gonna have to be better, because those who might deploy these systems are going to be a lot better now.
…
ITO: I generally agree. The only caveat is that there are a few people who believe that there is a fairly high-percentage chance that a generalized AI will happen in the next 10 years. But the way I look at it is that in order for that to happen, we’re going to need a dozen or two different breakthroughs. So you can monitor when you think these breakthroughs will happen.
OBAMA: And you just have to have somebody close to the power cord. [Laughs.] Right when you see it about to happen, you gotta yank that electricity out of the wall, man.
Office hours delayed: I will start my office hours this week late on Thursday (usually 9-10:30am); I should be there by 9:30am. If you have questions about the comments on your paper draft, please stop by then or another time. You can use http://davidevans.youcanbook.me to schedule a meeting.
1. Purpose: Write with a purpose (even if your real purpose is to satisfy a course requirement, you should write as though you have a real purpose, and hopefully you do!)
Examples of reasonable purposes for writing:
Note that all of these purposes assume a reader – you should have a clear idea who the intended audience is for your writing, and write with them in mind. The purpose of your writing should be stated explicitly and clearly so the reader knows why you want them to read it. Usually, this is done at the end of the abstract (or the first paragraph if there is no abstract). It should be a sentence like, “The goal of this article is to …”.
2. Organize: Use section headers and divisions with meaningful labels to break up your text. You shouldn’t have more than a page without some clear header or at least a paragraph tag to make it clear what it is about.
3. Stories: Tell stories, not lists. Unlike this document, a well-written essay should follow a clear story. Each paragraph should be connected to the previous one, and all of them should serve the purpose. You shouldn’t have lists of disconnected things without a very good reason.
4. Cites: How and when to use quotes and references:
5. Write simply and directly: Don’t use a complex word when a simple one would do: “utilize” => “use”, “advancements” => “advances”, etc. Don’t use overly complex sentence structures without a good reason.
The “final” version of your first paper is due by 4:59pm on Thursday, November 1 (note that this is extended from the original deadline that was October 30).
You can submit the paper by email with a PDF attachment and subject line [AI Pavilion] <Paper Title>. The email should also include answers to each of these questions:
Your first paper drafts are due tomorrow (Wednesday). Please submit your paper draft by sending me an email with a PDF file attached, with subject line [AI Pavilion] <your paper title>. I’ll send a quick ack message so you know I received it. You don’t need to post your paper draft publicly (on the class reddit) unless you want to, but you are welcome to do so if you want to share it with the class/world at this stage.
If you would like to get quick feedback on your paper at my office hours Thursday morning (9-10:30am), include a note in your email saying this and I’ll prioritize reading your paper.
For the next class, we’ll focus on “Black Mirror”, a British TV series whose episodes explore (usually dystopian) outcomes of possible future technologies.
Our two “Black Mirror” experts have suggested three episodes (thanks Erich and Emily!), and everyone will be expected to watch at least one of them and read a short paper related to it from the Recoding Black Mirror Workshop that was held at WWW 2018 (for two of the episodes, there are closely related papers).
To ensure coverage across the three selected episodes, I have arbitrarily put you in three groups (not completely arbitrarily: if there was an obvious connection between your paper topic and one of the episodes, you should be in the right group for that topic). The groups should be considered “default” choices; if you want to switch to a different episode from the one you’ve been assigned, that’s fine. The episodes are all available through Netflix; I believe most of you have access, but if you need help, let me know. It’s fine if you watch the episode alone, but it is probably more fun to get together with others in your group and watch it together. I’ll leave it up to you to coordinate this (feel free to post on the class subreddit if that’s helpful).
Everyone should (either individually, or in coordination with others in your episode group):
Come to class Monday prepared to give a summary of your episode to the class; if you want to show a few short scenes from it, that’s fine and encouraged!
Think about these questions (and use the reading to help):
Episode: “Be Right Back”, Series 2, Episode 1
Paper: Tabea Tietz, Francesca Pichierri, Maria Koutraki, Dara Hallinan, Franziska Boehm, and Harald Sack. Digital Zombies - the Reanimation of our Digital Selves
Optional second paper: Martino Mensio, Giuseppe Rizzo, Maurizio Morisio. The Rise of Emotion-aware Conversational Agents: Threats in Digital Emotions
Episode: “Nosedive”, Series 3 Episode 1
Paper: Harshvardhan J. Pandit, Dave Lewis. Ease and Ethics of User Profiling in Black Mirror
Episode: “Hang the DJ”, Series 4 Episode 4
Paper: (couldn’t find one closely related; if you can, please read that instead, otherwise read this one) Diego Sempreboni, Luca Vigano. MMM: May I Mine Your Mind?
John C. Havens will be speaking on Friday (Oct 5), 10-11:30am on Humanities, Cultures and Ethics in the Era of AI, Wilson Hall 301. He is executive director of the IEEE project on Ethics of Autonomous and Intelligent Systems.
Use this link to RSVP (there is also a lunch following the talk).
California passed a law about bots impersonating humans: Senate Bill No. 1001: Bots: disclosure
(a) It shall be unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. A person using a bot shall not be liable under this section if the person discloses that it is a bot.
(b) The disclosure required by this section shall be clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts that it is a bot.
It is interesting that it seems to be unlawful for a bot to intentionally confuse a human, but not for a bot to intentionally confuse another machine (which is actually a much more common problem today).
Because of fall break, there is no meeting next week. The next major readings we will do are from Nick Bostrom’s Superintelligence [Amazon]. We won’t read all of it, but will read many chapters. Tim Urban’s (“Wait but why”) The AI Revolution: The Road to Superintelligence is largely based on this book, but the book goes into more depth on many points.
By October 15, you should do at least one of these two things:
Option 1: Watch Nick Bostrom’s talk at Google (Sep 2014) and read Chapter 3 of Superintelligence. (You should definitely watch the question and answer period at the end of the talk — the first question is from Ray Kurzweil, the third is from Peter Norvig.)
Option 2: Read Chapters 1-4 of Superintelligence.
This is a (relatively) short reading assignment for the two weeks you have, to provide time to focus on your papers (the first draft of which is due on Wednesday, October 17).
Either (1) select at least one of the questions below and post a comment on the course forum, (2) respond to a comment someone else posted, or (3) post your own thoughts on something in either the talk or readings. [Link to Post]
1. Bostrom talks and writes about using biological enhancement through genetic selection, but many people find such an idea distasteful at best or dystopian at worst. According to a 2004 poll reported in the book, 28% of Americans approved of embryo selection for “strength or intelligence”, and 68% for avoiding fatal childhood disease. A more recent survey found 70% support for PGD selection to avoid diseases fatal early in life or lifelong disability, 48% for diseases that manifest late in life, 21% for sex selection, 14.6% for physical traits, and 18.9% for personality traits. These responses suggest some kind of moral or practical difference between different types of embryo selection. Is there a way to decide what kinds of embryo selection are moral? What kinds should be disallowed, and what is the justification?
2. Bostrom writes about an internet with “better support for deliberation, de-biasing, and judgment aggregation, might make large contributions to increasing the collective intelligence of humanity”. Is this realistic? What would be necessary to move from what we have now to an internet that increases collective intelligence?
3. Bostrom defines “superintelligence” as “intellects that greatly outperform the best current human minds across many very general cognitive domains”, but doesn’t provide any concrete or satisfying definition (in my view). A better definition might make specific claims about what problems a superintelligence could solve, or what behaviors it would have. Suggest a better definition (or make a case in support of Bostrom’s definition).
The idea for your first paper is due Sunday, 30 September. From the syllabus, the first paper can “focus on one aspect of how artificial intelligence has already impacted society, describing the impact of technological advances on a social, political, economic, or psychological aspect of human existence.” It is not required that your paper idea is on this topic though — anything that is relevant to the seminar theme (broadly interpreted), interesting, and with a scope suitable for a 4-5 week effort is reasonable.
You should post your paper idea on the class subreddit. Submit a new text post (or a link post that links to a PDF) that has a title Paper: <your title> and contains:
The best ideas will make an argument for a non-obvious and controversial claim and include data to support that argument.
Elevator Pitches. In class next week, you’ll give an elevator pitch for your idea to the class for feedback. This should be about 90 seconds long — enough to give a clear sense of your idea and why it is worth doing, while getting to the point concisely.
Before the next class (Monday, 1 October), you should read:
Yuval Noah Harari, Sapiens: A Brief History of Humankind, Chapters 18, 19 and Afterword.
Tim Urban (“Wait but why”), The AI Revolution: The Road to Superintelligence (Part 2). (Posted in 2015)
Sapiens Response. Now that you’ve finished Sapiens, post a comment that mentions the most surprising or interesting thing you learned from reading it and explains why. [Discussion Link]
Hallpike Response. Select one of Hallpike’s criticisms of Sapiens (this could be a specific quote from his review, or a paraphrase of a main point), and make a case opposing it. Alternatively, if you find all of Hallpike’s criticisms valid and convincing, respond to one of the cases posted with a counter-argument. [Discussion Link]
(There’s no response question for the Superintelligence reading, but feel free to post any thoughts you want on this. I’m not sure if we’ll get to discussing this next meeting, but it, and readings from Bostrom’s book, will be the main focus of the next few weeks.)
Loebner Prize (Turing Test)
Let’s chat with Mitsuku!
The transcripts you read were from Can machines think? A report on Turing test experiments at the Royal Society, Kevin Warwick and Huma Shah, Journal of Experimental & Theoretical Artificial Intelligence, 2016.
The idea for your first paper is due Sunday, 30 September. From the syllabus, the first paper can “focus on one aspect of how artificial intelligence has already impacted society, describing the impact of technological advances on a social, political, economic, or psychological aspect of human existence.” It is not required that your paper idea is on this topic though — anything that is relevant to the seminar theme (broadly interpreted), interesting, and with a scope suitable for a 4-5 week effort is reasonable. We’ll talk more about this next week, but you should start thinking of ideas for your paper now.
Before the next class (Monday, 24 September), you should read:
Yuval Noah Harari, Sapiens: A Brief History of Humankind, Chapter 14-17 (if you want to read further, we’ll complete the book the following week).
Neil deGrasse Tyson, Science’s Endless Golden Age.
Tim Urban (“Wait but why”), The AI Revolution: The Road to Superintelligence (Part I). (Posted in 2015)
(optional) John Maynard Keynes, Economic Possibilities for our Grandchildren, 1930.
Read carefully, as this is different from previous weeks: Everyone should (1) post at least one “fact check” (about any claim of your choice in any of the readings), (2) post a response to one of the readings (I’ve provided some prompts for the Sapiens chapters below, but you can respond to any of the readings by posing and answering your own question), and (3) post a (thoughtful and respectful) comment on someone else’s fact check or response. (If you are the first one to do (1) and (2), you do not need to do (3).)
(Chapter 14) Question 1: Harari writes,
“All modern attempts to stabilise the sociopolitical order have had no choice but to rely on either of two unscientific methods:
a. Take a scientific theory, and in opposition to common scientific practices, declare that it is a final and absolute truth. This was the method used by Nazis (who claimed that their racial policies were the corollaries of biological facts) and Communists (who claimed that Marx and Lenin had divined absolute economic truths that could never be refuted).
b. Leave science out of it and live in accordance with a non-scientific absolute truth. This has been the strategy of liberal humanism, which is built on a dogmatic belief in the unique worth and rights of human beings – a doctrine which has embarrassingly little in common with the scientific study of Homo sapiens.”
This seems like a pretty bleak choice. Is there no other option? (Or is one of these less unpalatable than it seems in Harari’s writing?)
(Chapter 14) Question 2: Harari writes about Bacon’s argument in Novum Organum Scientiarum that “knowledge is power”. In founding the University, Jefferson wrote,
“[T]his last establishment [a state university] will probably be within a mile of Charlottesville, and four from Monticello, if the system should be adopted at all by our legislature who meet within a week from this time. my hopes however are kept in check by the ordinary character of our state legislatures, the members of which do not generally possess information enough to percieve the important truths, that knowlege is power, that knowlege is safety, and that knowlege is happiness.”
(when the University uses this quote they tend to leave out the context that it was part of a dig against the state legislators).
How much of an influence on Jefferson was Bacon? (This isn’t really a response question, but something you might be interested in looking into.)
(Chapter 16) Question 3: Harari quotes Adam Smith, “In the new capitalist creed, the first and most sacred commandment is: ‘The profits of production must be reinvested in increasing production.’” In Smith’s time, businesses produced physical goods, and producing more of them required investment in capital. Today, most businesses produce mostly virtual goods that can be reproduced in unlimited quantities without any production cost. Has this changed the capitalist creed?
(Chapter 16) Question 4: Harari writes, “Much like the Agricultural Revolution, so too the growth of the modern economy might turn out to be a colossal fraud. The human species and the global economy may well keep growing, but many more individuals may live in hunger and want.” He offers two answers to this (there’s no alternative, and we just need more patience), neither of which seems very satisfying. Is there a better answer?
(Chapter 17) Question 5: Discuss this quote:
“The history of ethics is a sad tale of wonderful ideals that nobody can live up to. Most Christians did not imitate Christ, most Buddhists failed to follow Buddha, and most Confucians would have caused Confucius a temper tantrum. In contrast, most people today successfully live up to the capitalist-consumerist ideal. The new ethic promises paradise on condition that the rich remain greedy and spend their time making more money, and that the masses give free rein to their cravings and passions – and buy more and more. This is the first religion in history whose followers actually do what they are asked to do. How, though, do we know that we’ll really get paradise in return? We’ve seen it on television.”
XKCD Land Mammals
OurWorldInData: Child Mortality, Democracy
Before the next class (Monday, 17 September), you should read:
Part Three of Yuval Noah Harari, Sapiens: A Brief History of Humankind. Everyone should read Chapters 9, 10, and 13; Chapters 11 and 12 are “optional” - I think they provide a lot of eye-opening insights, but it is not necessary to read them for the seminar. As a warning, Chapter 12 is about religion and is quite dismissive of major religions.
Alan Turing, Computing Machinery and Intelligence (1950).
These two are optional (you are not expected to read them, but I think they are worthwhile and you’ll find them interesting):
Everyone should post at least one “fact check” (about any claim of your choice in either the Sapiens or the Turing reading; if it is the same as another student’s, you should provide more evidence or counter-evidence responding to their post), and for both of the readings write a response to at least one of the questions below or post a free response to your own question.
As discussed today, I won’t set a specific deadline for posting responses, but everyone is strongly encouraged to not wait until the day before class to start posting, and I’m hoping to see some constructive on-line discussion. It isn’t necessary to finish the readings before posting, and I would encourage reading the relevant response prompts below as you read each reading.
Please post your responses as comments for the appropriate post (and you are encouraged to comment on others' responses).
Choose any one (or more) of these questions to respond to, or make up your own question.
Chapter 9:
Question 1: Harari writes about the unification of human cultures to the point where nearly all humans are closely interconnected. What is the most different culture you’ve experienced? How fundamentally different is it from your own?
Chapter 10:
Question 2: Harari writes, “We accept the dollar in payment, because we trust in God and the US secretary of the treasury.” What are we actually trusting the US secretary of the treasury to do? (Hint: since 2003, we’re also putting a lot of trust in a different cabinet secretary.)
Chapter 11:
Question 3: “As the twenty-first century unfolds, nationalism is fast losing ground. More and more people believe that all of humankind is the legitimate source of political authority, rather than the members of a particular nationality, and that safeguarding human rights and protecting the interests of the entire human species should be the guiding light of politics.” As we discussed a bit in class Monday, in the last few years since this was written, a lot has happened to contradict this view, and there has been a rise of nationalism in many countries (including, of course, the US and Britain). Is there still reason to believe in the longstanding trends away from nationalism?
Chapter 12:
Question 4: Harari describes Humanism as “a belief that Homo sapiens has a unique and sacred nature, which is fundamentally different from the nature of all other animals and of all other phenomena. Humanists believe that the unique nature of Homo sapiens is the most important thing in the world, and it determines the meaning of everything that happens in the universe. The supreme good is the good of Homo sapiens. The rest of the world and all other beings exist solely for the benefit of this species.” According to Wikipedia, “Humanism is a philosophical and ethical stance that emphasizes the value and agency of human beings, individually and collectively, and generally prefers critical thinking and evidence (rationalism and empiricism) over acceptance of dogma or superstition.” Can these definitions be reconciled? How well does Harari’s argument hold up with Wikipedia’s definition?
Chapter 13:
Question 5: In memetics, “Successful cultures are those that excel in reproducing their memes, irrespective of the costs and benefits to their human hosts.” Is there a better way to measure the success of a culture?
Choose any one (or more) of these questions to respond to, or make up your own question.
Turing sets up the game where the computer plays the role of one of the respondents. How would things be different if the computer played the role of the questioner (so the test was being able to classify A/B as well as a human questioner can)?
Turing wrote this 68 years ago: “I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent. chance of making the right identification after five minutes of questioning. The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. I believe further that no useful purpose is served by concealing these beliefs.” How good was his prediction?
Turing discusses nine potential objections to his Imitation Game test. Which one do you find most convincing (that is, where Turing’s counter-argument is not convincing)? Or, what other objection do you have?
Video: Humans Need Not Apply by CPG Grey
Before the next class (Monday, 10 September), you should read:
The first two parts (Chapters 1-8) of Yuval Noah Harari, Sapiens: A Brief History of Humankind.
Yuval Noah Harari. Why Technology Favors Tyranny. The Atlantic, October 2018.
Along with the readings, you should:
Keep track of how long you spend: I want the reading assignments to be a reasonable time expectation. This is a fairly long one, since we want to make progress in the book, and our first class is nearly a week into the semester. In general, I would advocate reading less, but more deeply and thoughtfully, rather than reading too much too quickly. So, if the volume of reading expected in this class is too much for you to read thoughtfully, please let me know.
Check facts: As part of your reading, you should select at least one claim in the readings to fact check. Once you’ve selected the claim you want to fact check, post it as a comment to the post in the class sub-reddit. A good fact check will include references to other sources that either support or contradict the claims in the book.
Responses: for each of the readings, either (1) write a short response to at least two of the questions below, (2) write a detailed response to one of the questions below, or (3) pose your own question and provide a response. Post it in the class sub-reddit. You may select any one of the options above, but should be prepared to contribute to discussions in the next class on many of these topics.
There are three posts in the class sub-reddit for your responses:
Please post your responses as comments for the appropriate post (and you are encouraged to comment on others' responses).
Response questions For Sapiens: A Brief History of Humankind (see above - you do not need to write responses to all of these!):
Sapiens divides history into three main revolutions: the cognitive revolution (70,000 years ago), the agricultural revolution (12,000 years ago), and the scientific revolution (500 years ago). Describe a different way of dividing human history, and make a case for why it is better.
Chapter 2 writes about millions of individuals working together to make a nuclear warhead. Pick a simple artifact you use every day and estimate how many humans cooperated to produce it.
Chapter 4: How should understanding of the historical impact of humans on other species guide our current attitudes and policies (for example, regarding endangered species and habitat protection)?
Chapter 5: Why has no noteworthy plant or animal been domesticated in the past 2,000 years?
Chapter 5: Harari writes, “We did not domesticate wheat. It domesticated us.” Is this true? How does it change your world view? Will future historians look at what smart phones did to your generation, and conclude they were a trap like wheat was to our predecessors?
Chapter 5: “This discrepancy between evolutionary success and individual suffering is perhaps the most important lesson we can draw from the Agricultural Revolution.” How should we measure the success of a species?
Chapter 6: Harari transforms Jefferson’s introduction to the Declaration of Independence into “We hold these truths to be self-evident, that all men evolved differently, that they are born with certain mutable characteristics, and that among these are life and the pursuit of pleasure.” Can you do better?
Chapter 6: Describe the imagined order(s) that most influenced your life in high school or here at UVA.
Chapter 6: Discuss: “There is no way out of the imagined order. When we break down our prison walls and run towards freedom, we are in fact running into the more spacious exercise yard of a bigger prison.”
Chapter 7: The three main limits of human memory presented are that its capacity is limited, that it dies with the human, and that it is adapted to store only particular types of information. Are these the most important limitations of human memory? (See Joshua Foer’s Moonwalking with Einstein on human memory training and competition.)
Chapter 7: “Our computers have trouble understanding how Homo sapiens talks, feels and dreams. So we are teaching Homo sapiens to talk, feel and dream in the language of numbers, which can be understood by computers.” Really?
Chapter 8: Harari writes about vicious circles that perpetuate imagined hierarchies of discrimination and subjugation. How can such vicious circles be ended? In human history, what are successful examples of ending them?
Chapter 8 presents three theories for nearly universal male dominance in human societies, but admits that none of them are convincing. Any better theories?
Response questions for Why Technology Favors Tyranny (see above - you do not need to write responses to all of these!):
“Fears of machines pushing people out of the job market are, of course, nothing new, and in the past such fears proved to be unfounded. But artificial intelligence is different from the old machines. In the past, machines competed with humans mainly in manual skills. Now they are beginning to compete with us in cognitive skills. And we don’t know of any third kind of skill—beyond the manual and the cognitive—in which humans will always have an edge.” Are there any candidates for a “third kind of skill” where humans would have a permanent advantage?
“AI is a tool and a weapon unlike any other that human beings have developed; it will almost certainly allow the already powerful to consolidate their power further.” Some tools have empowered individuals; others have empowered centralized authorities. Why is AI a tool for consolidating power (or is it)?
It is surprising to me that Harari’s essay does not mention China. Does what has happened in the last few decades in China contradict or support Harari’s claim that, “The decentralized approach to decision making that is characteristic of liberalism—in both politics and economics—has allowed liberal democracies to outcompete other states, and to deliver rising affluence to their people.”
What kinds of human decision-making should be left to machines?
Do you agree with Harari’s call to action: “If you find these prospects alarming—if you dislike the idea of living in a digital dictatorship or some similarly degraded form of society—then the most important contribution you can make is to find ways to prevent too much data from being concentrated in too few hands, and also find ways to keep distributed data processing more efficient than centralized data processing. These will not be easy tasks. But achieving them may be the best safeguard of democracy.”
Post links to interesting and relevant (this is very broadly defined) articles you find, along with a short comment about why you found it interesting or whether you agree with it.
Post follow-up questions (and answers)
Welcome to the website for the Pavilion Seminar, “How will Artificial Intelligence change Humanity?” The seminar will be offered in Fall 2018, led by David Evans. Enrollment is by permission only, and strictly limited to 15 students (according to the Pavilion seminar rules).