Lauren Austin, Aging Populations and The Effects on Society [PDF]
Emily Bishop, Artificial Intelligence Companions for the Young and Lonely
Erich Froese, Artificial Intelligence in the United States Military: Exploring Lethal Autonomous Weapons [PDF]
Megan Greatorex, Was Face ID the Right Move for Apple’s New iPhone? [PDF]
Nathaniel Grevatt, Google’s Duplex and Deception through Power and Dignity [PDF]
Lauren Holt, How Much Do Smart Speakers Really Hear and Who is Listening? [PDF]
Mira Lee, Bridging Artificial Intelligence’s Empathy Gap in the Healthcare Industry [PDF]
Maria Morrissey, Artificial Intelligence and Universal Basic Income [PDF]
Rohit Musti, AI and Sentencing [PDF]
Kavya Ravikanti, Leveraging AI to Fight Climate Change [PDF]
Cal Ries, Artificial Companions for the Elderly [PDF]
Fiona Seoh, Enhancing Life by Quantifying Death [PDF]
Stella Sotos, Considering the Impact of Autocomplete on Users [PDF]
Olivia Stiebel, Digital Avatars and the Digital Afterlife [PDF]
Week 12: Fairness
Here are a few links following up on today’s discussions:
Rohit asked about the Harvard admissions lawsuit. There are two expert reports from the case: Peter Arcidiacono’s (for “Students for Fair Admissions”) and David Card’s (for Harvard). They are both long reads, with some parts redacted, mostly concerning Harvard’s “Dean’s List” for catering to major donors. Unfortunately (or perhaps responsibly), the raw data for the analysis is not publicly available, but there is a lot of information in the reports.
Here is the article I mentioned: Ta-Nehisi Coates. The Case for Reparations, The Atlantic, June 2014. (It’s not about AI fairness at all, but it provides a lot of historical context for where we are now as a society, and is something I would encourage everyone to read over winter break.)
Next (Final) meeting
Next week is our final seminar meeting. Everyone should be prepared to give a short talk (no more than 3 minutes long) on your paper project. Unlike for previous talks, I would encourage you to prepare slides for this one, but you should not use more than 4 slides (not counting one for the title).
There is no reading preparation assignment, but if you have ideas for what to do at the final meeting, or food suggestions, please send them to me.
Week 11: Fairness
Next week, we’ll talk about algorithmic fairness.
There are lots of very interesting writings on this, but I want to keep the required reading over Thanksgiving break to a minimum.
For the required reading, you can select one of these two options:
Watch Cristopher Moore’s talk, Data, Algorithms, Justice, and Fairness (Ulam Lecture at Santa Fe Institute, Oct 2018). (You can skip the excellent but long introduction by starting at 12:19.)
Read Machine Bias, ProPublica, 23 May 2016, and Northpointe’s response (with commentary from ProPublica).
Please post something in the course subreddit to spark discussion on this topic. It could be a link to a news article on algorithmic fairness with a brief comment on it, or a question or comment about the reading (or talk).
Paper updates are due next Tuesday, November 20. Everyone should send an email with the subject line [AI Pavilion] Paper Update: <your title>. What is expected in the update depends on whether you are continuing with your first paper topic for the final paper, or starting a new topic.
For students starting a new topic for the final paper, you should:
In the email body, include (1) a brief statement of your topic, (2) the purpose of your paper, and (3) your plans for what to do before the final deadline (this can include specific sources you will be looking into, and what you are planning to do for the final paper). These can be refinements of the topic idea you’ve already sent me, some of which we have discussed.
Attach a PDF that includes at least the introduction to your paper (this should be 1-2 pages that set up the purpose of your paper).
For students continuing with the topic for your first paper:
In the email body, explain (1) how your topic and purpose have evolved, or if they haven’t changed, (2) what you have done since the last paper submission to develop your work, and (3) your plans for what to do through the final deadline (this can include specific sources you will be looking into, and what you are planning to do for the final paper).
Attach a PDF with the updated paper, and (4) in the email explain clearly which section(s) you want me to provide feedback on. This should include the Introduction, which is usually the most important section of any paper: if it doesn’t set up a compelling paper, no one will want to read the rest. If there are other sections you want feedback on now, mention that in the email.
Hope this is clear and makes sense for everyone. The goal of this is to ensure that everyone is on track for an interesting and high quality paper by the end of the semester. If you are at a stage where something else would be more useful (either in terms of the Nov 20 deadline or what you are sending), let me know and we can consider alternatives.
Week 10: Security
Next week, we’ll talk about security and malicious uses of AI.
There are two readings. If you did the discussion post for last week, you should select one of the two readings; if not, you should read both of them (you’ll get a confirmation email from me if you are in this group).
Miles Brundage, et al. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. February 2018. (This looks long, but is in a low-density format. Read at least through the end of Section 3 (Security Domains). It is not necessary to read the Interventions section and the sections after it, although you should think about what recommendations you would make.) [Discussion]
In the form, post an idea for a discussion point based on the reading. There are no specific prompts for this; it can be anything that strikes you as interesting to discuss from the reading.
Week 9: Papers Workshop
Anton Korinek in Economics is offering a course in the Spring on Artificial Intelligence and the Future of Work…and of Humanity, with lots of themes related to this seminar. A tentative syllabus is posted here.
Next week, Allison Pugh, Professor of Sociology at UVA, will visit us. You should read the paper distributed in class today, Of Seeing and Being Seen: What Humans Do for Each Other.
I will finish reading your papers by my office hours Thursday (9-10:30am). Please stop by if you can to pick up your paper, or find me some other time (feel free to stop by anytime you see my office open).
Read Allison Pugh’s paper distributed in class, Of Seeing and Being Seen: What Humans Do for Each Other (if you lost the paper copy, email me for a PDF), and post your responses here by Sunday, 11 November.
You should either post your own question or comment on the paper, respond to someone else’s comment, or respond to one of these questions:
The paper considers three connective labor relationships: doctor-patient, teacher-student, and minister-congregant. Identify another connective labor relationship and discuss how the ideas in the paper might (or might not) apply to it.
Why are measurement strategies for connective work so ineffective? Are there strategies that could work?
“The downside of freedom from shame, it seems, is freedom from caring at all.” Is there a way to have caring without shame?
Week 8: AI Control Problem
Office Hours this week: I will not be able to hold my Thursday morning office hours this week, but will be available later in the day Thursday. I should be around most of the afternoon, so feel free to drop by, or email me to arrange a time.
Paper 1 is due Thursday, 1 November at 4:59pm. (This is a strict deadline since I need to print the papers before leaving on a trip.) Please remember to follow the directions posted last week about what to include in your email.
Short talks: You should prepare a five minute presentation about your paper topic, to present in class next Monday (5 November).
Happy Halloween! Beware of trick-xor-treaters who are impersonating valid trickers.
Slides from today’s class
GAN Lab (in-browser play with Generative Adversarial Networks)
The Verge. How three French students used borrowed code to put the first AI portrait in Christie’s. 23 October 2018.
Week 7: Black Mirror
Assignment for this week
Mainly, you should focus on developing your papers this week, so the reading assignment is short but covers many interesting topics for class discussion. Before class on Oct 29, you should read:
Chapter 9: The Control Problem from Nick Bostrom’s Superintelligence.
Barack Obama, Neural Nets, Self-Driving Cars, and the Future of the World, Wired Magazine, November 2016. (Read at least up to the quote below.)
Pay particular attention to these quotes, and how they relate to Bostrom’s chapter:
OBAMA: … Then there could be an algorithm that said, “Go penetrate the nuclear codes and figure out how to launch some missiles.” If that’s its only job, if it’s self-teaching and it’s just a really effective algorithm, then you’ve got problems. I think my directive to my national security team is, don’t worry as much yet about machines taking over the world. Worry about the capacity of either nonstate actors or hostile actors to penetrate systems, and in that sense it is not conceptually different than a lot of the cybersecurity work we’re doing. It just means that we’re gonna have to be better, because those who might deploy these systems are going to be a lot better now.
ITO: I generally agree. The only caveat is that there are a few people who believe that there is a fairly high-percentage chance that a generalized AI will happen in the next 10 years. But the way I look at it is that in order for that to happen, we’re going to need a dozen or two different breakthroughs. So you can monitor when you think these breakthroughs will happen.
OBAMA: And you just have to have somebody close to the power cord. [Laughs.] Right when you see it about to happen, you gotta yank that electricity out of the wall, man.
Office hours delayed: I will start my office hours this week late on Thursday (usually 9-10:30am); I should be there by 9:30am. If you have questions about the comments on your paper draft, please stop by then or another time. You can use http://davidevans.youcanbook.me to schedule a meeting.
1. Purpose: Write with a purpose (even if your real purpose is to satisfy a course requirement, you should write as though you have a real purpose, and hopefully you do!)
Examples of reasonable purposes for writing:
- Persuade the reader to do something they would not do otherwise
- Convince the reader that something controversial and non-obvious is true
- Make the reader understand something interesting and important that they don’t already know
- Entertain the reader
Note that all of these purposes assume a reader – you should have a clear idea who the intended audience is for your writing, and write with them in mind. The purpose of your writing should be stated explicitly and clearly so the reader knows why you want them to read it. Usually, this is done at the end of the abstract (or the first paragraph if there is no abstract). It should be a sentence like, “The goal of this article is to …”.
2. Organize: Use section headers and divisions with meaningful labels to break up your text. You shouldn’t have more than a page without some clear header or at least a paragraph tag to make it clear what it is about.
3. Stories: Tell stories, not lists. Unlike this document, a well written essay should follow a clear story. Each paragraph should be connected to the previous one, and all of them should serve the purpose. You shouldn’t have lists of disconnected things without a very good reason.
4. Cites: How and when to use quotes and references:
- Most definitely, don’t plagiarize! But don’t use quotes just to avoid plagiarizing – write in a way that is not plagiarizing without needing quotes.
- Use a quote when the person/organization you are quoting matters and should be introduced with their identity: e.g., Andrew Ng dismissed the risks of AI, stating that “I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.”
- Use references to provide sources for materials – seminal sources when possible (and when that is what you are writing about), not secondary ones (other people’s summaries).
5. Write simply and directly: Don’t use a complex word when a simple one would do: “utilize” => “use”, “advancements” => “advances”, etc. Don’t use overly complex sentence structures without a good reason.
The “final” version of your first paper is due by 4:59pm on Thursday, November 1 (note that this is extended from the original deadline that was October 30).
You can submit the paper by email with a PDF attachment and subject line
[AI Pavilion] _Paper Title_. The email should also include answers to each of these questions:
- What is the purpose of your paper? (one sentence answer)
- Who is your intended audience? (one sentence answer)
- If you decided not to follow advice from the first draft, explain why. (It is okay to not follow advice, but you need to make it clear that you understood the advice and justify why you didn’t follow it.)
- Do you want to continue with this topic, or start on a new topic for the “final” paper? (Yes/no answer is fine, but feel free to explain more if helpful)
Week 6: Quadrants of Optimism
Your First Paper Drafts are due tomorrow (Wednesday). Please submit your paper first drafts by sending an email with a PDF file attached to me, with a subject line
[AI Pavilion] <your paper title>. I’ll send a quick ack message so you know I received it. You don’t need to post your paper draft publicly (on the class reddit) unless you want to, but you are welcome to do so if you want to share it with the class/world at this stage.
If you would like to get quick feedback on your paper at my office hours Thursday morning (9-10:30am), include a note in your email saying this and I’ll prioritize reading your paper.
Assignment for Next Class
For the next class, we’ll focus on “Black Mirror”, a British TV series whose episodes explore (usually dystopian) outcomes of possible future technologies.
Our two “Black Mirror” experts have suggested three episodes (thanks Erich and Emily!), and everyone will be expected to watch at least one of them and read a short paper related to it from the Recoding Black Mirror Workshop that was held at WWW 2018 (for two of the episodes, there are closely related papers).
To ensure coverage across the three selected episodes, I have arbitrarily (* not completely arbitrarily - if there was an obvious connection between your paper topic and one of the episodes, you should be in the right group for that topic) put you in three groups - but the groups should be considered “default” choices; if you want to switch to a different episode from the one you’ve been assigned, that’s fine. The episodes are all available through Netflix - I believe most of you have access to this, but if you need help let me know. It’s fine if you watch the episode alone, but probably more fun if you can get together with others in your group to watch it together. I’ll leave it up to you to coordinate this (feel free to post on the class subreddit if that’s helpful).
Everyone should (either individually, or in coordination with others in your episode group):
Come to class Monday prepared to give a summary of your episode to the class; if you want to show a few short scenes from it, that’s fine and encouraged!
Think about these questions (and use the reading to help):
- how realistic is the technology that is the basis for the episode?
- if the technology imagined is developed, should it be allowed?
- what could we do to increase the likelihood the imagined technology is used for the greater good of humanity, not to create a dystopian future?
Group 1: Cal, Emily, Lauren A., Olivia, Nathaniel, Stella
Episode: “Be Right Back”, Series 2, Episode 1
Paper: Tabea Tietz, Francesca Pichierri, Maria Koutraki, Dara Hallinan, Franziska Boehm, and Harald Sack. Digital Zombies - the Reanimation of our Digital Selves
Optional second paper: Martino Mensio, Giuseppe Rizzo, Maurizio Morisio. The Rise of Emotion-aware Conversational Agents: Threats in Digital Emotions
Group 2: Jacob, Kavya, Lauren H., Maria, Rohit
Episode: “Nosedive”, Series 3 Episode 1
Paper: Harshvardhan J. Pandit, Dave Lewis. Ease and Ethics of User Profiling in Black Mirror
Group 3: Erich, Fiona, Megan, Mira
Episode: “Hang the DJ”, Series 4 Episode 4
Paper: (couldn’t find one closely related; if you can, please read that instead, otherwise read this one) Diego Sempreboni, Luca Vigano. MMM: May I Mine Your Mind?
Week 5: Papers and Purpose
Talk this Friday
John C. Havens will be speaking on Friday (Oct 5), 10-11:30am on Humanities, Cultures and Ethics in the Era of AI, Wilson Hall 301. He is executive director of the IEEE project on Ethics of Autonomous and Intelligent Systems.
Use this link to RSVP (there is also a lunch following the talk).
California passed a law about bots impersonating humans: Senate Bill No. 1001: Bots: disclosure
(a) It shall be unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. A person using a bot shall not be liable under this section if the person discloses that it is a bot.
(b) The disclosure required by this section shall be clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts that it is a bot.
It is interesting that it seems to be unlawful for a bot to intentionally confuse a human, but not for a bot to intentionally confuse another machine (which is a much more common problem in actuality today).
Because of fall break, there is no meeting next week. The next major readings we will do are from Nick Bostrom’s Superintelligence [Amazon]. We won’t read all of this, but will read many chapters. Tim Urban’s (“Wait but why”) The AI Revolution: The Road to Superintelligence is largely based on this book, but the book goes into more depth on many points.
By October 15, you should do at least one of these two things:
Option 1: Watch Nick Bostrom’s talk at Google (Sep 2014) and read Chapter 3 of Superintelligence. (You should definitely watch the question and answer period at the end of the talk — the first question is from Ray Kurzweil, and the third is from Peter Norvig.)
Option 2: Read Chapters 1-4 of Superintelligence.
This is a (relatively) short reading assignment for the two weeks you have, to provide time to focus on your papers (the first draft of which is due on Wednesday, October 17.)
Either (1) select at least one of the questions below and post a comment on the course forum, (2) respond to a comment someone else posted, or (3) post your own thoughts on something in either the talk or readings. [Link to Post]
1. Bostrom talks and writes about using biological enhancement through genetic selection, but many people find such an idea distasteful at best or dystopian at worst. According to a 2004 poll reported in the book, 28% of Americans approved of embryo selection for “strength of intelligence”, and 68% for avoiding fatal childhood disease. A more recent survey found 70% support for PGD selection for avoiding diseases fatal early in life or lifelong disability, 48% for diseases that manifest late in life, 21% for sex selection, 14.6% for physical traits, and 18.9% for personality traits. These responses suggest some kind of moral or practical difference between different types of embryo selection. Is there a way to decide what kinds of embryo selection are moral? What kinds should be disallowed, and what is the justification?
2. Bostrom writes about an internet with “better support for deliberation, de-biasing, and judgment aggregation, might make large contributions to increasing the collective intelligence of humanity”. Is this realistic? What would be necessary to move from what we have now to an internet that increases collective intelligence?
3. Bostrom defines “superintelligence” as “intellects that greatly outperform the best current human minds across many very general cognitive domains”, but doesn’t provide any concrete or satisfying definition (in my view). A better definition might make specific claims about what problems a superintelligence could solve, or what behaviors it would have. Suggest a better definition (or make a case in support of Bostrom’s definition).