Week 12: Fairness
— 26 Nov 2018

Follow-up

Here are a few links to follow up on today’s discussions:

Next (Final) meeting

Next week is our final seminar meeting. Everyone should be prepared to give a short talk (no more than 3 minutes) on your paper project. Unlike for previous talks, I encourage you to prepare slides for this one, but use no more than 4 slides (not counting a title slide).

There is no reading preparation assignment, but if you have ideas for what to do at the final meeting, or food suggestions, please send them to me.

Week 11: Fairness
— 19 Nov 2018

Reading/Viewing Assignment

Next week, we’ll talk about algorithmic fairness.

There is a lot of very interesting writing on this topic, but I want to keep the required reading over Thanksgiving break to a minimum.

For the required reading, you can select one of these two options:

  1. Watch Cristopher Moore’s talk, Data, Algorithms, Justice, and Fairness (Ulam Lecture at Santa Fe Institute, Oct 2018). (You can skip the excellent but long introduction by starting at 12:19.)

  2. Read Machine Bias, ProPublica, 23 May 2016, and Northpointe’s response (with commentary from ProPublica).
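
If you pick the second option, it may help to see that the crux of the ProPublica/Northpointe disagreement is statistical. The sketch below uses made-up numbers (hypothetical, not the real COMPAS data) to show that when two groups have different base rates, a score with the same positive predictive value for both groups must give them different false positive rates:

    # Toy numbers (hypothetical, not the COMPAS data) showing why equal
    # predictive value and equal false positive rates cannot coexist
    # when base rates differ between groups.

    def rates(tp, fp, tn, fn):
        """Return (false positive rate, positive predictive value)."""
        return fp / (fp + tn), tp / (tp + fp)

    # Group A: 60 of 100 reoffend (base rate 0.6).
    fpr_a, ppv_a = rates(tp=50, fp=10, tn=30, fn=10)
    # Group B: 30 of 100 reoffend (base rate 0.3).
    fpr_b, ppv_b = rates(tp=25, fp=5, tn=65, fn=5)

    print(f"Group A: FPR = {fpr_a:.2f}, PPV = {ppv_a:.2f}")  # FPR = 0.25, PPV = 0.83
    print(f"Group B: FPR = {fpr_b:.2f}, PPV = {ppv_b:.2f}")  # FPR = 0.07, PPV = 0.83

Both groups see the same PPV (a “high risk” label means the same thing for each), yet Group A’s false positive rate is more than three times Group B’s. Roughly speaking, Northpointe defends the first property while ProPublica measured the second.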

Please post something in the course subreddit to spark discussion on this topic. It could be a link to a news article on algorithmic fairness with a brief comment on it, or a question or comment about the reading (or talk).

Paper Updates
— 15 Nov 2018

Paper updates are due next Tuesday, November 20. Everyone should send an email with subject line [AI Pavilion] Paper Update: <your title>. What is expected in the update depends on whether you are continuing with your first paper topic for the final paper or starting a new topic.

For students starting a new topic for the final paper, you should:

For students continuing with the topic of your first paper, you should:

I hope this is clear and makes sense for everyone. The goal is to ensure that everyone is on track for an interesting and high-quality paper by the end of the semester. If you are at a stage where something else would be more useful (either in terms of the November 20 deadline or what you are sending), let me know and we can consider alternatives.

Week 10: Security
— 12 Nov 2018

Reading Assignment

Next week, we’ll talk about security and malicious uses of AI.

There are two readings. If you did the discussion post for last week, you should select one of the two readings; if not, you should read both of them (you’ll get a confirmation email from me if you are in this group).

In the form, post an idea for a discussion point based on the reading. There are no specific prompts for this; it can be anything that strikes you as interesting to discuss from the reading.

Week 9: Papers Workshop
— 5 Nov 2018

Announcements

Anton Korinek in Economics is offering a course in the Spring on Artificial Intelligence and the Future of Work…and of Humanity, with lots of themes related to this seminar. A tentative syllabus is posted here.

Next week, Allison Pugh, Professor of Sociology at UVA, will visit us. You should read the paper distributed in class today, Of Seeing and Being Seen: What Humans Do for Each Other.

I will finish reading your papers by my office hours Thursday (9-10:30am). Please stop by if you can to pick up your paper, or find me some other time (feel free to stop by anytime you see my office open).

Reading Assignment

Read Allison Pugh’s paper distributed in class, Of Seeing and Being Seen: What Humans Do for Each Other (if you lost the paper copy, email me for a PDF), and post your responses here by Sunday, 11 November.

You should do one of the following: post your own question or comment on the paper, respond to someone else’s comment, or respond to one of these questions:

Week 8: AI Control Problem
— 29 Oct 2018

Schedule Updates

Office Hours this week: I will not be able to hold my Thursday morning office hours this week, but will be available later in the day Thursday. I should be around most of the afternoon, so feel free to drop by, or email me to arrange a time.

Paper 1 is due Thursday, 1 November at 4:59pm. (This is a strict deadline since I need to print the papers before leaving on a trip.) Please remember to follow the directions posted last week about what to include in your email.

Short talks: You should prepare a five minute presentation about your paper topic, to present in class next Monday (5 November).

Happy Halloween! Beware of trick-xor-treaters who are impersonating valid trickers.


Slides from today’s class

GAN Lab (in-browser play with Generative Adversarial Networks; see the code sketch after these links for the training loop it animates)

The Verge. How three French students used borrowed code to put the first AI portrait in Christie’s. 23 October 2018.
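
For anyone curious what GAN Lab is animating, here is a minimal sketch of the adversarial training loop it visualizes: a generator learns to produce samples that a discriminator can’t tell apart from a toy “real” distribution. This assumes PyTorch, and all of the architecture choices and names are illustrative, not taken from GAN Lab:

    # Minimal GAN training loop (PyTorch) on a toy 2-D dataset.
    import torch
    import torch.nn as nn

    # Generator maps 2-D noise to a fake sample; discriminator maps a
    # sample to a logit (real vs. generated).
    G = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
    D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        real = torch.randn(64, 2) * 0.5 + 2.0   # toy "real" data: Gaussian near (2, 2)
        noise = torch.randn(64, 2)

        # Discriminator step: push real samples toward label 1, fakes toward 0.
        fake = G(noise).detach()
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: try to make D label generated samples as real.
        g_loss = bce(D(G(noise)), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

The two-player dynamic in this loop (the discriminator improving its classification, the generator improving its forgeries) is exactly what GAN Lab’s distribution plots let you watch in the browser.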

Week 7: Black Mirror
— 22 Oct 2018

Assignment for this week

Mainly, you should focus on developing your papers this week, so the reading assignment is short but covers many interesting topics for class discussion. Before class on Oct 29, you should read:

Pay particular attention to these quotes, and how they relate to Bostrom’s chapter:

OBAMA: … Then there could be an algorithm that said, “Go penetrate the nuclear codes and figure out how to launch some missiles.” If that’s its only job, if it’s self-teaching and it’s just a really effective algorithm, then you’ve got problems. I think my directive to my national security team is, don’t worry as much yet about machines taking over the world. Worry about the capacity of either nonstate actors or hostile actors to penetrate systems, and in that sense it is not conceptually different than a lot of the cybersecurity work we’re doing. It just means that we’re gonna have to be better, because those who might deploy these systems are going to be a lot better now.

ITO: I generally agree. The only caveat is that there are a few people who believe that there is a fairly high-percentage chance that a generalized AI will happen in the next 10 years. But the way I look at it is that in order for that to happen, we’re going to need a dozen or two different breakthroughs. So you can monitor when you think these breakthroughs will happen.

OBAMA: And you just have to have somebody close to the power cord. [Laughs.] Right when you see it about to happen, you gotta yank that electricity out of the wall, man.

Office hours delayed: I will start my office hours late this Thursday (usually 9-10:30am); I should be there by 9:30am. If you have questions about the comments on your paper draft, please stop by then or at another time. You can use http://davidevans.youcanbook.me to schedule a meeting.

Writing Advice

1. Purpose: Write with a purpose (even if your real purpose is to satisfy a course requirement, you should write as though you have a real purpose, and hopefully you do!)

Examples of reasonable purposes for writing:

Note that all of these purposes assume a reader – you should have a clear idea who the intended audience is for your writing, and write with them in mind. The purpose of your writing should be stated explicitly and clearly so the reader knows why you want them to read it. Usually, this is done at the end of the abstract (or the first paragraph if there is no abstract). It should be a sentence like, “The goal of this article is to …”.

2. Organize: Use section headers and divisions with meaningful labels to break up your text. You shouldn’t have more than a page without some clear header or at least a paragraph tag to make it clear what it is about.

3. Stories: Tell stories, not lists. Unlike this document, a well written essay should follow a clear story. Each paragraph should be connected to the previous one, and all of them should serve the purpose. You shouldn’t have lists of disconnected things without a very good reason.

4. Cites: How and when to use quotes and references:

    • Most definitely, don’t plagiarize! But don’t use quotes just to avoid plagiarizing – write in a way that is not plagiarizing without needing quotes.
    • Use a quote when the person or organization you are quoting matters and should be introduced with their identity: e.g., Andrew Ng dismissed the risks of AI, stating that “I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.”
    • Use references to provide sources for materials – seminal sources when possible (and when that is what you are writing about), not secondary ones (other people’s summaries).

5. Write simply and directly: Don’t use a complex word when a simple one would do: “utilize” => “use”, “advancements” => “advances”, etc. Don’t use overly complex sentence structures without a good reason.

Paper

The “final” version of your first paper is due by 4:59pm on Thursday, November 1 (note that this is extended from the original deadline of October 30).

You can submit the paper by email with a PDF attachment and subject line [AI Pavilion] <your paper title>. The email should also include answers to each of these questions:

  1. What is the purpose of your paper? (one sentence answer)
  2. Who is your intended audience? (one sentence answer)
  3. If you decided not to follow advice from the first draft, explain why. (It is okay to not follow advice, but you need to make it clear that you understood the advice and justify why you didn’t follow it.)
  4. Do you want to continue with this topic, or start on a new topic for the “final” paper? (Yes/no answer is fine, but feel free to explain more if helpful)

Week 6: Quadrants of Optimism
— 15 Oct 2018

Paper Draft

Your first paper drafts are due tomorrow (Wednesday). Please submit your draft by sending me an email with a PDF file attached, with a subject line [AI Pavilion] <your paper title>. I’ll send a quick ack message so you know I received it. You don’t need to post your paper draft publicly (on the class subreddit) unless you want to, but you are welcome to do so if you want to share it with the class/world at this stage.

If you would like to get quick feedback on your paper at my office hours Thursday morning (9-10:30am), include a note in your email saying this and I’ll prioritize reading your paper.

Assignment for Next Class

For the next class, we’ll focus on “Black Mirror”, a British TV series whose episodes explore (usually dystopian) outcomes of possible future technologies.

Our two “Black Mirror” experts have suggested three episodes (thanks Erich and Emily!), and everyone will be expected to watch at least one of them and read a short paper related to it from the Recoding Black Mirror Workshop that was held at WWW 2018 (for two of the episodes, there are closely related papers).

To ensure coverage across the three selected episodes, I have put you into three groups (not completely arbitrarily: if there was an obvious connection between your paper topic and one of the episodes, you should be in the right group for it). The groups should be considered default choices; if you want to switch to a different episode from the one you’ve been assigned, that’s fine. The episodes are all available through Netflix; I believe most of you have access, but if you need help let me know. It’s fine to watch your episode alone, but probably more fun to get together with others in your group to watch it. I’ll leave it to you to coordinate this (feel free to post on the class subreddit if that’s helpful).

Everyone should (either individually, or in coordination with others in your episode group):

  1. Come to class Monday prepared to give a summary of your episode to the class; if you want to show a few short scenes from it, that’s fine and encouraged!

  2. Think about these questions (and use the reading to help):

    • How realistic is the technology that is the basis for the episode?
    • If the technology imagined is developed, should it be allowed?
    • What could we do to increase the likelihood that the imagined technology is used for the greater good of humanity, not to create a dystopian future?

Group 1: Cal, Emily, Lauren A., Olivia, Nathanael, Stella

Episode: “Be Right Back”, Series 2, Episode 1
Paper: Tabea Tietz, Francesca Pichierri, Maria Koutraki, Dara Hallinan, Franziska Boehm, and Harald Sack. Digital Zombies - the Reanimation of our Digital Selves
Optional second paper: Martino Mensio, Giuseppe Rizzo, Maurizio Morisio. The Rise of Emotion-aware Conversational Agents: Threats in Digital Emotions

Group 2: Jacob, Kavya, Lauren H., Maria, Rohit

Episode: “Nosedive”, Series 3, Episode 1
Paper: Harshvardhan J. Pandit, Dave Lewis. Ease and Ethics of User Profiling in Black Mirror

Group 3: Erich, Fiona, Megan, Mira

Episode: “Hang the DJ”, Series 4, Episode 4
Paper: (I couldn’t find one closely related; if you can, please read that instead, otherwise read this one) Diego Sempreboni, Luca Vigano. MMM: May I Mine Your Mind?

Week 5: Papers and Purpose
— 1 Oct 2018

Talk this Friday

John C. Havens will be speaking on Friday (Oct 5), 10-11:30am on Humanities, Cultures and Ethics in the Era of AI, Wilson Hall 301. He is executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.

Use this link to RSVP (there is also a lunch following the talk).

Interesting Law

California passed a law about bots impersonating humans: Senate Bill No. 1001: Bots: disclosure

  1. (a) It shall be unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. A person using a bot shall not be liable under this section if the person discloses that it is a bot.
    (b) The disclosure required by this section shall be clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts that it is a bot.

It is interesting that it seems to be unlawful for a bot to intentionally confuse a human, but not for a bot to intentionally confuse another machine (which is in practice a much more common problem today).

Reading Assignment

Because of fall break, there is no meeting next week. The next major readings we will do are from Nick Bostrom’s Superintelligence [Amazon]. We won’t read all of it, but we will read many chapters. Tim Urban’s (“Wait But Why”) The AI Revolution: The Road to Superintelligence is largely based on this book, but the book goes into more depth on many points.

By October 15, you should do at least one of these two things:

Option 1: Watch Nick Bostrom’s talk at Google (Sep 2014) and read Chapter 3 of Superintelligence. (You should definitely watch the question and answer period at the end of the talk; the first question is from Ray Kurzweil, and the third is from Peter Norvig.)

Option 2: Read Chapters 1-4 of Superintelligence.

This is a (relatively) short reading assignment for the two weeks you have, to provide time to focus on your papers (the first draft of which is due on Wednesday, October 17).

Responses

Either (1) select at least one of the questions below and post a comment on the course forum, (2) respond to a comment someone else posted, or (3) post your own thoughts on something in either the talk or readings. [Link to Post]

1. Bostrom talks and writes about biological enhancement through genetic selection, but many people find such an idea distasteful at best or dystopian at worst. According to a 2004 poll reported in the book, 28% of Americans approved of embryo selection for “strength of intelligence” and 68% for avoiding fatal childhood disease. A more recent survey found 70% support for PGD selection to avoid diseases fatal early in life or lifelong disability, 48% for diseases that manifest late in life, 21% for sex selection, 14.6% for physical traits, and 18.9% for personality traits. These responses suggest some kind of moral or practical difference between different types of embryo selection. Is there a way to decide what kinds of embryo selection are moral? What kinds should be disallowed, and what is the justification?

2. Bostrom writes that an internet with “better support for deliberation, de-biasing, and judgment aggregation, might make large contributions to increasing the collective intelligence of humanity”. Is this realistic? What would be necessary to move from what we have now to an internet that increases collective intelligence?

3. Bostrom defines “superintelligence” as “intellects that greatly outperform the best current human minds across many very general cognitive domains”, but doesn’t provide any concrete or satisfying definition (in my view). A better definition might make specific claims about what problems a superintelligence could solve, or what behaviors it would have. Suggest a better definition (or make a case in support of Bostrom’s definition).

Week 4: Hoverboards
— 24 Sep 2018

Paper Ideas

The idea for your first paper is due Sunday, 30 September. From the syllabus, the first paper can “focus on one aspect of how artificial intelligence has already impacted society, describing the impact of technological advances on a social, political, economic, or psychological aspect of human existence.” It is not required that your paper idea is on this topic though — anything that is relevant to the seminar theme (broadly interpreted), interesting, and with a scope suitable for a 4-5 week effort is reasonable.

You should post your paper idea on the class subreddit. Submit a new text post (or a link post that links to a PDF) with the title Paper: <your title> that contains:

The best ideas will make an argument for a non-obvious and controversial claim and include data to support that argument.

Elevator Pitches. In class next week, you’ll give an elevator pitch for your idea to the class for feedback. This should be about 90 seconds long — enough to give a clear sense of your idea and why it is worth doing, while getting to the point concisely.

Assignment

Before the next class (Monday, 1 October), you should read:

Sapiens Response. Now that you’ve finished Sapiens, post a comment that mentions the most surprising or interesting thing you learned from reading it and explains why. [Discussion Link]

Hallpike Response. Select one of Hallpike’s criticisms of Sapiens (this could be a specific quote from his review, or a paraphrase of a main point), and make a case opposing it. Alternatively, if you find all of Hallpike’s criticisms valid and convincing, respond to one of the cases posted with a counter-argument. [Discussion Link]

(There’s no response question for the Superintelligence reading, but feel free to post any thoughts you want on this. I’m not sure if we’ll get to discussing this next meeting, but it, and readings from Bostrom’s book, will be the main focus of the next few weeks.)