
AI Alignment and the Long-Term Future
EA at Georgia Tech
14:25
I can't hear anything right now btw
EA at Georgia Tech
14:49
nvm
Kim Solez
15:36
The audio is perfect for me!
EA at Georgia Tech
17:06
hi Trevor
Kaivu
17:43
Please promote me to mod
Aekansh Goel
17:54
Hi
EA at Georgia Tech
21:27
hi Zechen and Nikola
Tamay
23:59
Where is the Q&A form?
Zechen
25:05
In the Q&A button below
Nancy Shao
28:53
can someone enable captions if possible?
Ethan (EJ) Watkins
33:52
Bacteria/viruses seem like things that in some sense actively fight our efforts to control them. (Not via intelligence, but via evolution.)
Rafael Proença
35:54
6. seems to depend a lot on whether the power-holding AI is sentient/a relevant moral actor or not
Jalen
40:08
@tamay it's built into Zoom, down at the bottom, across from the chat
Tamay
40:19
Got it, thanks.
Arnold Wang
59:42
Do you guys think slave-owning societies had similar conversations about the risks of slave labor and large slave populations?
Jerome Glenn
01:00:48
what is your email address for follow-up?
Jerome Glenn
01:01:26
can you stop sharing your screen?
Arnold Zhang
01:03:20
very interesting question about societies where slavery was a huge part of reality. The comparison of conversations is interesting to consider, though of course AI would be categorically different in composition and potential scale
David Wood
01:03:22
Wouldn't a sufficiently smart AI hide any purchase it made of cryptocurrencies?
Eddy S-H
01:04:23
What proportion of CS professors & industry leaders working on AI itself believe this is the most dangerous issue facing us? It feels like an issue that's more hyped by outside groups looking in than by the internal groups themselves, but this is probably just based on bubbles of who I talk with vs. who others talk with, etc.
David Wood
01:05:05
Wouldn't a sufficiently smart AI hide the fact that it had stolen cryptocurrencies? (That's what I meant to type in my last comment...)
EA at Georgia Tech
01:07:37
I like Robert Miles' Intro to AI Safety YouTube video, in addition to Kelsey Piper's "The case for taking AI seriously as a threat to humanity"
David Wood
01:07:58
The Kelsey Piper article: https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment
Tamay
01:07:59
+1 re: Robert Miles
David Wood
01:09:09
Jerome: You can find Joseph's email address here: https://www.josephcarlsmith.com/
Dan Elton
01:13:18
Here’s the Twitter thread I referenced. Robert Miles was referenced a lot, and Kelsey Piper’s article was also referenced at least once: https://twitter.com/sam_atis/status/1501352718933192711?s=20&t=0bJjWfpIETl1iNVBRo9sUw
Tamay
01:14:44
Not a prior. Credence on all conjuncts being right. He said >15%, but what is it?
Samuel Fritz
01:15:07
Epistemic status maybe?
Arnold Wang
01:18:59
Technology is a political tool. Educate yourself as best as possible and grab power for yourself so that others don't make myopic decisions on your behalf. --- Sauron, probably
EA at Georgia Tech
01:19:09
hi Kaivu
Arnold Wang
01:19:45
AI safety is a problem to be solved by human institutions. The future leader of the global committee on AI Safety is probably listening in on this meeting!
Arnold Wang
01:21:04
clap clap clap
Eddy S-H
01:21:05
Thanks!
Valentina
01:21:08
tysm!
Arnold Zhang
01:21:50
thank you!
David Wood
01:22:13
Feedback: the audio from the speaker was great, but the audio from the room was painful to listen to