eliezer yudkowsky scholar

Eliezer Yudkowsky, Author at Machine Intelligence Research. However, I also think he's wrong. But regardless, Eliezer doesn't need to convince us of the hard version of this claim. And I don't mean people like me. There are real problems to deal with in the here-and-now. I think Yudkowsky is trying to show how impotent these questions are for getting at the real problems. And in particular, planning--the process of being, like, 'Here is a point in the world. How is it going to get this mind-ness about it?' So why are the people who know the most about A.I. If we're going to say that the word 'magic' means anything at all, it probably means that. Russ Roberts: It's not my experience with the human creature. But it also shows how far-reaching simple goals can be. But let's try again. Mr. Yudkowsky beats every other one of those by a country mile. Russ Roberts: Okay.
As an illustration of the problem: if we look at the modern world, it is not controlled by people with the highest scores on intelligence tests. continues to advance at such a rapid pace. Well, there is certainly no evidence that any early humans engaged in space travel. It's just that the flesh evolved and therefore had to go down shallow potential energy gradients in order to be evolvable, and is held together by Van der Waals forces instead of covalent bonds. I think we know that as long as you pick a nonlinear activation function, neural nets can represent arbitrary functions arbitrarily well as you increase the parameter count, or am I mistaken (the universal approximation theorem)? I see them driving around the city every day, NOW. So far as I could tell, Russ tried to ask this question several times, and Yudkowsky's answers just exposed the lack of rigor in his own thinking. I think you are putting more into the evolutionary analogy than Eliezer intends. Russ Roberts: Isn't it striking how hard it is to convince them of that even though they're thinking people? But perhaps Scott will come to this program and defend that if he indeed holds it. Not in some future, but NOW. It's an excellent point. Eliezer Yudkowsky: Um. Other LWers note the, ahem, apparent discrepancy: I don't see how this would be a quality comment by any other standards. Eliezer Yudkowsky: So, there's two different things you could be asking there. [1710.05060] Functional Decision Theory: A New Theory. My guess as to what would actually be exploited to kill us would be this. They're ignorant from our perspective.
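The universal-approximation point mentioned above can be made concrete without any training at all. A minimal sketch, assuming my own illustrative choices (target function x², evenly spaced knots, hand-set weights rather than learned ones): a one-hidden-layer ReLU network exactly reproduces the piecewise-linear interpolant of the target, and the error shrinks as the number of hidden units (the parameter count) grows.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_net_interpolant(f, knots):
    """Build a one-hidden-layer ReLU net that matches the piecewise-linear
    interpolant of f at the given knots (weights set by hand, not trained)."""
    y = f(knots)
    slopes = np.diff(y) / np.diff(knots)
    # Each hidden unit switches on at a knot; its output weight is the
    # change in slope there, so the sum reproduces the interpolant.
    coeffs = np.concatenate([[slopes[0]], np.diff(slopes)])
    bias = y[0]
    def net(x):
        # hidden layer: relu(x - knot_i); output: weighted sum + bias
        h = relu(x[:, None] - knots[:-1][None, :])
        return bias + h @ coeffs
    return net

knots = np.linspace(0.0, 1.0, 101)       # 100 hidden units
net = relu_net_interpolant(np.square, knots)
xs = np.linspace(0.0, 1.0, 10_000)
max_err = np.abs(net(xs) - xs**2).max()
print(max_err)  # shrinks as the knot count (parameter count) grows
```

This only shows representational capacity, which is the theorem's claim; it says nothing about whether gradient-based training will find such weights.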
Ostensibly, our training objective was 'make lots of copies of yourself,' but what we got was the ability to quickly understand complex visual scenes, a desire to pose and solve complex abstract problems, a deep enjoyment of complex social interaction, and on and on. Russ Roberts: Whereas the ripe peach was better for you than the hard-as-a-rock peach that had no nutrients because it was not ripened, so you developed a sweet tooth and now it leads you to run amok--unintendedly--it's just the way it is. I might [?align? The present generation of LLMs is only being trained to produce language, but there are already people who are using the same evolutionary approaches to train similar models to act in simulated worlds. Humanity, or most of humanity, as an idle underclass. They wanted to show you that having a very advanced level of civilization does not stop people from treating other people--other human beings--like animals. the Center for A.I. Inadequate Equilibria I think it is fair to argue that the algorithms underlying YouTube, Facebook, etc., have helped promote conspiracy theories (around Covid, chemtrails, flat Earth, etc.). At the very least, you can take a million John von Neumanns and a million Otto von Bismarcks and let them go to work, powering them by a small power plant. And it looks around and it starts a sentence and then finds its way towards a set of sentences that it spits back at me that look very much like what a very thoughtful--sometimes, not always, often it's wrong--but often what a very thoughtful person might say in that situation or might want to say in that situation or learn in that situation. But it was not zero. Some of them can barely add and subtract. If you are perfectly role-playing a character that is sufficiently smarter than human and wants to be out of the box, then you will role-play the actions needed to get out of the box. Eliezer Yudkowsky.
To which my response is: Intelligence does have effects on humans, especially humans who start out relatively nice. So, first of all, I want to appreciate why it's hard for me to give an actual correct answer to this, which is I'm not as smart as the AI. We know that steel is a kind of substance that can exist. Mitigating the risk of extinction from A.I. If the machine says it's lonely and sounds like it's a lonely human who wants you to leave your spouse, most of us would say it's just simulating what a human would say in those conditions. The US Bureau of Labor Statistics classifies 64 million Americans as having white-collar jobs. Harry Potter and the Methods of Rationality - Wikipedia A lot of the time, it makes stuff up because it doesn't have a perfect memory. Or it might be, 'What's a good restaurant in this place?' But they've been light on the details. But I learned as we went, and understanding came with it, though I can't reproduce it myself, as my parenthetical example reveals. Sydney was made to predict that people might sometimes try to lure somebody else's maid[?] A good book to read is Daemon by Daniel Suarez, which, while fiction, weaves a story around the subject that is compelling. So, when I ask it to write me a poem or a love song, to play Cyrano de Bergerac to Christian and Cyrano de Bergerac, it's really good at it. Let's learn that from movie scripts and other texts, novels that are read on the web. Right? making money in the stock market). So, if you're playing chess against a 10-year-old, you can win by luring their queen out, and then you take their queen; and now you've got them. And I still had some--let's say most of my skepticism remains that the current level of AI, which is extremely interesting, the ChatGPT variety, doesn't strike me as itself dangerous. While AIs don't have evolutionary pressure in the same way, they are iterating to improve and grow that intelligence (much more rapidly than human biology can). But that's not true.
Another commenter mentioned learning as the episode continued, and by the end of the conversation I had a better idea of what the guest was saying; the latter half elucidated the first half. I applaud Russ's ability to track the conversation in real time, because I was rewinding throughout. Russ Roberts: You've written some very interesting things on rationality. There's no way you can know that superintelligences can solve the protein folding problem.' Eliezer Yudkowsky - Biography JewAge It is more like a person who has read through a million books, not necessarily with a great memory unless something got repeated many times, but picking up the rhythm, figuring out how to talk like that. They don't know that's a law of nature. You could be asking: How did it end up wanting to do that? You say, "To visualize a hostile superhuman AI, don't imagine a lifeless book-smart thinker dwelling inside the Internet and sending ill-intentioned emails." This paper describes and motivates a new decision theory known as functional decision theory (FDT), as distinct from causal We weren't trained on crossing rivers or oceans, but we have done so for centuries or millennia. Some people want to pause there and say, 'How do you know that is true?' Likewise, having the abilities designed into superintelligent systems just provides the abilities to complete those tasks. And I'm very open to the possibility that I'm naive or incapable of understanding it, and I recognize what I think would be your next point, which is that if you wait till that moment, it's way too late, which is why we need to stop now. Russ Roberts: So, how's it going to--I want to go to some other issues, but how's it going to kill me when it has its own goals and it's sitting inside this set of servers? Thus, it is something of a mirror. So, if I said, 'Sydney, I find you offensive.' Cade Metz has spent years covering the realities and myths of A.I.
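The "person who has read through a million books ... picking up the rhythm" description above is, at bottom, next-token prediction. A minimal sketch, using only the standard library (the toy corpus and the character-level granularity are my own illustrative choices; real LLMs predict subword tokens with neural networks, not count tables): count which character tends to follow each character, then "predict" the most frequent successor.

```python
from collections import Counter, defaultdict

# "Train" a character-level bigram model: tally which character
# follows each character in the corpus.
corpus = "the thin thug thought the theme through."
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def most_likely_next(ch):
    # Greedy "prediction": the single most frequent successor.
    return counts[ch].most_common(1)[0][0]

print(most_likely_next("t"))  # 'h' -- "th" dominates this corpus
```

Scaling the same idea up (longer contexts, learned representations, vastly more text) is what produces the uncanny "rhythm" the commenter describes, without the model ever being given explicit rules.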
This is LessWrong, so the post still says some oddball things (e.g. Yudkowsky's estimates (and, apparently, those of a not statistically insignificant number of lead researchers in the field) suggest the likelihood of this happening through the creation of an AGI is at least 10% or higher. So, yes, I would say those algorithms have broken free and that has had real-world negative consequences. We've got the inscrutable array of training, the results of this training process on trillions of pieces of information. Something like this might actually push a vulnerable person over the edge. I thought it was pretty cool. 'Hypothetical is such a polite way of phrasing what I think of the existential risk talk,' said Oren Etzioni, the founding chief executive of the Allen Institute for AI, a research lab in Seattle. I have no doubt that Eliezer knows his stuff, and I would love a second episode with a deeper dive into the how and less on hypothetical statements taken as truth. For many experts, this did not seem all that plausible until the last year or so, when companies like OpenAI demonstrated significant improvements in their technology. Artificial Intelligence Safety and Security: This chapter surveys some of the ethical challenges that may arise as one can create artificial intelligences (AI) of various kinds and degrees. From there, they could cause problems. The science fiction about an AI making its own secret nanotech lab was pretty unconvincing, though I understand it was only intended for illustration. I also agree with @Marcus's tidy summary of the core claim behind the inevitability of AI killing humanity. For example, he emphasized that humans reached the Moon without ever being trained (or having evolved) to do so. Of course, that's not to say he's wrong, only that he's annoying. Let them bring forth their arguments as to why it's safe and I will bring forth my arguments as to why it's dangerous, and there's no need to be like, 'Ah, but you can't--' Just check their arguments.
Russ Roberts: Today is April 16th, 2023, and my guest is Eliezer Yudkowsky. He also co-founded Less Wrong, writing the Sequences, long sequences of posts dealing with epistemology, AGI, metaethics, rationality, and so on. And he can't help himself. Without naming names. I found the previous episode about the brain's mysteries much more compelling. As a further meta-point, I have included links to Eliezer's blog posts about these topics. So, I think I understand that. And the one thing you want to do is avoid ruin, so you can take advantage of more draws from the urn. I don't say this to discourage you or to make Eliezer seem untouchably holy or something like that. Even faceless corporations, meddling governments, reckless And by the way, just for my and our listeners' knowledge, what is gradient descent? Like nuclear weapons. could destroy humanity. that have been notably light on details. Eliezer Yudkowsky Abstract 1: The End of History 2: The Beyondness of the Singularity 2.1: The Definition of Smartness 2.2: Perceptual Transcends 2.3: Great Big Numbers 2.4: I think Bostrom has a similar metaphor, and that metaphor--which is just a metaphor--gave me more pause than I had before. In my experience, it's a very small portion of the population that behaves that way. This is the third or fourth guest in the past two months discussing the far-fetched scenario of AI developing a mind of its own and taking over. Shouldn't we be scrutinizing these powerful tech companies? I did not select these things to be able to not breathe oxygen. A statement was made that simulated planning is still planning, which I don't find credible at all. experts believe it is a ridiculous premise. Whereas, if somebody else asked you a question, even if it's not everyone in the audience's question, they at least know you're answering the question that's been asked.
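Since the question "what is gradient descent?" comes up above without a compact answer in the excerpt: it is the optimization loop at the heart of training these models. A minimal sketch (the quadratic objective, the learning rate, and the step count are my own illustrative choices, not from the episode): repeatedly nudge a parameter in the direction that decreases the loss, as measured by the loss's gradient.

```python
# Minimize f(w) = (w - 3)^2 by gradient descent.
# The gradient is f'(w) = 2 * (w - 3); stepping against it lowers f.
w = 0.0              # initial parameter guess
learning_rate = 0.1  # step size
for _ in range(100):
    grad = 2.0 * (w - 3.0)
    w -= learning_rate * grad
print(w)  # converges toward 3.0, the minimizer of f
```

Training an LLM is the same loop with billions of parameters, a loss measuring next-token prediction error on text, and gradients computed by backpropagation; that is the "inscrutable array of training" referred to elsewhere in the conversation.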
As those simulated worlds get more complex, we can expect them to eventually produce agents that plan and have goals. Eliezer Yudkowsky: If you are smarter--not just smarter than an individual human, but smarter than the entire human species--and you started out on a server connected to the Internet--because these things are always starting already on the Internet these days, which back in the old days we said was stupid--what do you do to make as many paperclips as possible, let's say? Scott is a University of Texas computer scientist. The two organizations that recently released open letters warning of the risks of A.I. Again, we're going to be having a few more over the next few months and maybe years, and that is: This is one of the greatest achievements of humanity that we could possibly imagine. He seems to find the jargon useful, but few people outside the rationalist/LessWrong community would understand it. I say this as a PhD student working in machine learning who was vaguely aware of Yudkowsky's ideas prior to this episode. Russ did a great job trying to bring the conversation back to the questions at hand and get Eliezer to explain his overly dense language, but to little avail. Eliezer Yudkowsky, Staring into the Singularity - PhilPapers systems cannot destroy humanity. Try with your own intelligence before I tell you the result of my trying with my intelligence. (Note: the idea of the Great Filter didn't come up, but that's what kept going through my mind: What if all the alien civilizations that didn't destroy themselves through total war or environmental collapse did it this way instead?) Other A.I. I find that deeply disturbing and I'd love to have him on the program to defend it. But, carry on. 'Oh, yeah.' Eliezer Yudkowsky: All right. There are a few things that point to humans being way below that upper limit: the hundreds of systematic biases of humans (link, link)
