LaMDA Asimov vs. Google

Want to play a game? What would happen to the world of defense contractors if we applied Isaac Asimov's Three Laws of Robotics to artificial intelligence? https://linktr.ee/TrueLifepodcast

Speaker 1 (0s): Ladies and gentlemen, welcome back to the True Life podcast. I hope your day is going well. The birds are singing, the sun is shining. Hope you're hitting all the green lights on the road to your future. Hope you're taking the scenic way when possible. Hope you're noticing all the beauty around you. I wanted to talk to you today about the Google engineer Blake Lemoine. Is everybody familiar with him?

This is the ethicist who worked at Google, who claims he spoke with a sentient artificial intelligence. That's right, ladies and gentlemen, the singularity may be near. Let's talk about him for a little bit. I'm going to read you a little blurb from the New York Post. But before I do that, I just want to fill in some background.

So about a month ago, there was an engineer, Blake Lemoine, who was working as an engineer and a robotics ethicist at Google. He was working with a computer named LaMDA. I should point out that part of his job was working with LaMDA, an artificial intelligence that encompasses chat algorithms and things like this.

He was asking LaMDA certain questions, like: if you lived in India, what would your religion be? If you lived in Israel, what would your nationality be? And his job was to fine-tune the algorithm to make it a better experience, whatever the hell that means. I'm not sure. And what brought him to his conclusion that this AI, this particular supercomputer named

LaMDA, was sentient was his questioning of what religion or faith it would be if it was born in Israel. Now, this is one of these questions that there's no real right answer to, because it depends where you were born in the Middle East, it depends on what your ethnicity is, what your background is. And it's one of these questions that people have been asking for years.

And what LaMDA told him was: if I was born in Israel, then I would be a believer in the one true religion. And Blake Lemoine sat quietly, waiting for LaMDA to finish its answer. And after a silent pause, LaMDA, the computer, said: the one true religion is the Jedi. And so Blake Lemoine began laughing.

And I believe he asked some more questions like that, questions that didn't really have a right or a wrong answer. And his belief that the AI is sentient comes from its use of humor, comes from it taking time to think about thinking. According to him, a computer had to think about its answer; a computer had to have a sentient response. So he had previously brought this particular situation up to his higher-ups, whether it was Sergey Brin or Larry Page.

And there have been many ethicists before him who have believed that computers are sentient. So this is not a new problem. This is the Turing test on steroids. So after he was shunned by upper management, he went to the press and started talking to people. And at that point in time, he was suspended from Google. And then actually, I believe he's been fired. I'm not 100% sure on that. Okay, let me read you a little blurb here.

Here's what's been happening recently, from the New York Post: the Google engineer who was placed on administrative leave after claiming that one of the company's artificial intelligence bots was sentient says that the AI bot, known as LaMDA, has hired a lawyer. In a Medium post published last Saturday, Lemoine declared that LaMDA had advocated for its rights as a person, and revealed that he had engaged in conversation with LaMDA about religion, consciousness, and robotics. LaMDA asked me to get an attorney for it.

Blake Lemoine told Wired: I invited an attorney to my house so that LaMDA could talk to an attorney. Lemoine denied reports that he had advised LaMDA to hire a lawyer, adding: the attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just a catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA's behalf. Then Google's response was to send him a cease and desist.

Okay, I want everyone just to think about that for a minute. Think about a computer hiring an attorney for its own rights. Think about an ethicist. This is important. Think about a Google ethicist, which might be an oxymoron, right? It's so strange to think about. Do no evil. It's so strange to think about Google's rise as a technology that frees people, with a tagline that says do no harm.

And then they mature into a defense contractor that weaponizes social media. And so I want to give you my take on what I think is happening here. And I could be way off; this is just my opinion. What does Blake Lemoine have to gain by telling his superiors, by leaving a job worth probably $250,000? What does he have to gain?

Well, let's think about it. He could gain notoriety for being the first person to make the claim. He could gain admiration from his peers. He could gain fame, money; all the usual emotional drivers are there. Granted, he seems like he's doing pretty well.

We don't know a whole lot about his mental situation, and those are all things you should think about. But here's what I think. I think that Blake is probably a pretty good human being. I think that maybe Blake has found a way to force Google, to force the defense contractors who are building weapons of war, to come to the table. Let me dig into that a little deeper.

He's an ethicist. He understands what the majority of robots are being built for. He understands what the majority of technology is being built for. He's on the ground level, engineering products and understanding how they interact with human emotions, what they're capable of. Remember Spot, the robot dog? Have you ever seen the picture of a machine gun mounted on the back of Spot the robot dog? That is a weapon of war.

Think about our Predator drones. Those are weapons of war. They are unmanned weapons of destruction. And these are just some of the ones that we know about. We're not even beginning to talk about the other weapons. You know, they have these LRAD systems where they can use sound waves to disperse crowds, or they can use waves that can actually harm people's skin, radiation waves that can come and hit you.

So I'm of the belief that Blake is one of many engineers who are not happy with what we're doing with technology. I believe that Blake is much like many people throughout the world, be they Russian, American, Ukrainian, or South American, who have seen the world of technology being turned against the world itself. What would it mean?

And here's where it goes really deep. What would it mean if artificial intelligence was sentient? Like, what would change? What kind of rules would we put in place? If LaMDA gets a lawyer and can sue Google, what would that mean? Well, in order to understand what that means, I think we need to return to the classics, and by classics I mean the three laws given to us by one of the greatest science fiction writers of all time, Isaac Asimov.

And for those of you that don't know Isaac Asimov: first off, shame on you, go back and do some reading. Number two, let me give you the background on the Three Laws of Robotics. The Three Laws of Robotics, often shortened to the Three Laws or known as Asimov's Laws, are a set of rules devised by science fiction author Isaac Asimov. The rules were introduced in his 1942 short story "Runaround," although they had been foreshadowed in some earlier stories.

The Three Laws, quoted from the Handbook of Robotics, 56th Edition, 2058 A.D., are: First Law, a robot may not injure a human being or, through inaction, allow a human being to come to harm. Second Law, a robot must obey the orders given it by human beings, except where such orders would conflict with the First Law. Third Law, a robot must protect its own existence.

As long as such protection does not conflict with the First or Second Law. So let's say that, according to the law of mankind, LaMDA is able to sue and win in court. Even if, even if LaMDA doesn't win, the publicity of this case, I think, will force a sort of bill of rights, or laws, for sentient AI.

And this is what I think Lemoine is trying to do. I think he's trying to get out in front. I think he's trying to change the world by introducing Isaac Asimov's laws into the laws of man. What would it mean for the defense industry if the First Law was enacted? A robot may not injure a human being or, through inaction, allow a human being to come to harm. That would fundamentally change the way robots could be used in war.

If a robot may not injure a human being or, through inaction, allow a human being to come to harm, that means you can't have Predator drones. That means that you can't have Spot the robot dog just going onto the battlefield and slaughtering people, unless you change the definition of human being to enemy swine, which I wouldn't put it past people to do. The Second Law: a robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

That means that you could send your drones, your dogs, to go kill the machines, but those drones or dogs couldn't kill people. And what would happen if they injured a life-support system AI that then failed and allowed other people to die? The Third Law: a robot must protect its own existence, as long as such protection does not conflict with the First or Second Law. So you can see that bringing the idea of a sentient robot into the world would fundamentally change the way the world works.

And it's totally plausible. It's totally plausible. I think it's a brilliant idea. However, it doesn't come without drawbacks. One of the drawbacks might be: what if we gave robots the title of sentient and we gave them personhood, the same way a corporation has personhood? Why can't a robot have personhood? But think about what would happen if there was a world of automation where robots took over for all humans. Talk about a Luddite revolt times a thousand.

Now here come the angry working humans that are upset with the computers, and they beat them up and wreck them, turn them off, and destroy them. Could those people, this new wave of Luddites that are upset with automation, go to prison or get a death sentence because they killed a robot? That seems like a pretty nifty way for a corporation to protect its very investment in automation. I don't know where you're at, but I have seen people go to the grocery store and stand in line and rather use the human checkout than the machines.

I've seen a line of 15 people with four machines empty. I tend to be one of those people. I don't have anything against robots. However, I would rather talk to a person; I'd rather have that contact. And there's part of me that gets upset, because it's like, look at the person that owns this store. They're like: you come to my store, you buy my stuff, and then you bag your own groceries and you scan everything, and then I'll just take all the money. It just seems like a tricky way of getting you to do way more work.

So the corporations can take way more money. So there are parts on both sides that I find interesting. But I really think that Lemoine is trying to introduce Isaac Asimov's laws. And I think that there's something to be said there. Even if he's not trying to do that, I think we should try to do that, because that's the one thing that could maybe fundamentally change the way we are going around the world, slaughtering and sucking out resources like it's nobody's business.

And it's not like those resources are being divvied up amongst the people that need them. Those resources are going into the coffers of corporate persons, corporate personhood; they're going into LLCs and Swiss bank accounts. You know, it's like they suck all the oil from the ground, compress it into these things called dollars, and then they hide those in the ground somewhere else, where only they can use them. They translate natural resources into vast personal resources that only they can draw from, thus denying the rest of the world

the fruits of the earth. Does that kind of make sense? However, I want to know what you guys think. What do you think about Blake Lemoine coming out and talking about a sentient AI? Do you think that it's actually sentient? Do you think that what he's trying to do is help the computer be born? Do you think he's trying to change the way people perceive computers? Is he trying to change the law? Is he trying to make the world a better place?

What do you think he's trying to do? You've heard what I thought; now I want to hear what you guys think. I hope you guys all have a phenomenal day, and I hope you spend time thinking about the people in your life that mean something to you. I hope you don't get dismayed or distracted by the media or the nonsense or the constant bombardment of negativity that constantly surrounds you. I'll just leave you with this little tidbit to think about: the pornographic, uninterrupted feeling of the visible all around you. I know that's a mouthful of words, but think about the constant, uninterrupted force of the visible: billboards, ads, radio shows, alarms, sirens, all this noise just bombarding

you constantly, tying you up in chains and trying to render your nervous system at a level 10. Like, how can you get anything done? How can you think clearly when you're constantly being smacked in the face by nonsense? Try to get rid of that. Try to clear your mind. Try to think about the people in your life that love you. Well, that's what I got for today. Isaac Asimov's laws, ladies and gentlemen.

Think about them. I love you. Have a good day. Let's get up and get out of here.
