--- Zvi Mowshowitz on the EconTalk podcast, Aug 7, 2023
In context, from the transcript:
What we do know is that humans love achieving goals, and that when you give an AI system goals, it helps you achieve your goals. Right? At least on the margin, at least starting out, people think this. And so, we see BabyAGI and AutoGPT and all these other systems where, it turns out, with 100 lines of code you can create the scaffolding around GPT-4 that makes it attempt to act like it has goals. Right? To take actions as if it had goals and to act as a goal-motivated system.
And, it's not great because the underlying technologies aren't there, and we haven't gone through the iterations of building the right scaffolding, and we don't know a lot of the tricks, and it's still very, very early days.
But, we absolutely are going to turn our systems into agents with goals that are trying to achieve goals, that then create sub-goals, that then plan and ask themselves, 'What do we need to do in order to accomplish this thing?' And, that will include things like, 'Oh, I don't have this information. I need to go get this information.' 'I don't have this capability. I don't have access to this tool. I need to get this tool.' And, it's a very small leap from there to, 'I'm going to need more money.' Right? Or something like that. And from there, the sky's the limit. So, we can rule out through experimentation, in a way that we couldn't two years ago--right?--this particular theory of Marc's that systems in the future won't have goals in a meaningful sense unless we take action to stop it.
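Zvi's "100 lines of code" claim is easy to picture. Below is a minimal sketch, in Python, of the kind of loop that AutoGPT-style scaffolding wraps around a model. The llm() stub, the Agent class, and the DONE convention are my own illustrative assumptions, not code from any of the projects he names; the point is only that a goal-pursuing wrapper over a chat model really is this small.

```python
# Minimal sketch of an AutoGPT-style agent loop (illustrative only).
# `llm` stands in for a call to a model such as GPT-4; it is stubbed
# here so the file runs without API access. All names and the prompt
# format are assumptions, not any real project's code.

from dataclasses import dataclass, field


def llm(prompt: str) -> str:
    """Stand-in for a model call; real scaffolding would query GPT-4 here."""
    return "DONE"  # stubbed: pretend the model declares the goal achieved


@dataclass
class Agent:
    goal: str
    memory: list[str] = field(default_factory=list)

    def step(self) -> str:
        # Ask the model what to do next, given the goal and history --
        # the "what do we need to do to accomplish this?" step.
        prompt = (
            f"Goal: {self.goal}\n"
            f"History: {self.memory}\n"
            "Reply with the next sub-goal or action, or DONE."
        )
        return llm(prompt)

    def run(self, max_steps: int = 10) -> None:
        for _ in range(max_steps):
            action = self.step()
            if action == "DONE":
                return
            # Real scaffolding would dispatch the action here: web search
            # for missing information, tool calls, etc. We just record it.
            self.memory.append(action)


Agent(goal="plan a research task").run()
```

In real scaffolding, the llm() call hits a live model and the dispatch step routes to tools such as web search or file access, which is exactly where the "I need to go get this information / this tool / more money" chain Zvi describes enters the loop.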
Host Russ Roberts then went on to talk about aspiration, which to me is a subset of having goals - it's the felt experience of having goals. Not surprisingly, he then connected goals to sentience and consciousness.
And, I think part of the reason that the skeptics--the optimists--are more optimistic, and part of the reason I think we are in some sense just telling different narratives, and some are more convincing than others, and it's mainly stories, is that we don't have any vivid examples today of my vacuum cleaner wanting to be a driverless car--an example I've used before. It doesn't aspire. Now, we might see some aspiration, or at least perceived aspiration, in ChatGPT at some point, but I think part of the problem getting people convinced about its dangers is that that leap--the sentience leap, the consciousness leap, which is where goals come in--doesn't seem credible. At least today. Maybe it will be, and I think that's where you and others who are worried about AI need to help me and others who are less worried to see it.