Showing posts with label AI. Show all posts

Sunday, August 16, 2020

a company is essentially a cybernetic collective of people and machines. ... AI is ... our id writ large

 --- Elon Musk, on the Joe Rogan Experience #1169

Time code 00:14:52

JR: How far away are we from something that's really truly sentient?

EM: Well, I mean, you could argue that any group of people, like a company is essentially a cybernetic collective of people and machines. That's what a company is. And then, there are different levels of complexity in the way these companies are formed. And then, there's a sort of like a collective AI in the Google, sort of, Search, Google Search, you know, where we're all sort of plugged in as like nodes on the network, like leaves on a big tree. And we're all feeding this network with our questions and answers. We're all collectively programming the AI. And Google, plus all the humans that connect to it, are one giant cybernetic collective. This is also true of Facebook, and Twitter, and Instagram, and all the social networks. They're giant cybernetic collectives.

Time code 00:17:18

EM: But the AI is informed strangely by the human limbic system. It is, in large part, our id writ large.

JR: How so?

EM: We mentioned all those things, the sort of primal drives. There's all of the things that we like, and hate, and fear. They're all there on the internet. They're a projection of our limbic system. That's true.

JR: No, it makes sense. And the thinking of it as a -- I mean, thinking of corporations, and just thinking of just human beings communicating online through these social media networks in some sort of an organism that's a -- It's a cyborg. It's a combination. It's a combination of electronics and biology.

EM: Yeah. This is -- In some measure, like, it's to the success of these online systems. It's sort of a function of how much limbic resonance they're able to achieve with people. The more limbic resonance, the more engagement.

JR: Whereas, like one of the reasons why probably Instagram is more enticing than Twitter.

EM: Limbic resonance.


Sunday, April 22, 2018

a dumb algorithm with lots and lots of data beats a clever one with modest amounts of it

--- Pedro Domingos, professor of computer science at the University of Washington, Seattle, in "A Few Useful Things to Know About Machine Learning" (2012), Communications of the ACM.

Quote in context:
Suppose you have constructed the best set of features you can, but the classifiers you receive are still not accurate enough. What can you do now? There are two main choices: design a better learning algorithm, or gather more data (more examples, and possibly more raw features, subject to the curse of dimensionality). Machine learning researchers are mainly concerned with the former, but pragmatically the quickest path to success is often to just get more data. As a rule of thumb, a dumb algorithm with lots and lots of data beats a clever one with modest amounts of it. (After all, machine learning is all about letting data do the heavy lifting.)
The immediately following paragraph also has some good stuff:
This does bring up another problem, however: scalability. In most of computer science, the two main limited resources are time and memory. In machine learning, there is a third one: training data. Which one is the bottleneck has changed from decade to decade. In the 1980s it tended to be data. Today it is often time. Enormous mountains of data are available, but there is not enough time to process it, so it goes unused. This leads to a paradox: even though in principle more data means that more complex classifiers can be learned, in practice simpler classifiers wind up being used, because complex ones take too long to learn. Part of the answer is to come up with fast ways to learn complex classifiers, and indeed there has been remarkable progress in this direction (for example, Hulten and Domingos).
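Domingos's rule of thumb is easy to see in a toy experiment (my own illustration, not from the paper): hold the learner fixed and dumb — a plain 1-nearest-neighbour classifier that just memorizes its training set — and watch test accuracy climb as the training set grows. The synthetic XOR-style problem and the 5% label noise below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # XOR-like problem: the label is whether x1 * x2 is positive,
    # with 5% of labels flipped as noise
    X = rng.uniform(-1, 1, size=(n, 2))
    y = (X[:, 0] * X[:, 1] > 0).astype(int)
    flip = rng.random(n) < 0.05
    return X, np.where(flip, 1 - y, y)

def one_nn_predict(Xtr, ytr, Xte):
    # "Dumb" 1-NN: predict the label of the closest training point
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=-1)
    return ytr[d.argmin(axis=1)]

Xte, yte = make_data(2000)          # fixed test set
results = {}
for n in (20, 200, 2000):
    Xtr, ytr = make_data(n)
    results[n] = (one_nn_predict(Xtr, ytr, Xte) == yte).mean()
    print(f"n={n:5d}  test accuracy = {results[n]:.2f}")
```

The algorithm never gets cleverer; only the data grows, and accuracy improves with it — the data doing the heavy lifting, as the quote puts it. (The quadratic-in-n distance matrix in `one_nn_predict` is also a miniature of the paragraph's scalability point: with enormous data, even this dumb learner becomes compute-bound.)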