(Fwd) Re: a.i
- Posted by Kat <kat at kogeijin.com> Nov 22, 2002
This didn't make it out........

------- Forwarded message follows -------

On 23 Nov 2002, at 9:12, Derek Parnell wrote:

> Intelligence might have something to do with predicting the future
> based on what we know now.

If that's true, every attempt I have found that deals with walking robots is going to fail, except one, which isn't getting its share of funding. If getting a spinal cord injury has taught me anything about walking, it's that it is an ongoing controlled fall, not a series of preprogrammed static events. It's all error correction, and to this day, every day, I can see it's not static preprogrammed error correction either.

The same holds for text interaction with humans. For instance, what if the AI saw these lines:

<suoow> i'm standing on my head.
<`3snowvf> so am i

Would the AI know who these nicks are, normally, given only the intelligence in those lines? This is an example of why you have to build in methods of determining what data means, and of instantiating new methods to manipulate that data. Some things no one can predict, and those events raise errors at the worst times. Like the failed attempt with Cyc, you just cannot program everything into the AI. Cyc was formally a program for 10 years, but Lenat had been pushing for it for as long as 20 years before that. The poor guy is dense, but he is good at making money.

And to a large degree, like Irv said, what an AI does is based on its point of view. If it doesn't have all the data, its decisions will be lacking. Vision is sort of important, for instance, but ask any blind person: that can be overcome. It's just that the missing info would need to be found other ways, ways *you* may not understand or have access to. That's where learning becomes important. If the AI can't find new ways to munge the data, store it, and interlink it, it's going to be limited to whatever you or Lenat could figure out beforehand.

Kat

------- End of forwarded message -------
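Kat's "ongoing controlled fall" description of walking is, in control terms, continuous feedback rather than preprogrammed playback. A minimal sketch of that idea, using an invented one-dimensional lean angle and made-up gains (none of this comes from any real robot controller):

```python
# Walking-as-controlled-fall, reduced to one number: a lean angle that
# grows on its own (an inverted pendulum falls) unless a feedback term
# continuously pushes back. All constants are illustrative.

def simulate(steps, gain, dt=0.01):
    """Integrate the lean for `steps` ticks; `gain` is the strength of
    the per-tick error correction (gain=0 means no correction)."""
    angle, velocity = 0.05, 0.0           # start with a small lean
    for _ in range(steps):
        velocity += angle * dt            # "gravity": the lean amplifies itself
        velocity -= gain * angle * dt     # feedback on the lean (position error)
        velocity -= gain * velocity * dt  # feedback on the motion (damping)
        angle += velocity * dt
    return abs(angle)

uncorrected = simulate(2000, gain=0.0)  # no error correction: lean diverges
corrected = simulate(2000, gain=5.0)    # continuous correction: lean dies out
```

The point of the sketch is the contrast: there is no stored sequence of "static events" anywhere, only a loop that measures the current error and counters it every tick, which is what keeps the fall controlled.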