Re: MIT's Technology Review Article
- Posted by "Kat" <gertie at visionsix.com> Oct 03, 2004
On 27 Sep 2004, at 10:35, cklester wrote:
>
> posted by: cklester <cklester at yahoo.com>
>
> Kat wrote: <snip>

<<== err, is that rss, or xml, or sgml, or what? </joke>

> > Some data is on webpages in such a form that not even Google indexes it,
> > such as valid data buried in javascript code or linked framesets.
> > Creating a whole new XML file, and putting an XML tag on each word in
> > every existing file, would bloat every file and slow the internet to a
> > crawl. I've seen 5K XML semantic files that had nothing to say.
> > Literally. And even files which do appear online often disappear after
> > a month, a year, sometimes a few hours. If an automagic tagger were
> > built, so that no human has to tag a file with a lifetime of mere hours,
> > then why not dispense with tagging altogether, move that tagger to the
> > recipient, and not spew XML/semantics all over the internet?
>
> Not quite the efficiency needed to make the "internet a database." :/

Nowhere near. See here (Sept 30, 2004):

http://www.internetnews.com/dev-news/article.php/3415331

It may take 2 minutes to download that page on dialup; how appropriate is that?

Kat
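[The per-word tagging bloat described above is easy to sanity-check. Here is a minimal sketch in Python, assuming a made-up `<w concept="...">` tag purely for illustration; it is not any real semantic-web vocabulary, just enough to show the size overhead.]

```python
# Sketch of the "XML tag on each word" bloat claim.
# <w concept="..."> is a hypothetical tag, purely for illustration.
text = "Some data is on webpages in such a form that not even Google indexes it"
tagged = " ".join(f'<w concept="thing">{w}</w>' for w in text.split())
ratio = len(tagged) / len(text)
print(f"plain: {len(text)} bytes, tagged: {len(tagged)} bytes ({ratio:.1f}x)")
# plain: 71 bytes, tagged: 416 bytes (5.9x)
```

Even with a single short tag per word, the sentence grows roughly sixfold; richer per-word annotations would inflate it further.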