Re: NameSpace / Arrows


Your example looks a lot like Unix pipes. But how about handling real-time
data streaming? Here's a trivial example - I want to convert a file to upper
case:

        file.txt > uppercase.app > upper.txt

If the data were not real-time, you could send over the whole file. But if it
were streamed, you would have to send the data over in chunks - presumably in
byte-size pieces.
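
To make the chunking concrete, here's a rough sketch of the same pipeline
in Python (just to illustrate the idea - the file names and chunk size are
placeholders):

        def stream_file(path, chunk_size=1):
            # Yield the file in "byte-size pieces" instead of all at once.
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        break
                    yield chunk

        def uppercase(chunks):
            # The uppercase.app stage: each chunk is transformed on its
            # own, so the whole file never has to be in memory.
            for chunk in chunks:
                yield chunk.upper()

        with open("upper.txt", "wb") as out:
            for chunk in uppercase(stream_file("file.txt")):
                out.write(chunk)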

Now, I want to change all the keys to uppercase, so I set up the pipeline:

        keyboard > uppercase.app > stdio

So far, so good - you can stream the data one byte at a time. But what about
sorting?

        file.txt > sort.app > sorted.txt

This only makes sense on non-streamed data. After all, you can't sort the
data until it's all accumulated. Here's a similar pair of pipelines, using
your soundcard example:

        file.wav > reverb.app > soundcard

could work with streamed data. On the other hand:

        file.wav > reverse.app > soundcard

would not.
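
The difference is easy to see in code: a streaming stage can emit output as
each chunk arrives, but a sorting stage can't emit anything until its whole
input has been drained. A sketch, again in Python with stand-in transforms:

        def streaming_stage(chunks):
            # Like reverb.app: output depends only on the current chunk,
            # so it can keep up with a real-time stream.
            for chunk in chunks:
                yield chunk.upper()          # stand-in for the real effect

        def sorting_stage(chunks):
            # Like sort.app: not one byte can come out until the entire
            # input has accumulated.
            lines = b"".join(chunks).splitlines(keepends=True)
            yield b"".join(sorted(lines))

        chunks = [b"delta\n", b"alpha\n", b"charlie\n"]
        print(list(streaming_stage(iter(chunks))))  # one output per chunk
        print(list(sorting_stage(iter(chunks))))    # one output, at the end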

Another complexity of streamed data is that different applications chunk
different kinds of data into different sizes. For example, I have an
application which 'composes' MIDI files. It creates a text file, which is
converted into a MIDI file:

        compose.app > text2midi.app > file.mid

It would be nice to set up a rendering pipeline like:

        compose.app > text2midi.app > render.app > reverb.app > soundcard

Now, obviously the file cannot be rendered in real time, since the first two
processes are not real-time. For the sake of discussion, I'll imagine that
'>' indicates a non-streamed output (passed a whole file at a time), and
'>>' indicates a streamed output (passed a chunk at a time). The pipeline
could then be built:

        compose.app > text2midi.app >> render.app >> reverb.app >> soundcard

So the compose application generates an entire file, and passes that file to
the text2midi application. That program processes the file, and sends the
output one chunk at a time (presumably in byte-size chunks) to the render
application. The render application takes chunks of text in, and in real
time converts them into chunks of waves (say, in 500ms chunks). These
chunks of waves are passed on to the reverb application, which adds reverb
to the chunks in real time and passes the processed chunks to the soundcard
application, which builds a buffer and outputs the stream of sound.
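
Purely as a guess at how the two arrow types might be modelled, a '>' stage
could be an ordinary function over a whole file, and a '>>' stage a
generator over chunks, with an adapter cutting the finished file into
chunks at the point where the pipeline switches over. In Python, with
placeholder stages:

        def text2midi(whole_file):
            # A '>' stage: takes the entire file, returns the entire result.
            return whole_file                    # placeholder translation

        def chunked(data, size):
            # Bridge from '>' to '>>': cut the finished file into chunks.
            for i in range(0, len(data), size):
                yield data[i:i + size]

        def render(chunks):
            # A '>>' stage: chunks of text in, chunks of wave data out.
            for chunk in chunks:
                yield chunk                      # placeholder synthesis

        def reverb(chunks):
            # Another '>>' stage, processing chunks in real time.
            for chunk in chunks:
                yield chunk                      # placeholder effect

        midi = text2midi(b"...the whole composed text file...")
        for chunk in reverb(render(chunked(midi, 512))):
            pass                                 # hand each chunk to the soundcard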

So, assuming that you deal with real-time streaming by 'chunking' data into
byte-size pieces, this leads to the question of how data is moved through the
pipeline. Is it pushed through by the leftmost application, or pulled
through by the rightmost application based on demand?
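
For what it's worth, Python's generators model the 'pull' answer directly:
nothing upstream runs until the rightmost consumer asks for a chunk, so
demand propagates backward through the pipeline:

        def source():
            for chunk in (b"a", b"b", b"c"):
                print("source produced", chunk)
                yield chunk

        def upper_stage(chunks):
            for chunk in chunks:
                yield chunk.upper()

        pipeline = upper_stage(source())
        # Nothing has run yet. This demand is what drives source():
        print(next(pipeline))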

Since there are different kinds of data in the pipeline, the chunk sizes
don't match up. It can sometimes take a lot of ASCII data to describe a
wave of sound, such as a stream of numbers that describe a sine wave:

        sin_wave.app >> text2wave.app >> soundcard

or a few characters, such as a stream of MIDI commands:

        file.mid >> render.app >> soundcard
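
One way to paper over the mismatch would be an adapter node between stages
that re-buffers the stream into whatever chunk size the downstream stage
wants. A sketch in Python:

        def rechunk(chunks, out_size):
            # Buffer incoming pieces of any size; emit fixed-size chunks.
            buf = b""
            for chunk in chunks:
                buf += chunk
                while len(buf) >= out_size:
                    yield buf[:out_size]
                    buf = buf[out_size:]
            if buf:
                yield buf               # flush the tail at end of stream

        print(list(rechunk(iter([b"ab", b"cdef", b"g"]), 3)))
        # -> [b'abc', b'def', b'g']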

Another issue with real-time streaming is that you have to timeslice between
the parts of the pipeline. If you decide to slice each time a process has
generated a 'chunk' of data, then the soundcard will be starved for data
before long. It seems like each node would need to be able to signal when it
needs data, would have to track buffers, and would have to deal with other
Real Complicated Stuff.
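
That signalling amounts to backpressure. In a host language with threads,
bounded queues give it to you almost for free - each node blocks when its
downstream buffer fills, instead of flooding or starving it. A rough Python
sketch (the hard part, of course, is doing this without threads):

        import queue
        import threading

        def node(transform, inbox, outbox):
            # One pipeline stage in its own thread. The bounded queue is
            # the buffer: put() blocks while downstream is full, so a fast
            # stage stalls instead of starving the soundcard.
            while True:
                chunk = inbox.get()
                if chunk is None:            # end-of-stream marker
                    outbox.put(None)
                    return
                outbox.put(transform(chunk))

        q_in = queue.Queue(maxsize=4)
        q_out = queue.Queue(maxsize=4)
        threading.Thread(target=node, args=(bytes.upper, q_in, q_out),
                         daemon=True).start()

        for chunk in (b"do", b"re", b"mi", None):
            q_in.put(chunk)                  # blocks if the buffer is full

        while (chunk := q_out.get()) is not None:
            print(chunk)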

Add to that the fact that the user may want to redirect several devices, and
you have a multitasking, timesliced environment. Do you really want to write
a multitasking Euphoria?

Without streaming, you are left with something a lot easier to implement
- but much less interesting.

-- David Cuny
