1. request for feature


Can tasks.e use some fast database methods to page unscheduled tasks out to the hard drive? The goal is to free up memory and task pointers.

I wish to cycle through lists of functions (procedures), similar, I guess, to how a cron job scheduler works. Each task does things in the global application, and when it has done enough, enough times, it can be paged out until needed again. The problem is that the task system can run out of memory and pointers. If I leave this running, intending it to keep going for a year, it could crash at 11 months simply by filling all memory with task overhead, and I will have wasted all that time.

This ability might also please those who wanted dynamic includes: just load the code up as a procedure with its local variables intact, and schedule it.

Another good part of this: although the reloaded task would not be available as ordinary Eu code to be called as a procedure, it would be great to be able to call it as two different tasks. So if I have an app, and in my app I have procedure x(), and I call it as a task, then save it through task_unload(), I can later task_load() it twice if I like. This also brings, of course, the benefit of recycling task IDs: a task gets a new ID each time it is loaded.
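
Roughly, the usage I have in mind would look like this (task_unload() and task_load() are the proposed calls and do not exist today; the file name is only an example):

include std/task.e

procedure x()
    -- do some work in the global application
    task_yield()
end procedure

atom t = task_create(routine_id("x"), {})
task_schedule(t, {1.0, 2.0})   -- real-time: run every 1 to 2 seconds
task_yield()

-- proposed: page the idle task out to disk, freeing its memory and ID
task_unload(t, "x_task.edb")

-- proposed: later, perhaps in another run, reload it as two new tasks
atom t1 = task_load("x_task.edb")
atom t2 = task_load("x_task.edb")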

useless


2. Re: request for feature

useless_ said...

Can tasks.e use some fast database methods to page unscheduled tasks out to the hard drive? The goal is to free up memory and task pointers.

This strikes me as fairly difficult for the interpreter. We'd need to serialize all of the private data for the entire stack of the task as well as be able to jump back to the appropriate point where execution left off.

Where I think this would be really tricky is with the translator. I'm not even sure it's possible without doing lots of nasty stack manipulation (which was what broke the old task implementation when we started looking like a stack smashing piece of malware to the OS).

useless_ said...

I wish to cycle through lists of functions (procedures), similar, I guess, to how a cron job scheduler works. Each task does things in the global application, and when it has done enough, enough times, it can be paged out until needed again. The problem is that the task system can run out of memory and pointers. If I leave this running, intending it to keep going for a year, it could crash at 11 months simply by filling all memory with task overhead, and I will have wasted all that time.

Alternatively, you might add a layer between std/task.e and the parts of your code that use them. This way, your task overseeing code refers to the tasks by whatever sort of ID system you like. Your tasks would have to actually end instead of just suspend, at which point all of the task memory would get recycled.

I don't understand your program well enough to say much more about the logic involved in deciding when to suspend / yield vs complete the task. But your task abstraction layer could keep track of the status of a given task and re-start them when required.
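
A very rough sketch of such a layer, just to make the idea concrete (job_add(), job_run(), job_save() and job_state() are invented names, not std/task.e routines):

include std/task.e
include std/map.e

-- my own job IDs -> {routine_id, args, saved state}
map jobs = map:new()
integer next_job = 0

function job_add(integer rid, sequence args)
    next_job += 1
    map:put(jobs, next_job, {rid, args, 0})
    return next_job
end function

-- start a fresh Euphoria task for the job; the routine must run to
-- "end procedure" on its own so its task memory can be reclaimed
procedure job_run(integer job)
    sequence j = map:get(jobs, job)
    atom t = task_create(j[1], {job} & j[2])
    task_schedule(t, 1)   -- time-shared: one run per scheduling slot
end procedure

-- called by the job itself just before it finishes
procedure job_save(integer job, object state)
    sequence j = map:get(jobs, job)
    j[3] = state
    map:put(jobs, job, j)
end procedure

function job_state(integer job)
    sequence j = map:get(jobs, job)
    return j[3]
end function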

useless_ said...

This ability might also please those who wanted dynamic includes: just load the code up as a procedure with its local variables intact, and schedule it.

This hits the same issues as above.

useless_ said...

Another good part of this: although the reloaded task would not be available as ordinary Eu code to be called as a procedure, it would be great to be able to call it as two different tasks. So if I have an app, and in my app I have procedure x(), and I call it as a task, then save it through task_unload(), I can later task_load() it twice if I like. This also brings, of course, the benefit of recycling task IDs: a task gets a new ID each time it is loaded.

I think you could probably accomplish a similar thing through the proposed abstraction layer above, assuming you allowed some sort of saved state that you could retrieve. I assume that one of the task routine's parameters would be whatever ID you are using in the abstraction layer. So you save off the info when you do the unload (in this case, really just returning from it). You could clone that info if you plan to have multiple tasks going later (assuming that the data could change while one active task is running) or whatever.
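
With a layer like the one sketched above, the task routine itself might wrap up like this (job_save(), job_state(), job_add() and job_run() are the same invented layer calls, and old_job stands for whichever job was saved earlier):

-- the task routine gets the layer's job ID as its first parameter
procedure worker(integer job, object data)
    -- ... the real work, with task_yield() calls as needed ...
    job_save(job, data)   -- the "unload": stash whatever must survive
end procedure             -- returning here frees the task's memory

-- later: clone the saved state into two independent jobs
object saved = job_state(old_job)
integer a = job_add(routine_id("worker"), {saved})
integer b = job_add(routine_id("worker"), {saved})
job_run(a)
job_run(b)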

Matt


3. Re: request for feature

Suggestion: Perhaps it would be better to stop thinking in terms of threads/tasks and start thinking in terms of processes. Have a separate program for each procedure you want to execute. When the procedure completes, the program ends and the memory is automatically freed. A separate program/process can also take advantage of multi-core CPUs better than threads can (most OS implementations tend to keep all threads for a given process on the same core). Communication between processes can be handled via sockets or files (producer / consumer model).
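
A minimal sketch of that model using plain files for the hand-off (worker.ex and the file names are invented for the example):

include std/io.e   -- read_file(), write_file()

-- producer: hand the job's input to a separate worker program
integer ok = write_file("job1.in", "some input data")

-- run the worker as its own process; the OS frees all of its memory
-- when it exits (worker.ex would be a separate Euphoria program)
integer rc = system_exec("eui worker.ex job1.in job1.out", 2)

-- consumer: pick up the result once the worker has finished
object result = read_file("job1.out")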


4. Re: request for feature

m_sabal said...

Suggestion: Perhaps it would be better to stop thinking in terms of threads/tasks and start thinking in terms of processes. Have a separate program for each procedure you want to execute. When the procedure completes, the program ends and the memory is automatically freed. A separate program/process can also take advantage of multi-core CPUs better than threads can (most OS implementations tend to keep all threads for a given process on the same core). Communication between processes can be handled via sockets or files (producer / consumer model).

I wrote a library sometime back based on this idea:

http://openeuphoria.org/search/results.wc?s=fakethreads&news=1&ticket=1&forum=1&wiki=1&manual=1


5. Re: request for feature

m_sabal said...

Suggestion: Perhaps it would be better to stop thinking in terms of threads/tasks and start thinking in terms of processes. Have a separate program for each procedure you want to execute. When the procedure completes, the program ends and the memory is automatically freed. A separate program/process can also take advantage of multi-core CPUs better than threads can (most OS implementations tend to keep all threads for a given process on the same core). Communication between processes can be handled via sockets or files (producer / consumer model).


That's always been a fine idea, but the startup time hurts too much; we are talking 4 seconds on my slow 2.4 GHz machines, and CoJaBo reported startup times of 2 seconds when running news.ex. I tried to get around this by prestarting on a different computer, but then I had to know in advance what I was going to do, and the OEM quit making parts for my time machine.

The other alternative, which we cannot do, is string execution; otherwise I'd have one procedure that takes a filename, loads the file as a string, and runs it.
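
The closest thing available today is to push the string back out to a file and hand it to the interpreter as a separate process, something like this sketch, though it still pays the startup cost mentioned above (the temp file name and the eui command line are assumptions about the setup):

include std/io.e   -- write_file()

procedure run_string(sequence code)
    integer ok = write_file("temp_task.ex", code)
    integer rc = system_exec("eui temp_task.ex", 2)
end procedure

run_string("puts(1, \"hello from loaded code\\n\")")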

I still keep coming back to task ID exhaustion, and memory clogging up.

useless


6. Re: request for feature

useless_ said...

I still keep coming back to task ID exhaustion, and memory clogging up.

Like anything else, if you keep acquiring resources and never freeing them, you'll run out of available resources. Any scenario where you continually create but never kill a task is going to have this problem.

Have you tried translating / binding the code to improve start up time?

Matt


7. Re: request for feature

mattlewis said...
useless_ said...

Can tasks.e use some fast database methods to page unscheduled tasks out to the hard drive? The goal is to free up memory and task pointers.

This strikes me as fairly difficult for the interpreter. We'd need to serialize all of the private data for the entire stack of the task as well as be able to jump back to the appropriate point where execution left off.


Well, not quite exactly. If it's a requirement that the task save its own variables when done, that's trivial for the programmer to do. What's more difficult is the app saving a portion of its own source code, and then a week later that app, or even another app, reloading it, or reloading it twice under different task IDs.
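
For the trivial part, saving the task's own variables at the end can already be done with the standard library, e.g. with EDS (the database and table names here are only examples):

include std/eds.e    -- Euphoria Database System (disk based)
include std/task.e

procedure x(sequence state)
    -- ... the task's real work, with task_yield() calls ...

    -- just before "end procedure": save whatever must survive.
    -- EDS stores any Euphoria object, so no hand serialization is needed.
    if db_open("tasks.edb", DB_LOCK_EXCLUSIVE) != DB_OK then
        integer c = db_create("tasks.edb", DB_LOCK_EXCLUSIVE)
        c = db_create_table("saved")
    end if
    integer s = db_select_table("saved")
    integer r = db_insert("x_state", state)   -- real code would handle an existing key
    db_close()
end procedure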

mattlewis said...

Where I think this would be really tricky is with the translator. I'm not even sure it's possible without doing lots of nasty stack manipulation (which was what broke the old task implementation when we started looking like a stack smashing piece of malware to the OS).


But if the task has returned, it's done doing whatever it was written to do and has executed "end procedure", so isn't it off the stack?

mattlewis said...
useless_ said...

I wish to cycle through lists of functions (procedures), similar, I guess, to how a cron job scheduler works. Each task does things in the global application, and when it has done enough, enough times, it can be paged out until needed again. The problem is that the task system can run out of memory and pointers. If I leave this running, intending it to keep going for a year, it could crash at 11 months simply by filling all memory with task overhead, and I will have wasted all that time.

Alternatively, you might add a layer between std/task.e and the parts of your code that use them. This way, your task overseeing code refers to the tasks by whatever sort of ID system you like. Your tasks would have to actually end instead of just suspend, at which point all of the task memory would get recycled.

Ah, "actually end" is the wording I didn't use; I said "when it's done enough" to mean "I am through with it, it's over, done". So having the task save its data itself, instead of the task manager doing it, and then execute "end procedure", is fine. I didn't mean to have the task manager save a running block of code. But I did think the task manager would have access to the procedure's variable list and, with a swipe of its magic wand, could just run through and save them all.

And then there's the part about reloading the code for execution later, possibly by an app that did not have that source code as its own code but wanted to execute it one or more times.

useless


8. Re: request for feature

mattlewis said...
useless_ said...

I still keep coming back to task ID exhaustion, and memory clogging up.

Like anything else, if you keep acquiring resources and never freeing them, you'll run out of available resources. Any scenario where you continually create but never kill a task is going to have this problem.


So you are saying that if the task executes a "return" or "end procedure", there is nothing residual left to build up in memory? But the task ID is still not recycled?

mattlewis said...

Have you tried translating / binding the code to improve start up time?

Matt


I have a couple of old programs that run as exes; I am done with them and not changing the source any more. I don't see the point of compiling / binding during active code development.

useless


9. Re: request for feature

jimcbrown said...
m_sabal said...

Suggestion: Perhaps it would be better to stop thinking in terms of threads/tasks and start thinking in terms of processes. Have a separate program for each procedure you want to execute. When the procedure completes, the program ends and the memory is automatically freed. A separate program/process can also take advantage of multi-core CPUs better than threads can (most OS implementations tend to keep all threads for a given process on the same core). Communication between processes can be handled via sockets or files (producer / consumer model).

I wrote a library sometime back based on this idea:

http://openeuphoria.org/search/results.wc?s=fakethreads&news=1&ticket=1&forum=1&wiki=1&manual=1


Funny, 10 years later I still want the same things I wanted 10 years before then. Your code runs only on *nix, mine on Windows, and both rely on startup-execute cycles, or on staying resident in memory full time, ready to run instantly. I am thinking of breaking the mold: instead of thinking outside the box, just remove the box entirely, in which case the million files in one directory may become a million Eu procedures. Holding them in memory as a piece of code in one app then becomes problematic due to size, still won't allow new pieces of code, and (at the time, before tasks.e) would not allow multiple copies of each procedure to run with different data.

This time, I was hoping the task manager was squirreling away a copy of the task's code each time task_create() was called, to make it easier to keep each task's local vars separate. If that were true, then just before the task hit "end procedure", the task's memory footprint for that instance could be saved out (task_unload()) to a mass storage device. Otherwise the unprocessed source code for that task could be saved, since the task manager might know where the code came from; surely it would know better than the app itself.

useless


10. Re: request for feature

useless_ said...


Can tasks.e use some fast database methods to page unscheduled tasks out to the hard drive? The goal is to free up memory and task pointers.

I wish to cycle through lists of functions (procedures), similar, I guess, to how a cron job scheduler works. Each task does things in the global application, and when it has done enough, enough times, it can be paged out until needed again. The problem is that the task system can run out of memory and pointers. If I leave this running, intending it to keep going for a year, it could crash at 11 months simply by filling all memory with task overhead, and I will have wasted all that time.

This ability might also please those who wanted dynamic includes: just load the code up as a procedure with its local variables intact, and schedule it.

Another good part of this: although the reloaded task would not be available as ordinary Eu code to be called as a procedure, it would be great to be able to call it as two different tasks. So if I have an app, and in my app I have procedure x(), and I call it as a task, then save it through task_unload(), I can later task_load() it twice if I like. This also brings, of course, the benefit of recycling task IDs: a task gets a new ID each time it is loaded.

useless

Why do you care about it? The OS has a virtual memory paging system and pages in and out to disk as needed. Applications don't have to care about it. On 32-bit Windows any application has almost 2 GB of virtual memory space for its own use. If more is needed, there are 64-bit OSes nowadays.

Jacques


11. Re: request for feature

coconut said...
useless_ said...


Can tasks.e use some fast database methods to page unscheduled tasks out to the hard drive? The goal is to free up memory and task pointers.

I wish to cycle through lists of functions (procedures), similar, I guess, to how a cron job scheduler works. Each task does things in the global application, and when it has done enough, enough times, it can be paged out until needed again. The problem is that the task system can run out of memory and pointers. If I leave this running, intending it to keep going for a year, it could crash at 11 months simply by filling all memory with task overhead, and I will have wasted all that time.

This ability might also please those who wanted dynamic includes: just load the code up as a procedure with its local variables intact, and schedule it.

Another good part of this: although the reloaded task would not be available as ordinary Eu code to be called as a procedure, it would be great to be able to call it as two different tasks. So if I have an app, and in my app I have procedure x(), and I call it as a task, then save it through task_unload(), I can later task_load() it twice if I like. This also brings, of course, the benefit of recycling task IDs: a task gets a new ID each time it is loaded.

useless

Why do you care about it? The OS has a virtual memory paging system and pages in and out to disk as needed. Applications don't have to care about it. On 32-bit Windows any application has almost 2 GB of virtual memory space for its own use. If more is needed, there are 64-bit OSes nowadays.

Jacques


Because I have 15 gigabytes of data in one directory and a 32-bit OS. And I don't want to pay for a 64-bit OS, or buy a 64-bit computer, or load all 15 gigabytes into memory.

useless


12. Re: request for feature

eukat said...
coconut said...
eukat said...


Can tasks.e use some fast database methods to page unscheduled tasks out to the hard drive? The goal is to free up memory and task pointers.

I wish to cycle through lists of functions (procedures), similar, I guess, to how a cron job scheduler works. Each task does things in the global application, and when it has done enough, enough times, it can be paged out until needed again. The problem is that the task system can run out of memory and pointers. If I leave this running, intending it to keep going for a year, it could crash at 11 months simply by filling all memory with task overhead, and I will have wasted all that time.

This ability might also please those who wanted dynamic includes: just load the code up as a procedure with its local variables intact, and schedule it.

Another good part of this: although the reloaded task would not be available as ordinary Eu code to be called as a procedure, it would be great to be able to call it as two different tasks. So if I have an app, and in my app I have procedure x(), and I call it as a task, then save it through task_unload(), I can later task_load() it twice if I like. This also brings, of course, the benefit of recycling task IDs: a task gets a new ID each time it is loaded.

eukat

Why do you care about it? The OS has a virtual memory paging system and pages in and out to disk as needed. Applications don't have to care about it. On 32-bit Windows any application has almost 2 GB of virtual memory space for its own use. If more is needed, there are 64-bit OSes nowadays.

Jacques


Because I have 15 gigabytes of data in one directory and a 32-bit OS. And I don't want to pay for a 64-bit OS, or buy a 64-bit computer, or load all 15 gigabytes into memory.

eukat

Just so we're clear, this is 15 gigabytes of Euphoria source code, right?

23:09:50 < CoJaBo> katsmeow-afk: I'm not sure I understand how one can get 15GB 
                   of Eu code? 
23:10:27 < CoJaBo> Data, sure.. but code? 
... 
23:19:46 < CoJaBo> I don't understand the problem; I can't even begin to 
                   imagine a solution 
23:19:56 < CoJaBo> How does one get 15GB of source? 
... 
23:23:51 < CoJaBo> I'm not misreading 15 gigs of source code? 

This can be confusing because source is both code and data simultaneously. Even more confusingly, one can take obviously non-code, non-source data and embed it into the source code file of a program or library (e.g. by writing the output of "constant somedata = " & sprint() to a .e file).
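
For example, turning ordinary data into includable "source" is only a couple of lines (sprint() is in std/text.e, write_file() in std/io.e; the data here is made up):

include std/text.e   -- sprint()
include std/io.e     -- write_file()

sequence somedata = {{"alpha", 1}, {"beta", 2}, {"gamma", 3}}

-- write the data out as source; the resulting .e file can simply be included
integer ok = write_file("somedata.e",
    "constant somedata = " & sprint(somedata) & "\n")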

