1. spinning "E" demo questions, for ray casting & surface rendering
- Posted by DanM May 18, 2009
- 1048 views
In the spinning "E" Euphoria demo, is there a way to know the virtual-world coordinates of the virtual viewing camera, and likewise the position and dimensions of the virtual screen onto which the "E" is pseudo-projected?
What I mean is: the "E" starts with coordinates in a virtual world, which are then translated into computer-screen coordinates for display. So the "E" sits some distance from a virtual camera, which is itself positioned at some x-y location. What are the x, y, z coordinates of that "camera"?
Then imagine the virtual camera is instead a light source that projects the virtual "E" onto a virtual screen which ends up being the computer screen, so that the points where rays from the light source project the "E" onto the virtual screen land exactly where the "E" shows up on the computer screen. Where in the virtual world IS that virtual viewing screen, and what are its x-y boundaries/dimensions (i.e., how big is it)?
I ask because I think that information is useful/necessary for doing some "ray casting" to find which surfaces (of the "E") are in front of other surfaces (of the "E"), so I can make them "solid" and hide the surfaces "behind" them.
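Just to make the question concrete, here's roughly what I'm picturing (the numbers are made up by me, not taken from the demo; they're exactly the values I'm asking about):

```euphoria
sequence camera, screen_center
atom screen_width, screen_height

camera        = {0, 0, -500}   -- where is the virtual camera, really?
screen_center = {0, 0, 0}      -- and where is the virtual screen it projects onto?
screen_width  = 640            -- how wide is that virtual screen, in world units?
screen_height = 480            -- and how tall?
```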
Dan
2. Re: spinning "E" demo questions, for ray casting & surface rendering
- Posted by achury May 18, 2009
- 1077 views
- Last edited May 19, 2009
I'm not an expert, but I studied these problems a little while writing my galaxy emulator program.
My solution is to imagine that the coordinates of the camera are the coordinates of the program's user. When you look at the program, your eye is some distance in front of the screen, not on the screen itself. In my virtual world there is a virtual screen between the observer/camera and the object being observed, and each point is "retro-projected" onto it.
My program only shows stars as points.
A simple approach (surely not the best one) needs a fair amount of geometric abstraction:
- You have a sequence with all the points of the "E".
- Create groups of points that define the surface polygons. Decompose complex shapes into triangles. Now you have a list of planar polygons, each defined by 3 points.
- Check that no two polygons cross each other; they may share a common edge, but they must never cross.
- Determine the equation of the plane that contains each polygon, z = ax + by + c. (Note: if you have decomposed complex shapes, several triangles will share the same plane.)
- Create a function that can check whether a given line in space intersects a polygon (the polygon being defined by the equation of its plane and its bounding points). A line in 3D is easiest to handle in parametric form, P(t) = C + t*(S - C), where C is the camera and S is a point on the virtual screen.
- Make a list of the points on the virtual screen and, for each point, imagine a visual line from the observer/camera coordinates through that point on the virtual screen. Build the parametric form of each visual line and, using the function described above, check which polygons that line intersects. If more than one polygon is intersected, calculate the intersection points and their distances from the observer, to know which one is nearer to the observer.
- Draw the selected polygon, and remove all of its points on the virtual screen from the list of points still to be checked.
With this you get the polygon to be shown at each point of the screen. The colour to paint the pixels depends on illumination... and that is where ray tracing begins, from the light source to each point on your virtual objects.
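To illustrate the intersection step, here is a very rough Euphoria sketch. The names, the parametric ray and the triangle representation are my own invention, not from any existing program:

```euphoria
-- Very rough sketch only (my own names and triangle representation).
-- A polygon is a sequence of three {x,y,z} points; the visual line is the
-- parametric ray P(t) = C + t*(S - C), C = camera, S = point on the virtual screen.

function dot(sequence a, sequence b)
    return a[1]*b[1] + a[2]*b[2] + a[3]*b[3]
end function

function cross(sequence a, sequence b)
    return {a[2]*b[3] - a[3]*b[2],
            a[3]*b[1] - a[1]*b[3],
            a[1]*b[2] - a[2]*b[1]}
end function

-- returns {t, hit_point} if the ray from C through S hits triangle tri in
-- front of the camera (t > 0), otherwise returns {}
function ray_hits_triangle(sequence C, sequence S, sequence tri)
    sequence d, n, hit, a, b
    atom denom, t
    d = S - C                                     -- ray direction
    n = cross(tri[2] - tri[1], tri[3] - tri[1])   -- normal of the triangle's plane
    denom = dot(n, d)
    if denom = 0 then
        return {}                                 -- ray is parallel to the plane
    end if
    t = dot(n, tri[1] - C) / denom
    if t <= 0 then
        return {}                                 -- plane is behind the camera
    end if
    hit = C + d * t                               -- point where the ray meets the plane
    for i = 1 to 3 do                             -- is that point inside the triangle?
        a = tri[i]
        b = tri[remainder(i, 3) + 1]
        if dot(cross(b - a, hit - a), n) < 0 then
            return {}                             -- outside this edge
        end if
    end for
    return {t, hit}
end function
```

The edge test at the end is what stops a hit on the triangle's plane from counting when it falls outside the triangle itself, and the returned t orders the hits along one visual line, so the smallest t is the nearest polygon.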
3. Re: spinning "E" demo questions, for ray casting & surface rendering
- Posted by DanM May 19, 2009
- 1106 views
I'm not an expert, but I studied these problems a little while writing my galaxy emulator program.
My solution is to imagine that the coordinates of the camera are the coordinates of the program's user. When you look at the program, your eye is some distance in front of the screen, not on the screen itself. In my virtual world there is a virtual screen between the observer/camera and the object being observed, and each point is "retro-projected" onto it.
My program only shows stars as points.
A simple approach (surely not the best one) needs a fair amount of geometric abstraction:
- You have a sequence with all the points of the "E".
- Create groups of points that define the surface polygons. Decompose complex shapes into triangles. Now you have a list of planar polygons, each defined by 3 points.
- Check that no two polygons cross each other; they may share a common edge, but they must never cross.
- Determine the equation of the plane that contains each polygon, z = ax + by + c. (Note: if you have decomposed complex shapes, several triangles will share the same plane.)
- Create a function that can check whether a given line in space intersects a polygon (the polygon being defined by the equation of its plane and its bounding points). A line in 3D is easiest to handle in parametric form, P(t) = C + t*(S - C), where C is the camera and S is a point on the virtual screen.
- Make a list of the points on the virtual screen and, for each point, imagine a visual line from the observer/camera coordinates through that point on the virtual screen. Build the parametric form of each visual line and, using the function described above, check which polygons that line intersects. If more than one polygon is intersected, calculate the intersection points and their distances from the observer, to know which one is nearer to the observer.
- Draw the selected polygon, and remove all of its points on the virtual screen from the list of points still to be checked.
With this you get the polygon to be shown at each point of the screen. The colour to paint the pixels depends on illumination... and that is where ray tracing begins, from the light source to each point on your virtual objects.
Thanks achury. I think I understand most of what you said, except for "imagine that the coordinates of the camera are the coordinates of the program's user". The program user has no coordinates in the virtual world, so that idea, if I'm understanding you correctly, doesn't give me the coordinates of the virtual camera, which I think I need as the starting point of the lines drawn to the virtual screen to check for intersections with a virtual object.
Similarly with "there is a virtual screen between the observer/camera and the object being observed, where each point is retro-projected": I've been thinking that the "projection" of a virtual object onto the virtual screen must happen via lines of projection from the virtual camera, through the virtual object, onto the virtual screen, in which case the virtual screen can't be between the camera and the object.
Can you clarify for me?
And as far as I can tell, I still need to know the actual position/coordinates of the virtual camera.
The bulk of your explanation seems correct to me, though, so thanks!
Dan
4. Re: spinning "E" demo questions, for ray casting & surface rendering
- Posted by ChrisB (moderator) May 19, 2009
- 1083 views
Hi
For an explanation of ray tracing, covering all the points mentioned above, see
http://www.povray.org/documentation/view/3.6.1/4/
If you are trying to write a ray tracer, then POV-Ray (open source) would be a good place to get some inspiration from.
If you are trying to write a full Euphoria real-time ray-tracing animation program, then I suspect you'll have your work cut out for you.
Good luck
Chris
5. Re: spinning "E" demo questions, for ray casting & surface rendering
- Posted by achury May 19, 2009
- 1068 views
Look at this image
http://imgbin.org/index.php?page=image&id=521
The upper picture is the user looking at the computer screen.
The lower one is your virtual camera in your virtual world. The image is projected onto a virtual screen, and that screen sits between the camera and the object.
From this point of view, the coordinates of the user are the same as the coordinates of the camera.
My proposal is to take each point on the virtual screen (each point on your real screen or window), project a line from the camera through that point, and continue the same line to check whether it collides with any polygon of the object.
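In Euphoria-like code the idea is roughly this (my own sketch; the screen sizes are made up, and it reuses ray_hits_triangle() from my earlier post):

```euphoria
constant SCREEN_W = 320, SCREEN_H = 240   -- window size in pixels (made up)
constant WORLD_W = 4.0, WORLD_H = 3.0     -- size of the virtual screen in world units
constant SCREEN_Z = 0                     -- the virtual screen lies on the plane z = 0

sequence camera
camera = {0, 0, -5}                       -- camera behind the virtual screen

-- map a pixel of the real window to a point on the virtual screen
function pixel_to_world(integer px, integer py)
    return {(px - SCREEN_W/2) * WORLD_W / SCREEN_W,
            (SCREEN_H/2 - py) * WORLD_H / SCREEN_H,   -- y grows downward on screen
            SCREEN_Z}
end function

-- index of the nearest triangle hit by the visual line through pixel (px,py),
-- or 0 if the line hits nothing
function trace_pixel(integer px, integer py, sequence triangles)
    sequence S, hit
    atom best_t
    integer best
    S = pixel_to_world(px, py)
    best = 0
    best_t = 1e30
    for i = 1 to length(triangles) do
        hit = ray_hits_triangle(camera, S, triangles[i])
        if length(hit) = 2 then
            if hit[1] < best_t then
                best_t = hit[1]       -- nearer than anything found so far
                best = i
            end if
        end if
    end for
    return best
end function
```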
6. Re: spinning "E" demo questions, for ray casting & surface rendering
- Posted by ChrisB (moderator) May 19, 2009
- 1095 views
Look at this image
http://imgbin.org/index.php?page=image&id=521
The upper picture is the user looking at the computer screen.
The lower one is your virtual camera in your virtual world. The image is projected onto a virtual screen, and that screen sits between the camera and the object.
From this point of view, the coordinates of the user are the same as the coordinates of the camera.
My proposal is to take each point on the virtual screen (each point on your real screen or window), project a line from the camera through that point, and continue the same line to check whether it collides with any polygon of the object.
And then bounce it to the light source(s) to determine the pixel colour / brightness.
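The simplest version of that is a Lambert term; here's a minimal sketch of the idea (mine, not tested, and it reuses dot() from achury's sketch above). A shadow ray toward the light could also be tested against the other polygons to darken occluded points:

```euphoria
-- Minimal Lambert-style shading sketch: brightness falls off with the angle
-- between the surface normal and the direction toward the light.

function normalize(sequence v)
    atom len
    len = sqrt(dot(v, v))
    if len = 0 then
        return v
    end if
    return v / len
end function

function shade(sequence hit_point, sequence normal, sequence light_pos, atom base_brightness)
    atom lambert
    lambert = dot(normalize(normal), normalize(light_pos - hit_point))
    if lambert < 0 then
        lambert = 0              -- the surface faces away from the light
    end if
    return base_brightness * lambert
end function
```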
Chris
7. Re: spinning "E" demo questions, for ray casting & surface rendering
- Posted by DanM May 19, 2009
- 1052 views
Thanks Chris, and thank you achury for making a drawing to try to help me see! I'll be thinking about this for a while, I suspect!
Dan
8. Re: spinning "E" demo questions, for ray casting & surface rendering
- Posted by DanM May 19, 2009
- 1077 views
Look at this image
http://imgbin.org/index.php?page=image&id=521
The upper picture is the user looking at the computer screen.
The lower one is your virtual camera in your virtual world. The image is projected onto a virtual screen, and that screen sits between the camera and the object.
From this point of view, the coordinates of the user are the same as the coordinates of the camera.
My proposal is to take each point on the virtual screen (each point on your real screen or window), project a line from the camera through that point, and continue the same line to check whether it collides with any polygon of the object.
Thank you so much for taking the time to draw me a picture!
So, the POSITION of the viewer is the same as the POSITION of the camera, and that makes the COORDINATES of the camera (needed as the starting point of the lines projected toward the screen for ray casting?) x = 0, y = 0, z = 0? In other words, at the origin of the virtual space?
The reason I keep asking about the actual coordinates of the virtual camera is that I'm looking at the Euphoria demo of a spinning "E", and it's somewhat difficult to figure out what the camera coordinates are there. To do any ray casting, it seems I need those coordinates, and I don't think they are {0,0,0}, at least not in that scene. I realize the camera's position is up to the application programmer; I'm just trying to start from the spinning "E" because it's there (and it has what are probably pretty powerful conversion/translation/rotation functions, given that Rob wrote them?).
And again, thanks for drawing me a picture!
Dan
9. Re: spinning "E" demo questions, for ray casting & surface rendering
- Posted by achury May 19, 2009
- 1086 views
If it is difficult to imagine projecting through the computer screen, imagine that the computer screen is a window you look through to see the landscape.
Of course the camera does not always have to be at position {0,0,0}, but if it is there you can simplify a lot of the equations, since many terms just add or subtract zero.
With sophisticated programs like POV-Ray or Blender you can decide the position of the camera, where it is and what direction it is pointing; with Blender you can even define camera movements.
For example, another approach you can try is to keep the "E" fixed at one position in space while the camera moves around it.
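A rough sketch of that moving-camera idea (my own illustration; the radius and speed are arbitrary, and I'm not claiming this is how any existing demo works):

```euphoria
constant R = 10                  -- orbit radius in world units (arbitrary)
atom angle
sequence camera

for frame = 1 to 300 do
    angle = frame * 0.02                           -- a little further around each frame
    camera = {R * sin(angle), 0, -R * cos(angle)}  -- circle in the x-z plane around the "E"
    -- ... aim the view at the origin and cast the rays from this camera position ...
end for
```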
You will need to dust off your geometry books...
Note that programs like POV-Ray and Blender generate very realistic images but take a long time to generate each frame. Games use sprites, textures and other tricks, and don't produce fully realistic 3D projection.
10. Re: spinning "E" demo questions, for ray casting & surface rendering
- Posted by achury May 19, 2009
- 1088 views
Today I found a nice tutorial for yabasic, but it is surely easy to understand and to translate the code to Euphoria:
http://yabasicprogramming.yuku.com/topic/380/t/3d-Maths.html
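The heart of this kind of 3D math is usually the perspective divide; here is my own minimal Euphoria version of the idea (not code taken from the linked page):

```euphoria
constant D = 256                 -- viewing distance / zoom factor (arbitrary)
constant CX = 160, CY = 120      -- centre of the window in pixels

-- a world point {x, y, z} with z > 0 (in front of the camera) maps to the
-- screen by dividing x and y by the depth z and scaling by D
function project(sequence p)
    return {CX + D * p[1] / p[3],
            CY - D * p[2] / p[3]}
end function
```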
11. Re: spinning "E" demo questions, for ray casting & surface rendering
- Posted by DanM May 20, 2009
- 1017 views
If it is difficult to imagine projecting through the computer screen, imagine that the computer screen is a window you look through to see the landscape.
The difficulty I've been having is that I'm thinking of the ray casting as creating SHADOWS of objects on a screen, as follows:
A virtual light source (pretending to be a virtual camera) projects beams of light PAST some virtual objects, onto a virtual screen, by projecting one beam at a time toward each individual point on the virtual screen.
The VIRTUAL result of this is to create virtual shadows of the virtual objects on the virtual screen.
The ACTUAL result is to find out whether any individual beam intersects any plane of an object, so as to know to put that point of the object's plane ONTO the screen (at the point on the screen the light would have struck if no object were in its path), in some colour appropriate to the object or to that constituent plane. Of course, only the object along that line with the shortest distance from the light/camera is placed onto the screen.
When all the closest objects have been placed onto the virtual screen, the result is translated into computer-screen coordinates and copied to the computer screen.
While I strongly suspect that your description of how to ray cast is the correct and most useful one, my concept of "light casting shadows of objects onto a screen" as a way of finding line/plane intersections to work out the "foreground" objects keeps tripping me up. I'll probably get over it eventually.
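As a rough sketch of what I mean (it reuses trace_pixel() and the screen constants from achury's sketch earlier in the thread; render_shadows() and set_pixel() are just placeholder names of my own):

```euphoria
-- the frame buffer stands in for the virtual screen the "shadows" land on
sequence frame_buffer
frame_buffer = repeat(repeat(0, SCREEN_W), SCREEN_H)

procedure set_pixel(integer px, integer py, atom colour)
    frame_buffer[py][px] = colour   -- stand-in for a real drawing routine
end procedure

-- triangles: list of triangles of {x,y,z} points; colour_of[i]: colour of triangle i
procedure render_shadows(sequence triangles, sequence colour_of, atom background)
    integer nearest
    for py = 1 to SCREEN_H do
        for px = 1 to SCREEN_W do
            nearest = trace_pixel(px, py, triangles)   -- 0 = no beam was blocked
            if nearest = 0 then
                set_pixel(px, py, background)          -- the "light" got through
            else
                set_pixel(px, py, colour_of[nearest])  -- the "shadow" of the nearest surface
            end if
        end for
    end for
end procedure
```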
Of course the camera does not always have to be at position {0,0,0}, but if it is there you can simplify a lot of the equations, since many terms just add or subtract zero.
With sophisticated programs like POV-Ray or Blender you can decide the position of the camera, where it is and what direction it is pointing; with Blender you can even define camera movements.
For example, another approach you can try is to keep the "E" fixed at one position in space while the camera moves around it.
You will need to dust off your geometry books...
You don't know how funny (and accurate!) that is
Note that programs like POV-Ray and Blender generate very realistic images but take a long time to generate each frame. Games use sprites, textures and other tricks, and don't produce fully realistic 3D projection.
So, at the very least, we have:
2-D drawing programs, which could be used to draw sprites;
3-D vector based MODELING programs, to make wire frame objects;
3-D RENDERING programs, to add realistic surfaces to wire frames;
3-D ANIMATION programs, to make movies etc.
Any corrections/additions to above?
Dan
12. Re: spinning "E" demo questions, for ray casting & surface rendering
- Posted by DanM May 20, 2009
- 1083 views
Today I found a nice tutorial for yabasic, but it is surely easy to understand and to translate the code to Euphoria:
http://yabasicprogramming.yuku.com/topic/380/t/3d-Maths.html
That's VERY good, thanks!
Dan