Re: spinning "E" demo questions, for ray casting & surface rendering

achury said...

I'm not an expert, but I studied this problem a little when writing my galaxy emulator program.

My solution is to imagine that the camera's coordinates are the coordinates of the program user. When you look at the program, your eye is some distance in front of the screen, not in the screen itself. In my virtual world, between the observer/camera and the object to be observed there is a virtual screen, onto which each point is "back-projected".
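A minimal sketch of that camera/virtual-screen setup (the function name, the camera placement at the origin, and the screen distance are my own choices for illustration, not from the post):

```python
# Pinhole-style projection: camera at the origin looking down +z,
# virtual screen at the plane z = screen_dist, between camera and scene.
def project(point, screen_dist=1.0):
    """Back-project a 3D point onto the virtual screen plane z = screen_dist."""
    x, y, z = point
    if z <= 0:
        return None  # point is behind (or at) the camera
    # Similar triangles: the ray from the camera through `point`
    # crosses the screen plane at parameter screen_dist / z.
    return (screen_dist * x / z, screen_dist * y / z)

star = (2.0, 1.0, 4.0)
print(project(star))  # (0.5, 0.25)
```

A star twice as far away with the same x, y would land at half the screen offset, which is exactly the perspective effect the virtual screen provides.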

My program only shows stars as points.

A simple example (surely not the best approach); it needs quite a bit of geometric machinery.

  1. You have a sequence containing all the points of the "E".
  1. Create groups of points that define surface polygons. Decompose complex shapes into triangles. Now you have a list of planar polygons, each defined by 3 points.
  1. Check that no two polygons cross each other; they may share a common edge but must never cross.
  1. Determine the equation of the plane that contains each polygon: z = ax + by + c. Note: if you decomposed complex shapes, you will have several triangles sharing the same plane equation.
  1. Create a function that can check whether a ray in space (a parametric line P(t) = C + t*D starting at the camera C with direction D) intersects a polygon (defined by the equation of its plane and its bounding points).
  1. Create a list of the points on the virtual screen. For each point, imagine a visual line between the observer/camera coordinates and that point on the virtual screen. Each visual line is a ray as described above; compute its direction and, using the function above, check which polygons the ray intersects. If more than one polygon is intersected, calculate the intersection points and their distances from the observer, to know which one is nearest.
  1. Draw the selected polygon, and remove all of its points on the virtual screen from the list of points still to be checked.
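Steps 4–6 above can be sketched with a standard ray/triangle intersection test (Möller–Trumbore) plus nearest-hit selection. The triangle coordinates and function names here are invented for the example; the post itself doesn't prescribe an algorithm:

```python
# Vector helpers on plain 3-tuples.
def sub(a, b):   return tuple(x - y for x, y in zip(a, b))
def dot(a, b):   return sum(x * y for x, y in zip(a, b))
def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0])

def ray_triangle(origin, direction, tri, eps=1e-9):
    """Moller-Trumbore: distance t along the ray to triangle tri, or None."""
    v0, v1, v2 = tri
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:
        return None            # ray is parallel to the triangle's plane
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None            # outside the triangle (first barycentric coord)
    q = cross(s, e1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None            # outside the triangle (second barycentric coord)
    t = f * dot(e2, q)
    return t if t > eps else None   # only hits in front of the camera

def nearest_hit(origin, direction, triangles):
    """Step 6: of all intersected triangles, keep the one nearest the observer."""
    hits = [t for t in (ray_triangle(origin, direction, tri)
                        for tri in triangles) if t is not None]
    return min(hits) if hits else None

tri = ((-1.0, -1.0, 5.0), (3.0, -1.0, 5.0), (-1.0, 3.0, 5.0))
print(nearest_hit((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), [tri]))  # 5.0
```

Working with triangles only (step 2's decomposition) is what makes the inside/outside test this simple; for general polygons you would need a point-in-polygon check on the plane instead.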

With this you know which polygon is visible at each point of the screen. The colour to paint the pixels depends on illumination... and there ray tracing begins, from the light source to each point on your virtual objects.
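The simplest illumination model for that last step is Lambertian (diffuse) shading: brightness proportional to the cosine of the angle between the surface normal and the direction to the light. A sketch, with function name and colours of my own choosing:

```python
import math

def lambert(normal, light_dir, base_color=(255, 255, 255)):
    """Scale base_color by max(0, n.l), normalizing both vectors first."""
    n_len = math.sqrt(sum(c * c for c in normal))
    l_len = math.sqrt(sum(c * c for c in light_dir))
    ndotl = sum(a * b for a, b in zip(normal, light_dir)) / (n_len * l_len)
    k = max(0.0, ndotl)          # faces turned away from the light get black
    return tuple(int(k * c) for c in base_color)

# Light shining straight down onto an upward-facing surface: full brightness.
print(lambert((0, 1, 0), (0, 1, 0)))  # (255, 255, 255)
```

A fuller ray tracer would also cast a shadow ray from the surface point toward the light and skip the contribution if another polygon blocks it.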

Thanks achury, I think I understand most of what you said, except for "imagine the coordinate of the camera is the coordinate of the program user". The program user has no coordinates in the virtual world, so that idea, if I'm understanding you correctly, doesn't give me the coordinates of the virtual camera, which I think I need as the starting point of the lines drawn to the virtual screen to test for intersections with a virtual object.

And similarly, "between the observer/camera and the object to be observed there is a virtual screen, where each point is 'back-projected'" confuses me, because I'm thinking the projection of a virtual object onto the virtual screen must happen via lines of projection from the virtual camera, through the virtual object, onto the virtual screen, such that the virtual screen can't be between the camera and the object.
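For what it's worth, the geometry can be checked numerically: the ray from the camera toward an object point crosses a screen plane placed in front of the camera at a parameter strictly between 0 (camera) and 1 (object). All coordinates here are invented for illustration:

```python
# Where does the segment from camera C to object point P cross the
# screen plane z = d? (Camera looks down +z.)
def screen_hit(camera, point, d=1.0):
    cx, cy, cz = camera
    px, py, pz = point
    t = (d - cz) / (pz - cz)     # parameter along C + t*(P - C) reaching z = d
    return (cx + t * (px - cx), cy + t * (py - cy), d), t

camera = (0.0, 0.0, 0.0)
obj = (2.0, 0.0, 4.0)
hit, t = screen_hit(camera, obj)
print(hit, t)  # (0.5, 0.0, 1.0) 0.25 -- with 0 < t < 1, the crossing
               # point, and hence the screen plane, lies between the two
```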

Can you clarify for me?

And as far as I can tell, I still need to know the actual position/coordinates of the virtual camera.

The bulk of your explanation seems correct to me, though, so thanks!

Dan

