
Monday, September 19, 2011

Computer Graphics Note: 3D Viewing Pipeline


3D Viewing pipeline:

The steps for computer generation of a view of a 3D scene are analogous to the process of taking a photograph with a camera. For a snapshot, we need to position the camera at a particular point in space and then decide on the camera orientation. Finally, when we snap the shutter, the scene is cropped to the size of the camera's window, and the light from the visible surfaces is projected onto the camera film.



Projections:

            Once the world co-ordinate descriptions of the objects in a scene are converted to viewing co-ordinates, we can project the three-dimensional objects onto the two-dimensional view plane. There are two basic projection methods:



Parallel projection: In parallel projection, co-ordinate positions are transformed to the view plane along parallel lines.



Perspective projection: In perspective projection, object positions are transformed to the view plane along lines that converge to a point called the projection reference point (centre of projection). The projected view of an object is determined by calculating the intersection of the projection lines with the view plane.



A parallel projection preserves the relative proportions of objects, and this is the method used in drafting to produce scale drawings of three-dimensional objects. Accurate views of the various sides of a 3D object are obtained with parallel projection, but it does not give a realistic appearance of the object.

            A perspective projection, on the other hand, produces realistic views but does not preserve relative proportions. Projections of objects distant from the view plane are smaller than the projections of objects of the same size that are closer to the projection plane.



Parallel Projection: We can specify a parallel projection with a projection vector that defines the direction of the projection lines. When the projection lines are perpendicular to the view plane, we have an orthographic parallel projection.
            If the projection lines are not perpendicular to the view plane, we have an oblique parallel projection.




[Figure: orthographic projection, where projection lines are perpendicular to the view plane Vp, and oblique parallel projection, where projection lines meet Vp at an angle]


·         Orthographic projections are most often used to produce the front, side, and top views of an object. Front, side, and rear orthographic views are called elevations, and the top orthographic view of an object is known as the plan view. Engineering and architectural drawings commonly employ these orthographic projections.
 
We can also form orthographic projections that display more than one face of an object. Such views are called axonometric orthographic projections. The most commonly used axonometric projection is the isometric projection.

The transformation for orthographic projection is simply
                                  xp = x,    yp = y
The z co-ordinate value is preserved for the depth information needed by visible-surface detection algorithms.

Perspective projections:
            To obtain a perspective projection of a three-dimensional object, we transform points along projection lines that meet at a point called the projection reference point.
            Suppose we set the projection reference point at position zprp along the zv axis, and we place the view plane at zvp, as shown in the figure.

We can write equations describing co-ordinate positions along this perspective projection line in parametric form as
            x' = x - x·u
            y' = y - y·u
            z' = z - (z - zprp)·u
where u takes values from 0 to 1. If u = 0, we are at position P = (x, y, z); if u = 1, we are at the projection reference point (0, 0, zprp). On the view plane, z' = zvp, so
            u = (zvp - z) / (zprp - z)
Substituting this value of u into the equation for x':
            xp = x · (zprp - zvp)/(zprp - z) = x · dp/(zprp - z)
Similarly,
            yp = y · (zprp - zvp)/(zprp - z) = y · dp/(zprp - z)
where dp = zprp - zvp is the distance of the view plane from the projection reference point.
            Using the 3D homogeneous co-ordinate representation, we can write the perspective projection transformation matrix as
            [xh]   [1   0    0           0         ] [x]
            [yh] = [0   1    0           0         ] [y]
            [zh]   [0   0   -zvp/dp   zvp(zprp/dp) ] [z]
            [h ]   [0   0   -1/dp     zprp/dp      ] [1]

In this representation, the homogeneous factor is h = (zprp - z)/dp.
            Projection co-ordinates:
            xp = xh/h,    yp = yh/h
There are special cases of the perspective transformation.
When zvp = 0 (the view plane passes through the co-ordinate origin):
            xp = x · zprp/(zprp - z) = x / (1 - z/zprp)
            yp = y · zprp/(zprp - z) = y / (1 - z/zprp)
In some graphics packages, the projection reference point is always taken to be the viewing co-ordinate origin. In this case, zprp = 0 and
            xp = x · zvp/z,    yp = y · zvp/z
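As a small illustration of these formulas, here is a C sketch (the function and variable names are ours, not from any graphics library) that projects a viewing co-ordinate point onto the view plane for the general case zprp ≠ zvp:

#include <stdio.h>

/* Perspective-project the viewing co-ordinate point (x, y, z) onto the
   view plane at z = zvp, with the projection reference point at
   (0, 0, zprp).  Implements xp = x * dp / (zprp - z), where
   dp = zprp - zvp.  Illustrative helper only. */
void perspective_project(double x, double y, double z,
                         double zprp, double zvp,
                         double *xp, double *yp)
{
    double dp = zprp - zvp;       /* view-plane distance from the PRP */
    double h  = (zprp - z) / dp;  /* homogeneous factor               */
    *xp = x / h;                  /* = x * dp / (zprp - z)            */
    *yp = y / h;                  /* = y * dp / (zprp - z)            */
}

int main(void)
{
    double xp, yp;
    perspective_project(1.0, 2.0, -5.0, 10.0, 0.0, &xp, &yp);
    printf("projected point: (%.3f, %.3f)\n", xp, yp);
    return 0;
}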


Visible Surface Detection Methods
(Hidden surface elimination)
Visible-surface detection, or hidden-surface removal, is a major concern in realistic graphics: identifying those parts of a scene that are visible from a chosen viewing position. Several algorithms have been developed; some require more memory, some require more processing time, and some apply only to special types of objects.

 Visible surface detection methods are broadly classified according to whether they deal with objects or with their projected images.
These two approaches are
-          Object-space methods: compare objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, we should label as visible.

-          Image-Space methods: Visibility is decided point by point at each pixel position on the projection plane.
            Most visible-surface detection algorithms use image-space methods, but in some cases object-space methods are also used.

BACK-FACE DETECTION (Plane Equation Method)
         A fast and simple object-space method for removing hidden surfaces from a 3D object drawing is the "plane equation method", applied to each face after any rotation of the object takes place. Commonly known as back-face detection, it is based on the "inside-outside" test for a polyhedron.
         A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, D if
         Ax + By + Cz + D < 0
         We can simplify this test by considering the normal vector N to the polygon surface, which has Cartesian components (A, B, C).
[Figure: viewing axes xv, yv, zv with viewing vector V and surface normal N = (A, B, C)]
If V is a vector in the viewing direction from the eye position, then this polygon is a back face if
                       V · N > 0




In the plane equation Ax + By + Cz + D = 0, if A, B, C remain constant, then varying the value of D results in a whole family of parallel planes, one of which (D = 0) contains the origin of the co-ordinate system, and:
If D > 0, the plane is behind the origin (away from the observer).
If D < 0, the plane is in front of the origin (toward the observer).

If we define our object to be centered at the origin, then all surfaces that are viewable will have negative D, and unviewable surfaces will have positive D.

So our hidden-surface removal routine simply defines the plane corresponding to each surface of the 3D object from the co-ordinates of three points on it; by computing D, visible surfaces are detected.
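A minimal C sketch of the V · N back-face test (assuming a counter-clockwise vertex ordering when the face is viewed from outside; all names are ours):

typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }
static Vec3 cross(Vec3 a, Vec3 b) {
    Vec3 r = {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    return r;
}
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Returns 1 if the polygon with vertices v0, v1, v2 (counter-clockwise
   as seen from outside) is a back face for viewing direction view_dir. */
int is_back_face(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 view_dir)
{
    Vec3 n = cross(sub(v1, v0), sub(v2, v0));  /* N = (A, B, C)          */
    return dot(view_dir, n) > 0.0;             /* V . N > 0 => back face */
}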

DEPTH-BUFFER-METHOD:
The depth-buffer method is the most commonly used image-space method for detecting visible surfaces. It is also known as the z-buffer method. It compares surface depths at each pixel position on the projection plane. It is called the z-buffer method since object depth is usually measured from the view plane along the z-axis of the viewing system.

         Each surface of the scene is processed separately, one point at a time across the surface. The method is usually applied to scenes containing only polygon surfaces, because depth values can then be computed very quickly and the method is easy to implement. The method can, however, also be applied to non-planar surfaces.

         With object descriptions converted to projection co-ordinates, each (x, y, z) position on a polygon surface corresponds to the orthographic projection point (x, y) on the view plane. Therefore, for each pixel position (x, y) on the view plane, object depths are compared by their z values.

In the figure, three surfaces lie at varying distances from the view plane xv yv along the projection line through (x, y); surface S1 is closest to the view plane, so the surface intensity value of S1 at (x, y) is saved.


In the z-buffer method, two buffer areas are required. A depth buffer stores the depth value for each (x, y) position as surfaces are processed, and a refresh buffer stores the intensity value for each position. Initially, all positions in the depth buffer are set to 0, and the refresh buffer is initialized to the background color. Each surface listed in the polygon table is processed one scan line at a time, calculating the depth (z value) for each (x, y) position. The calculated depth is compared with the value previously stored in the depth buffer at that position. If the calculated depth is greater than the stored depth value, the new depth value is stored, and the surface intensity at that position is determined and placed in the refresh buffer.

Algorithm: z-buffer
1.      Initialize the depth buffer and refresh buffer so that for all buffer positions (x, y):
                depth(x, y) = 0, refresh(x, y) = Ibackground
2.      For each position on each polygon surface, compare depth values with previously stored values in the depth buffer to determine visibility:
·         Calculate the depth z for each (x, y) position on the polygon.
·         If z > depth(x, y), then
    depth(x, y) = z
    refresh(x, y) = Isurface(x, y)
[Figure: pixel positions (x, y), (x+1, y), and (x, y-1) along a scan line on a polygon surface, used for incremental depth calculation]
Here Ibackground is the intensity value for the background and Isurface(x, y) is the intensity value for the surface at pixel position (x, y) on the projection plane. After all surfaces are processed, the depth buffer contains the depth values of the visible surfaces, and the refresh buffer contains the corresponding intensity values for those surfaces. The depth value of a surface position (x, y) is calculated from the plane equation of the surface:
            z = (-Ax - By - D) / C
The depth z' at position (x+1, y) is then
            z' = (-A(x+1) - By - D) / C = z - A/C                         (1)
The ratio -A/C is constant for each surface, so succeeding depth values across a scan line are obtained from preceding values by a single subtraction.
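The following C sketch shows the buffer initialization of step 1 and the depth comparison of step 2, together with the incremental step z' = z - A/C from equation (1). The buffer sizes and the per-span interface (a caller supplying x_start and x_end from scan-line edge intersections) are our own assumptions:

#define WIDTH  640
#define HEIGHT 480

double depth[HEIGHT][WIDTH];    /* depth (z) buffer            */
int    refresh[HEIGHT][WIDTH];  /* refresh (intensity) buffer  */

/* Step 1: depth = 0 everywhere, refresh = Ibackground. */
void zbuffer_init(int background)
{
    for (int y = 0; y < HEIGHT; ++y)
        for (int x = 0; x < WIDTH; ++x) {
            depth[y][x]   = 0.0;
            refresh[y][x] = background;
        }
}

/* Step 2 for one scan-line span of a polygon with plane
   Ax + By + Cz + D = 0; x_start..x_end would come from the
   active edge intersections (assumed given here). */
void zbuffer_span(double A, double B, double C, double D,
                  int y, int x_start, int x_end, int intensity)
{
    /* depth at the left end of the span: z = (-A*x - B*y - D) / C */
    double z = (-A * x_start - B * y - D) / C;
    for (int x = x_start; x <= x_end; ++x) {
        if (z > depth[y][x]) {      /* closer than what is stored  */
            depth[y][x]   = z;
            refresh[y][x] = intensity;
        }
        z -= A / C;                 /* incremental step, eq. (1)   */
    }
}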

SCAN LINE METHOD:

      This is an image-space method for removing hidden surfaces; it is an extension of scan-line polygon filling for polygon interiors. Instead of filling just one surface, we deal with multiple surfaces here.
      As each scan line is processed, all polygon surfaces intersecting that line are examined to determine which are visible. We assume that the polygon table contains the coefficients of the plane equation for each surface, as well as vertex, edge, and surface information, intensity information for the surface, and possibly pointers to the edge table.

·         In the figure above (two overlapping polygon surfaces S1 and S2), the active edge list for scan line 1 contains information from the edge table for edges AB, BC, EH, and FG.
·         For positions along this scan line between edges AB and BC, only the flag for surface S1 is on.
·         Therefore no depth calculations are needed, and intensity information for surface S1 is entered from the polygon table into the refresh buffer.
·         Similarly, between edges EH and FG, only the flag for S2 is on. No other positions along scan line 1 intersect a surface, so the intensity values in the other areas are set to the background intensity.
·         For scan lines 2 and 3, the active edge list contains edges AD, EH, BC, and FG. Along scan line 2, from edge AD to edge EH, only the flag for surface S1 is on; but between edges EH and BC, the flags for both surfaces are on. In this interval, depth calculations are made using the plane coefficients of the two surfaces.
            For example, if the depth of surface S1 is less than that of S2, the intensity of S1 is loaded into the refresh buffer until boundary BC is encountered. Then the flag for surface S1 goes off, and intensities for surface S2 are stored until edge FG is passed.
            Any number of overlapping polygon surfaces can be processed with this scan-line method; a minimal sketch of the flag bookkeeping follows below.
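A minimal C sketch of the per-scan-line surface flags, under our own simplified interface (integer edge crossings supplied by an assumed active edge list; the case of more than one flag on, where a real implementation compares plane-equation depths, is only stubbed):

#define MAX_SURF   8
#define BACKGROUND 0

/* One edge crossing of the scan line, sorted by x. */
typedef struct { int x; int surface; } Crossing;

void process_scan_line(const Crossing *cr, int ncr, int nsurf,
                       int line_width, int *out /* intensity per pixel */)
{
    int flags[MAX_SURF] = {0};  /* which surfaces we are "inside" */
    int i = 0;
    for (int x = 0; x < line_width; ++x) {
        while (i < ncr && cr[i].x == x)       /* toggle flag at edges */
            flags[cr[i++].surface] ^= 1;
        int on = 0, s = -1;
        for (int k = 0; k < nsurf; ++k)
            if (flags[k]) { ++on; s = k; }
        if (on == 0)
            out[x] = BACKGROUND;              /* no surface here      */
        else
            out[x] = 10 + s;  /* stand-in for Isurface(s); with on > 1
                                 a real implementation would pick the
                                 surface with the smaller depth here  */
    }
}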

DEPTH SORTING METHOD:

This method uses both object-space and image-space operations. The surfaces of a 3D object are first sorted in order of decreasing depth from the viewer. The sorted surfaces are then scan-converted in order, starting with the surface of greatest depth from the viewer.

The conceptual steps performed in the depth-sort algorithm are:

1.      Sort all polygon surfaces according to the smallest (farthest) z co-ordinate of each.

2.      Resolve any ambiguities this sorting may cause when the z extents of polygons overlap, splitting polygons if necessary.

3.      Scan-convert each polygon in ascending order of smallest z co-ordinate, i.e. farthest surface first (back to front).

In this method, a newly displayed surface may partly or completely obscure previously displayed surfaces. Essentially, we are sorting the surfaces into priority order such that surfaces with lower priority (lower z, farther objects) can be obscured by those with higher priority (higher z values).

This algorithm is also called the "Painter's Algorithm", as it simulates how a painter typically produces a painting: starting with the background and then progressively adding new (nearer) objects to the canvas. A minimal sketch of the sort-and-paint loop follows below.
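A minimal C sketch of steps 1 and 3, assuming a Poly record of our own that stores each polygon's farthest z value, and a hypothetical scan_convert() rasterizer declared elsewhere:

#include <stdlib.h>

void scan_convert(int polygon_id);  /* hypothetical rasterizer */

typedef struct {
    double zmin;   /* smallest (farthest) z co-ordinate of the polygon */
    int    id;
} Poly;

/* Ascending zmin: farthest surface sorts first. */
static int by_depth(const void *a, const void *b)
{
    double za = ((const Poly *)a)->zmin, zb = ((const Poly *)b)->zmin;
    return (za > zb) - (za < zb);
}

void painters_algorithm(Poly *polys, int n)
{
    qsort(polys, n, sizeof(Poly), by_depth);  /* step 1              */
    for (int i = 0; i < n; ++i)
        scan_convert(polys[i].id);            /* step 3: back to front */
}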

Problem: One of the major problems with this algorithm is intersecting polygon surfaces, as shown in the figure below.








·         Different polygons may have the same depth.
·         The nearest polygon could also be the farthest.
In such cases we cannot use simple depth sorting to remove the hidden surfaces in the image.
Solution: For intersecting polygons, we can split one polygon into two or more polygons, which can then be painted from back to front. This requires more time to compute the intersections between polygons, so the algorithm becomes more complex for such scenes.
BSP TREE METHOD:
         A binary space partitioning (BSP) tree provides an efficient method for determining object visibility by painting surfaces onto the screen from back to front, as in the painter's algorithm. The BSP tree is particularly useful when the view reference point changes but the objects in the scene are at fixed positions.
         Applying a BSP tree to visibility testing involves identifying surfaces that are "inside" or "outside" the partitioning plane at each step of the space subdivision, relative to the viewing direction. It is an efficient way to calculate visibility among a static group of 3D polygons as seen from an arbitrary viewpoint.
In the following figure,

Here plane P1 partitions the space into two sets of objects: one set is behind and one set is in front of the partitioning plane, relative to the viewing direction. Since one object is intersected by plane P1, we divide that object into two separate objects, labeled A and B. Objects A and C are now in front of P1, and B and D are behind P1.
            We next partition the space with plane P2 and construct the binary tree shown in fig (a). In this tree, the objects are represented as terminal nodes, with front objects as left branches and back objects as right branches.
When the BSP tree is complete, we process the tree by selecting the surfaces for display in back-to-front order, so foreground objects are painted over background objects. A minimal traversal sketch follows below.
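A minimal C sketch of the back-to-front traversal, assuming a node layout and a hypothetical draw() routine of our own:

void draw(int object);  /* hypothetical display routine */

typedef struct BSPNode {
    struct BSPNode *front, *back;  /* subtrees relative to the plane */
    int object;                    /* object stored at this node     */
    double A, B, C, D;             /* partitioning plane             */
} BSPNode;

/* Positive when the eye is on the front side of the node's plane. */
static double side_of(const BSPNode *n, double ex, double ey, double ez)
{
    return n->A * ex + n->B * ey + n->C * ez + n->D;
}

/* Paint the far subtree first, then this node, then the near one. */
void bsp_render(const BSPNode *n, double ex, double ey, double ez)
{
    if (!n) return;
    if (side_of(n, ex, ey, ez) > 0) {
        bsp_render(n->back, ex, ey, ez);    /* far side first  */
        draw(n->object);
        bsp_render(n->front, ex, ey, ez);   /* near side last  */
    } else {
        bsp_render(n->front, ex, ey, ez);
        draw(n->object);
        bsp_render(n->back, ex, ey, ez);
    }
}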

Octree Method: When an octree representation is used for the viewing volume, hidden-surface elimination is accomplished by projecting octree nodes onto the viewing surface in front-to-back order. In the figure below, the front face of a region of space is formed with octants 0, 1, 2, 3. Surfaces in the front of these octants are visible to the viewer; the back octants 4, 5, 6, 7 are not visible. After octant subdivision and construction of the octree, the entire region is traversed depth-first, visiting octants in front-to-back order, as sketched below.
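A small C sketch of one way to derive a front-to-back octant order from the viewing direction. The 0-7 bit-numbering convention here is our own assumption and need not match the figure's labels:

/* Octants numbered 0-7 by the bit pattern (x<<2)|(y<<1)|z.  For a
   viewing direction (dx, dy, dz), the nearest octant has each bit set
   where the direction component is negative.  Visiting octants as
   nearest ^ i for i = 0..7 is a valid front-to-back order, because an
   octant can only be occluded by octants whose "far" bits are a
   subset of its own, and a subset is always numerically smaller. */
void octant_order(double dx, double dy, double dz, int order[8])
{
    int nearest = ((dx < 0) << 2) | ((dy < 0) << 1) | (dz < 0);
    for (int i = 0; i < 8; ++i)
        order[i] = nearest ^ i;  /* i = 0 flips nothing, 7 flips all */
}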
                                                                                                 

RAY TRACING:
Ray tracing, in its simplest form also known as ray casting, is an efficient method for visibility detection. It can be used effectively with objects that have curved surfaces, but it also works for polygon surfaces.

·         Trace the path of an imaginary ray from the viewing position (eye) through the view plane to the objects in the scene.

·         Identify the visible surface by determining which surface is intersected first by the ray.

·         Ray tracing can easily be combined with lighting algorithms to generate shadows and reflections.

·         It is good for curved surfaces but too slow for real-time applications. A minimal ray-casting sketch follows below.
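A minimal C ray-casting sketch for a scene of spheres (a curved-surface case; all data structures and names are ours): for each ray we intersect every object and keep the smallest positive hit distance, which identifies the visible surface:

#include <math.h>

typedef struct { double x, y, z; } Vec3;
typedef struct { Vec3 center; double radius; int intensity; } Sphere;

/* Smallest positive t with |o + t*d - center| = radius, or -1 if none. */
static double hit_sphere(Vec3 o, Vec3 d, const Sphere *s)
{
    Vec3 oc = { o.x - s->center.x, o.y - s->center.y, o.z - s->center.z };
    double a = d.x*d.x + d.y*d.y + d.z*d.z;
    double b = 2.0 * (oc.x*d.x + oc.y*d.y + oc.z*d.z);
    double c = oc.x*oc.x + oc.y*oc.y + oc.z*oc.z - s->radius*s->radius;
    double disc = b*b - 4.0*a*c;
    if (disc < 0.0) return -1.0;
    double t = (-b - sqrt(disc)) / (2.0 * a);   /* nearer root */
    return t > 0.0 ? t : -1.0;
}

/* The first surface the ray intersects is the visible one. */
int cast_ray(Vec3 eye, Vec3 dir, const Sphere *scene, int n, int background)
{
    double tmin = 1e30;
    int color = background;
    for (int i = 0; i < n; ++i) {
        double t = hit_sphere(eye, dir, &scene[i]);
        if (t > 0.0 && t < tmin) { tmin = t; color = scene[i].intensity; }
    }
    return color;
}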




