PUBLICATIONS

Patents

A Data Embedding Scheme with Error Diffusion (US 6,404,899, Issued 06/11/2002)
A method for embedding data in a digital image under the constraint of a pre-specified upper bound on the amount of change in the value of a property associated with the image. For compression-tolerant data hiding in digital images, a property is selected in which the required information can be embedded. The property should be such that its value does not change by a significant amount under lossy compression, and the change should be bounded; that is, the property value obtained from the image should vary not because of compression, but only because of malicious tampering. The value obtained from the property is stored so that the image can later be verified. The complete image is considered when deciding whether to increase or decrease the property value in a particular region. The method also takes into account that blocks with values of 0 or L, corresponding to the minimum and maximum property values respectively, cannot change any further in one direction. In addition to modifying the image, the method also varies the checksum (the stored information) so that the resulting checksum and the modified image remain consistent with each other.
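
A minimal sketch of the bounded-change idea, assuming a block-mean property whose quantized parity carries one bit; the function name, the quantization step, and the per-pixel bound are illustrative choices, not taken from the patent:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Hypothetical sketch: embed one bit per block by pushing the block mean to
    // a multiple of 'step' whose parity encodes the bit, never applying more
    // than 'maxDelta' per pixel (the pre-specified bound). The unapplied part
    // is returned so the caller can diffuse it into the next block.
    double embedBitInBlock(std::vector<uint8_t>& block, int bit,
                           double step, double maxDelta, double carriedError)
    {
        double mean = 0.0;
        for (uint8_t p : block) mean += p;
        mean = mean / block.size() + carriedError;       // diffuse prior residual
        long q = std::lround(mean / step);               // nearest quantization level
        if ((q & 1L) != bit) q += (mean >= q * step) ? 1 : -1;  // force parity = bit
        double delta = q * step - mean;                  // desired shift of the mean
        double applied = std::clamp(delta, -maxDelta, maxDelta);
        for (uint8_t& p : block)                         // saturated pixels (0 or L)
            p = static_cast<uint8_t>(std::clamp(p + applied, 0.0, 255.0));
        return delta - applied;                          // residual to carry forward
    }

The residual returned by each call is added to the next block's mean, which is the error-diffusion step: a shift that a saturated or bound-limited block cannot fully absorb is carried forward instead of being lost.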
A Compression Tolerant Watermarking Scheme for Image Authentication (US 6,246,777, Issued 06/12/2001)
A verification system for still images that embeds a watermark without creating visual artifacts, thus maintaining the visual quality of the image. The algorithm embeds information in an uncompressed image so that alteration of the image, as well as the location of the alteration, can later be detected. The embedding of information into a source image is based on a defined mapping process. An image plane consists of macroblocks, which are themselves composed of microblocks. In each macroblock, a code corresponding to the value of an image property is embedded. The specific sequence of microblocks used for embedding this information in the watermarking image plane is a unique function of this property for the corresponding set of microblocks in the indexing image plane. This information can later be decoded from the stamped image. The watermark is embedded by combining the pixel values of the image with the watermark, so the watermark is altered if the image is altered.
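
A sketch of the mapping step only, with illustrative names: the property value of a macroblock in the indexing plane is used to derive the microblock ordering for the corresponding macroblock in the watermarking plane. The choice of mean intensity as the property and of std::mt19937 as the sequence generator are assumptions for the example, not the patent's construction:

    #include <algorithm>
    #include <cstdint>
    #include <numeric>
    #include <random>
    #include <vector>

    // Hypothetical sketch of the mapping: the property value (here, mean
    // intensity) of a macroblock in the indexing plane seeds a repeatable
    // permutation that fixes the order in which the corresponding microblocks
    // of the watermarking plane receive the code bits.
    std::vector<int> microblockOrder(const std::vector<uint8_t>& indexMacroblock,
                                     int microblockCount)
    {
        unsigned sum = std::accumulate(indexMacroblock.begin(),
                                       indexMacroblock.end(), 0u);
        std::mt19937 gen(sum / indexMacroblock.size());  // property value as key
        std::vector<int> order(microblockCount);
        std::iota(order.begin(), order.end(), 0);
        std::shuffle(order.begin(), order.end(), gen);   // unique, repeatable order
        return order;
    }

A decoder with access to the same indexing plane recomputes the same order, which is what makes the code recoverable from the stamped image.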

Unpublished Reports

C2H: Collision Detection by Efficient Use of Graphics Hardware (in preparation)

Journal Publications

vLOD: A System for High-Fidelity Walkthroughs of Large Virtual Environments (IEEE TVCG, Vol. 11, No. 1, Jan/Feb 2005) (pp 35--47)
Jatin Chhugani, Budirijanto Purnomo, Shankar Krishnan, Jonathan Cohen, Suresh Venkatasubramanian, David Johnson and Subodh Kumar.
Two effective methods for efficient rendering of large polygonal models are conservative culling of invisible geometry early in the pipeline and the removal of unnecessary detail not discernible from the current viewpoint. We present vLOD, a system for performing high-fidelity walkthroughs of large 3D geometric datasets. Our walkthrough system performs work proportional to the required detail in visible geometry. To accomplish this, we use a pre-computation phase that determines from-region visibility as well as level of detail. This precomputation generates the per-region vLOD: the geometry visible from a view cell at the right level of detail. We encode the changes between neighboring cells' vLODs, which need not be memory resident. At rendering time, we incrementally reconstruct the vLOD for the current view cell and render it. The system has small CPU and memory requirements for rendering and is able to display models with tens of millions of polygons at interactive frame rates, with less than one pixel of screen-space deviation and correct visibility. AVI (48 MB) | MPEG (27 MB).
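
A small sketch of the incremental reconstruction step, with hypothetical names; the paper's encoding is more involved, but the core operation is applying a precomputed delta when the viewer crosses a cell boundary:

    #include <cstdint>
    #include <unordered_set>
    #include <vector>

    // Hypothetical sketch: the current cell's vLOD is a set of primitive ids;
    // crossing into a neighboring cell applies a small precomputed delta rather
    // than loading that cell's full visible set from disk.
    struct CellDelta {
        std::vector<uint32_t> added;    // primitives that appear or refine
        std::vector<uint32_t> removed;  // primitives that vanish or coarsen
    };

    void crossCellBoundary(std::unordered_set<uint32_t>& activeVLOD,
                           const CellDelta& delta)
    {
        for (uint32_t id : delta.removed) activeVLOD.erase(id);
        for (uint32_t id : delta.added)   activeVLOD.insert(id);
        // Only the deltas along the walkthrough path are streamed in; the full
        // per-cell lists never need to be memory resident.
    }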

Conference Publications

Geometry Engine Optimization: Cache Friendly Compressed Representation of Geometry (2007 ACM Symposium on Interactive 3D Graphics and Games, Seattle, Washington, April 30 - May 2, 2007) (pp 1--8)
Jatin Chhugani and Subodh Kumar.
Recent advances in graphics architecture have focused on improving texture performance and pixel processing, paralleling advances in rich pixel-shading algorithms for realistic images. However, applications that require significantly more geometry processing than pixel processing suffer, because limited resources are devoted to the geometry-processing part of the graphics pipeline. We present an algorithm that improves effective geometry-processing performance without adding significant hardware. The algorithm computes a representation for geometry that reduces the bandwidth required to transmit it to the graphics subsystem. It also reduces the total geometry-processing requirement by increasing the effectiveness of the vertex cache. A goal of this algorithm is to keep primitive assembly simple, for easy hardware implementation.
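
For illustration, a FIFO post-transform vertex cache simulator that reports the average cache miss ratio (ACMR, vertices transformed per triangle) of a given index order; this is the standard figure of merit such cache-friendly reorderings aim to minimize, though the cache size and replacement policy here are assumptions:

    #include <algorithm>
    #include <cstdint>
    #include <deque>
    #include <vector>

    // Illustrative: simulate a FIFO post-transform vertex cache and report the
    // average cache miss ratio of an index order (lower is better).
    double averageCacheMissRatio(const std::vector<uint32_t>& indices,
                                 std::size_t cacheSize)
    {
        std::deque<uint32_t> fifo;
        std::size_t misses = 0;
        for (uint32_t v : indices) {
            if (std::find(fifo.begin(), fifo.end(), v) == fifo.end()) {
                ++misses;                          // vertex must be re-transformed
                fifo.push_back(v);
                if (fifo.size() > cacheSize) fifo.pop_front();
            }
        }
        return static_cast<double>(misses) / (indices.size() / 3);
    }

On large meshes a good reordering pushes this ratio toward its lower bound of about 0.5, since an interior vertex is shared by roughly six triangles, while a poor order can approach 3.0.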
Compressing Large Boolean Matrices Using Reordering Techniques (30th International Conference on Very Large Databases (VLDB), Toronto, Canada, Aug 31 - Sep 3, 2004) (pp 13--23)
David Johnson, Shankar Krishnan, Jatin Chhugani, Subodh Kumar and Suresh Venkatasubramanian.
Large boolean matrices are a basic representational unit in a variety of applications, with some notable examples being interactive visualization systems, mining of large graph structures, and association rule mining. Designing space- and time-efficient, scalable storage and query mechanisms for such large matrices is a challenging problem. We present a lossless compression strategy to store and access such matrices efficiently on disk. Our approach views the columns of the matrix as points in a very high-dimensional Hamming space and formulates an optimization problem that reduces to solving an instance of the Traveling Salesman Problem in this space. Finding good solutions to large TSPs in high-dimensional Hamming spaces is itself a challenging and little-explored problem: we cannot readily exploit geometry to avoid examining all N^2 inter-city distances, and instances can be too large for standard TSP codes to run in main memory. Our multi-faceted approach adapts classical TSP heuristics by means of instance partitioning and sampling, and may be of independent interest. For instances derived from interactive visualization and telephone call data we obtain significant improvements in access time over standard techniques, and for the visualization application we also obtain significant improvements in compression.
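
To illustrate the core idea (not the paper's actual heuristics, which partition and sample far larger instances), a greedy nearest-neighbor tour over columns in Hamming space; placing similar columns adjacently reduces the 0-to-1 and 1-to-0 transitions along each row, which a run-length coder then exploits:

    #include <bitset>
    #include <cstddef>
    #include <vector>

    // Illustrative only: this greedy tour still examines all pairwise Hamming
    // distances, which is exactly what becomes infeasible at the paper's scale.
    constexpr std::size_t ROWS = 1024;     // assumed fixed matrix height
    using Column = std::bitset<ROWS>;

    std::vector<std::size_t> greedyColumnOrder(const std::vector<Column>& cols)
    {
        std::vector<std::size_t> order{0};  // assumes at least one column
        std::vector<bool> used(cols.size(), false);
        used[0] = true;
        for (std::size_t cur = 0; order.size() < cols.size(); ) {
            std::size_t best = 0, bestDist = ROWS + 1;
            for (std::size_t j = 0; j < cols.size(); ++j)
                if (!used[j]) {
                    std::size_t d = (cols[cur] ^ cols[j]).count(); // Hamming distance
                    if (d < bestDist) { bestDist = d; best = j; }
                }
            used[best] = true;
            order.push_back(best);
            cur = best;
        }
        return order;
    }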
ISOSLIDER: A System for Interactive Exploration of IsoSurfaces (VisSym 2003 Joint Eurographics-IEEE TCVG Symposium on Visualization, Grenoble, France, May 26-28, 2003) (pp 259--266)
Jatin Chhugani, Sudhir Vishwanath, Jonathan Cohen and Subodh Kumar.
We present ISOSLIDER, a system for interactive exploration of isosurfaces of a scalar field. Our algorithm focuses on fast updates of the isosurface for interactive display as a user makes small changes to the isovalue of the desired surface, and we exploit the coherence of these updates. Larger changes are supported as well. The isosurface is updated at the correct level of detail, so that neither too many operations need be performed nor too many triangles rendered. ISOSLIDER does not need to retain the entire volume in main memory and stores most data out of core. The central idea of the ISOSLIDER algorithm is to determine salient isovalues where the surface topology changes and to pre-encode these changes so as to facilitate fast updates to the triangulation.
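
A simplified sketch of isovalue coherence, assuming each cell caches the [lo, hi] range of its scalar values; only cells whose range contains exactly one of the old and new isovalues change status, so the triangulation can be patched rather than rebuilt. The linear scan and cell layout are assumptions for the example:

    #include <cstdint>
    #include <unordered_set>
    #include <vector>

    // Hypothetical layout: sliding the isovalue from v0 to v1 changes the
    // status of exactly those cells whose value range contains one of the two
    // values but not the other.
    struct CellRange { float lo, hi; };

    void slideIsovalue(const std::vector<CellRange>& cells,
                       std::unordered_set<uint32_t>& activeCells,
                       float v0, float v1)
    {
        for (uint32_t c = 0; c < cells.size(); ++c) {
            bool was = cells[c].lo <= v0 && v0 <= cells[c].hi;
            bool is  = cells[c].lo <= v1 && v1 <= cells[c].hi;
            if (was && !is)      activeCells.erase(c);   // drop its triangles
            else if (!was && is) activeCells.insert(c);  // triangulate this cell
        }
        // An out-of-core system would query an interval structure over the
        // salient isovalues instead of scanning every cell as done here.
    }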
Budget Sampling of Parametric Surface Patches (2003 ACM Symposium on Interactive 3D Graphics, California, Apr 27-30, 2003) (pp 131--138)
Jatin Chhugani and Subodh Kumar.
We investigate choosing point samples on a model comprising parametric patches to meet a user-specified budget. These samples may then be triangulated, rendered as points, or ray-traced. The main idea is to pre-compute a set of samples on the surface and, at rendering time, use a subset that meets the total budget while reducing the screen-space error across the whole model. We have used this algorithm for interactive display of large spline models on low-end graphics workstations. This is done by distributing the points on the surface to minimize surface error; these points are then drawn as screen-space squares that fill the gaps between them. Our algorithm works well in practice and has a low memory footprint. AVI (51 MB) | MPEG (41 MB) | PPT | ColorPlate.
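
As a small illustration of the splatting step, the screen-space size a square must have to close the gap to its neighbors, given a precomputed world-space sample spacing; the names are ours, and a perspective camera with vertical field of view fovy is assumed:

    #include <cmath>

    // Illustrative: side length, in pixels, of the screen-space square drawn
    // for a sample so that neighboring splats overlap and leave no holes.
    // 'spacing' is the world-space distance between neighboring samples.
    float splatSidePixels(float spacing, float distToEye,
                          float fovyRadians, int screenHeightPx)
    {
        // Projected length of a world-space segment at distance 'distToEye'.
        return spacing * screenHeightPx /
               (2.0f * distToEye * std::tan(fovyRadians * 0.5f));
    }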
Interactive Haptic Rendering of Deformable Surfaces Based on the Medial Axis Transform (EuroHaptics 2002, Edinburgh, UK, July 8-10, 2002) (pp 92--98)
Jason Corso, Jatin Chhugani and Allison Okamura.
We present a new method for interactive deformation and haptic rendering of viscoelastic surfaces. Objects are defined by a discretized Medial Axis Transform (MAT), which consists of an ordered set of circles (in 2D) or spheres (in 3D) whose centers are connected by a skeleton. The skeleton and the physical properties of the object, including the radii of the spheres centered on the skeleton and the material properties, are encapsulated in a single high-dimensional parametric surface. To compute the force upon deformation, we use a mass-spring-damper model that takes into account both normal and shear surface forces. Our implementation attains real-time 3D haptic and graphic rendering rates, making it appropriate for modeling deformation in complex haptic virtual environments. The algorithm is appealing because it takes advantage of single-point haptic interaction to render efficiently while maintaining a very low memory footprint. PPT.
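
A sketch of the force model named in the abstract, with a spring-damper term along the contact normal and a separate shear term in the tangent plane; the vector helpers and the four material constants (kN, cN, kT, cT) are illustrative choices, not the paper's exact formulation:

    #include <array>

    using Vec3 = std::array<float, 3>;
    static Vec3  add(Vec3 a, const Vec3& b) { for (int i = 0; i < 3; ++i) a[i] += b[i]; return a; }
    static Vec3  mul(Vec3 a, float s)       { for (int i = 0; i < 3; ++i) a[i] *= s; return a; }
    static float dot(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

    // Illustrative mass-spring-damper contact force: normal spring-damper plus
    // a shear spring-damper in the tangent plane, given unit surface normal n.
    Vec3 contactForce(const Vec3& displacement, const Vec3& velocity,
                      const Vec3& n, float kN, float cN, float kT, float cT)
    {
        float dn = dot(displacement, n), vn = dot(velocity, n);
        Vec3 dT = add(displacement, mul(n, -dn));   // tangential displacement
        Vec3 vT = add(velocity,     mul(n, -vn));   // tangential velocity
        Vec3 fN = mul(n, -(kN * dn + cN * vn));     // normal spring-damper
        Vec3 fT = add(mul(dT, -kT), mul(vT, -cT));  // shear spring-damper
        return add(fN, fT);
    }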
View-dependent Adaptive Tessellation of Spline Surfaces (2001 ACM Symposium on Interactive 3D Graphics, North Carolina, Mar 19-21, 2001) (pp 59--62)
Jatin Chhugani and Subodh Kumar.
We present a novel algorithm for dynamic, adaptive triangulation of spline surfaces. The algorithm is dynamic because it adjusts the level of detail of the triangulation at each frame. It is adaptive because it samples each surface patch more densely in regions of high curvature and less densely in regions of low curvature. These two properties have not been combined before. Our algorithm pre-computes a prioritized list of important samples on the surface. At rendering time, it adds these points to the triangulation in the specified order. Once the pre-computed points are exhausted and even more detail is required in some region of the patch, additional samples, now uniformly spaced, are added to the triangulation. The algorithm works well in practice and has a low memory footprint. PPT | ColorPlate.
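
A sketch of the rendering-time loop under an assumed incremental-triangulator interface; the two member functions are hypothetical stand-ins for the paper's machinery and their definitions are omitted:

    #include <cstddef>
    #include <vector>

    // Hypothetical interface standing in for the incremental triangulator.
    struct Triangulator {
        float screenSpaceError() const;   // current max deviation, in pixels
        void  insert(const float uv[2]);  // add one precomputed surface sample
    };

    struct PrioritizedPoint { float uv[2]; };

    // Feed samples in precomputed priority order until the view-dependent
    // screen-space error meets the pixel tolerance for this frame.
    std::size_t refineToTolerance(const std::vector<PrioritizedPoint>& samples,
                                  float pixelTolerance, Triangulator& tri)
    {
        std::size_t used = 0;
        while (used < samples.size() && tri.screenSpaceError() > pixelTolerance)
            tri.insert(samples[used++].uv);
        return used;  // caller falls back to uniform samples if still too coarse
    }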
Compression Tolerant Watermarking for Image Verification (ICIP 2000, Vancouver, Canada, Sep 10-13, 2000)
Harpal Bassali, Jatin Chhugani, Saurabh Agarwal, Alok Aggarwal and Pradeep Dubey.
Digital watermarking is seen as a viable solution for authenticating multimedia data, and hence for its security, especially in a networked environment. In this paper we present a new watermarking technique that adds a code to digital images in the spatial domain. The technique is shown to be robust under common compression schemes, including lossy ones. Watermark embedding is done keeping in mind the limitations of the human visual system. We use an error-diffusion procedure so as to minimize the chances of image tampering. For verification of the image, the original uncorrupted image is not required. Experimental results show that the watermark is robust to JPEG compression.
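
The blind verification the abstract describes can be sketched as the decode counterpart of the embedding example under the first patent above: each bit is recovered from a compression-stable block property, with no reference image. The block-mean property and quantization step are illustrative assumptions:

    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Illustrative blind decode: the bit is the parity of the quantized block
    // mean, a property that lossy compression perturbs only slightly. 'step'
    // must match the step assumed by the embedder.
    int decodeBitFromBlock(const std::vector<uint8_t>& block, double step)
    {
        double mean = 0.0;
        for (uint8_t p : block) mean += p;
        mean /= block.size();
        return static_cast<int>(std::lround(mean / step) & 1L);
    }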

Technical Reports

General From-Region Visibility Computation for Large Models
Jatin Chhugani, Budirijanto Purnomo, Shankar Krishnan and Subodh Kumar.
We present an efficient hardware-accelerated algorithm for region-based visibility computation on large polygonal models. The algorithm works for general, out-of-core 3D scenes. It conservatively bounds shadow volumes and reduces the general shadow-containment problem to hardware occlusion queries; as a result, we are able to take advantage of occlusion fusion. We also present a 2.5D variant that is able to bound the frusta more tightly. Empirical results show that our algorithm overestimates the real visibility only by a factor of 2-5 and takes less than a second for large 3D scenes. Quicktime (5 MB).
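
For reference, the shape of the hardware occlusion query the reduction targets, in OpenGL 1.5-style calls; availability of the GL_SAMPLES_PASSED tokens depends on the platform headers, and the blocking result read here is a simplification of how a real system would batch queries:

    #include <GL/gl.h>      // GL_SAMPLES_PASSED needs GL 1.5-era headers,
    #include <GL/glext.h>   // e.g. via glext.h on older toolchains

    // Illustrative: rasterize the candidate region's bounding geometry against
    // the occluders already in the depth buffer; zero samples passing means the
    // region is conservatively hidden (occlusion fusion comes for free, since
    // the depth buffer aggregates all occluders).
    bool isOccluded(GLuint query, void (*drawBoundingGeometry)())
    {
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // depth test only
        glDepthMask(GL_FALSE);
        glBeginQuery(GL_SAMPLES_PASSED, query);
        drawBoundingGeometry();       // conservative bounds of the shadow volume
        glEndQuery(GL_SAMPLES_PASSED);
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);
        GLuint samplesPassed = 0;
        glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samplesPassed);
        return samplesPassed == 0;
    }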