
Woven fabric model creation from a single image

Research output: Contribution to journal › Article


Publication details

Journal: ACM Transactions on Graphics
Date (Accepted/In press): 17 Jul 2017
Date (Published): 1 Oct 2017
Issue number: 5
Volume: 36
Original language: English

Abstract

We present a fast, novel image-based technique for reverse engineering woven fabrics at a yarn level. These models can be used in a wide range of interior design and visual special effects applications. To recover our pseudo-Bidirectional Texture Function (BTF), we estimate the three-dimensional (3D) structure and a set of yarn parameters (e.g., yarn width, yarn crossovers) from spatial and frequency domain cues. Drawing inspiration from previous work [Zhao et al. 2012], we solve for the woven fabric pattern and from this build a dataset. In contrast, however, we use a combination of image space analysis and frequency domain analysis, and, in challenging cases, match image statistics with those from previously captured known patterns. Our method determines, from a single digital image, captured with a digital single-lens reflex (DSLR) camera under controlled uniform lighting, the woven cloth structure, depth, and albedo, thus removing the need for separately measured depth data. The focus of this work is on the rapid acquisition of woven cloth structure and therefore we use standard approaches to render the results. Our pipeline first estimates the weave pattern, yarn characteristics, and noise statistics using a novel combination of low-level image processing and Fourier analysis. Next, we estimate a 3D structure for the fabric sample using a first-order Markov chain and our estimated noise model as input, also deriving a depth map and an albedo. Our volumetric textile model includes information about the 3D path of the center of the yarns, their variable width, and hence the volume occupied by the yarns, and colors. We demonstrate the efficacy of our approach through comparison images of test scenes rendered using (a) the original photograph, (b) the segmented image, (c) the estimated weave pattern, and (d) the rendered result.
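The frequency-domain cue mentioned in the abstract can be illustrated with a simple experiment: the regular spacing of yarns produces a strong peak in the Fourier magnitude spectrum of an intensity profile taken across the fabric, and the location of that peak gives the yarn period in pixels. The sketch below is a minimal illustration of this idea, not the paper's implementation; the synthetic 16-pixel-period profile and the `estimate_yarn_period` helper are assumptions made for the example.

```python
import numpy as np

def estimate_yarn_period(profile):
    """Estimate the dominant spatial period (in pixels) of a 1D
    intensity profile by locating the strongest non-DC peak in its
    Fourier magnitude spectrum."""
    profile = profile - profile.mean()      # remove the DC component
    spectrum = np.abs(np.fft.rfft(profile))
    k = np.argmax(spectrum[1:]) + 1         # skip the zero-frequency bin
    return len(profile) / k                 # convert bin index to a period

# Synthetic stand-in for a column-summed fabric photograph:
# a cosine with 16-pixel yarn spacing plus mild noise.
x = np.arange(512)
rng = np.random.default_rng(0)
profile = np.cos(2 * np.pi * x / 16) + 0.2 * rng.normal(size=512)
print(round(estimate_yarn_period(profile)))   # prints 16
```

In a real pipeline the profile would come from summing image rows or columns after illumination correction, and peaks would be sought separately along the warp and weft directions.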

Research areas

  • Appearance modeling, Depth map, Pseudo BTF, Reverse engineering, Textiles, Weave pattern
