Image Selection Techniques

Abstract

Tools disclosed herein comprise progressive, paint stroke based region recognition and selection tools. Using these tools, a user may partially paint a region of interest directly on an image (by using a paint brush or other similar tool). Unlike conventional selection tools, a user is not required to paint the entire region pixel-by-pixel. Rather, the desired region is automatically and intelligently recognized based on the partial selection. This is accomplished via a progressive selection algorithm. In addition, these tools provide the ability to quickly execute such region selections on multi-megapixel images.

Claims

1. One or more processor-accessible media comprising processor-executable instructions that, when executed, direct a device to perform actions comprising: a. selecting a portion of a foreground region of an image; b. creating a color foreground model based at least on the selected portion; c. creating a color background model based at least on the remaining unselected portion of the image; d. determining whether unselected pixels belonging to the unselected portion are foreground pixels that form a portion of the foreground region or background pixels that form a portion of a background region based at least on the color foreground model and the color background model; e. at least partly in response to determining that unselected pixels are foreground pixels that form a portion of the foreground region, labeling the unselected pixels of the image as foreground pixels; and f. at least partly in response to determining that unselected pixels are background pixels that form a portion of the background region, labeling the unselected pixels as background pixels; g. wherein the determining whether unselected pixels are foreground pixels or background pixels comprises: i. determining an average pixel color of the pixels comprising the foreground model; ii. determining an average pixel color of the pixels comprising the background model; and iii. comparing the color of an unselected pixel of the image to the average pixel color of the foreground model and the average pixel color of the background model.

2. The one or more processor-accessible media as recited in claim 1, wherein the comparing comprises minimization of an energy function E(X) = Σ_p E_d(x_p) + λ Σ_{p,q} E_c(x_p, x_q), where λ is a weight, x_p encodes a cost of pixel p, E_c(x_p, x_q) = |x_p − x_q| · (β·‖I_p − I_q‖ + ε)^−1, where ε = 0.05 and β = (⟨‖I_p − I_q‖²⟩)^−1, ⟨·⟩ is the expectation operator over the entire image, E(X) is a representation of the energy function, E_d(x_p) is a data term, X is a label of each pixel, x_p is a label of pixel p, x_q is a label of pixel q, I is a pixel color, I_p is the color of pixel p, and I_q is the color of pixel q.

3. The one or more processor-accessible media as recited in claim 2, wherein the minimization of the energy function is computed in parallel via multiple processor-cores by: i. extracting a plurality of nodes from E(X) = Σ_p E_d(x_p) + λ Σ_{p,q} E_c(x_p, x_q); ii. creating a graph based at least on the plurality of nodes; iii. dividing the graph into a plurality of subgraphs; and iv. concurrently determining an augmenting path in the plurality of subgraphs.

4. The one or more processor-accessible media as recited in claim 3, wherein the number of subgraphs is equal to a number of processor-cores of a computing device that the parallel computation will be executed on.

5. The one or more processor-accessible media as recited in claim 2, wherein the minimization of the energy function further comprises: creating a graph where each graph node represents an unselected pixel; and optimizing the graph.

6. The one or more processor-accessible media as recited in claim 1, further comprising instructions in the memory for: a. defining borders between the foreground region and a background region by: i. creating a fixed narrow band from a low-resolution version of the image; ii. creating an adaptive band based at least partially on the fixed narrow band; iii. overlaying the adaptive band over a high-resolution version of the image; and iv. labeling pixels in the adaptive band as either foreground pixels or background pixels.

7. The one or more processor-accessible media as recited in claim 1, wherein the foreground color model comprises a Gaussian Mixture Model.

8. One or more processor-accessible media comprising processor-executable instructions that, when executed, direct a device to perform actions comprising: a. selecting a portion of a foreground region of an image; b. creating a color foreground model based at least on the selected portion; c. creating a color background model based at least on a portion of the remaining unselected portion of the image; and d. determining whether unselected pixels belonging to the unselected portion are foreground pixels that form a portion of the foreground region or background pixels that form a portion of a background region based at least on the color foreground model and the color background model: i. at least partly in response to determining that unselected pixels are foreground pixels that form a portion of the foreground region, labeling the unselected pixels of the image as foreground pixels; and ii. at least partly in response to determining that unselected pixels are background pixels that form a portion of the background region, labeling the unselected pixels as background pixels.

9. The one or more processor-accessible media as recited in claim 8, wherein the determining whether unselected pixels are foreground pixels or background pixels comprises: a. determining an average pixel color of the pixels comprising the background model; and b. comparing the color of an unselected pixel of the image to the average pixel color of the foreground model and the average pixel color of the background model.

10. The one or more processor-accessible media as recited in claim 9, wherein the comparing comprises minimization of an energy function E(X) = Σ_p E_d(x_p) + λ Σ_{p,q} E_c(x_p, x_q), where λ is a weight, x_p encodes a cost of pixel p, E_c(x_p, x_q) = |x_p − x_q| · (β·‖I_p − I_q‖ + ε)^−1, where ε = 0.05 and β = (⟨‖I_p − I_q‖²⟩)^−1, ⟨·⟩ is the expectation operator over the entire image, E(X) is a representation of the energy function, E_d(x_p) is a data term, X is a label of each pixel, x_p is a label of pixel p, x_q is a label of pixel q, I is a pixel color, I_p is the color of pixel p, and I_q is the color of pixel q.

11. The one or more processor-accessible media as recited in claim 10, wherein the minimization of the energy function is computed in parallel via multiple processor-cores by: i. extracting a plurality of nodes from E(X) = Σ_p E_d(x_p) + λ Σ_{p,q} E_c(x_p, x_q); ii. creating a graph based at least on the plurality of nodes; iii. dividing the graph into a plurality of subgraphs; and iv. concurrently determining an augmenting path in the plurality of subgraphs.

12. The one or more processor-accessible media as recited in claim 11, wherein the number of subgraphs is equal to a number of processor-cores of a computing device that the parallel computation will be executed on.

13. The one or more processor-accessible media as recited in claim 8, further comprising instructions in the memory for: a. defining borders between the foreground region and a background region by: i. creating a fixed narrow band from a low-resolution version of the image; ii. creating an adaptive band based at least partially on the fixed narrow band; iii. overlaying the adaptive band over a high-resolution version of the image; and iv. labeling pixels in the adaptive band as either foreground pixels or background pixels.

14. The one or more processor-accessible media as recited in claim 8, wherein the selection of the foreground portion is done via at least one paint brush stroke.

15. The one or more processor-accessible media as recited in claim 8, wherein the foreground color model comprises a Gaussian Mixture Model.

16. A computing device comprising: a processor; a memory coupled to the processor, the memory comprising computer-program instructions executable by the processor for: a. selecting a portion of a foreground region of an image; b. creating a color foreground model based at least on the selected portion; c. creating a color background model based at least on a portion of the remaining unselected portion of the image; d. determining whether unselected pixels belonging to the unselected portion are foreground pixels that form a portion of the foreground region or background pixels that form a portion of a background region based at least on the color foreground model and the color background model: i. at least partly in response to determining that unselected pixels are foreground pixels that form a portion of the foreground region, labeling the unselected pixels of the image as foreground pixels; and ii. at least partly in response to determining that unselected pixels are background pixels that form a portion of the background region, labeling the unselected pixels as background pixels.

17. The computing device of claim 16, wherein the determining whether unselected pixels are foreground pixels or background pixels comprises: a. determining an average pixel color of the pixels comprising the foreground model; b. determining an average pixel color of the pixels comprising the background model; and c. comparing the color of an unselected pixel of the image to the average pixel color of the foreground model and the average pixel color of the background model.

18. The computing device of claim 17, wherein the comparing comprises minimization of an energy function E(X) = Σ_p E_d(x_p) + λ Σ_{p,q} E_c(x_p, x_q), where λ is a weight, x_p encodes a cost of pixel p, E_c(x_p, x_q) = |x_p − x_q| · (β·‖I_p − I_q‖ + ε)^−1, where ε = 0.05 and β = (⟨‖I_p − I_q‖²⟩)^−1, ⟨·⟩ is the expectation operator over the entire image, E(X) is a representation of the energy function, E_d(x_p) is a data term, X is a label of each pixel, x_p is a label of pixel p, x_q is a label of pixel q, I is a pixel color, I_p is the color of pixel p, and I_q is the color of pixel q.

19. The computing device of claim 16, wherein the selection of the foreground portion is done via at least one paint brush stroke.

20. The computing device of claim 16, wherein the foreground color model comprises a Gaussian Mixture Model.
BACKGROUND

[0001] Prior techniques of selecting regions in images have traditionally required tedious pixel-by-pixel region selection. In addition, traditional techniques have required computationally expensive global optimization during the selection process. Global optimization is particularly problematic while working on large images. Specifically, since substantial computation is required for global optimizations, instant selection feedback has not been available on multi-megapixel images. This results in an "act-and-wait" user experience in which a user lacks a feeling of control during region selection. Thus, tools are needed which quickly and intelligently predict which pixels comprise a region, allow selection of the region and give a user instant feedback during the selection process.

SUMMARY

[0002] This document describes tools for 1) intelligent paint based recognition and selection of image regions via a progressive selection algorithm; and 2) optimization of the recognition and selection to produce instant selection feedback.

[0003] Specifically, these tools may comprise progressive, paint stroke based region recognition and selection tools. Using these tools, a user may partially paint a region of interest directly on an image (by using a paint brush or other similar tool). Unlike conventional selection tools, a user is not required to paint the entire region pixel-by-pixel. Rather, the desired region is automatically and intelligently recognized based on the partial selection. This is accomplished via a progressive selection algorithm. In addition, tools are also disclosed that provide the ability to quickly execute such region selections on multi-megapixel images.

[0004] This Summary is provided to introduce a collection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.

BRIEF DESCRIPTION OF THE CONTENTS

[0005] The detailed description is described with reference to accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items.

[0006] FIG. 1 depicts an illustrative computing device that includes a paint based recognition and selection of an image region via a user stroke.

[0007] FIG. 2 depicts illustrative engines comprising a non-limiting embodiment of the paint selection engine.

[0008] FIG. 3 depicts an illustrative embodiment of paint based recognition and selection of an image region via a user stroke.

[0009] FIGS. 4 and 5 depict an illustrative embodiment of measurements used by the progressive selection algorithm.

[0010] FIG. 6 depicts an illustrative process for the paint based region selection.

DETAILED DESCRIPTION

[0011] This document describes, in part, tools for 1) intelligent paint based recognition and selection of image regions via a progressive selection algorithm; and 2) optimization of the paint based recognition and selection. The described tools, therefore, provide a plurality of features that are useful in image region recognition and selection.

[0012] Specifically, these tools comprise progressive, paint stroke based region recognition and selection tools. Using these tools, a user need only partially paint a region of interest directly on an image (by using a paint brush or other similar tool). Unlike conventional selection tools, a user is not required to paint the entire region pixel-by-pixel. Rather, the desired region is automatically and intelligently recognized based on the partial selection. This is accomplished via a progressive selection algorithm.

[0013] In addition, these tools provide the ability to quickly execute such region selections on multi-megapixel images. This ability allows instant user feedback even while selecting regions in very large images. This feature is due at least in part to the use of the progressive selection algorithm and a plurality of optimization tools: 1) multi-core graph cutting; and 2) adaptive band upsampling.

[0014] The discussion begins with a section entitled "Intelligent Paint Based Selection," which describes one non-limiting environment that may implement the tools described herein. In addition, this section includes the following subsections: "Adding Frontal Foreground Pixels," "Stroke Competition" and "Fluctuation Removal." The discussion continues with another section entitled "Optimization of the Paint Based Selection," which includes the following subsections: "Multi-Core Graph Cutting" and "Adaptive Band Upsampling." Another section follows, entitled "Viewport Based Local Selection." The discussion concludes with a final section entitled "Illustrative Processes."

[0015] This brief introduction, including section titles and corresponding summaries, is provided for the reader's convenience and is not intended to limit the scope of the claims, nor the sections that follow.

Intelligent Paint Based Selection

[0016] FIG. 1 depicts an illustrative architecture 100 that may employ the described paint based recognition and selection tools. This architecture includes a user 102 and a user stroke 104 which is created over a region of an image. This image may comprise any sort of image file having any sort of format, such as JPEG, BMP, TIFF, RAW, PNG, GIF, etc. The user stroke is input into computing device 106. The user stroke may be input by a mouse, stylus, finger or other similar input device which communicates via a wireless/wired connection with the computing device. The user stroke may be made via a paint brush stroke or other similar tool. The user stroke need only provide a partial selection of the image regions that the user desires to select.

[0017] In this embodiment, the paint selection engine allows the computing device to predict the boundaries of the image regions the user actually desired to select. Here, the user selects a region of the image illustrated in FIG. 1, comprising a picture of a human face. As illustrated, even though the stroke did not select the entire hair and face region of the image pixel-by-pixel, the computing device is able to recognize which pixels comprise particular regions of the image.

[0018] As illustrated, a user wishes to select the hair and face regions of the image while leaving the background unselected. This is accomplished via progressive selection engine 118, described in detail below. User stroke 104 provides the computing device with sufficient input to recognize and select the hair and face regions, as illustrated in instant selection feedback 122, while not selecting the background. Instant selection feedback may be at a speed at which a user experiences virtually instantaneous feedback, or it may take somewhat longer. In addition, this selection is displayed to the user virtually instantaneously even when selecting regions of multi-megapixel images. These features will be discussed in detail below.
[0019] As illustrated, the paint selection engine also may include a plurality of engines. For instance, the paint selection engine may include a stroke competition engine 116 to allow a user to efficiently deselect an unintended selection. A fluctuation removal engine 120, meanwhile, may allow correction of distracting pixel changes disconnected from the image region the user is focusing on.

[0020] As illustrated, computing device 106 includes one or more processors 108 as well as memory 110, upon which application(s) 112 and the paint selection engine 114 may be stored. The computing device may comprise any sort of device capable of executing computer-executable instructions on a processor. For instance, the device may comprise a personal computer, a laptop computer, a mobile phone, a set-top box, a game console, a personal digital assistant (PDA), a portable media player (PMP) (e.g., a portable video player (PVP) or a digital audio player (DAP)), and the like.

[0021] FIG. 2 illustrates an embodiment of the paint selection engine 114. In this embodiment, the paint selection engine first routes the user stroke 104 to the stroke competition engine 116, then to the progressive selection engine 118 and finally to the fluctuation removal engine 120 before yielding instant selection feedback 122.

[0022] As introduced above, the user stroke is first routed to the stroke competition engine. This engine may be used to remove contradicting strokes (or a part of a contradicting stroke) which a user mistakenly painted. Because subsequent engines treat user strokes as definite selections (such as foreground or background), this engine allows quick, intelligent and efficient removal of unintended selections rather than manual pixel-by-pixel deselection. This engine is discussed in detail below.

[0023] As illustrated, the user stroke is then routed to the progressive selection engine 118. Here, the progressive selection engine may segment an image into various regions (such as foreground and background regions) and recognize which region a user wishes to select based on the user stroke 104. This selection via the user stroke does not require a user to select the region pixel-by-pixel. In other words, even without fully defining a region, a user can select the entire region by selecting only a partial section of the desired region.

[0024] The paint selection engine accomplishes this recognition and selection via a progressive selection algorithm. As discussed below, recognition and selection of an entire region is accomplished by a comparison of a plurality of color models at least partially derived from the image. In one embodiment, this may be accomplished by an optimization based on two color models (for instance, foreground and background color models).

[0025] For instance, in one non-limiting embodiment, a user partially selects pixels comprising a first region via a user stroke. A first color model associated with the first region is then created and an average color of the selected pixels is determined. Then, a second color model associated with a second region of unselected pixels is created.

[0026] In one embodiment, to determine whether unselected pixels belong to the first region or the second region, a comparison of the colors of the unselected pixels is executed. Specifically, the color of the unselected pixels is compared to the average color of pixels belonging to the first region and the second region. In one embodiment, this is accomplished first by representing the unselected pixels as nodes in a graph. A graph cut optimization is then conducted to determine which region each pixel is most likely a part of. The likelihood term of the optimization is based on the first color model, and the smoothness term is based on the color similarities of any pair of neighboring pixels. Finally, according to the outcome of the optimization, the pixels belonging to the first region (for instance, a foreground region) are selected. Thus, the result is an efficient recognition and selection of an entire region with only partial region selection.

[0027] This embodiment also illustrates the interaction between the progressive selection engine and a plurality of optimization engines. These optimization engines include 1) adaptive band upsampling engine 202; and 2) multi-core graph cut engine 204. These engines help optimize the progressive selection algorithm executed in the progressive selection engine. These additional engines are discussed in more detail below.
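
To make the comparison of paragraphs [0025] and [0026] concrete, the following is a minimal sketch, assuming the image and the painted stroke are available as NumPy arrays. It implements only the average-color comparison; the graph cut optimization with likelihood and smoothness terms is sketched separately further below. All function and variable names are illustrative, not from the patent.

```python
import numpy as np

def label_by_average_color(image, selected_mask):
    """image: HxWx3 float array; selected_mask: HxW bool array of painted pixels."""
    fg_mean = image[selected_mask].mean(axis=0)    # average color, first model
    bg_mean = image[~selected_mask].mean(axis=0)   # average color, second model
    # Distance from every pixel's color to each model's average color.
    d_fg = np.linalg.norm(image - fg_mean, axis=-1)
    d_bg = np.linalg.norm(image - bg_mean, axis=-1)
    labels = d_fg <= d_bg                          # True = first (foreground) region
    labels[selected_mask] = True                   # painted pixels remain selected
    return labels
```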
[0028] FIG. 3 illustrates an embodiment 300 of a user selecting multiple regions with a user stroke via the progressive selection algorithm. At operation 302, an image 304 with a background 306 is illustrated with a hair region 308 and a face region 310. The goal in this embodiment is to select both the entire hair and face regions of the image with only partial selection of the hair and face regions via a user stroke (illustrated here as a paint brush stroke). In other embodiments, only one region or more than two regions may be selected with a single user stroke.

[0029] In this non-limiting embodiment, at operation 312, the user first starts the stroke 314 over a portion of the hair region via a paint brush. In this embodiment, a user creates the stroke while holding a left mouse button and dragging the brush across the hair region. The stroke serves to partially select pixels that belong to the desired region. The stroke need not be straight; it may be curved, angular or any other type of shape. In other embodiments, other tools may be used for the region selection, such as a digital pen or a touch screen and finger. The user need only partially paint the hair region, as the selection will be expanded from the selected pixels to the boundaries of the hair region via automatic region recognition.

[0030] At operation 316, while still holding down the left mouse button, the user extends stroke 314 over the face region 310. Similar to the hair region, the user need not select the entire region.

[0031] In response to the user stroke, the progressive selection engine implements a progressive selection algorithm. This algorithm will assign region labels to the unselected pixels of the image based only on the partial selection of the regions (the pixels covered by stroke 314). In this instance, the pixels that represent the hair and face regions will be labeled as foreground pixels and the pixels that represent the background 306 will be labeled as background pixels.

[0032] Specifically, as stroke 314 covers the pixels belonging to the foreground (hair and face) regions, a foreground color model is generated. The foreground color model determines the likelihood that an unselected pixel belongs to the foreground. A background color model is also generated in response to the stroke, based at least on the unselected pixels. The background color model determines the likelihood that an unselected pixel belongs to the background. The unselected pixels are then labeled as either foreground or background based on each unselected pixel's color similarity to the average colors of pixels belonging to the foreground and background models. In one embodiment, this is accomplished via a graph cut optimization conducted over the graph created from the unselected pixels. Once the unselected pixels are assigned labels, the pixels comprising the hair and face regions are selected so that the entire foreground region (here, the hair and face regions) is selected. The regions can then be copied, deleted or otherwise manipulated.
[0033] This region recognition and selection may be done in a very short time interval (usually 0.03 to 0.3 seconds for images ranging from 1 megapixel to 40 megapixels). As illustrated in operation 318, this results in the selection being displayed to the user virtually instantaneously after the user provides the stroke. In this embodiment, selection of the hair and face regions may be presented to the user with dotted lines 320 around the hair and face regions.

[0034] FIG. 4 illustrates an exemplary embodiment of region selection via the progressive selection algorithm. As illustrated, the progressive selection engine focuses on enabling region recognition of an image via the user's stroke over a portion of a region(s). The progressive selection engine accomplishes this by labeling the pixels as belonging to a particular region via the progressive selection algorithm.

[0035] Pixel labeling via the progressive selection algorithm is the process of determining whether an unselected pixel belongs to a particular region. For instance, in FIG. 3, the user stroke resulted in a determination that the pixels representing unselected portions of the hair and face regions belonged to the hair and face regions. In addition, a determination was made that the unselected pixels representing the background belonged to the background.

[0036] To illustrate the method of calculating these determinations, referring to FIG. 4, a user first selects area F 402 on image 400 with a paint brush on the background U 404. Area F will belong to the foreground region. The user then selects area B 406, which in this embodiment overlaps with area F.

[0037] Area F′ 408 is then immediately computed and added into the existing selection of area F, where F = F ∪ F′. In other words, F′ is a prediction of which pixels will likely be in the foreground region based on the regions partially selected via the user's brush stroke(s).

[0038] FIG. 5 illustrates an embodiment of determining F′. First, after initial user selection(s) of pixels via a brush stroke, at least two color models are determined. In this embodiment, a foreground and a background color model are determined. In order to generate these models, several measurements are conducted as illustrated in FIG. 5. First, regarding the foreground model, the overlap between area B and U is designated as "seed pixels", which are represented as area S 502, where S = B ∩ U.

[0039] Second, box R 504 is computed by dilating the bounding box of region S by a certain width. In one embodiment, this width is 40 pixels, although other embodiments may use any other suitable width measured in pixels. The pixels contained in both box R and F are designated as area L 506, which are local foreground pixels, where L = R ∩ F.
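
The measurements S = B ∩ U, box R, and L = R ∩ F can be expressed directly on boolean masks. Below is a sketch under the assumptions that F (existing selection), B (new brush stroke) and the derived U are same-shaped boolean arrays and that the stroke actually touches the unselected area (S non-empty); the 40-pixel dilation width is the embodiment's example value, and all names are illustrative.

```python
import numpy as np

def selection_measurements(F, B, width=40):
    U = ~F                                  # unselected region
    S = B & U                               # seed pixels: S = B ∩ U
    ys, xs = np.nonzero(S)                  # assumes S is non-empty
    h, w = S.shape
    # Box R: the bounding box of S, dilated by `width` pixels on each side.
    y0, y1 = max(ys.min() - width, 0), min(ys.max() + width + 1, h)
    x0, x1 = max(xs.min() - width, 0), min(xs.max() + width + 1, w)
    R = np.zeros_like(S)
    R[y0:y1, x0:x1] = True
    L = R & F                               # local foreground pixels: L = R ∩ F
    return S, R, L
```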
[0040] Using both the seed pixels and the local foreground pixels, the local foreground color model p^f(·) is constructed by fitting a Gaussian Mixture Model (GMM) with four components. The use of the local foreground pixels permits a more stable color estimation of the GMM when the brush or seed pixel regions are relatively small.

[0041] Then, the background color model is updated. At the beginning of the user interaction, the background color model p^b(·) (which is a GMM with eight components) is initialized by randomly sampling a number of pixels in the background. In one embodiment, the number of sample pixels is 1,200 pixels. After each user interaction (e.g., as the stroke is elongated), the foreground samples from the previous interaction are replaced with the same number of pixels randomly sampled from the background. The background GMM is then re-estimated using the updated samples.

[0042] With these two color models, a multilevel graph cut based optimization is applied to obtain F′. This is obtained by inserting the data in equation 1 into equation 2 and minimizing equation 2. Equation 1 is defined as:

[0000] E_d(x_p) = (1 − x_p)·K ∀p ∈ S

[0000] E_d(x_p) = x_p·K ∀p ∈ S^B

[0000] E_d(x_p) = x_p·L_p^f + (1 − x_p)·L_p^b ∀p ∈ U \ (S ∪ S^B)

[0043] where K is a sufficiently large constant, I_p is the image color at pixel p, L_p^f = −ln p^f(I_p) and L_p^b = −ln p^b(I_p).

[0044] In Eq. 1, the first row is a representation of the region S. The second row is a representation of S^B, which are "hard" background strokes (strokes a user uses to expand the background). The third row is a representation of the likely identity of the remaining unselected pixels.

[0045] This data is used in equation 2 via the data term E_d(x_p). Equation 2 is defined as:

[0000] E(X) = Σ_p E_d(x_p) + λ Σ_{p,q} E_c(x_p, x_q)

where λ is the weight, x_p encodes the cost of the pixel p (1 for foreground, 0 for background), E_c(x_p, x_q) = |x_p − x_q| · (β·‖I_p − I_q‖ + ε)^−1, where ε = 0.05 and β = (⟨‖I_p − I_q‖²⟩)^−1, ⟨·⟩ is the expectation operator over the entire image, and E(X) is a representation of the energy function.

[0046] Minimization of this energy function (Eq. 2) results in F′. In other words, the minimization of equation 2 determines whether a given pixel will be labeled as belonging to the foreground or background regions.

[0047] The resulting progressive selection is also efficient in a variety of ways. First, only the background pixels participate in the optimization in some instances. Second, the data term E_d(x_p) is less ambiguous in most areas since the foreground color model is relatively compact. Third, optimization is enhanced since the boundary of the newly expanded selection in each user interaction is a small fraction of the region boundary. Fourth, the progressive nature of the selection algorithm permits local minimization as opposed to global minimization after each user interaction (e.g., stroke). Thus the progressive selection only requires a series of local minimizations, not a series of global minimizations, during the selection process. This provides greatly enhanced usability and selection quality because of the decrease in required computational resources.
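
The following is a sketch of how Eq. 1's data term and Eq. 2's contrast coefficients might be assembled, assuming scikit-learn's GaussianMixture is used for the four-component foreground model p^f and the eight-component background model p^b. The constant K, the index conventions and all names are illustrative choices, not the patent's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

K = 1e8   # "sufficiently large" hard-constraint cost; exact value is illustrative

def data_term(colors, seed_idx, bg_stroke_idx, fg_pixels, bg_samples):
    """colors: Nx3 colors of pixels in U; fg_pixels: colors of S and L;
    bg_samples: randomly sampled background colors (e.g., 1,200 pixels)."""
    p_f = GaussianMixture(n_components=4).fit(fg_pixels)    # local foreground GMM
    p_b = GaussianMixture(n_components=8).fit(bg_samples)   # background GMM
    L_f = -p_f.score_samples(colors)   # L_p^f = -ln p^f(I_p)
    L_b = -p_b.score_samples(colors)   # L_p^b = -ln p^b(I_p)
    cost_fg = L_f.copy()               # E_d contribution if x_p = 1
    cost_bg = L_b.copy()               # E_d contribution if x_p = 0
    cost_bg[seed_idx] = K              # p in S: labeling background costs K
    cost_fg[bg_stroke_idx] = K         # p in S^B: labeling foreground costs K
    return cost_bg, cost_fg

def contrast_weights(image, eps=0.05):
    """Coefficients (β·‖I_p − I_q‖ + ε)^-1 of E_c for 4-connected neighbor pairs."""
    dy = np.linalg.norm(np.diff(image, axis=0), axis=-1)   # vertical neighbors
    dx = np.linalg.norm(np.diff(image, axis=1), axis=-1)   # horizontal neighbors
    beta = 1.0 / np.mean(np.concatenate([dy.ravel(), dx.ravel()]) ** 2)
    return 1.0 / (beta * dy + eps), 1.0 / (beta * dx + eps)
```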
Adding Frontal Foreground Pixels

[0048] The progressive selection engine may be aided by adding frontal foreground pixels. This process accelerates the region selection process. It may use equation 3, which is defined as:

[0000] E_d(x_p) = (1 − x_p)·K ∀p ∈ S ∪ ∂F

[0049] Referring to FIG. 5, ∂F 508 is an interior boundary of area F. These pixels are used as frontal foreground pixels to accelerate the expansion speed of the selection. In other words, ∂F is used as a predictor of F′. This may be accomplished by replacing the first row in Eq. 1 with Eq. 3, in which Eq. 3 is used as a hard constraint. This allows a more efficient boundary selection, which is reflected in more efficient boundary expansion and faster propagation in smooth regions.

Stroke Competition

[0050] Stroke competition engine 116 is an engine which may interact with the progressive selection engine. For instance, this engine is helpful if a user mistakenly selects a region he or she did not intend to select as part of the foreground. In previous systems, a user would have to override the selection pixel-by-pixel because of hard constraints. However, the stroke competition engine allows an intelligent, quick and efficient removal of unintended selections.

[0051] For instance, first, strokes are segmented via Eq. 4, which is defined as:

[0000] E_d(x_p) = (1 − x_p)·K ∀p ∈ S

[0000] E_d(x_p) = x_p·L_p^f + (1 − x_p)·L_p^b ∀p ∈ C \ S

[0052] In Eq. 4, C is an existing stroke which conflicts with the new stroke S. The conflicting stroke is segmented by the graph cut based segmentation in Eq. 4. Eq. 4 therefore permits an estimation of the color model p^b(I_p) using all the pixels within the stroke C. This enables a user to override conflicting strokes and freely select or deselect regions without selecting or deselecting individual pixels.

Fluctuation Removal

[0053] Another process which may enhance progressive selection is fluctuation removal via the fluctuation removal engine 120. This process allows a user to reject non-local pixel label changes (global label changes). Fluctuations are problematic when a user changes his or her region selection locally (close to the paint brush). As a result of these changes, unintentional changes occur in some parts of the selection far from the local region. These changes occur because optimization may occur globally as opposed to locally, due to unavoidable and unintended color ambiguity during minimization. The resulting effect is distracting to users.

[0054] The fluctuation removal engine may remove fluctuation by assuming that the user merely wants to make a new selection near the brush. After the progressive labeling process, the new selection(s) may consist of several disconnected regions resulting from fluctuation. Consequently, regions which are not connected to seed pixels are rejected. The rejection of these new F′s allows only local changes. This allows a user to preserve existing selections which are far away from the brush while eliminating fluctuation.

Optimization of the Paint Based Selection

[0055] As introduced above, several optimization engines may enhance the efficiency of the progressive selection engine. For instance, in one embodiment, progressive selection is used in conjunction with the optimization engines discussed below to reduce the number of pixels considered during user selection. This greatly accelerates user feedback during region selection.

Multi-Core Graph Cutting

[0056] Multi-core graph cut engine 204, as discussed above, is one engine that may aid in the optimization of the progressive selection engine. This engine permits parallelization of the selection process. Specifically, the multi-core graph cut engine allows Eq. 2 to be minimized concurrently. This allows multi-core processors to compute the minimization significantly faster compared to a single-core processor. This engine may be used with or without the adaptive band upsampling engine 202.

[0057] As such, the multi-core graph cut engine serves to parallelize a sequential augmenting path based dynamic tree algorithm without computationally expensive synchronizations. In one non-limiting embodiment, a graph representing unselected pixels is created. In this graph, each graph node represents a pixel in the image. In addition, an extra source node and sink node representing foreground and background are added into the graph; the edge capacities of the graph are determined by Eq. 2.

[0058] The graph is then divided into disjoint subgraphs. In one embodiment, the number of subgraphs is identical to the number of processor cores that the minimization will be executed on.

[0059] Once the subgraphs are determined, augmenting paths in the plurality of subgraphs are calculated concurrently. When any one of the subgraphs cannot find an augmenting path from the source node to the sink node, the entire graph is re-partitioned into different disjoint subgraphs and concurrently searched for augmenting paths again. The resulting new partition(s) allows paths to be found that could not be found in the previous partition(s). This process is iterated until no augmenting path can be found in two successive iterations. In other words, the process is carried out concurrently until the energy function cannot be minimized any further.

[0060] In order to allocate balanced processor-core workloads, in one embodiment, the active nodes are dynamically partitioned in the graph so that each processor core processes subgraphs with an equal number of active nodes. This results in a roughly balanced division of workload per processor core.
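
The partition-search-repartition loop of paragraphs [0058] and [0059] can be outlined as below. This is a structural sketch only: the per-subgraph augmenting path search and the balanced partitioning are passed in as callables, since a full augmenting-path max-flow solver is beyond a short example, and the sketch repartitions every round for simplicity rather than only when a subgraph stalls. Termination follows the two-successive-failures rule described above.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_graph_cut(graph, num_cores, partition, find_augmenting_path):
    """partition(graph, k): k disjoint subgraphs with balanced active nodes.
    find_augmenting_path(subgraph): True if a source-to-sink path was found."""
    failed_rounds = 0
    while failed_rounds < 2:   # stop after two successive iterations without a path
        subgraphs = partition(graph, num_cores)        # one subgraph per core
        with ThreadPoolExecutor(max_workers=num_cores) as pool:
            found = list(pool.map(find_augmenting_path, subgraphs))
        failed_rounds = 0 if any(found) else failed_rounds + 1
    return graph   # the final residual graph encodes the foreground/background cut
```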
Adaptive Band Upsampling

[0061] In addition to the multi-core graph cut engine, the adaptive band upsampling engine 202 also aids in enhancing the efficiency of the progressive selection engine. Specifically, this engine focuses on reducing the number of pixels that require consideration in region borders. More specifically, this engine reduces the number of pixels in region borders that participate in the minimization of Eq. 2. This enhancement may occur via a multilevel optimization framework in the paint selection engine. In one embodiment, the engine computes a selection at a coarse detail level, and then upsamples the selection through energy minimization in a band around the selection at a finer detail level.

[0062] More specifically, in this embodiment, this engine first creates a fixed-width narrow band from a low-resolution version of an image. The fixed narrow band is typically created by dilating the border pixels by +/−2 pixels. This serves as a foreground mask. Second, the fixed narrow band is upsampled using Joint Bilateral Upsampling (JBU) to create an adaptive narrow band. Specifically, for each pixel p in the narrow band at the finer level, its upsampled value x_p is:

[0000] x_p = (1/k_p) · Σ_{q↓ ∈ Ω} x_{q↓} · f(‖p↓ − q↓‖) · g(‖I_p − I_q‖)

[0063] where f(·) and g(·) are spatial and range Gaussian kernels, p↓ and q↓ are coarse-level coordinates, {x_{q↓}} is the coarse result, Ω is a 5×5 spatial support centered at p↓, and k_p is a normalization factor.
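
A direct transcription of the JBU formula above follows, assuming x_coarse holds the coarse selection values, image is the fine-level color image, and scale is the coarse-to-fine size ratio. The Gaussian kernel widths and the nearest-pixel lookup of I_q at the fine level are illustrative assumptions, and the plain Python loops are for clarity rather than speed.

```python
import numpy as np

def joint_bilateral_upsample(x_coarse, image, scale, sigma_s=1.0, sigma_r=0.1):
    h, w = image.shape[:2]
    ch, cw = x_coarse.shape
    x_up = np.zeros((h, w))
    for py in range(h):
        for px in range(w):
            pdy, pdx = py / scale, px / scale          # p↓: coarse coordinates
            y0, x0 = int(pdy), int(pdx)
            acc = k_p = 0.0
            for qy in range(max(y0 - 2, 0), min(y0 + 3, ch)):   # Ω: 5x5 support
                for qx in range(max(x0 - 2, 0), min(x0 + 3, cw)):
                    f = np.exp(-((pdy - qy)**2 + (pdx - qx)**2) / (2 * sigma_s**2))
                    # Range kernel g compares fine-level colors; q's fine-level
                    # color is approximated by the nearest fine pixel.
                    Iq = image[min(int(qy * scale), h - 1),
                               min(int(qx * scale), w - 1)]
                    g = np.exp(-np.sum((image[py, px] - Iq)**2) / (2 * sigma_r**2))
                    acc += x_coarse[qy, qx] * f * g
                    k_p += f * g
            x_up[py, px] = acc / k_p if k_p > 0 else 0.0   # normalize by k_p
    return x_up
```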
[0064] Note that the upsampled value itself is a reasonable approximation of the alpha matte, which represents the pixel semi-transparencies around the selection boundary. This value is therefore used to generate an adaptive narrow band. In one embodiment, pixels in the fixed narrow band are labeled (as foreground or background) if the upsampled value x_p is out of the range [0.25, 0.75]. The resulting adaptive narrow band is thus narrowed around the sharp edges of the region and kept wider around lower-contrast boundaries. The adaptive narrow band upsampling serves to reduce the size of the graph without sacrificing image details. More specifically, the adaptive narrow band's application to the high-resolution image reduces the resulting graph before minimization of Eq. 2, which results in faster minimization speed. In addition, in order to produce a seamless connection between region borders, frontal foreground pixels neighboring the narrow band may be added as foreground hard constraints.

[0065] For the multilevel optimization, the size of the coarsest image and the number of levels of the graph may need to be determined. In one embodiment, the coarsest image dimensions are determined by downsampling the input (while keeping the original aspect ratio) so that (w·h)^(1/2) is equal to a pre-determined value, where w and h are the width and height of the coarsest image. Then the number of levels is automatically set so that the downsampling ratio between two successive levels is about 3.0. For example, in one embodiment, the number of levels will be four for a 20 megapixel image if the pre-determined value for the coarsest level is 400.
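
The sizing rule in paragraph [0065] can be written out as a small helper. The rounding policy below (rounding the level count up) is an assumption chosen so that the example in the text is reproduced; names are illustrative.

```python
import math

def pyramid_levels(width, height, coarsest=400, per_level_ratio=3.0):
    """Return (number of levels, actual ratio between successive levels)."""
    total_ratio = math.sqrt(width * height) / coarsest   # fine-to-coarse ratio
    if total_ratio <= 1.0:
        return 1, 1.0                                    # image already small enough
    gaps = max(1, math.ceil(math.log(total_ratio, per_level_ratio)))
    return gaps + 1, total_ratio ** (1.0 / gaps)         # levels include the finest

# For a 20-megapixel image (e.g., 5477x3651) with a coarsest value of 400,
# this yields four levels, matching the example in the paragraph above.
```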
Viewport Based Local Selection

[0066] An optional feature available to a user using the progressive selection engine is viewport based local selection. This feature allows the user to zoom in and select fine details of the image via a dynamic local window. The dynamic local window is typically centered around the local area that the user is currently focusing on. This feature serves to further accelerate optimization. This is particularly useful if the downsampling ratio (the input image to the coarsest image) is large. Large downsampling ratios are problematic because the resulting segmentation accuracy in the coarsest image decreases, which makes the selection difficult for thin regions.

[0067] The size of the dynamic local window may be based on the current zooming ratio, which in turn may be the displayed image size over the actual image size. To create the dynamic local window, first, a brush window centered around the user's brush is constructed. The extent of the brush window may be equal to the viewport (screen region) size. A minimal-area window may then be defined; this window contains both the brush window and the screen region, and is the dynamic local window. The contents of the window will be the only region optimized via minimization of Eq. 2. The use of the dynamic local window results in a decrease in the downsampling ratio of the coarsest image because the size of the local window is typically smaller than the input image.

[0068] In addition, during local selection via the dynamic local window, background pixels adjacent to (outside) the local window may be added as background hard constraints.

Illustrative Processes

[0069] FIG. 6 describes an example process 600 for employing the tools discussed above. Specifically, FIG. 6 presents an illustrative process for employing the paint based region recognition and selection techniques illustrated and described above. This process is illustrated as a logical flow graph, which represents a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the operations represent computer-executable instructions that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the process.

[0070] Process 600 includes an operation 602 in which a user first selects a first portion of the image. Typically, this portion is part of the image foreground. In one embodiment, a user may select this portion by holding a left mouse button and dragging a paint brush tool across a portion of the image. As in FIG. 3 above, the user may select the foreground region for deletion, copying or any other sort of manipulation.

[0071] Second, at operation 604, a second portion of the foreground is selected in a manner similar to the first portion. In one embodiment, process 600 may omit operation 602 or 604; in other words, only a single portion of the foreground need be selected. At operation 606, a color foreground model and a color background model are created. The color foreground model may be based at least on the user-selected portion(s). At operation 608, the average color of a pixel from each model is determined.

[0072] At operation 610, the color of an unselected pixel is compared to the average color of the models. Using this information, at operation 612, a determination is made regarding whether the unselected pixel belongs to the color foreground model or the color background model. Specifically, it is determined whether the color of the unselected pixel more closely resembles the color foreground model or the color background model. If the unselected pixel's color resembles the average color of a pixel belonging to the color foreground model, then this pixel is labeled as belonging to the foreground (operation 614). If, however, the pixel's color more closely resembles that of an average pixel of the color background model, then the pixel is labeled as a background pixel (operation 616). In one embodiment, each of the unselected pixels is assigned to one of the models.

[0073] In one embodiment, operations 608-616 are carried out via an optimization conducted on a multilevel graph created from the unselected pixels. Each node within the graph represents an unselected pixel. An extra source node and sink node, which represent the foreground and background, are added to the graph.

[0074] The optimization of this graph determines the label of each unselected pixel according to Eq. 2. The likelihood term of the optimization is based on the foreground and background color models, and the smoothness term is based on the color similarities between any pair of neighboring pixels. As described above, this optimization is optionally accelerated by multi-core graph cutting and adaptive band upsampling. Once pixels are assigned the proper labels, the regions may be selected and manipulated as the user desires.
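
As a hypothetical end-to-end illustration of process 600, the following composes the earlier label_by_average_color sketch (assumed to be in scope) with synthetic data: a partial stroke is painted inside a dark square, and the labeling of operations 606-616 grows the selection to the whole square while leaving the bright background unselected. The data and names are invented for the example.

```python
import numpy as np

def process_600(image, stroke_mask):
    # Operations 606-616: build both color models from the partial stroke and
    # label every unselected pixel by the closer model average.
    return label_by_average_color(image, stroke_mask)

# Synthetic example: a dark square ("foreground") on a bright background.
image = np.ones((64, 64, 3)) * 0.9
image[16:48, 16:48] = 0.1                     # the region the user wants
stroke = np.zeros((64, 64), dtype=bool)
stroke[30:34, 30:34] = True                   # partial paint stroke inside it
labels = process_600(image, stroke)
assert labels[20, 20] and not labels[0, 0]    # region grows; background stays
```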
CONCLUSION

[0075] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
